rsync to a large partition somewhere. That's not really a backup: it protects against hardware failure, yes, but not against "oops, I deleted that file three weeks ago." What's more, I'm sure I'm not doing it as well as I could. Since I'm backing up to a hard disk anyway, why not back up the entire OS and make the disk bootable? That would be complicated if multiple machines backed up to one backup server, but for my clients I most often have a single server backing up to one set of removable disks.
What's more, I moved all my VMs from Jimmy to George yesterday. When I say "moved my VMs," I really mean "moved all the services onto new CentOS 5 VMs." Which shows up another problem: keeping track of what you've set up where, and why. Jimmy had lighttpd running on it. Why? Oh... to serve the RRD graphs of the temperature and humidity in the attic. I should document all this now that I "know it," but ideally it would be automated.
And conformance tests: a bit like unit tests, you run some scripts to check that everything in the new install works as expected. Only after everything was done did I realise that I hadn't copied over my Subversion repositories, let alone set them up.
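A conformance script can be as dumb as a list of commands whose exit status decides pass or fail. A sketch (the two checks below are deliberately generic placeholders so it runs anywhere; real ones would poke the services the box is supposed to provide: HTTP, fax, the forgotten svn repos):

```shell
fails=0
check() {
    # Run a command; report ok/FAIL and count the failures.
    desc=$1; shift
    if "$@" >/dev/null 2>&1; then
        printf 'ok   %s\n' "$desc"
    else
        printf 'FAIL %s\n' "$desc"
        fails=$((fails + 1))
    fi
}

check "/etc is present"      test -d /etc
check "sh is on the PATH"    command -v sh
# Real checks would look more like (hypothetical examples):
#   check "web answers"      curl -sf http://localhost/ -o /dev/null
#   check "svn repo valid"   svnlook uuid /var/svn/main

printf '%d check(s) failed\n' "$fails"
```

Run it right after a restore or a migration, and "did I forget anything?" becomes a one-line answer.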
One central issue, I suppose, is config files. Ideally, you'd just copy in the backed-up config file, start the service, run the test script, and verify success. I notice that rpm has a --configfiles query option. Combined with rpm's verify options, maybe one could detect which config files have changed and keep a backup set of just those. Of course, things like
/var/spool/hylafax/etc/config.ttyS0 would have to be added by hand, as would anything installed by hand outside the package system.
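A sketch of the rpm idea: in rpm -V output, a "c" marks config files and a "5" in the flag column means the checksum changed, so a small filter pulls out exactly the modified config files worth keeping. The canned input below mimics rpm -V output so the sketch runs anywhere; on a real box you'd feed it the verifier directly (modified_configs is just my name for the filter):

```shell
# Keep lines whose second field is "c" (config file) and whose flag
# column contains "5" (checksum changed); print just the path.
modified_configs() {
    awk '$2 == "c" && $1 ~ /5/ { print $3 }'
}

# Real usage (not run here):
#   rpm -qa | xargs -n1 rpm -V 2>/dev/null | modified_configs
# Canned sample in rpm -V style:
printf '%s\n' \
  'S.5....T.  c /etc/ssh/sshd_config' \
  '.......T.  c /etc/sysconfig/clock' \
  'missing     /usr/share/doc/pkg/README' |
modified_configs
```

Here only /etc/ssh/sshd_config comes through: it is a config file and its contents differ from what the package shipped.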
And a modified config file implies that the package is actually in use, so the package would get flagged as important. Then, say once a week, you'd get an email: "Hey, you don't have a conformance script for package X," or "You didn't write a changelog entry for the latest changes to file Y."
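That weekly nag could start as nothing fancier than a diff between the list of important packages and the conformance scripts that actually exist. A sketch in a temp directory (the package names and paths are all invented; from cron you'd pipe the result to mail rather than stdout):

```shell
set -eu
# Throwaway demo layout: one "important packages" list, plus a
# directory of conformance scripts named after packages. All the
# names here are invented for the demo.
work=$(mktemp -d)
mkdir -p "$work/tests.d"
printf '%s\n' lighttpd hylafax subversion > "$work/important"
touch "$work/tests.d/lighttpd"
chmod +x "$work/tests.d/lighttpd"    # only lighttpd is covered

# Collect the important packages with no executable script.
missing=$(while read -r pkg; do
    [ -x "$work/tests.d/$pkg" ] || echo "$pkg"
done < "$work/important")

# From cron, this would go to: mail -s "conformance gaps" root
if [ -n "$missing" ]; then
    printf 'no conformance script for: %s\n' $missing
fi
```

Same shape works for the changelog nag: list the config files in the backup set, complain about any without a newer changelog entry.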