Fun with mysqldump

mysqldump makes for a quick and dirty way of backing up MySQL. Restore is trivial, if slow. However, it ties up the DB for the entire time it's running. Say you're like me and you have a table with 918,732,676 rows (yes, 9 hundred million rows). This means that the DB is tied up for 5 hours of dumping and 7 days for the restore. Clearly something better is needed.

However, before I work on integrating xtrabackup, I decided to see how slicing the dump with a LIMIT clause would work. mysqldump doesn't let me set LIMIT directly; however, it does zero validation on the WHERE clause I can pass. So:

(
mysqldump --opt DB --ignore-table=DB.BIGTABLE
mysqldump --no-data DB BIGTABLE
MAX=$(mysql -NB DB -e 'select count(*) from BIGTABLE')
INC=$(( MAX/1000 ))
seq -f '%.0f' 0 $INC $MAX | while read start ; do
    mysqldump --compact --no-create-info DB BIGTABLE --where "1 LIMIT $start,$INC"
done
) > /my/backup.sql
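To sanity-check the windows the loop generates without touching a database, here's a dry run with a toy row count and 4 slices instead of 1000; echo stands in for mysqldump:

```shell
#!/bin/sh
# Dry run of the slicing loop: echo replaces mysqldump so we can
# inspect the LIMIT windows it would emit.
MAX=1000          # pretend count(*) returned 1000 rows
INC=$(( MAX/4 ))  # 4 slices for readability
seq -f '%.0f' 0 $INC $MAX | while read start ; do
    echo "--where '1 LIMIT $start,$INC'"
done
```

Note that seq stops at the last multiple of INC that is at most MAX, and each window spans INC rows past its offset, so the slices cover the whole table even when MAX isn't an exact multiple of INC; the final window just comes up short (or empty).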

This only works because mysqldump blindly tacks the WHERE clause onto its SELECT statement, giving us (roughly)

SELECT * FROM BIGTABLE WHERE 1 LIMIT $start,$INC

This is a hack and a half: a- while it works now (Percona 5.1.54), there's no guarantee it won't break at some point. But more importantly, b- this will still take a week to restore.

BTW, black eyes to the idiots who made %g the default seq format.
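For anyone wondering why the script passes -f '%.0f' to seq: the %g default keeps only six significant digits, so offsets at the 900-million-row scale come out in scientific notation, which is useless in a LIMIT clause. printf shows the difference:

```shell
#!/bin/sh
# %g rounds to 6 significant digits -- a 918-million row offset
# collapses into scientific notation.
printf '%g\n'   918732676   # -> 9.18733e+08
# %.0f prints the full integer, hence seq -f '%.0f' in the dump loop.
printf '%.0f\n' 918732676   # -> 918732676
```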
