I originally attempted to use mk-parallel-[dump/restore], but I kept running into issues when restoring the data. mydumper (written by Domas Mituzas) has been benchmarked as faster, so I thought I would give it a try.
The dump command is straightforward:
mydumper --database=mydb --rows=100000 --threads=16
I only wanted to dump a certain DB. I also found that chunking the INSERT statements made imports faster (I chose 100,000 rows per chunk). Last but not least, set the number of threads to dump with (generally the number of cores your machine has). This will create an export directory with all the SQL files in your current working directory. I didn’t use compression because my database is relatively small, so space was not an issue and I didn’t want the (admittedly small) overhead of (de)compression.
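If you do want compression or a specific output location, mydumper has flags for both. This is just a sketch with a made-up path; the option names below are what I’ve seen on mydumper builds, so verify with mydumper --help on your version:

# Sketch only: check mydumper --help for the exact option names on your build
mydumper --database=mydb --rows=100000 --threads=16 \
         --outputdir=/var/data/backups/export-mydb \
         --compress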
Since mydumper only dumps table data, you will also need to dump the table structure.
mysqldump --no-data mydb > mydb_table_structure.sql
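Before importing any data, load that schema into the target server. Assuming the same importer user and target host used in the import step below (swap in your own credentials), something like:

# Hypothetical: create the target DB if it doesn't exist yet, then load the schema
mysqladmin -uimporter -pasdf -h10.0.1.10 create mydb
mysql -uimporter -pasdf -h10.0.1.10 mydb < mydb_table_structure.sql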
Parallel restore is a little bit more “complex”. Since the mydumper export is not compatible with mk-parallel-restore, you have to do a little pipe magic. This is a slightly modified one-liner from Domas Mituzas found here; his version assumes compressed dumps, so mine drops the zcat. Be sure you’ve restored the table structure first using the SQL file from mysqldump.
cd /var/data/backups/export-20100717-024935
# -n1 passes one file per invocation; -I % substitutes the file name into the command
echo *.sql | xargs -n1 -P 16 -I % sh -c 'mysql -uimporter -pasdf -h10.0.1.10 mydb < %'
I ran this from the same machine I was exporting from (which was a different machine than the one I was importing to). This little gem is made possible by xargs’ ability to run commands in parallel: ‘-P 16’ means 16 concurrent processes. If you dump and import on the same machine, you probably won’t need the user/password/host parameters in the mysql command.
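Once the import finishes, a quick sanity check I’d suggest (hypothetical, reusing the importer credentials above) is to eyeball the per-table row counts on the target and compare them against the source:

# Hypothetical sanity check: rough per-table row counts on the target
# (information_schema table_rows is only an estimate for InnoDB)
mysql -uimporter -pasdf -h10.0.1.10 -N -e \
  "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema='mydb'"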