I recently switched to a 27″ iMac workstation running Snow Leopard from Ubuntu 10.10 (Maverick Meerkat). The resident Mac fanbois suggested Homebrew over Fink and MacPorts. At first I didn’t heed their warnings and went with MacPorts (I do love the FreeBSD ports system, on which MacPorts is based). However, when it decided to start setting up its own entire environment, I understood the warnings. So I scrapped it and went with Homebrew.
A small issue with Homebrew is that it tends to stick to ‘stable’ releases. I really didn’t want to use MySQL 5.1, since it’s a snail compared to MySQL 5.5, and without an ultra-fast storage array in my iMac that would become an issue for all the work I have to do. So I set out into Ruby land and banged out my first Homebrew formula.
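For anyone wanting to roll their own formula the same way, the rough workflow looks like this (a sketch for the Homebrew of this era; the tarball URL is real but the generated file path and checksum step may differ in your install):

```shell
# Generate a formula skeleton from the upstream source tarball:
brew create http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.8.tar.gz

# Edit the generated formula (under Library/Formula/ at the time) to fill in
# the checksum and any cmake build options, then build and install it:
brew install mysql
```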
Since it hasn’t been pulled into the official repo, here’s the latest commit: MySQL 5.5 Homebrew Formula
Mainline Homebrew now ships 5.5.x as the default MySQL version.
Update: Oracle has released MySQL 5.5.8, the first GA release of the 5.5 series.
Homebrew Formula cookbook
Building MySQL [5.5] from Source
When you have a passive master (in a master-master setup), there are usually no queries going to it. With no queries, its buffers and caches won’t be even semi-ready for a production workload. Maatkit (love this toolkit) has a nice little tool to help with this: mk-query-digest. Warming a passive master is only one of its many uses, so visit the mk-query-digest docs to find out the rest.
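A minimal sketch of the warming idea: tail the active master’s query traffic and replay the read-only portion onto the passive one. The hostnames are placeholders, and you should verify the exact option behavior against the mk-query-digest docs for your Maatkit version:

```shell
# Watch queries on the active master via its processlist, filter down to
# SELECTs, and re-execute them on the passive master to warm its caches:
mk-query-digest --processlist h=master1 \
  --filter '$event->{fingerprint} =~ m/^select/' \
  --execute h=master2
```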
A hot backup means you don’t have to take the database offline or stop sending traffic to it. This is especially useful for creating new replication slaves: a hot backup takes minutes vs. the hours a dump/import would take. Ideally, all the tables in the database should be InnoDB; however, Percona has adapted a script (innobackupex) to also back up MyISAM tables (if there are MyISAM tables you probably should stop sending traffic anyway, as those require table locks). What makes XtraBackup so nice (besides being free) is that it records the data changes (the C_UD of CRUD) made during the backup so they can be applied during restoration, and it can do incremental backups as well.
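The basic innobackupex cycle looks something like this (credentials, paths, and the timestamped directory name are placeholders; check the XtraBackup docs for your version):

```shell
# Take a full hot backup into a timestamped subdirectory of /backups:
innobackupex --user=backup --password=secret /backups

# "Prepare" the backup, i.e. apply the changes logged while it ran,
# so the datadir is consistent before you restore or clone from it:
innobackupex --apply-log /backups/2010-12-21_09-00-00

# Later, take an incremental backup based on that full one:
innobackupex --incremental /backups \
  --incremental-basedir=/backups/2010-12-21_09-00-00
```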
I originally attempted to use mk-parallel-[dump/restore], but I kept running into issues when restoring the data. mydumper (written by Domas Mituzas) has benchmarked faster, so I thought I would give it a try.
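For reference, a parallel dump-and-restore with mydumper and its companion myloader is roughly this (hostnames, credentials, thread count, and paths are illustrative):

```shell
# Dump with 4 parallel threads into a directory of per-table files:
mydumper --host=master1 --user=dump --password=secret \
  --threads=4 --outputdir=/backups/dump

# Restore that directory onto the new slave, also in parallel:
myloader --host=newslave --user=dump --password=secret \
  --threads=4 --directory=/backups/dump
```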
I read about this wonderful toolkit called Maatkit in an equally wonderful book High Performance MySQL. I highly recommend both to anyone having to deal with MySQL. I’ll definitely do more posts about Maatkit and its tools as I use them.
I’m going to be up-front: this is not the best way to run this. The mk-table-checksum documentation details the faster, more accurate FNV1A_64 hash UDF; I would recommend it over MD5 or SHA1. Also, if all your slaves are accessible from the machine running mk-table-checksum, I recommend the --replicate-check flag instead of manually running the SQL.
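The better setup described above looks roughly like this (hostnames and the checksum table name are placeholders; confirm the option names against the mk-table-checksum docs for your Maatkit release):

```shell
# Checksum tables on the master, writing results into a table that
# replicates to the slaves; use the FNV1A_64 UDF if it's installed:
mk-table-checksum h=master1 --replicate test.checksum --function FNV1A_64

# Then compare master vs. slaves in one shot, recursing to the slaves:
mk-table-checksum h=master1 --replicate test.checksum --replicate-check 2
```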
This applies to a replication architecture like the one detailed in Improving Replication Performance.
My situation: Master2 (which runs the blackhole engine) ran out of disk space and the current binlog was lost. After freeing up disk space on Master2 and restarting its replication, it was missing hours’ worth of binlog that its slaves expected to find. To remedy this, I pointed the slaves directly back at Master1 to slurp up the missing binlog, then switched them back to Master2 once they were caught up.
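The repointing itself is a CHANGE MASTER TO on each slave, sketched below. The host name and binlog coordinates are placeholders; get the real values from SHOW SLAVE STATUS on the slave and SHOW MASTER STATUS on Master1:

```shell
# On each slave: stop replication, aim at Master1 from the position where
# the slave left off, and restart. Repeat in reverse (toward Master2's
# current coordinates) once the slave has caught up.
mysql -e "STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST='master1',
    MASTER_LOG_FILE='mysql-bin.000123',
    MASTER_LOG_POS=4;
  START SLAVE;"
```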