
Possible Duplicate:
Speeding up mysql dumps and imports

mysqldump is reasonably fast, but dumps of a medium-sized database (20-30 megs) take several minutes to load using `mysql my_database < my_dump_file.sql`.

Are there some mysql settings I can tune to speed up the load? Is there a better way to load saved data?

I've experimented with the mysqlimport utility and CSV-based dumps. These load slightly faster, but not appreciably so. I'm tempted to just copy the raw database files around, but that seems like a bad idea.

Ben K.
  • Just FYI... I had a Raid 5 drive failing, which caused really bad performance with a MySQL restore. What normally took 40 minutes was pushing 24 hours. Just for reference. – gahooa May 03 '11 at 18:31

6 Answers


Assuming that you're using InnoDB...

I was in the situation of having a pile of existing mysqldump output files that I wanted to import in a reasonable time. The tables (one per file) were about 500MB and contained about 5,000,000 rows of data each. Using the following parameters I was able to reduce the insert time from 32 minutes to under 3 minutes.

innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT

You'll also need to have a reasonably large innodb_buffer_pool_size setting.

Because my inserts were a one-off I reverted the settings afterwards. If you're going to keep using them long-term, make sure you know what they're doing.
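Put together, the settings above amount to a my.cnf fragment like the following. Note that innodb_flush_log_at_trx_commit is dynamic (it can also be changed at runtime with SET GLOBAL), while innodb_log_file_size and innodb_flush_method require a server restart to take effect; the buffer pool size shown is an illustrative value, not a recommendation:

```ini
[mysqld]
# Flush the redo log to disk about once per second instead of at every
# commit; a crash can lose up to ~1s of transactions.
innodb_flush_log_at_trx_commit = 2
# Larger redo log => fewer checkpoint stalls during bulk inserts
# (requires a restart, and on older MySQL a clean log-file rebuild).
innodb_log_file_size = 256M
# Bypass the OS page cache for data files (requires a restart).
innodb_flush_method = O_DIRECT
# Illustrative value; size this to your available RAM and working set.
innodb_buffer_pool_size = 1G
```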

I found the suggestion to use these settings on Cedric Nilly's blog and the detailed explanation for each of the settings can be found in the MySQL documentation.

Jeff Hiltz
  • This is the second time I've used this method. In both cases (different databases), it reduced the import time from hours to a few minutes. Thanks! – lepe Feb 02 '15 at 06:54
  • I recently had to do this for a simple table of ~8 columns and mostly int data. Before applying these settings I was getting about ~30 inserts/s (indexes disabled); after the change I was getting ~600 inserts/s. The biggest win comes from changing innodb_flush_log_at_trx_commit from '1' (the default) to '2', which flushes writes to the log every second instead of on every transaction (i.e. after each insert when autocommit is on, which it is by default). – Adil Aug 06 '15 at 05:56
  • Can you explain what happens when you edit these values, so we can understand? I just used this and the speed is awesome. – reignsly Sep 12 '17 at 17:52

maatkit - parallel dump

and

maatkit - parallel restore

Very fast.
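As a rough sketch of how these tools were invoked (the maatkit project has since been folded into Percona Toolkit, and these two tools were eventually dropped; the flags below are assumptions from the old tools, so verify with --help on whatever version you have):

```shell
# Hypothetical invocation sketch -- flag names are assumptions, check
# `mk-parallel-dump --help` on your installed maatkit version.
# Dump all databases with several worker threads into a directory:
mk-parallel-dump --threads 4 --basedir /backups/parallel

# Restore the same dump, again in parallel:
mk-parallel-restore --threads 4 /backups/parallel
```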

Jonathan
  • Thanks, this is exactly what I was looking for! – Ben K. Jul 21 '09 at 19:02
  • The description says there are bugs in the software and it should NOT be used for backups or on important data! Too bad... I wonder if I can just run multiple mysqldumps in parallel on different tables to get a speedup? – davr Sep 20 '11 at 16:57

Make sure you are using the --opt option to mysqldump when dumping. This will use bulk insert syntax, delay key updates, etc...
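For example (as a commenter below notes, --opt has been enabled by default in mysqldump for a long time, so spelling it out is mostly documentation; the database name is a placeholder):

```shell
# --opt bundles --add-drop-table, --add-locks, --create-options,
# --disable-keys, --extended-insert, --lock-tables, --quick and
# --set-charset. It is on by default in modern mysqldump, so this is
# equivalent to omitting it; shown explicitly for clarity.
mysqldump --opt my_database > my_dump_file.sql
```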


If you are ONLY using MyISAM tables, you can safely copy them by stopping the server, copying the table files to another stopped server, and then starting it.

If you don't want to stop the origin server, you can follow this:

  1. Get a read lock on all tables
  2. Flush all tables
  3. Copy the files
  4. Unlock the tables

But I'm pretty sure your copy-to server needs to be stopped when you put them in place.
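The steps above can be sketched like this. FLUSH TABLES WITH READ LOCK covers steps 1 and 2 in one statement, but the lock is released as soon as the client disconnects, so the session must stay open for the whole copy; the SLEEP-in-the-background trick below is one crude way to do that from a script (paths and names are placeholders):

```shell
# 1+2. Flush tables and take a global read lock, holding the session open
#      in the background for (at most) 10 minutes.
mysql -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(600);" &
LOCK_PID=$!
sleep 2   # crude wait for the lock to actually be acquired

# 3. Copy the MyISAM files (.frm, .MYD, .MYI) -- paths are placeholders.
cp -a /var/lib/mysql/my_database /backup/my_database

# 4. Unlock the tables by ending the locking session.
kill $LOCK_PID
```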

gahooa
  • Actually that's what mysqlhotcopy does – aldrinleal May 01 '11 at 05:58
  • Sadly, I only have one up-vote to give. Changing the way I was performing mysqldump, using the --opt option as you suggested, shaved 5 hours off of my import! – Nate Apr 10 '15 at 18:40
  • @Nate There must be other reasons why your import is now faster, because `--opt` is enabled by default in mysqldump. It has been so at least since v5.5 (2010). – dr_ Oct 28 '16 at 10:14

Try out https://launchpad.net/mydumper - a multi-threaded MySQL backup/restore tool that is 3x to 10x faster than mysqldump: http://vbtechsupport.com/1695/
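A minimal sketch of how mydumper and its companion myloader are typically run (the thread count and directory are illustrative; check --help for the exact flags your version supports):

```shell
# Dump one database with 4 worker threads into a directory.
mydumper --database my_database --threads 4 --outputdir /backups/mydumper

# Restore the same dump in parallel with myloader.
myloader --database my_database --threads 4 --directory /backups/mydumper
```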

p4guru

Are you sure the data is sane, and there aren't any filesystem or system performance issues? Several minutes for a 20-30 meg database is a long time. I'm on a MacBook with 2GB of RAM, a 320GB HD and the standard 2.1GHz processor. I grabbed one of my databases for a quick benchmark:

gavinlaking$ du -sm 2009-07-12.glis
74  2009-07-12.glis
gavinlaking$ mysql -pxxx -e "drop database glis"
gavinlaking$ mysql -pxxx -e "create database glis"
gavinlaking$ time mysql -pxxx glis < 2009-07-12.glis 

real    0m17.009s
user    0m2.021s
sys 0m0.301s

17 seconds for a 74 megabyte file. That seems pretty snappy to me. Even if it were 4 times bigger (making it just shy of 300 megabytes), it would finish in just under 70 seconds.

Gav

There is a method of using LVM snapshots for backup and restore that might be an interesting option for you.

Instead of doing a mysqldump, consider using LVM to take snapshots of your MySQL data directories. Using LVM snapshots allows you to have nearly real-time backup capability, support for all storage engines, and incredibly fast recovery. To quote from the link below,

"Recovery time is as fast as putting data back and standard MySQL crash recovery, and it can be reduced even further."

http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
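A sketch of the snapshot step, under the assumption that the data directory lives on an LVM logical volume named /dev/vg0/mysql (volume group, names, sizes and mount points below are all placeholders):

```shell
# 1. In a mysql session, flush and lock (keep the session open):
#      FLUSH TABLES WITH READ LOCK;
# 2. From another shell, create the snapshot. The lock only needs to be
#    held for the instant of lvcreate, not for the whole backup:
lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
# 3. Back in the mysql session, release the lock immediately:
#      UNLOCK TABLES;
# 4. Mount the snapshot read-only and copy the files off at leisure.
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
cp -a /mnt/mysql-snap/. /backups/mysql-$(date +%F)
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```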