
I have a 12 GB MySQL InnoDB database handling around 800 queries per second. As soon as I start mysqldump to take a backup, my website starts returning 503 errors.

How do I back up my live database without affecting site performance?

dragosrsupercool

4 Answers


Try the following command, since the database is not small:

mysqldump -u USER -p --single-transaction --quick --lock-tables=false --all-databases | gzip > OUTPUT.gz

or, for a single database:

mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz
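To restore from a dump compressed this way, the inverse pipeline would be the following (a sketch using the same placeholders; drop DATABASE if you dumped --all-databases):

gunzip < OUTPUT.gz | mysql -u USER -p DATABASE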
Ahmad Abuhasna
  • --single-transaction disables --lock-tables, so there is no need for --lock-tables=false (see http://dba.stackexchange.com/a/60145/877). – RolandoMySQLDBA Dec 03 '14 at 21:43
  • And what does --quick do? – dragosrsupercool Dec 04 '14 at 10:24
  • @dragosrsupercool This is a quote from the MySQL documentation: "--quick: This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out." – Ahmad Abuhasna Dec 04 '14 at 13:16
  • @RolandoMySQLDBA In fact, I was referring to the following article, which describes why to use --lock-tables=false (or --skip-lock-tables): http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql. Please advise; thanks in advance. – Ahmad Abuhasna Dec 04 '14 at 13:20

You need to use the Percona XtraBackup tool. It works like a charm for huge datasets and doesn't interrupt MySQL operations: http://www.percona.com/doc/percona-xtrabackup/2.2/innobackupex/creating_a_backup_ibk.html There are some tricks to it, but it's worth it.
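A minimal invocation looks like this (a sketch; USER, PASSWORD, and the output path are placeholders). The --stream=tar form pipes the backup through gzip, so nothing is staged on the database host:

innobackupex --user=USER --password=PASSWORD --stream=tar /tmp | gzip > /backups/db-backup.tar.gz

Note that extracting such a tarball later requires GNU tar's -i option (tar -ixvf).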

vfedorkov
  • All should keep in mind that mysqldump's snapshot reflects the start of the backup, whereas Percona XtraBackup's snapshot reflects the completion of the backup. – RolandoMySQLDBA Dec 05 '14 at 22:28

You should use Percona XtraBackup (innobackupex, not Percona Toolkit) to back up the database without blocking it:

sudo innobackupex --user=root --password=rootPASSWORD --host localhost /tmp/
sudo innobackupex --apply-log --use-memory=2G /tmp/$TIMESTAMP/

You need enough disk space in /tmp. Once the backup finishes, you can copy the entire directory to another server.

The file xtrabackup_binlog_info contains the replication coordinates (useful if you need to set up slave replication from this backup).
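To bring such a backup back online, the usual sequence is roughly the following (a sketch; it assumes the MySQL server is stopped and the datadir, here /var/lib/mysql, is empty):

sudo innobackupex --copy-back /tmp/$TIMESTAMP/
sudo chown -R mysql:mysql /var/lib/mysql

The coordinates recorded in xtrabackup_binlog_info then feed the CHANGE MASTER TO ... MASTER_LOG_FILE=..., MASTER_LOG_POS=... statement on the new slave.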

Gea-Suan Lin

You must use the --single-transaction option:

mysqldump -uroot -p --single-transaction databasename > dump.sql

It gives you a point-in-time snapshot of the data. Once mysqldump starts, all InnoDB tables are frozen in time: if you start the mysqldump at 2:30 PM and it finishes at 3:00 PM, every InnoDB table is dumped as of 2:30 PM. All other changes (INSERTs, UPDATEs, DELETEs) continue without interruption and are simply not included in the dump.

MyISAM would interfere with the point-in-time backup if the MyISAM tables were being updated by INSERTs, UPDATEs, or DELETEs. If the MyISAM tables were just for reads, --single-transaction would still be fine.

SUGGESTION

If you have MySQL Replication set up, you could go to the Slave and run this:

mysql -uroot -p... -ANe"STOP SLAVE"
mysqldump -uroot -p --single-transaction databasename > dump.sql
mysql -uroot -p... -ANe"START SLAVE"

The Master wouldn't know of any backups going on.
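If you script this, it is worth making sure the Slave is restarted even when the dump fails; a minimal sketch (credentials and the database name are placeholders, as above):

#!/bin/sh
mysql -uroot -p... -ANe "STOP SLAVE"
# restart replication no matter how the dump exits
trap 'mysql -uroot -p... -ANe "START SLAVE"' EXIT
mysqldump -uroot -p... --single-transaction databasename > dump.sql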

RolandoMySQLDBA