I use the following command to back up my MySQL database (with mysqldump) twice a day:
/usr/local/mysql/bin/mysqldump --lock-tables=false --default-character-set=utf8 -h'host' -u'user' -p'pass' -f -v db266991x2616700 | gzip > /Volumes/Backups/backup.sql.gz
I just found out that the dump is often incomplete. The file is written, but the file sizes vary: sometimes a backup file is 20-30% smaller than the others.
All files end with "-- Dump completed on ...", but they are incomplete and just stop in the middle of the INSERT statement for a very large table:
LOCK TABLES `log_mail` WRITE;
/*!40000 ALTER TABLE `log_mail` DISABLE KEYS */;
INSERT INTO `log_mail` VALUES (................................STOPS SOMEWHERE HERE
- What could be the reason for that? Could the reason be that the database is in use while I don't lock the tables?
- How can I prevent this? Is there a way to check for errors and repeat the backup if one occurs (see the retry sketch after this list)? It is important that I have a reliable way to back up the database several times a day without checking every backup manually.
- It is an InnoDB database, so I just found out that this might be a better command:
mysqldump --single-transaction --skip-lock-tables some_database > some_database.sql
Is this right? What would be the best/right way (see the sketch after this list)?
- Before my question was closed on Stack Overflow, one comment said:
"I suggest you redirect stderr to a file and also check the return status code"How would I do that?
/usr/local/mysql/bin/mysqldump --lock-tables=false --default-character-set=utf8 -h'host' -u'user' -p'pass' -f -v db266991x2616700 | gzip > /Volumes/Backups/backup.sql.gz >> log_file 2>> /Volumes/Backups/error.log
Now I get the error message within the terminal after the dump has started and has been running for a while: "MySQL server has gone away (2006)". But no error log is written, and now the backup file is 0 MB. So I don't know if it is the same issue. – Phantom Dec 20 '20 at 14:42
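After reading up on shell redirection, I suspect the 0 MB file in that attempt comes from the order of the redirections rather than from the server: both > /Volumes/Backups/backup.sql.gz and >> log_file redirect gzip's stdout, and the later one wins, so the compressed dump lands in log_file while backup.sql.gz is created empty. Likewise, 2>> error.log sits after the pipe, so it captures gzip's stderr rather than mysqldump's, which would be why the 2006 error still prints to the terminal. A placement that keeps the dump in backup.sql.gz and mysqldump's errors in error.log would look like:

/usr/local/mysql/bin/mysqldump --lock-tables=false --default-character-set=utf8 \
  -h'host' -u'user' -p'pass' -f -v db266991x2616700 \
  2>> /Volumes/Backups/error.log \
  | gzip > /Volumes/Backups/backup.sql.gz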