MySQL Backup and Recovery
Author/Translator: Ye Jinrong, Source: http://imysql.cn. Please credit the author/translator and source when reprinting; commercial use is prohibited, and violators will be held responsible.
Date: 2006/10/01
This article discusses MySQL's backup and recovery mechanisms and how to maintain data tables, covering the two main table types: MyISAM and InnoDB. The MySQL version covered in this article is 5.0.22.
The free backup tools that currently work with MySQL include mysqldump and mysqlhotcopy. You can also back up with SQL syntax (BACKUP TABLE or SELECT INTO OUTFILE), back up the binary logs (binlog), or directly copy the data files and the related configuration files. MyISAM tables are stored as individual files and are therefore relatively easy to back up; any of the methods above will do. InnoDB stores all of its tables in one data file, ibdata1 (possibly several files, or separate per-table tablespace files), which makes it comparatively hard to back up. The free options are to copy the data files together with a binlog backup, or to use mysqldump.
1. mysqldump
1.1 Backup
mysqldump uses a SQL-level backup mechanism: it exports data tables as SQL script files. It is well suited to upgrades between different MySQL versions, and it is also the most commonly used backup method.
Now let’s talk about some of the main parameters of mysqldump:
--compatible=name
Tells mysqldump to make the exported data compatible with another database or with an older MySQL server. The value can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options, no_field_options, and so on; to use several values, separate them with commas. Of course, this does not guarantee full compatibility; it is only a best effort.
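For example, a dump that an old MySQL 4.0 server should be able to load could be produced like this (the account and database names are illustrative):
/usr/local/mysql/bin/mysqldump -uyejr -pyejr --compatible=mysql40 db_name > db_name.sql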
--complete-insert, -c
The exported data uses complete INSERT statements that include the column names. Combined with extended inserts this produces very long statements, which can exceed the max_allowed_packet limit and make the import fail, so this parameter needs to be used with caution; at least I don't recommend it.
--default-character-set=charset
Specify which character set to use when exporting the data. If a table does not use the default latin1 character set, this option must be specified when exporting, otherwise the data will be garbled when it is imported again.
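If you are not sure which character set a table uses, SHOW CREATE TABLE displays it (tbl_name is a placeholder):
SHOW CREATE TABLE tbl_name;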
--disable-keys
Tells mysqldump to wrap each table's INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */;. This greatly speeds up the inserts, because the indexes are rebuilt only after all the rows have been inserted. This option is effective only for MyISAM tables.
--extended-insert=true|false
By default, mysqldump uses the multiple-row INSERT syntax, i.e. many value lists in one INSERT statement, which yields smaller dumps and faster imports. If you don't want that, use this option and set its value to false.
--hex-blob
Export binary string fields using hexadecimal format. This option must be used if there is binary data. The affected field types are BINARY, VARBINARY, and BLOB.
--lock-all-tables, -x
Locks all tables in all databases before the export begins, to guarantee data consistency. This takes a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.
--lock-tables
Similar to --lock-all-tables, but it locks only the tables currently being exported rather than all tables in all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.
--no-create-info, -t
Only export the data without adding a CREATE TABLE statement.
--no-data, -d
No data is exported, only the database table structure is exported.
--opt
This is just a shorthand option, equivalent to adding the --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset options. It lets mysqldump export the data quickly and produces a dump that can be imported back quickly. The option is enabled by default and can be disabled with --skip-opt. Note that if you run mysqldump without specifying --quick or --opt, the entire result set is buffered in memory, which may cause problems when exporting a large database.
--quick, -q
This option is useful when exporting large tables. It forces mysqldump to write the rows to the output as it fetches them from the server, instead of retrieving all rows and buffering them in memory first.
--routines, -R
Export stored procedures and custom functions.
--single-transaction
This option issues a BEGIN statement before the data is exported. BEGIN does not block any application and guarantees a consistent state of the database during the export. It applies only to transactional tables, such as InnoDB and BDB.
This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction.
To export large tables, combine it with the --quick option.
--triggers
Also export triggers. This option is enabled by default, use --skip-triggers to disable it.
Please refer to the manual for details of the other parameters. I usually use the following command to back up MyISAM tables:
/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
    --default-character-set=utf8 --opt --extended-insert=false \
    --triggers -R --hex-blob -x db_name > db_name.sql
And the following command to back up InnoDB tables:
/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
    --default-character-set=utf8 --opt --extended-insert=false \
    --triggers -R --hex-blob --single-transaction db_name > db_name.sql
In addition, if you want to implement online backup, you can also use the --master-data parameter, as follows:
/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
    --default-character-set=utf8 --opt --master-data=1 \
    --single-transaction --flush-logs db_name > db_name.sql
This takes the table lock only at the very beginning, then flushes the binlog, and adds a CHANGE MASTER statement to the exported file recording the binlog position of this backup. If you want to restore this file onto a slave, you can set it up that way.
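For illustration, a dump made with --master-data=1 contains a line like the following near the top (the file name and position here are made up):
CHANGE MASTER TO MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=98;
Loading the dump on a slave and starting replication from this recorded position brings it in line with the master.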
1.2 Restore
The file backed up with mysqldump is a SQL script that can be imported directly. There are two ways to import the data.
Use mysql client directly, for example:
/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql
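If the dump was made with an explicit character set, it is safest to specify the same one on import, for example (account as above):
/usr/local/mysql/bin/mysql -uyejr -pyejr --default-character-set=utf8 db_name < db_name.sql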
Use the SOURCE syntax. This is actually not standard SQL but a feature of the mysql client, for example:
SOURCE /tmp/db_name.sql;
Here you must give the absolute path of the file, and it must be a file that the user running the mysql client has permission to read.
2. mysqlhotcopy
2.1 Backup
mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to back up a database quickly. It is the fastest way to back up a database or a single table, but it can run only on the machine where the database files (including the table definition, data, and index files) reside. mysqlhotcopy can back up only MyISAM tables, and it runs only on Unix-like and NetWare systems.
mysqlhotcopy supports copying multiple databases at one time, and also supports regular expressions. Here are a few examples:
root# /usr/local/mysql/bin/mysqlhotcopy --host=localhost --user=yejr --password=yejr db_name /tmp
(copies the database directory db_name to /tmp)
root# /usr/local/mysql/bin/mysqlhotcopy --host=localhost --user=yejr --password=yejr db_name_1 ... db_name_n /tmp
root# /usr/local/mysql/bin/mysqlhotcopy --host=localhost --user=yejr --password=yejr db_name./regex/ /tmp
Please refer to the manual for more detailed usage, or call the following command to view the help of mysqlhotcopy:
perldoc /usr/local/mysql/bin/mysqlhotcopy
Note that to use mysqlhotcopy you must have the SELECT and RELOAD privileges (the latter in order to execute FLUSH TABLES), and you must also be able to read the datadir/db_name directory.
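A minimal account for this could be created as follows (user name and password are illustrative; RELOAD is a global privilege, so it has to be granted ON *.*):
GRANT SELECT, RELOAD ON *.* TO 'yejr'@'localhost' IDENTIFIED BY 'yejr';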
2.2 Restore
mysqlhotcopy backs up the whole database directory, so to restore you can copy it straight back into the datadir specified for mysqld (here /usr/local/mysql/data/). Pay attention to ownership and permissions while doing so, as in the following example:
root# cp -rf db_name /usr/local/mysql/data/
root# chown -R nobody:nobody /usr/local/mysql/data/db_name
(change the owner of the db_name directory to the user mysqld runs as)
3. SQL syntax backup
3.1 Backup
The BACKUP TABLE syntax works in essentially the same way as mysqlhotcopy: lock the table, then copy its data files. It can achieve an online backup, but the result is not ideal, so it is not recommended. It copies only the table definition files and data files, not the index files, so recovery is slower.
example:
BACKUP TABLE tbl_name TO '/tmp/db_name/';
Note that you must have the FILE privilege to execute this SQL, and the directory /tmp/db_name/ must be writable by the user mysqld runs as. To avoid security problems, the exported files are not allowed to overwrite existing files.
SELECT INTO OUTFILE exports the data into a plain text file, and you can customize the field and line delimiters to make the data easier to process later.
example:
SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;
Note that you must have the FILE privilege to execute this SQL, the directory /tmp/db_name/ must be writable by the user mysqld runs as, and, to avoid security problems, the target file must not already exist; the export will not overwrite an existing file.
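For example, a comma-separated export with quoted fields might look like this (the delimiters are a matter of taste):
SELECT * FROM tbl_name INTO OUTFILE '/tmp/db_name/tbl_name.txt'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';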
3.2 Restore
For files backed up with the BACKUP TABLE method, run the RESTORE TABLE statement to restore the data table.
example:
RESTORE TABLE tbl_name FROM '/tmp/db_name/';
Permission requirements are similar to those described above.
For files backed up using the SELECT INTO OUTFILE method, you can run the LOAD DATA INFILE statement to restore the data table.
example:
LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;
Permission requirements are similar to those described above. The table must already exist before the data is imported. If you are worried about duplicate data, add the REPLACE keyword to replace existing records or the IGNORE keyword to skip them, as in the sketch below.
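For example, re-importing the comma-separated file from the earlier export while replacing duplicate rows (the delimiters must match those used at export time):
LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' REPLACE INTO TABLE tbl_name
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';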
4. Enable binary log (binlog)
The method of using binlog is relatively more flexible, saves worry and effort, and can also support incremental backup.
mysqld must be restarted for the binlog to take effect. First stop mysqld, then open my.cnf and add the following lines:
server-id=1
log-bin = binlog
log-bin-index = binlog.index
Then start mysqld. As it runs, it will create binlog.000001 as well as binlog.index: the former records all update operations on the data, and the latter is an index of all binlog files; neither should be deleted casually. Please see the manual for more information about the binlog.
When you need to make a backup, first execute a SQL statement that makes mysqld stop writing to the current binlog, and then back up that file directly. This is how incremental backup is achieved:
FLUSH LOGS;
If you are backing up a slave in a replication setup, you should also back up the master.info and relay-log.info files.
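A minimal incremental backup might therefore look like this (the destination /backup/mysql is illustrative; check binlog.index for the full list of closed binlog files):
mysql -uyejr -pyejr -e "FLUSH LOGS;"
cp /usr/local/mysql/data/binlog.000001 /backup/mysql/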
The backed up binlog file can be viewed using the tool mysqlbinlog provided by MySQL, such as:
/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001
This tool allows you to display all SQL statements under a specified database, and can also limit the time range, which is quite convenient. Please refer to the manual for details.
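For example, to see only the statements for db_name within a given time window (the dates are illustrative):
/usr/local/mysql/bin/mysqlbinlog --database=db_name \
    --start-datetime="2006-10-01 00:00:00" --stop-datetime="2006-10-01 12:00:00" \
    /tmp/binlog.000001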
When restoring, you can use statements similar to the following:
/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name
Use the SQL statements output by mysqlbinlog directly as input to execute it.
If you have an idle machine, you might as well back up this way. Since the performance demands on a slave are relatively low, the cost is small: for little money you get incremental backups and can offload part of the query load as well. Why not?
5. Direct backup of data files
Compared with the previous methods, backing up the data files directly is the most direct, fast, and convenient; the drawback is that incremental backup is basically impossible. To guarantee data consistency, execute the following SQL statement before backing up the files:
FLUSH TABLES WITH READ LOCK;
That is, flush all data in memory to disk and lock the tables, so that no new data can be written during the copy; release the lock with UNLOCK TABLES once the copy is finished, as sketched below. Data backed up this way is also very simple to restore: just copy it back into the original database directory.
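A minimal sketch of the whole procedure (the paths are illustrative; keep the mysql session that holds the lock open until the copy finishes):
mysql> FLUSH TABLES WITH READ LOCK;
(in another terminal)
root# cp -a /usr/local/mysql/data/db_name /backup/
mysql> UNLOCK TABLES;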
Note that for InnoDB tables you also need to back up the log files, i.e. the ib_logfile* files, because if an InnoDB table is damaged, these log files may be what recovery depends on.
6. Backup strategy
For a system with a medium level of business activity, the backup strategy can be set as follows: a full backup the first time, then one incremental backup per day plus one full backup per week, and so on in that cycle. For important and busy systems, you may need a full backup once a day and an incremental backup once an hour, or even more often. To achieve online backup and incremental backup without hurting the live service, the best approach is to use the master-slave replication mechanism (replication) and take the backups on the slave; a possible crontab schedule is sketched below.
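Purely as a sketch, such a strategy could be wired up with crontab entries like these (paths, account, and times are all illustrative):
# full dump every Sunday at 03:00, recording the binlog position
0 3 * * 0 /usr/local/mysql/bin/mysqldump -uyejr -pyejr --single-transaction --flush-logs --master-data=1 db_name > /backup/db_name-full.sql
# other days: rotate the binlog so the closed files can be copied away as the increment
0 3 * * 1-6 /usr/local/mysql/bin/mysql -uyejr -pyejr -e "FLUSH LOGS;"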
7. Data maintenance and disaster recovery
As a DBA (I am not one yet, haha), one of the most important jobs is to ensure that data tables can be used safely, stably, and fast. For that, the tables need regular maintenance. The following SQL statements are useful:
CHECK TABLE or REPAIR TABLE: check or repair a MyISAM table
OPTIMIZE TABLE: optimize a MyISAM table
ANALYZE TABLE: analyze a MyISAM table
Of course, all of the above can also be done with the myisamchk tool (a small example follows); I won't go into detail here.
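For instance, to check and, if needed, repair a MyISAM table offline with myisamchk (the path is illustrative; make sure mysqld is not using the table, e.g. stop the server or flush and lock the table first):
myisamchk -r /usr/local/mysql/data/db_name/tbl_name.MYI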
InnoDB tables can be defragmented, which also improves index performance, by executing the following statement:
ALTER TABLE tbl_name ENGINE = InnoDB;
This is effectively a null operation: on the surface it changes nothing, but underneath it rebuilds the table and tidies up the fragmentation.
Damaged MyISAM tables can usually be restored with the methods described above, and if an index is broken, the myisamchk tool can rebuild it. For InnoDB it is not so straightforward, because it stores all tables in a single tablespace. However, InnoDB has a checkpoint mechanism called fuzzy checkpointing: as long as the log files are intact, errors can be repaired from them. You can add the following parameter to my.cnf to make mysqld attempt recovery from the log files when it starts:
innodb_force_recovery = 4
See the manual for information on this parameter.
8. Summary
Making data backups and working out a suitable backup strategy is only a small part of what a DBA does. Everything is hard at the beginning; just get started.