XTRABACKUP(1) | Percona XtraBackup | XTRABACKUP(1)
xtrabackup - Percona XtraBackup 8.0 Documentation
The xtrabackup binary is a compiled C program that is linked with the InnoDB libraries and the standard MySQL client libraries.
xtrabackup enables point-in-time backups of InnoDB / XtraDB tables together with the schema definitions, MyISAM tables, and other portions of the server.
The InnoDB libraries provide the functionality to apply a log to data files. The MySQL client libraries are used to parse command-line options and configuration files.
The tool runs in either --backup or --prepare mode, corresponding to the two main functions it performs. There are several variations on these functions to accomplish different tasks, and there are two less commonly used modes, --stats and --print-param.
xtrabackup supports incremental backups. It copies only the data that has changed since the last full backup. You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or full backups every day and incremental backups every hour.
NOTE:
Incremental backups work because each InnoDB page (usually 16KB in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed. An incremental backup copies each page whose LSN is newer than the previous incremental or full backup's LSN. There are two algorithms in use to find the set of such pages to be copied. The first one, available with all the server types and versions, is to check the page LSN directly by reading all the data pages. The second one, available with Percona Server for MySQL, is to enable the changed page tracking feature on the server, which notes pages as they are changed. This information is then written out in a separate, compact file called a bitmap file. The xtrabackup binary uses that file to read only the data pages it needs for the incremental backup, potentially saving many read requests. The latter algorithm is enabled by default if the xtrabackup binary finds the bitmap file. It is possible to specify --incremental-force-scan to read all the pages even if the bitmap data is available.
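The full-scan algorithm can be pictured in a few lines: keep every page whose LSN is newer than the previous backup's to_lsn. This is a conceptual sketch only, not xtrabackup's implementation (real pages are 16KB and carry the LSN in the page header):

```python
# Conceptual sketch of LSN-based incremental page selection.
# A "page" here is just a dict with a page number and its last-change LSN.

def pages_to_copy(pages, last_backup_lsn):
    """Return pages whose LSN is newer than the previous backup's to_lsn."""
    return [page for page in pages if page["lsn"] > last_backup_lsn]

pages = [
    {"page_no": 0, "lsn": 1291135},  # unchanged since the base backup
    {"page_no": 1, "lsn": 1291200},  # modified after the base backup
    {"page_no": 2, "lsn": 1291340},  # modified after the base backup
]
changed = pages_to_copy(pages, last_backup_lsn=1291135)
```

With the base backup's to_lsn of 1291135, only pages 1 and 2 would be copied into the incremental.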
Incremental backups do not actually compare the data files to the previous backup's data files. In fact, you can use --incremental-lsn to perform an incremental backup without even having the previous backup, if you know its LSN. Incremental backups simply read the pages and compare their LSN to the last backup's LSN. You still need a full backup to recover the incremental changes, however; without a full backup to act as a base, the incremental backups are useless.
To make an incremental backup, begin with a full backup as usual. The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup's target directory. This file contains a line showing the to_lsn, which is the database's LSN at the end of the backup. Create the full backup with a command such as the following:
$ xtrabackup --backup --target-dir=/data/backups/base --datadir=/var/lib/mysql/
If you look at the xtrabackup_checkpoints file, you should see contents similar to the following:
backup_type = full-backuped
from_lsn = 0
to_lsn = 1291135
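Because xtrabackup_checkpoints is a plain key = value text file, the to_lsn can easily be read programmatically, for example to feed a later --incremental-lsn. A small helper sketch (not part of xtrabackup):

```python
def parse_checkpoints(text):
    """Parse xtrabackup_checkpoints 'key = value' lines into a dict."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip()
    return info

sample = """backup_type = full-backuped
from_lsn = 0
to_lsn = 1291135
"""
cp = parse_checkpoints(sample)
```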
Now that you have a full backup, you can make an incremental backup based on it. Use a command such as the following:
$ xtrabackup --backup --target-dir=/data/backups/inc1 \
  --incremental-basedir=/data/backups/base --datadir=/var/lib/mysql/
The /data/backups/inc1/ directory should now contain delta files, such as ibdata1.delta and test/table1.ibd.delta. These represent the changes since the LSN 1291135. If you examine the xtrabackup_checkpoints file in this directory, you should see something similar to the following:
backup_type = incremental
from_lsn = 1291135
to_lsn = 1291340
The meaning should be self-evident. It's now possible to use this directory as the base for yet another incremental backup:
$ xtrabackup --backup --target-dir=/data/backups/inc2 \
  --incremental-basedir=/data/backups/inc1 --datadir=/var/lib/mysql/
The --prepare step for incremental backups is not the same as for normal backups. In normal backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back. You must skip the rollback of uncommitted transactions when preparing a backup, because transactions that were uncommitted at the time of your backup may be in progress, and it is likely that they will be committed in the next incremental backup. You should use the --apply-log-only option to prevent the rollback phase.
Beginning with the full backup you created, you can prepare it, and then apply the incremental differences to it. Recall that you have the following backups:
/data/backups/base
/data/backups/inc1
/data/backups/inc2
To prepare the base backup, you need to run --prepare as usual, but prevent the rollback phase:
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
The output should end with some text such as the following:
101107 20:49:43 InnoDB: Shutdown completed; log sequence number 1291135
The log sequence number should match the to_lsn of the base backup, which you saw previously.
This backup is actually safe to restore as-is now, even though the rollback phase has been skipped. If you restore it and start MySQL, InnoDB will detect that the rollback phase was not performed, and it will do that in the background, as it usually does for a crash recovery upon start. It will notify you that the database was not shut down normally.
To apply the first incremental backup to the full backup, you should use the following command:
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
  --incremental-dir=/data/backups/inc1
This applies the delta files to the files in /data/backups/base, which rolls them forward in time to the time of the incremental backup. It then applies the redo log as usual to the result. The final data is in /data/backups/base, not in the incremental directory. You should see some output such as the following:
incremental backup from 1291135 is enabled.
xtrabackup: cd to /data/backups/base/
xtrabackup: This target seems to be already prepared.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(1291340)
Applying /data/backups/inc1/ibdata1.delta ...
Applying /data/backups/inc1/test/table1.ibd.delta ...
.... snip
101107 20:56:30 InnoDB: Shutdown completed; log sequence number 1291340
Again, the LSN should match what you saw from your earlier inspection of the first incremental backup. If you restore the files from /data/backups/base, you should see the state of the database as of the first incremental backup.
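Conceptually, applying an incremental rolls the base backup forward by overwriting each changed page with the newer copy from the delta. The sketch below is a toy model of that merge, not the on-disk .delta format:

```python
def apply_delta(base_pages, delta_pages):
    """Overwrite base pages with the newer versions captured in the delta.
    Pages are modeled as page_no -> (lsn, data); new pages are added too."""
    merged = dict(base_pages)
    merged.update(delta_pages)  # delta pages are strictly newer
    return merged

base = {0: (1291135, b"old0"), 1: (1291000, b"old1")}
delta = {1: (1291340, b"new1"), 2: (1291340, b"new2")}  # changes since the base
restored = apply_delta(base, delta)
```

After the merge, page 1 carries its newer contents, page 2 (created after the base backup) exists, and untouched page 0 is unchanged, which mirrors why the final data ends up in the base directory.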
Preparing the second incremental backup is a similar process: apply the deltas to the (modified) base backup, and you will roll its data forward in time to the point of the second incremental backup:
xtrabackup --prepare --target-dir=/data/backups/base \
  --incremental-dir=/data/backups/inc2
NOTE:
If you wish to avoid the notice that InnoDB was not shut down normally, once you have applied the desired deltas to the base backup, you can run --prepare again without disabling the rollback phase.
After preparing the incremental backups, the base directory contains the same data as the full backup. To restore this backup, you can use this command:
xtrabackup --copy-back --target-dir=BASE-DIR
You may have to change the ownership as detailed in restoring_a_backup.
Incremental streaming backups can be performed with the xbstream streaming option. Currently, backups are packed in the custom xbstream format. With this feature, you also need to take a base backup. Make a base backup:
$ xtrabackup --backup --target-dir=/data/backups
$ xtrabackup --backup --incremental-lsn=LSN-number --stream=xbstream --target-dir=./ > incremental.xbstream
$ xbstream -x < incremental.xbstream
$ xtrabackup --backup --incremental-lsn=LSN-number --stream=xbstream --target-dir=./ | \
  ssh user@hostname "cat - | xbstream -x -C /backup-dir/"
xtrabackup supports taking partial backups when the innodb_file_per_table option is enabled. There are three ways to create partial backups:
WARNING:
Restoring partial backups should be done by importing the tables, not by using the --copy-back option. It is not recommended to run incremental backups after running a partial backup.
Although there are some scenarios where restoring can be done by copying back the files, this may lead to database inconsistencies in many cases and it is not a recommended way to do it.
For the purposes of this manual page, we will assume that there is a database named test which contains tables named t1 and t2.
There are multiple ways of specifying which part of the whole data is backed up:
The first method involves the xtrabackup --tables option. The option's value is a regular expression that is matched against the fully-qualified database name and table name using the databasename.tablename format.
To back up only tables in the test database, use the following command:
$ xtrabackup --backup --datadir=/var/lib/mysql --target-dir=/data/backups/ \
  --tables="^test[.].*"
To back up only the test.t1 table, use the following command:
$ xtrabackup --backup --datadir=/var/lib/mysql --target-dir=/data/backups/ \
  --tables="^test[.]t1"
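Before running a backup, you can sanity-check a --tables pattern with Python's re module. This sketch assumes unanchored re.search-style matching against the fully-qualified name, which approximates (but is not guaranteed to be identical to) xtrabackup's regex engine:

```python
import re

def table_selected(pattern, database, table):
    """Approximation of --tables matching: search the regex against the
    databasename.tablename string (assumed semantics, for sanity checks)."""
    return re.search(pattern, f"{database}.{table}") is not None

# The examples above: select the whole test database, or only test.t1.
```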
The --tables-file option specifies a file that can contain multiple table names, one table name per line in the file. Only the tables named in the file will be backed up. Names are matched exactly, case-sensitive, with no pattern or regular expression matching. The table names must be fully-qualified in databasename.tablename format.
$ echo "mydatabase.mytable" > /tmp/tables.txt
$ xtrabackup --backup --tables-file=/tmp/tables.txt
The --databases option accepts a space-separated list of the databases and tables to back up, in the databasename[.tablename] format. In addition to this list, make sure to specify the mysql, sys, and performance_schema databases. These databases are required when restoring the databases using xtrabackup --copy-back.
$ xtrabackup --databases='mysql sys performance_schema test ...'
The --databases-file option specifies a file that can contain multiple databases and tables in the databasename[.tablename] format, one element name per line in the file. Names are matched exactly, case-sensitive, with no pattern or regular expression matching.
The procedure is analogous to restoring individual tables: apply the logs and use the --export option:
$ xtrabackup --prepare --export --target-dir=/path/to/partial/backup
When you use the --prepare option on a partial backup, you will see warnings about tables that don't exist. This is because these tables exist in the data dictionary inside InnoDB, but the corresponding .ibd files don't exist. They were not copied into the backup directory. These tables will be removed from the data dictionary, and when you restore the backup and start InnoDB, they will no longer exist and will not cause any errors or warnings to be printed to the log file.
Use --innodb-directories to find the tablespace files. If that fails then use --innodb-force-recovery=1 to ignore this and to permanently lose all changes to the missing tablespace(s).
Restoring should be done by restoring individual tables in the partial backup to the server.
It can also be done by copying back the prepared backup to a "clean" datadir (in that case, make sure to include the mysql database) to the datadir you are moving the backup to. A system database can be created with the following:
$ sudo mysqld --initialize --user=mysql
Once you start the server, you may see mysql complaining about missing tablespaces:
2021-07-19T12:42:11.077200Z 1 [Warning] [MY-012351] [InnoDB] Tablespace 4, name 'test1/t1', file './d2/test1.ibd' is missing!
2021-07-19T12:42:11.077300Z 1 [Warning] [MY-012351] [InnoDB] Tablespace 4, name 'test1/t1', file './d2/test1.ibd' is missing!
In order to clean the orphan database from the data dictionary, you must manually create the missing database directory and then DROP this database from the server.
Example of creating the missing database:
$ mkdir /var/lib/mysql/test1/d2
Example of dropping the database from the server:
mysql> DROP DATABASE d2;
Query OK, 2 rows affected (0.5 sec)
The xtrabackup binary can analyze InnoDB data files in read-only mode to give statistics about them. To do this, you should use the --stats option. You can combine this with the --tables option to limit the files to examine. It also honors the --use-memory option.
You can perform the analysis on a running server, with some chance of errors due to the data being changed during analysis. Or, you can analyze a backup copy of the database. Either way, to use the statistics feature, you need a clean copy of the database including correctly sized log files, so you need to execute with --prepare twice to use this functionality on a backup.
The result of running on a backup might look like the following:
<INDEX STATISTICS>
  table: test/table1, index: PRIMARY, space id: 12, root page 3
  estimated statistics in dictionary:
    key vals: 25265338, leaf pages 497839, size pages 498304
  real statistics:
     level 2 pages: pages=1, data=5395 bytes, data/pages=32%
     level 1 pages: pages=415, data=6471907 bytes, data/pages=95%
        leaf pages: recs=25958413, pages=497839, data=7492026403 bytes, data/pages=91%
This can be interpreted as follows:
A more detailed example is posted as a MySQL Performance Blog post.
The following script can be used to summarize and tabulate the output of the statistics information:
tabulate-xtrabackup-stats.pl

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
my $script_version = "0.1";

my $PG_SIZE = 16_384; # InnoDB defaults to 16k pages, change if needed.
my ($cur_idx, $cur_tbl);
my (%idx_stats, %tbl_stats);
my ($max_tbl_len, $max_idx_len) = (0, 0);
while ( my $line = <> ) {
   if ( my ($t, $i) = $line =~ m/table: (.*), index: (.*), space id:/ ) {
      $t =~ s!/!.!;
      $cur_tbl = $t;
      $cur_idx = $i;
      if ( length($i) > $max_idx_len ) {
         $max_idx_len = length($i);
      }
      if ( length($t) > $max_tbl_len ) {
         $max_tbl_len = length($t);
      }
   }
   elsif ( my ($kv, $lp, $sp) = $line =~ m/key vals: (\d+), \D*(\d+), \D*(\d+)/ ) {
      @{$idx_stats{$cur_tbl}->{$cur_idx}}{qw(est_kv est_lp est_sp)} = ($kv, $lp, $sp);
      $tbl_stats{$cur_tbl}->{est_kv} += $kv;
      $tbl_stats{$cur_tbl}->{est_lp} += $lp;
      $tbl_stats{$cur_tbl}->{est_sp} += $sp;
   }
   elsif ( my ($l, $pages, $bytes)
             = $line =~ m/(?:level (\d+)|leaf) pages:.*pages=(\d+), data=(\d+) bytes/ ) {
      $l ||= 0;
      $idx_stats{$cur_tbl}->{$cur_idx}->{real_pages} += $pages;
      $idx_stats{$cur_tbl}->{$cur_idx}->{real_bytes} += $bytes;
      $tbl_stats{$cur_tbl}->{real_pages} += $pages;
      $tbl_stats{$cur_tbl}->{real_bytes} += $bytes;
   }
}

my $hdr_fmt = "%${max_tbl_len}s %${max_idx_len}s %9s %10s %10s\n";
my @headers = qw(TABLE INDEX TOT_PAGES FREE_PAGES PCT_FULL);
printf $hdr_fmt, @headers;

my $row_fmt = "%${max_tbl_len}s %${max_idx_len}s %9d %10d %9.1f%%\n";
foreach my $t ( sort keys %tbl_stats ) {
   my $tbl = $tbl_stats{$t};
   printf $row_fmt, $t, "", $tbl->{est_sp}, $tbl->{est_sp} - $tbl->{real_pages},
      $tbl->{real_bytes} / ($tbl->{real_pages} * $PG_SIZE) * 100;
   foreach my $i ( sort keys %{$idx_stats{$t}} ) {
      my $idx = $idx_stats{$t}->{$i};
      printf $row_fmt, $t, $i, $idx->{est_sp}, $idx->{est_sp} - $idx->{real_pages},
         $idx->{real_bytes} / ($idx->{real_pages} * $PG_SIZE) * 100;
   }
}
The output of the above Perl script, when run against the sample shown in the previously mentioned blog post, will appear as follows:
          TABLE           INDEX TOT_PAGES FREE_PAGES  PCT_FULL
art.link_out104                    832383      38561     86.8%
art.link_out104         PRIMARY    498304         49     91.9%
art.link_out104       domain_id     49600       6230     76.9%
art.link_out104     domain_id_2     26495       3339     89.1%
art.link_out104 from_message_id     28160        142     96.3%
art.link_out104    from_site_id     38848       4874     79.4%
art.link_out104   revert_domain    153984      19276     71.4%
art.link_out104    site_message     36992       4651     83.4%
The columns are the table and index, followed by the total number of pages in that index, the number of pages not actually occupied by data, and the number of bytes of real data as a percentage of the total size of the pages of real data. The first line in the above output, in which the INDEX column is empty, is a summary of the entire table.
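The FREE_PAGES and PCT_FULL columns can be reproduced by hand from the raw statistics. The sketch below uses the PRIMARY index numbers from the sample output above and assumes the 16KB default page size:

```python
PG_SIZE = 16_384  # default InnoDB page size in bytes

def index_fill_stats(est_size_pages, real_pages, real_bytes):
    """Free pages and percent-full, as computed by the tabulation script."""
    free_pages = est_size_pages - real_pages
    pct_full = real_bytes / (real_pages * PG_SIZE) * 100
    return free_pages, pct_full

# PRIMARY index from the sample: 1 + 415 + 497839 pages across the levels,
# 5395 + 6471907 + 7492026403 bytes of real data, 498304 size pages.
free, pct = index_fill_stats(498304, 1 + 415 + 497839, 5395 + 6471907 + 7492026403)
```

This reproduces the PRIMARY row of the table: 49 free pages and 91.9% full.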
The xtrabackup binary integrates with the log_status table. This integration enables xtrabackup to print out the backup's corresponding binary log position, so that you can use this binary log position to provision a new replica or perform point-in-time recovery.
You can find the binary log position corresponding to a backup after the backup has been taken. If your backup is from a server with binary logging enabled, xtrabackup creates a file named xtrabackup_binlog_info in the target directory. This file contains the binary log file name and position of the exact point when the backup was taken.
The output is similar to the following during the backup stage:
210715 14:14:59 Backup created in directory '/backup/'
MySQL binlog position: filename 'binlog.000002', position '156'
. . .
210715 14:15:00 completed OK!
To perform a point-in-time recovery from an xtrabackup backup, you should prepare and restore the backup, and then replay binary logs from the point shown in the xtrabackup_binlog_info file.
A more detailed procedure is found here.
To set up a new replica, you should prepare the backup, and restore it to the data directory of your new replication replica. If you are using version 8.0.22 or earlier, in your CHANGE MASTER TO command, use the binary log filename and position shown in the xtrabackup_binlog_info file to start replication.
If you are using 8.0.23 or later, use CHANGE REPLICATION SOURCE TO and the appropriate options. CHANGE MASTER TO is deprecated.
A more detailed procedure is found in ../howtos/setting_up_replication.
Percona XtraBackup can export a table that is contained in its own .ibd file. With Percona XtraBackup, you can export individual tables from any InnoDB database, and import them into Percona Server for MySQL with XtraDB or MySQL 8.0. The source doesn't have to be XtraDB or MySQL 8.0, but the destination does. This method only works on individual .ibd files.
The following example exports and imports the following table:
CREATE TABLE export_test (
  a int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The table was created in innodb_file_per_table mode, so after taking a backup as usual with the --backup option, the .ibd file exists in the target directory:
$ find /data/backups/mysql/ -name export_test.*
/data/backups/mysql/test/export_test.ibd
When you prepare the backup, add the --export option to the command. Here is an example:
$ xtrabackup --prepare --export --target-dir=/data/backups/mysql/
NOTE:
If the table is encrypted, also specify the keyring file, for example:
$ xtrabackup --prepare --export --target-dir=/tmp/table \
  --keyring-file-data=/var/lib/mysql-keyring/keyring
Now you should see an .exp file in the target directory:
$ find /data/backups/mysql/ -name export_test.*
/data/backups/mysql/test/export_test.exp
/data/backups/mysql/test/export_test.ibd
/data/backups/mysql/test/export_test.cfg
These three files are the only files required to import the table into a server running Percona Server for MySQL with XtraDB or MySQL 8.0. If the server uses InnoDB Tablespace Encryption, an additional .cfp file, which contains the transfer key and an encrypted tablespace key, is added to the backup.
NOTE:
A tablespace is imported successfully even if the table is from another server, but InnoDB performs a schema validation if the corresponding .cfg file is located in the same directory.
On the destination server running Percona Server for MySQL with XtraDB and the innodb_import_table_from_xtrabackup option enabled, or MySQL 8.0, create a table with the same structure, discard its tablespace, copy the exported files into the database directory, and import the tablespace. If the destination server does not support importing the tablespace, the import fails with an error such as the following:
ERROR 1030 (HY000): Got error -1 from storage engine
The table is imported, and you can run a SELECT to see the imported data.
Percona XtraBackup includes a saved buffer pool dump in the backup to reduce the warm-up time. The server restores the buffer pool state from the ib_buffer_pool file after restart. Percona XtraBackup discovers the ib_buffer_pool file and backs it up automatically.
If the buffer pool restore option is enabled in my.cnf, the buffer pool will be in the warm state after the backup is restored.
Percona XtraBackup supports streaming mode. Streaming mode sends a backup to STDOUT in the xbstream format instead of copying the files to the backup directory.
This method allows you to use other programs to filter the output of the backup, providing greater flexibility for storage of the backup. For example, compression is achieved by piping the output to a compression utility. One of the benefits of streaming backups and using Unix pipes is that the backups can be automatically encrypted.
To use the streaming feature, you must use the --stream option, providing the format of the stream (xbstream) and where to store the temporary files:
$ xtrabackup --stream=xbstream --target-dir=/tmp
xtrabackup uses xbstream to stream all of the data files to STDOUT, in a special xbstream format. After it finishes streaming all of the data files to STDOUT, it streams the saved log file too.
When compression is enabled, xtrabackup compresses the output data, except for the meta and non-InnoDB files, using the specified compression algorithm. The only currently supported algorithm is quicklz. The resulting files have the qpress archive format, i.e. every *.qp file produced by xtrabackup is essentially a one-file qpress archive that can be extracted and uncompressed by the qpress file archiver, which is available from the Percona Software repositories.
Using xbstream as a stream option, backups can be copied and compressed in parallel. This option can significantly improve the speed of the backup process. In case backups were both compressed and encrypted, they must be decrypted before they are uncompressed.
Task: Stream the backup into an archive named backup.xbstream
  $ xtrabackup --backup --stream=xbstream --target-dir=./ > backup.xbstream

Task: Stream the backup into a compressed archive named backup.xbstream
  $ xtrabackup --backup --stream=xbstream --compress --target-dir=./ > backup.xbstream

Task: Encrypt the backup
  $ xtrabackup --backup --stream=xbstream --target-dir=./ | gzip - | openssl des3 -salt -k "password" > backup.xbstream.gz.des3

Task: Unpack the backup to the current directory
  $ xbstream -x < backup.xbstream

Task: Send the backup compressed directly to another host and unpack it
  $ xtrabackup --backup --compress --stream=xbstream --target-dir=./ | ssh user@otherhost "xbstream -x"

Task: Send the backup to another server using netcat
  On the destination host:
  $ nc -l 9999 | cat - > /data/backups/backup.xbstream
  On the source host:
  $ xtrabackup --backup --stream=xbstream --target-dir=./ | nc desthost 9999

Task: Send the backup to another server using a one-liner
  $ ssh user@desthost "( nc -l 9999 > /data/backups/backup.xbstream & )" && xtrabackup --backup --stream=xbstream --target-dir=./ | nc desthost 9999

Task: Throttle the throughput to 10MB/sec using the pipe viewer tool
  $ xtrabackup --backup --stream=xbstream --target-dir=./ | pv -q -L10m | ssh user@desthost "cat - > /data/backups/backup.xbstream"

Task: Checksum the backup during streaming
  On the destination host:
  $ nc -l 9999 | tee >(sha1sum > destination_checksum) > /data/backups/backup.xbstream
  On the source host:
  $ xtrabackup --backup --stream=xbstream --target-dir=./ | tee >(sha1sum > source_checksum) | nc desthost 9999
  Compare the checksums on the source host:
  $ cat source_checksum
  65e4f916a49c1f216e0887ce54cf59bf3934dbad  -
  Compare the checksums on the destination host:
  $ cat destination_checksum
  65e4f916a49c1f216e0887ce54cf59bf3934dbad  -

Task: Parallel compression with parallel copying backup
  $ xtrabackup --backup --compress --compress-threads=8 --stream=xbstream --parallel=4 --target-dir=./ > backup.xbstream
Note that the streamed backup will need to be prepared before restoration. Streaming mode does not prepare the backup.
Percona XtraBackup supports encrypting and decrypting local and streaming backups with the xbstream option, adding another layer of protection. The encryption is implemented using the libgcrypt library from GnuPG.
To make an encrypted backup the following options need to be specified (options --encrypt-key and --encrypt-key-file are mutually exclusive, i.e. just one of them needs to be provided):
Both the --encrypt-key option and --encrypt-key-file option can be used to specify the encryption key. An encryption key can be generated with a command like openssl rand -base64 32
Example output of that command should look like this:
U2FsdGVkX19VPN7VM+lwNI0fePhjgnhgqmDBqbF3Bvs=
This value can then be used as the encryption key.
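An AES256 key is 32 random bytes, and the openssl invocation above simply base64-encodes them. The same shape can be generated or sanity-checked from Python (a convenience sketch, not part of XtraBackup):

```python
import base64
import os

def make_key():
    """32 random bytes, base64-encoded -- same shape as `openssl rand -base64 32`."""
    return base64.b64encode(os.urandom(32)).decode()

def is_valid_key(key):
    """Check that a key decodes to exactly 32 bytes (the AES-256 key size)."""
    try:
        return len(base64.b64decode(key)) == 32
    except Exception:
        return False
```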
An example of the xtrabackup command using --encrypt-key looks like this:
$ xtrabackup --backup --encrypt=AES256 --encrypt-key="U2FsdGVkX19VPN7VM+lwNI0fePhjgnhgqmDBqbF3Bvs=" --target-dir=/data/backup
Use the --encrypt-key-file option as follows:
$ xtrabackup --backup --encrypt=AES256 --encrypt-key-file=/data/backups/keyfile --target-dir=/data/backup
NOTE:
Two new options are available for encrypted backups that can be used to speed up the encryption process. These are --encrypt-threads and --encrypt-chunk-size. By using the --encrypt-threads option multiple threads can be specified to be used for encryption in parallel. Option --encrypt-chunk-size can be used to specify the size (in bytes) of the working encryption buffer for each encryption thread (default is 64K).
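The effect of --encrypt-chunk-size and --encrypt-threads can be pictured as splitting the stream into fixed-size buffers that worker threads process independently. This is a toy model only: XOR stands in for the real cipher, and the real tool uses libgcrypt:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64 * 1024  # default --encrypt-chunk-size (64K)

def chunks(data, size=CHUNK_SIZE):
    """Split the byte stream into fixed-size working buffers."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def fake_encrypt(chunk):
    """Stand-in for per-chunk cipher work (XOR -- NOT real encryption)."""
    return bytes(b ^ 0x5A for b in chunk)

def parallel_encrypt(data, threads=4):
    """Process chunks on a thread pool and reassemble them in order."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return b"".join(pool.map(fake_encrypt, chunks(data)))

data = bytes(range(256)) * 1024  # 256 KiB -> four 64 KiB chunks
```

Because pool.map preserves order, the reassembled stream is identical to serial processing, which is what lets each encryption thread work on its own buffer.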
Backups can be decrypted with xbcrypt. The following one-liner can be used to decrypt all files in a folder:
$ for i in `find . -iname "*\.xbcrypt"`; do xbcrypt -d --encrypt-key-file=/root/secret_key --encrypt-algo=AES256 < $i > $(dirname $i)/$(basename $i .xbcrypt) && rm $i; done
Alternatively, the Percona XtraBackup --decrypt option can be used to decrypt the backups:
$ xtrabackup --decrypt=AES256 --encrypt-key="U2FsdGVkX19VPN7VM+lwNI0fePhjgnhgqmDBqbF3Bvs=" --target-dir=/data/backup/
Percona XtraBackup doesn't automatically remove the encrypted files. In order to clean up the backup directory users should remove the *.xbcrypt files.
NOTE:
When the files are decrypted, the backup can be prepared.
After the backups have been decrypted, they can be prepared in the same way as the standard full backups with the --prepare option:
$ xtrabackup --prepare --target-dir=/data/backup/
xtrabackup offers the --copy-back option to restore a backup to the server's datadir:
$ xtrabackup --copy-back --target-dir=/data/backup/
It will copy all the data-related files back to the server's datadir, determined by the server's my.cnf configuration file. You should check the last line of the output for a success message:
150318 11:08:13 xtrabackup: completed OK!
The FLUSH TABLES WITH READ LOCK statement closes all open tables and locks all tables for all databases with a global read lock.
Release the lock with UNLOCK TABLES.
NOTE:
To ensure consistent backups, use the FLUSH TABLES WITH READ LOCK option before taking a non-InnoDB file backup. The option does not affect long-running queries.
Long-running queries with FLUSH TABLES WITH READ LOCK enabled can leave the server in a read-only mode until the queries finish. Killing the FLUSH TABLES WITH READ LOCK does not help if the database is in either the Waiting for table flush or Waiting for master to send event state. To return to normal operation, you must kill any long-running queries.
In order to prevent this from happening, two things have been implemented:
You should issue the global lock when no long queries are running. Waiting to issue the global lock for an extended period of time is not a good method, as the wait can extend the time needed for the backup. The --ftwrl-wait-timeout option can limit the waiting time: if xtrabackup cannot issue the lock within this time, it exits with an error message and the backup is not taken.
The default value for this option is zero (0), which turns off the option.
Another possibility is to specify the type of query to wait on, using --ftwrl-wait-query-type. Possible values are all and update. When all is used, xtrabackup will wait for all long-running queries (execution time longer than allowed by --ftwrl-wait-threshold) to finish before issuing FLUSH TABLES WITH READ LOCK. When update is used, xtrabackup will wait for UPDATE/ALTER/REPLACE/INSERT queries to finish.
The time needed for a specific query to complete is hard to predict. We assume that long-running queries will not finish in a timely manner, while queries which have run only for a short time will finish quickly. xtrabackup uses the value of the --ftwrl-wait-threshold option to identify the long-running queries that would block the global lock. In order to use this option, the xtrabackup user should have the PROCESS and SUPER privileges.
The second option is to kill all the queries which prevent the global lock from being acquired. In this case, all queries which have been running longer than FLUSH TABLES WITH READ LOCK are potential blockers. Although all queries can be killed, additional time can be given for short-running queries to finish using the --kill-long-queries-timeout option. This option specifies a time to wait for queries to complete; after this value is reached, all remaining queries will be killed. The default value is zero, which turns this feature off.
The --kill-long-query-type option can be used to specify whether all queries, or only SELECT queries, that are preventing the global lock from being acquired should be killed. In order to use this option, the xtrabackup user should have the PROCESS and SUPER privileges.
Running xtrabackup with the following options will cause it to spend no longer than 3 minutes waiting for all queries older than 40 seconds to complete:
$ xtrabackup --backup --ftwrl-wait-threshold=40 \
  --ftwrl-wait-query-type=all --ftwrl-wait-timeout=180 \
  --kill-long-queries-timeout=20 --kill-long-query-type=all \
  --target-dir=/data/backups/
After FLUSH TABLES WITH READ LOCK is issued, xtrabackup will wait 20 seconds for the lock to be acquired. If the lock has still not been acquired after 20 seconds, it will kill all queries which have been running longer than the FLUSH TABLES WITH READ LOCK.
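The kill decision configured by --kill-long-queries-timeout and --kill-long-query-type can be summarized in a small decision function (a deliberate simplification of xtrabackup's behaviour; in the real tool, query runtimes come from the server's process list):

```python
def queries_to_kill(queries, kill_timeout, kill_type="all"):
    """After FLUSH TABLES WITH READ LOCK has waited kill_timeout seconds,
    pick the still-running queries that block the global lock.
    queries: list of (runtime_seconds, statement_type) tuples."""
    blockers = []
    for runtime, stmt in queries:
        if runtime < kill_timeout:
            continue  # short queries are given time to finish
        if kill_type == "select" and stmt != "SELECT":
            continue  # in 'select' mode only SELECTs are candidates
        blockers.append((runtime, stmt))
    return blockers

# A hypothetical process list: (runtime in seconds, statement type)
processlist = [(45, "SELECT"), (5, "SELECT"), (120, "UPDATE")]
```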
Copying with the --parallel and --compress-threads Options
When making a local or streaming backup with the xbstream option, multiple files can be copied at the same time when using the --parallel option. This option specifies the number of threads created by xtrabackup to copy data files.
To take advantage of this option either the multiple tablespaces option must be enabled (innodb_file_per_table) or the shared tablespace must be stored in multiple ibdata files with the innodb_data_file_path option. Having multiple files for the database (or splitting one into many) doesn't have a measurable impact on performance.
As this feature is implemented at the file level, concurrent file transfer can sometimes increase I/O throughput when doing a backup on highly fragmented data files, due to the overlap of a greater number of random read requests. You should consider tuning the filesystem also to obtain the maximum performance (e.g. checking fragmentation).
If the data is stored on a single file, this option will have no effect.
To use this feature, simply add the option to a local backup, for example:
$ xtrabackup --backup --parallel=4 --target-dir=/path/to/backup
By using the xbstream in streaming backups, you can additionally speed up the compression process with the --compress-threads option. This option specifies the number of threads created by xtrabackup for parallel data compression. The default value for this option is 1.
To use this feature, simply add the option to a local backup, for example:
$ xtrabackup --backup --stream=xbstream --compress --compress-threads=4 --target-dir=./ > backup.xbstream
Before applying logs, compressed files will need to be uncompressed.
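For example, a compressed streaming backup can be unpacked with xbstream and then decompressed in place. The paths here are illustrative, and decompressing quicklz files requires the qpress utility to be installed:

```shell
# Unpack the xbstream archive into the target directory
$ xbstream -x -C /data/backup/ < backup.xbstream
# Decompress the .qp files in place before --prepare
$ xtrabackup --decompress --target-dir=/data/backup/
```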
The --rsync Option
In order to speed up the backup process and to minimize the time FLUSH TABLES WITH READ LOCK is blocking writes, use the --rsync option. When this option is specified, xtrabackup uses rsync to copy all non-InnoDB files instead of spawning a separate cp for each file, which can be much faster for servers with a large number of databases or tables. xtrabackup will call rsync twice: once before the FLUSH TABLES WITH READ LOCK and once during it, to minimize the time the read lock is held. During the second rsync call, it will only synchronize the changes to non-transactional data (if any) made since the first call, performed before the FLUSH TABLES WITH READ LOCK. Note that Percona XtraBackup will use backup locks where available as a lightweight alternative to FLUSH TABLES WITH READ LOCK. This feature is available in Percona Server for MySQL 5.6+. Percona XtraBackup uses this automatically to copy non-InnoDB data in order to avoid blocking DML queries that modify InnoDB tables.
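For example (the target directory is illustrative; note that --rsync cannot be combined with a streaming backup):

```shell
# Use rsync to copy non-InnoDB files and shorten the read-lock window
$ xtrabackup --backup --rsync --target-dir=/data/backup/
```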
NOTE:
Recovering up to a particular moment in the database's history can be done with xtrabackup and the binary logs of the server.
Note that the binary log contains the operations that modified the database from a point in the past. You need a full datadir as a base, and then you can apply a series of operations from the binary log to make the data match what it was at the point in time you want.
$ xtrabackup --backup --target-dir=/path/to/backup
$ xtrabackup --prepare --target-dir=/path/to/backup
For more details on these procedures, see creating_a_backup and preparing_a_backup.
Now, suppose that some time has passed and you want to restore the database to a certain point in the past, keeping in mind the constraint that you can only roll forward from the point where the snapshot was taken.
To find out the binary logging status of the server, execute the following queries:
mysql> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       126 |
| mysql-bin.000002 |      1306 |
| mysql-bin.000003 |       126 |
| mysql-bin.000004 |       497 |
+------------------+-----------+
and
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      497 |              |                  |
+------------------+----------+--------------+------------------+
The first query will tell you which files contain the binary log, and the second one which file is currently being used to record changes and the current position within it. Those files are usually stored in the datadir (unless another location is specified when the server is started with the --log-bin= option).
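If you are unsure where the server writes its binary logs, you can ask the server directly; this is standard MySQL behavior, not specific to xtrabackup:

```shell
# Show the binary log base name and related settings
$ mysql -e "SHOW VARIABLES LIKE 'log_bin%'"
```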
To find out the position of the snapshot taken, see the xtrabackup_binlog_info at the backup's directory:
$ cat /path/to/backup/xtrabackup_binlog_info
mysql-bin.000003      57
This will tell you which binary log file was in use at the moment of the backup, and its position. That position will be the effective one when you restore the backup:
$ xtrabackup --copy-back --target-dir=/path/to/backup
As the restoration will not affect the binary log files (you may need to adjust file permissions; see restoring_a_backup), the next step is extracting the queries from the binary log with mysqlbinlog, starting from the position of the snapshot, and redirecting the output to a file:
$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 > mybinlog.sql
Note that if you have multiple files for the binary log, as in the example, you have to extract the queries with a single mysqlbinlog invocation, as shown above, rather than running it once per file.
Inspect the file with the queries to determine which position or date corresponds to the point-in-time wanted. Once determined, pipe it to the server. Assuming the point is 11-12-25 01:00:00:
$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 --stop-datetime="11-12-25 01:00:00" | mysql -u root -p
and the database will be rolled forward up to that Point-In-Time.
There are options specific to backing up from a replication replica.
This option is useful when backing up a replication replica server. It prints the binary log position and name of the source server. It also writes this information to the xtrabackup_slave_info file as a CHANGE MASTER statement.
This option is useful for setting up a new replica for this source. You can start a replica server with this backup and issue the statement saved in the xtrabackup_slave_info file. More details of this procedure can be found in replication_howto.
In order to assure a consistent replication state, this option stops the replication SQL thread and waits to start backing up until Slave_open_temp_tables in SHOW STATUS is zero. If there are no open temporary tables, the backup will take place, otherwise the SQL thread will be started and stopped until there are no open temporary tables. The backup will fail if Slave_open_temp_tables does not become zero after --safe-slave-backup-timeout seconds (defaults to 300 seconds). The replication SQL thread will be restarted when the backup finishes.
Using this option is always recommended when taking backups from a replica server.
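For example, a replica backup combining this option with --slave-info might look like the following; the timeout value and target directory are illustrative:

```shell
# Back up a replica, waiting up to 500 seconds for open temp tables to clear
$ xtrabackup --backup --slave-info --safe-slave-backup \
    --safe-slave-backup-timeout=500 --target-dir=/data/backups/
```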
WARNING:
Percona XtraBackup supports storing the backups history on the server. This feature was implemented in Percona XtraBackup 2.2. Storing backup history on the server was implemented to provide users with additional information about backups that are being taken. Backup history information will be stored in the PERCONA_SCHEMA.XTRABACKUP_HISTORY table.
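For example, to record a backup in the history table under a series name (the series name nightly is illustrative):

```shell
# Record this backup in PERCONA_SCHEMA.xtrabackup_history under the series "nightly"
$ xtrabackup --backup --history=nightly --target-dir=/data/backups/
```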
To use this feature the following options are available:
NOTE:
If you want access to the backup history outside of your backup set in the case of some catastrophic event, you will need to either perform a mysqldump, a partial backup, or a SELECT * on the history table after xtrabackup completes, and store the results with your backup set.
For the necessary privileges, see pxb.privilege.
The PERCONA_SCHEMA.XTRABACKUP_HISTORY table
This table contains information about previous server backups. Information about a backup will only be written if the backup was taken with the --history option.
Column Name      | Description
-----------------+------------
uuid             | Unique backup id
name             | User-provided name of the backup series. There may be multiple entries with the same name, used to identify related backups in a series.
tool_name        | Name of the tool used to take the backup
tool_command     | Exact command line given to the tool, with --password and --encryption_key obfuscated
tool_version     | Version of the tool used to take the backup
ibbackup_version | Version of the xtrabackup binary used to take the backup
server_version   | Server version on which the backup was taken
start_time       | Time at the start of the backup
end_time         | Time at the end of the backup
lock_time        | Amount of time, in seconds, spent calling and holding locks for FLUSH TABLES WITH READ LOCK
binlog_pos       | Binlog file and position at the end of FLUSH TABLES WITH READ LOCK
innodb_from_lsn  | LSN at the beginning of the backup, which can be used to determine prior backups
innodb_to_lsn    | LSN at the end of the backup, which can be used as the starting LSN for the next incremental
partial          | Is this a partial backup; if N, it is a full backup
incremental      | Is this an incremental backup
format           | Description of the result format (xbstream)
compact          | Is this a compact backup
compressed       | Is this a compressed backup
encrypted        | Is this an encrypted backup
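A sketch of querying this table for the most recent backups; the column selection and LIMIT are illustrative:

```shell
# Show the five most recent recorded backups and their end LSNs
$ mysql -e "SELECT uuid, name, start_time, innodb_to_lsn \
    FROM PERCONA_SCHEMA.xtrabackup_history ORDER BY start_time DESC LIMIT 5"
```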
This page contains notes on various internal aspects of the xtrabackup tool's operation.
xtrabackup opens the source data files in read-write mode, although it does not modify the files. This means that you must run xtrabackup as a user who has permission to write the data files. The reason for opening the files in read-write mode is that xtrabackup uses the embedded InnoDB libraries to open and read the files, and InnoDB opens them in read-write mode because it normally assumes it is going to write to them.
Because xtrabackup reads large amounts of data from the filesystem, it uses posix_fadvise() where possible, to instruct the operating system not to try to cache the blocks it reads from disk. Without this hint, the operating system would prefer to cache the blocks, assuming that xtrabackup is likely to need them again, which is not the case. Caching such large files can place pressure on the operating system's virtual memory and cause other processes, such as the database server, to be swapped out. The xtrabackup tool avoids this with the following hint on both the source and destination files:
posix_fadvise(file, 0, 0, POSIX_FADV_DONTNEED)
In addition, xtrabackup asks the operating system to perform more aggressive read-ahead optimizations on the source files:
posix_fadvise(file, 0, 0, POSIX_FADV_SEQUENTIAL)
When copying the data files to the target directory, xtrabackup reads and writes 1 MB of data at a time. This is not configurable. When copying the log file, xtrabackup reads and writes 512 bytes at a time. This is also not possible to configure, and matches InnoDB's behavior (workaround exists in Percona Server for MySQL because it has an option to tune innodb_log_block_size for XtraDB, and in that case Percona XtraBackup will match the tuning).
After reading from the files, xtrabackup iterates over the 1MB buffer a page at a time, and checks for page corruption on each page with InnoDB's buf_page_is_corrupted() function. If the page is corrupt, it re-reads and retries up to 10 times for each page. It skips this check on the doublewrite buffer.
The xtrabackup binary exits with the traditional success value of 0 after a backup when no error occurs. If an error occurs during the backup, the exit value is 1.
In certain cases, the exit value can be something other than 0 or 1, due to the command-line option code included from the MySQL libraries. An unknown command-line option, for example, will cause an exit code of 255.
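In a backup script you can act on the exit value; this is a sketch with illustrative paths:

```shell
# Run the backup, capturing xtrabackup's log output (written to stderr)
$ xtrabackup --backup --target-dir=/data/backups/ 2> backup.log
# A non-zero exit status indicates the backup failed
$ if [ $? -ne 0 ]; then echo "Backup failed, see backup.log"; fi
```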
This page documents all of the command-line options for the xtrabackup binary.
You invoke xtrabackup in one of the following modes:
When you intend to run xtrabackup in any of these modes, use the following syntax:
$ xtrabackup [--defaults-file=#] --backup|--prepare|--copy-back|--stats [OPTIONS]
For example, the --prepare mode is applied as follows:
$ xtrabackup --prepare --target-dir=/data/backup/mysql/
For all modes, the default options are read from the xtrabackup and mysqld configuration groups from the following files in the given order:
As the first parameter to xtrabackup (in place of the --defaults-file option), you may supply one of the following:
There is a large group of InnoDB options that are normally read from the my.cnf configuration file, so that xtrabackup boots up its embedded InnoDB in the same configuration as your current server. You normally do not need to specify them explicitly. These options have the same behavior in InnoDB and XtraDB. See --innodb-miscellaneous for more information.
xtrabackup: Error: missing required privilege LOCK TABLES on *.*
xtrabackup: Warning: missing required privilege REPLICATION CLIENT on *.*
When using --compress=quicklz or --compress, the resulting files have the qpress archive format, i.e. every *.qp file produced by xtrabackup is essentially a one-file qpress archive and can be extracted and uncompressed by the qpress file archiver.
--compress=lz4 produces *.lz4 files. You can extract the contents of these files by using a program such as lz4.
SEE ALSO:
When combined with the --copy-back or --move-back option, --datadir refers to the destination directory.
Once connected to the server, in order to perform a backup you will need READ and EXECUTE permissions at a filesystem level in the server's datadir.
The --decompress option may be used with xbstream to decompress individual qpress files.
If you used the lz4 compression algorithm to compress the files (--compress=lz4), change the --decompress parameter accordingly: --decompress=lz4.
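For example, to decompress an lz4-compressed backup in place (the target directory is illustrative):

```shell
# Decompress *.lz4 files produced by --compress=lz4
$ xtrabackup --decompress=lz4 --target-dir=/data/backup/
```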
With --dump-innodb-buffer-pool, xtrabackup makes a request to the server to start a buffer pool dump at the beginning of a backup (the dump takes some time to complete and is done in the background), provided the status variable innodb_buffer_pool_dump_status reports that the previous dump has completed.
$ xtrabackup --backup --dump-innodb-buffer-pool --target-dir=/home/user/backup
By default, this option is set to OFF.
If innodb_buffer_pool_dump_status reports that a buffer pool dump is currently running, xtrabackup waits for the dump to complete, using the value of --dump-innodb-buffer-pool-timeout.
The file ib_buffer_pool stores tablespace ID and page ID data used to warm up the buffer pool sooner.
SEE ALSO:
This option is used in combination with --dump-innodb-buffer-pool. By default, it is set to 10 seconds.
This option is effective if --dump-innodb-buffer-pool option is set to ON. If this option contains a value, xtrabackup sets the MySQL system variable innodb_buffer_pool_dump_pct. As soon as the buffer pool dump completes or it is stopped (see --dump-innodb-buffer-pool-timeout), the value of the MySQL system variable is restored.
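For example, to dump the entire buffer pool while taking the backup (100 means 100 percent of the most recently used pages; the target directory is illustrative):

```shell
# Dump 100% of the buffer pool's page list during the backup
$ xtrabackup --backup --dump-innodb-buffer-pool \
    --dump-innodb-buffer-pool-pct=100 --target-dir=/data/backup/
```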
SEE ALSO:
It is passed directly to the xtrabackup child process. See the xtrabackup documentation for more details.
--generate-transition-key creates and adds to the keyring a transition key for xtrabackup to use if the master key used for encryption is not found because it has been rotated and purged.
SEE ALSO:
SEE ALSO:
NOTE:
As of Percona XtraBackup 8.0.22-15.0, using the --safe-slave-backup option stops the replication SQL thread before copying the InnoDB files.
NOTE:
The --no-server-version-check option disables the server version check.
The default behavior runs a check that compares the source system version to the Percona XtraBackup version. If the source system version is higher than the XtraBackup version, the backup is aborted with a message.
Adding the option overrides this check, and the backup proceeds, but there may be issues with the backup.
See comparison for more information.
When the automatic version check is enabled, xtrabackup performs a version check against the server on the backup stage after creating a server connection. xtrabackup sends the following information to the server:
Each piece of information has a unique identifier, which is an MD5 hash value that Percona Toolkit uses to obtain statistics about how it is used. This is a random UUID; no client information is collected or stored.
SEE ALSO:
If this option is a relative path, it is interpreted as being relative to the current working directory from which xtrabackup is executed.
In order to perform a backup, you need READ, WRITE, and EXECUTE permissions at a filesystem level for the directory that you supply as the value of --target-dir.
SEE ALSO:
If --transition-key does not have any value, xtrabackup will ask for it. The same passphrase should be specified for the --prepare command.
SEE ALSO:
Percona LLC and/or its affiliates
2009-2022, Percona LLC and/or its affiliates
November 30, 2023 | 8.0 |