PGCOPYDB FORK(1)                         pgcopydb                         PGCOPYDB FORK(1)
pgcopydb fork - Clone an entire database from source to target
The main pgcopydb operation is the clone operation, and for historical and user-friendliness reasons two aliases are available that implement the same operation:
   pgcopydb clone   Clone an entire database from source to target
   pgcopydb fork    Clone an entire database from source to target
The command pgcopydb clone copies a database from the given source Postgres instance to the target Postgres instance.
   pgcopydb clone: Clone an entire database from source to target
   usage: pgcopydb clone  --source ... --target ... [ --table-jobs ... --index-jobs ... ]

     --source                      Postgres URI to the source database
     --target                      Postgres URI to the target database
     --dir                         Work directory to use
     --table-jobs                  Number of concurrent COPY jobs to run
     --index-jobs                  Number of concurrent CREATE INDEX jobs to run
     --restore-jobs                Number of concurrent jobs for pg_restore
     --large-objects-jobs          Number of concurrent Large Objects jobs to run
     --split-tables-larger-than    Same-table concurrency size threshold
     --split-max-parts             Maximum number of jobs for Same-table concurrency
     --estimate-table-sizes        Allow using estimates for relation sizes
     --drop-if-exists              On the target database, clean-up from a previous run first
     --roles                       Also copy roles found on source to target
     --no-role-passwords           Do not dump passwords for roles
     --no-owner                    Do not set ownership of objects to match the original database
     --no-acl                      Prevent restoration of access privileges (grant/revoke commands)
     --no-comments                 Do not output commands to restore comments
     --no-tablespaces              Do not output commands to select tablespaces
     --skip-large-objects          Skip copying large objects (blobs)
     --skip-extensions             Skip restoring extensions
     --skip-ext-comments           Skip restoring COMMENT ON EXTENSION
     --skip-collations             Skip restoring collations
     --skip-vacuum                 Skip running VACUUM ANALYZE
     --skip-analyze                Skip running vacuumdb --analyze-only
     --skip-db-properties          Skip copying ALTER DATABASE SET properties
     --skip-split-by-ctid          Skip splitting tables by ctid
     --requirements <filename>     List extensions requirements
     --filters <filename>          Use the filters defined in <filename>
     --fail-fast                   Abort early in case of error
     --restart                     Allow restarting when temp files exist already
     --resume                      Allow resuming operations after a failure
     --not-consistent              Allow taking a new snapshot on the source database
     --snapshot                    Use snapshot obtained with pg_export_snapshot
     --follow                      Implement logical decoding to replay changes
     --plugin                      Output plugin to use (test_decoding, wal2json)
     --wal2json-numeric-as-string  Print numeric data type as string when using wal2json output plugin
     --slot-name                   Use this Postgres replication slot name
     --create-slot                 Create the replication slot
     --origin                      Use this Postgres replication origin node name
     --endpos                      Stop replaying changes when reaching this LSN
     --use-copy-binary             Use the COPY BINARY format for COPY operations
The command pgcopydb fork copies a database from the given source Postgres instance to the target Postgres instance. This command is an alias to the command pgcopydb clone seen above.
The pgcopydb clone command implements both a base copy of a source database into a target database and also a full Logical Decoding client for the wal2json logical decoding plugin.
The pgcopydb clone command implements the following steps:
When filtering is used, the list of objects OIDs that are meant to be filtered out is built during this step.
When filtering is used, the pg_restore --use-list feature is used to filter the list of objects to restore in this step.
This step uses as many as --restore-jobs jobs for pg_restore to share the workload and restore the objects in parallel.
A Postgres connection and a SQL query to the Postgres catalog table pg_class is used to get the list of tables with data to copy around, and the reltuples statistic is used to start with the tables with the greatest number of rows first, as an attempt to minimize the copy time.
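For reference, a simplified sketch of such a catalog query, run here through psql against the source connection string, could look like the following; the exact query pgcopydb runs is more involved and also handles partitioning and filtering:

   $ psql "$PGCOPYDB_SOURCE_PGURI" -c "
       select c.oid, n.nspname, c.relname, c.reltuples::bigint
         from pg_class c
              join pg_namespace n on n.oid = c.relnamespace
        where c.relkind = 'r'
          and n.nspname not in ('pg_catalog', 'information_schema')
        order by c.reltuples desc;
   "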
This step is much like pg_dump | pg_restore for large objects data parts, except that there isn't a good way to do just that with the tooling.
The primary indexes are created as UNIQUE indexes at this stage.
For each sequence, pgcopydb then calls pg_catalog.setval() on the target database with the information obtained on the source database.
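The effect of this step on one sequence is equivalent to running something like the following on the target database, where the sequence name, value, and is_called flag shown here are hypothetical and would in practice be read from the source database:

   $ psql "$PGCOPYDB_TARGET_PGURI" \
       -c "select pg_catalog.setval('public.payment_payment_id_seq', 32099, true);"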
The post-data script is filtered out using the pg_restore --use-list option so that indexes and primary key constraints already created in steps 6 and 7 are properly skipped now.
This step uses as many as --restore-jobs jobs for pg_restore to share the workload and restore the objects in parallel.
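To give an idea of the underlying mechanism, the filtering relies on the standard pg_restore archive listing, along the lines of the following sketch; pgcopydb drives pg_restore itself with the actual list files found under its work directory (such as /tmp/pgcopydb/schema/post-filtered.list in the example run shown further below), and the post.list filename here is only a placeholder:

   # list the archive contents, filter out already-created objects,
   # then restore only the remaining entries with several parallel jobs
   $ pg_restore --list /tmp/pgcopydb/schema/schema.dump > post.list
   $ pg_restore --dbname "$PGCOPYDB_TARGET_PGURI" \
                --section post-data \
                --jobs 4 \
                --use-list post.list \
                /tmp/pgcopydb/schema/schema.dump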
Postgres has a notion of a superuser status that can be assigned to any role in the system, and the default role postgres has this status. From the Role Attributes documentation page we see that:
A database superuser bypasses all permission checks, except the right to log in. This is a dangerous privilege and should not be used carelessly; it is best to do most of your work as a role that is not a superuser. To create a new database superuser, use CREATE ROLE name SUPERUSER. You must do this as a role that is already a superuser.
Some Postgres objects can only be created by superusers, and some read and write operations are only allowed to superuser roles, such as the following non-exclusive list:
It is possible to implement a pgcopydb migration that skips the passwords entirely when using the option --no-role-passwords. In that case though, authentication might fail until passwords have been set up again correctly.
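In that situation, passwords can be assigned again manually on the target database; a minimal sketch, where the role name and password are placeholders, could be:

   $ psql "$PGCOPYDB_TARGET_PGURI" \
       -c "alter role app_user with password 'please-change-me';"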
When such an extension contains Extension Configuration Tables and has been created with a role having superuser status, then the same superuser status is needed again to pg_dump and pg_restore that extension and its current configuration.
When using pgcopydb it is possible to split your migration in privileged and non-privileged parts, like in the following examples:
   $ coproc ( pgcopydb snapshot )

   # first two commands would use a superuser role to connect
   $ pgcopydb copy roles --source ... --target ...
   $ pgcopydb copy extensions --source ... --target ...

   # now it's possible to use a non-superuser role to connect
   $ pgcopydb clone --skip-extensions --source ... --target ...

   $ kill -TERM ${COPROC_PID}
   $ wait ${COPROC_PID}
In such a script, the calls to pgcopydb copy roles and pgcopydb copy extensions would be done with connection strings that connect with a role having superuser status; and then the call to pgcopydb clone would be done with a non-privileged role, typically the role that owns the source and target databases.
WARNING:
That's because pg_dump filtering (here, the --exclude-table option) does not apply to extension members, and pg_dump does not provide a mechanism to exclude extensions.
When using the --follow option the steps from the pgcopydb follow command are also run concurrently to the main copy. The Change Data Capture is then automatically driven from a prefetch-only phase to the prefetch-and-catchup phase, which is enabled as soon as the base copy is done.
See the command pgcopydb stream sentinel set endpos to remotely control the follow parts of the command even while the command is already running.
The command pgcopydb stream cleanup must be used to free resources created to support the change data capture process.
IMPORTANT:
A simple approach to applying changes after the initial base copy has been done follows:
   $ pgcopydb clone --follow &

   # later when the application is ready to make the switch
   $ pgcopydb stream sentinel set endpos --current

   # later when the migration is finished, clean-up both source and target
   $ pgcopydb stream cleanup
In some cases, it might be necessary to have more control over some of the steps taken here. Given pgcopydb's flexibility, it's possible to implement the following steps:
In case of crash or other problems with the main operations, it's then possible to resume processing of the base copy and the applying of the changes with the same snapshot again.
This step is also implemented when using pgcopydb clone --follow. That said, if the command was interrupted (or crashed), then the snapshot would be lost.
The following SQL objects are then created:
This step is also implemented when using pgcopydb clone --follow. There is no way to implement Change Data Capture with pgcopydb and skip creating those SQL objects.
Sequences are not handled by Postgres logical decoding, so extra care needs to be taken manually here.
IMPORTANT:
If the command pgcopydb clone --follow fails it's then possible to start it again. It will automatically discover what was done successfully and what needs to be done again because it failed or was interrupted (table copy, index creation, resuming replication slot consuming, resuming applying changes at the right LSN position, etc).
Here is an example implementing the previous steps:
   $ pgcopydb snapshot &

   $ pgcopydb stream setup

   $ pgcopydb clone --follow &

   # later when the application is ready to make the switch
   $ pgcopydb stream sentinel set endpos --current

   # when the follow process has terminated, re-sync the sequences
   $ pgcopydb copy sequences

   # later when the migration is finished, clean-up both source and target
   $ pgcopydb stream cleanup

   # now stop holding the snapshot transaction (adjust PID to your environment)
   $ kill %1
The following options are available to pgcopydb clone:
This limit only applies to the COPY operations. More sub-processes than this limit will be running at the same time while the CREATE INDEX operations are in progress, though those processes are only waiting for the target Postgres instance to do all the work.
If this value is not set, we reuse the --index-jobs value. If that value is not set either, we use the default value for --index-jobs.
When this option is used, we run the vacuumdb --analyze-only --jobs=<table-jobs> command on the source database, which updates the statistics for the number of pages for each relation. Later, we use the number of pages and the size of each page to estimate the actual size of the tables.
If you wish to run the ANALYZE command manually before running pgcopydb, you can use the --skip-analyze option. This way, you can decrease the time spent on the migration.
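For instance, a sketch of that manual approach could look like the following, assuming the usual connection string environment variables are set and the job counts are only examples:

   # update relation page-count statistics on the source database first
   $ vacuumdb --analyze-only --jobs 8 --dbname "$PGCOPYDB_SOURCE_PGURI"

   # then let pgcopydb reuse those statistics for its size estimates
   $ pgcopydb clone --estimate-table-sizes --skip-analyze \
       --table-jobs 8 --index-jobs 12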
This option is useful when the same command is run several times in a row, either to fix a previous mistake or for instance when used in a continuous integration system.
This option causes DROP TABLE and DROP INDEX and other DROP commands to be used. Make sure you understand what you're doing here!
The pg_dumpall --roles-only command is used to fetch the list of roles from the source database, and this command includes support for passwords. As a result, this operation requires superuser privileges.
See also pgcopydb copy roles.
When used, schemas that extensions depend on are also skipped: it is expected that creating needed extensions on the target system is then the responsibility of another command (such as pgcopydb copy extensions), and schemas that extensions depend on are part of that responsibility.
Because creating extensions requires superuser privileges, this allows a multi-step approach where extensions are dealt with using superuser privileges, and then the rest of the pgcopydb operations are done without superuser privileges.
The command pgcopydb list extensions --requirements --json produces such a JSON file and can be used on the target database instance to get started.
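A possible workflow, where the requirements.json filename is only a placeholder, could then be sketched as:

   # export the extensions requirements found on the source database
   $ pgcopydb list extensions --requirements --json > requirements.json

   # review/edit the file, then use it when cloning
   $ pgcopydb clone --requirements requirements.json --source ... --target ...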
See also the command pgcopydb list extensions --available-versions.
See also pgcopydb list extensions.
In some scenarios the list of collations provided by the Operating System on the source and target system might be different, and a mapping then needs to be manually installed before calling pgcopydb.
Then this option allows pgcopydb to skip over collations and assume all the needed collations have been deployed on the target database already.
See also pgcopydb list collations.
This option is useful only when using --estimate-table-sizes and the user runs the relevant ANALYZE command manually before running pgcopydb.
In that case, the --restart option can be used to allow pgcopydb to delete traces from a previous run.
When resuming activity from a previous run, table data that was fully copied over to the target server is not sent again. Table data that was interrupted during the COPY has to be started from scratch even when using --resume: the COPY command in Postgres is transactional and was rolled back.
The same reasoning applies to the CREATE INDEX commands and ALTER TABLE commands that pgcopydb issues; those commands are skipped on a --resume run only if known to have run through to completion on the previous one.
Finally, using --resume requires the use of --not-consistent.
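A typical invocation to resume a previously interrupted migration could then look like the following sketch:

   $ pgcopydb clone --resume --not-consistent --source ... --target ...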
Per the Postgres documentation about pg_export_snapshot, the snapshot is available for import only until the end of the transaction that exported it.
Now, when the pgcopydb process was interrupted (or crashed) on a previous run, it is possible to resume operations, but the snapshot that was exported does not exist anymore. The pgcopydb command can only resume operations with a new snapshot, and thus cannot ensure consistency of the whole data set, because each run now uses its own snapshot.
The replication slot is created using the same snapshot as the main database copy operation, and the changes to the source database are prefetched only during the initial copy, then prefetched and applied in a catchup process.
It is possible to give pgcopydb clone --follow a termination point (the LSN endpos) while the command is running with the command pgcopydb stream sentinel set endpos.
It is possible to use wal2json instead. The support for wal2json is mostly historical in pgcopydb, it should not make a user visible difference whether you use the default test_decoding or wal2json.
You need to have a wal2json plugin version on the source database that supports the --numeric-data-types-as-string option in order to use this option.
See also the documentation for wal2json regarding this option for details.
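Used together with the wal2json output plugin, the option could appear in a command line such as this sketch:

   $ pgcopydb clone --follow \
       --plugin wal2json \
       --wal2json-numeric-as-string \
       --source ... --target ...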
The --endpos option is not aware of transaction boundaries and may truncate output partway through a transaction. Any partially output transaction will not be consumed and will be replayed again when the slot is next read from. Individual messages are never truncated.
See also documentation for pg_recvlogical.
See also documentation for COPY.
Postgres uses a notion of an origin node name as documented in Replication Progress Tracking. This option allows you to pick your own node name and defaults to "pgcopydb". Picking a different name is useful in some advanced scenarios like migrating several sources into the same target, where each source should have its own unique origin node name.
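For instance, when migrating two source databases into the same target, each run could be given its own origin node name, along these lines (the node names and connection strings are placeholders):

   $ pgcopydb clone --follow --origin pgcopydb_source_a --source ... --target ...
   $ pgcopydb clone --follow --origin pgcopydb_source_b --source ... --target ...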
PGCOPYDB_SOURCE_PGURI
PGCOPYDB_TARGET_PGURI
PGCOPYDB_TABLE_JOBS
PGCOPYDB_INDEX_JOBS
PGCOPYDB_RESTORE_JOBS
PGCOPYDB_LARGE_OBJECTS_JOBS
PGCOPYDB_SPLIT_TABLES_LARGER_THAN
When --split-tables-larger-than is omitted from the command line, then this environment variable is used.
PGCOPYDB_SPLIT_MAX_PARTS
PGCOPYDB_ESTIMATE_TABLE_SIZES
When --estimate-table-sizes is omitted from the command line, then this environment variable is used.
When this option is used, we run the vacuumdb --analyze-only --jobs=<table-jobs> command on the source database, which updates the statistics for the number of pages for each relation. Later, we use the number of pages and the size of each page to estimate the actual size of the tables.
If you wish to run the ANALYZE command manually before running pgcopydb, you can use the --skip-analyze option or the PGCOPYDB_SKIP_ANALYZE environment variable. This way, you can decrease the time spent on the migration.
PGCOPYDB_OUTPUT_PLUGIN
PGCOPYDB_WAL2JSON_NUMERIC_AS_STRING
When --wal2json-numeric-as-string is omitted from the command line then this environment variable is used.
PGCOPYDB_DROP_IF_EXISTS
When --drop-if-exists is omitted from the command line then this environment variable is used.
PGCOPYDB_FAIL_FAST
When --fail-fast is omitted from the command line then this environment variable is used.
PGCOPYDB_SKIP_VACUUM
PGCOPYDB_SKIP_ANALYZE
PGCOPYDB_SKIP_DB_PROPERTIES
PGCOPYDB_SKIP_CTID_SPLIT
PGCOPYDB_USE_COPY_BINARY
PGCOPYDB_SNAPSHOT
TMPDIR
PGCOPYDB_LOG_TIME_FORMAT
See documentation for strftime(3) for details about the format string. See documentation for isatty(3) for details about detecting if pgcopydb is run in an interactive terminal.
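For example, to keep the date in the log timestamps even when running in an interactive terminal, one could export a strftime(3) format string such as:

   $ export PGCOPYDB_LOG_TIME_FORMAT="%Y-%m-%d %H:%M:%S"
   $ pgcopydb clone --table-jobs 8 --index-jobs 12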
PGCOPYDB_LOG_JSON
{ "timestamp": "2023-04-13 16:53:14", "pid": 87956, "error_level": 4, "error_severity": "INFO", "file_name": "main.c", "file_line_num": 165, "message": "Running pgcopydb version 0.11.19.g2290494.dirty from \"/Users/dim/dev/PostgreSQL/pgcopydb/src/bin/pgcopydb/pgcopydb\"" }
PGCOPYDB_LOG_FILENAME
If the file already exists, its content is overwritten. In other words the previous content would be lost when running the same command twice.
PGCOPYDB_LOG_JSON_FILE
XDG_DATA_HOME
When using Change Data Capture (through the --follow option and Postgres logical decoding with wal2json) then pgcopydb pre-fetches changes in JSON files and transforms them into SQL files to apply to the target database.
These files are stored at the following location, tried in this order:
   $ export PGCOPYDB_SOURCE_PGURI=postgres://pagila:0wn3d@source/pagila
   $ export PGCOPYDB_TARGET_PGURI=postgres://pagila:0wn3d@target/pagila
   $ export PGCOPYDB_DROP_IF_EXISTS=on

   $ pgcopydb clone --table-jobs 8 --index-jobs 12
   08:13:13.961 42893 INFO   [SOURCE] Copying database from "postgres://pagila:0wn3d@source/pagila?keepalives=1&keepalives_idle=10&keepalives_interval=10&keepalives_count=60"
   08:13:13.961 42893 INFO   [TARGET] Copying database into "postgres://pagila:0wn3d@target/pagila?keepalives=1&keepalives_idle=10&keepalives_interval=10&keepalives_count=60"
   08:13:14.009 42893 INFO   Using work dir "/tmp/pgcopydb"
   08:13:14.017 42893 INFO   Exported snapshot "00000003-000000EB-1" from the source database
   08:13:14.019 42904 INFO   STEP 1: fetch source database tables, indexes, and sequences
   08:13:14.339 42904 INFO   Fetched information for 5 tables (including 0 tables split in 0 partitions total), with an estimated total of 1000 thousands tuples and 128 MB on-disk
   08:13:14.342 42904 INFO   Fetched information for 4 indexes (supporting 4 constraints)
   08:13:14.343 42904 INFO   Fetching information for 1 sequences
   08:13:14.353 42904 INFO   Fetched information for 1 extensions
   08:13:14.436 42904 INFO   Found 1 indexes (supporting 1 constraints) in the target database
   08:13:14.443 42904 INFO   STEP 2: dump the source database schema (pre/post data)
   08:13:14.448 42904 INFO    /usr/bin/pg_dump -Fc --snapshot 00000003-000000EB-1 --section=pre-data --section=post-data --file /tmp/pgcopydb/schema/schema.dump 'postgres://pagila:0wn3d@source/pagila?keepalives=1&keepalives_idle=10&keepalives_interval=10&keepalives_count=60'
   08:13:14.513 42904 INFO   STEP 3: restore the pre-data section to the target database
   08:13:14.524 42904 INFO    /usr/bin/pg_restore --dbname 'postgres://pagila:0wn3d@target/pagila?keepalives=1&keepalives_idle=10&keepalives_interval=10&keepalives_count=60' --section pre-data --jobs 2 --use-list /tmp/pgcopydb/schema/pre-filtered.list /tmp/pgcopydb/schema/schema.dump
   08:13:14.608 42919 INFO   STEP 4: starting 8 table-data COPY processes
   08:13:14.678 42921 INFO   STEP 8: starting 8 VACUUM processes
   08:13:14.678 42904 INFO   Skipping large objects: none found.
   08:13:14.693 42920 INFO   STEP 6: starting 2 CREATE INDEX processes
   08:13:14.693 42920 INFO   STEP 7: constraints are built by the CREATE INDEX processes
   08:13:14.699 42904 INFO   STEP 9: reset sequences values
   08:13:14.700 42959 INFO   Set sequences values on the target database
   08:13:16.716 42904 INFO   STEP 10: restore the post-data section to the target database
   08:13:16.726 42904 INFO    /usr/bin/pg_restore --dbname 'postgres://pagila:0wn3d@target/pagila?keepalives=1&keepalives_idle=10&keepalives_interval=10&keepalives_count=60' --section post-data --jobs 2 --use-list /tmp/pgcopydb/schema/post-filtered.list /tmp/pgcopydb/schema/schema.dump
   08:13:16.751 42904 INFO   All step are now done, 2s728 elapsed
   08:13:16.752 42904 INFO   Printing summary for 5 tables and 4 indexes

      OID | Schema |             Name | Parts | copy duration | transmitted bytes | indexes | create index duration
    ------+--------+------------------+-------+---------------+-------------------+---------+----------------------
    16398 | public | pgbench_accounts |     1 |         1s496 |             91 MB |       1 |                 302ms
    16395 | public |  pgbench_tellers |     1 |          37ms |            1002 B |       1 |                  15ms
    16401 | public | pgbench_branches |     1 |          45ms |             71 B  |       1 |                  18ms
    16386 | public |           table1 |     1 |          36ms |            984 B  |       1 |                  21ms
    16392 | public |  pgbench_history |     1 |          41ms |              0 B  |       0 |                   0ms

                                                  Step   Connection    Duration    Transfer   Concurrency
    --------------------------------------------------   ----------  ----------  ----------  ------------
      Catalog Queries (table ordering, filtering, etc)       source       119ms                          1
                                           Dump Schema       source        66ms                          1
                                        Prepare Schema       target        59ms                          1
         COPY, INDEX, CONSTRAINTS, VACUUM (wall clock)         both       2s125                         18
                                     COPY (cumulative)         both       1s655      128 MB              8
                             CREATE INDEX (cumulative)       target       343ms                          2
                              CONSTRAINTS (cumulative)       target        13ms                          2
                                   VACUUM (cumulative)       target       144ms                          8
                                       Reset Sequences         both        15ms                          1
                            Large Objects (cumulative)       (null)         0ms                          0
                                       Finalize Schema         both        27ms                          2
    --------------------------------------------------   ----------  ----------  ----------  ------------
                            Total Wall Clock Duration          both       2s728                         24
Dimitri Fontaine
2022-2024, Dimitri Fontaine
August 7, 2024 | 0.17