.\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "PGCOPYDB STREAM" "1" "Mar 15, 2023" "0.11" "pgcopydb" .SH NAME pgcopydb stream \- pgcopydb stream .sp pgcopydb stream \- Stream changes from source database .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 \fBThis mode of operations has been designed for unit testing only.\fP .sp Consider using the \fI\%pgcopydb clone\fP (with the \fB\-\-follow\fP option) or the \fI\%pgcopydb follow\fP command instead. .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Some \fIpgcopydb stream\fP commands are still designed for normal operations, rather than unit testing only. .sp The \fI\%pgcopydb stream sentinel set startpos\fP, \fI\%pgcopydb stream sentinel set endpos\fP, \fI\%pgcopydb stream sentinel set apply\fP, and \fI\%pgcopydb stream sentinel set prefetch\fP commands are necessary to communicate with the main \fBpgcopydb clone \-\-follow\fP or \fBpgcopydb follow\fP process. See \fI\%Change Data Capture Example 1\fP for a detailed example using \fI\%pgcopydb stream sentinel set endpos\fP\&. .sp Also the commands \fI\%pgcopydb stream setup\fP and \fI\%pgcopydb stream cleanup\fP might be used directly in normal operations. See \fI\%Change Data Capture Example 2\fP for a detailed example. 
.UNINDENT .UNINDENT .sp This command prefixes the following sub\-commands: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream
  setup      Setup source and target systems for logical decoding
  cleanup    cleanup source and target systems for logical decoding
  prefetch   Stream JSON changes from the source database and transform them to SQL
  catchup    Apply prefetched changes from SQL files to the target database
  replay     Replay changes from the source to the target database, live
+ create     Create resources needed for pgcopydb
+ drop       Drop resources needed for pgcopydb
+ sentinel   Maintain a sentinel table on the source database
  receive    Stream changes from the source database
  transform  Transform changes from the source database into SQL commands
  apply      Apply changes from the source database into the target database

pgcopydb stream create
  slot    Create a replication slot in the source database
  origin  Create a replication origin in the target database

pgcopydb stream drop
  slot    Drop a replication slot in the source database
  origin  Drop a replication origin in the target database

pgcopydb stream sentinel
  create  Create the sentinel table on the source database
  drop    Drop the sentinel table on the source database
  get     Get the sentinel table values on the source database
+ set     Maintain a sentinel table on the source database

pgcopydb stream sentinel set
  startpos  Set the sentinel start position LSN on the source database
  endpos    Set the sentinel end position LSN on the source database
  apply     Set the sentinel apply mode on the source database
  prefetch  Set the sentinel prefetch mode on the source database
.ft P .fi .UNINDENT .UNINDENT .sp Those commands implement a part of the whole database replay operation as detailed in section \fI\%pgcopydb follow\fP\&. Only use those commands to debug a specific part, or when you know that you want to implement just that step. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The sub\-commands \fBstream setup\fP then \fBstream prefetch\fP and \fBstream catchup\fP are higher\-level commands that use internal information to know which files to process. Those commands also keep track of their progress. .sp The sub\-commands \fBstream receive\fP, \fBstream transform\fP, and \fBstream apply\fP are a lower\-level interface that works on given files. Those commands still keep track of their progress, but have to be given more information to work. .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SETUP .sp pgcopydb stream setup \- Setup source and target systems for logical decoding .sp The command \fBpgcopydb stream setup\fP connects to the source database and creates a replication slot using the given logical decoding output plugin (either \fI\%test_decoding\fP or \fI\%wal2json\fP), then creates a \fBpgcopydb.sentinel\fP table, and then connects to the target database and creates a replication origin positioned at the LSN position of the newly created replication slot.
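.sp
For instance, a minimal sketch of a setup invocation could look like the following; the connection strings, slot name, and origin name shown here are placeholders to adapt to your environment:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ export PGCOPYDB_SOURCE_PGURI=\(dqpostgres://user@source.host:5432/dbname\(dq
$ export PGCOPYDB_TARGET_PGURI=\(dqpostgres://user@target.host:5432/dbname\(dq
$ pgcopydb stream setup \-\-plugin test_decoding \-\-slot\-name pgcopydb \-\-origin pgcopydb
.ft P
.fi
.UNINDENT
.UNINDENT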
.INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream setup: Setup source and target systems for logical decoding
usage: pgcopydb stream setup
  \-\-source          Postgres URI to the source database
  \-\-target          Postgres URI to the target database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-snapshot        Use snapshot obtained with pg_export_snapshot
  \-\-plugin          Output plugin to use (test_decoding, wal2json)
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-origin          Name of the Postgres replication origin
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM CLEANUP .sp pgcopydb stream cleanup \- cleanup source and target systems for logical decoding .sp The command \fBpgcopydb stream cleanup\fP connects to the source and target databases to delete the objects created in the \fBpgcopydb stream setup\fP step. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream cleanup: cleanup source and target systems for logical decoding
usage: pgcopydb stream cleanup
  \-\-source          Postgres URI to the source database
  \-\-target          Postgres URI to the target database
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-snapshot        Use snapshot obtained with pg_export_snapshot
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-origin          Name of the Postgres replication origin
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM PREFETCH .sp pgcopydb stream prefetch \- Stream JSON changes from the source database and transform them to SQL .sp The command \fBpgcopydb stream prefetch\fP connects to the source database using the logical replication protocol and the given replication slot. .sp The prefetch command receives the changes from the source database in a streaming fashion, and writes them in a series of JSON files named the same as their origin WAL filename (with the \fB\&.json\fP extension). Each time a JSON file is closed, a subprocess is started to transform the JSON into an SQL file. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream prefetch: Stream JSON changes from the source database and transform them to SQL
usage: pgcopydb stream prefetch
  \-\-source          Postgres URI to the source database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-endpos          LSN position where to stop receiving changes
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM CATCHUP .sp pgcopydb stream catchup \- Apply prefetched changes from SQL files to the target database .sp The command \fBpgcopydb stream catchup\fP connects to the target database and applies changes from the SQL files that have been prepared with the \fBpgcopydb stream prefetch\fP command.
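.sp
As a hedged illustration, prefetching changes up to a known LSN and then applying the prepared SQL files could look like this; the slot name and endpos value are placeholders, and the connection strings are assumed to come from the \fBPGCOPYDB_SOURCE_PGURI\fP and \fBPGCOPYDB_TARGET_PGURI\fP environment variables:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# receive JSON changes and transform them to SQL, up to the given LSN
$ pgcopydb stream prefetch \-\-slot\-name pgcopydb \-\-endpos 0/236D668
# then apply the prepared SQL files to the target database
$ pgcopydb stream catchup \-\-slot\-name pgcopydb \-\-endpos 0/236D668
.ft P
.fi
.UNINDENT
.UNINDENT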
.INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream catchup: Apply prefetched changes from SQL files to the target database
usage: pgcopydb stream catchup
  \-\-source          Postgres URI to the source database
  \-\-target          Postgres URI to the target database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-endpos          LSN position where to stop receiving changes
  \-\-origin          Name of the Postgres replication origin
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM REPLAY .sp pgcopydb stream replay \- Replay changes from the source to the target database, live .sp The command \fBpgcopydb stream replay\fP connects to the source database and streams changes using the logical decoding protocol, and internally streams those changes to a transform process and then a replay process, which connects to the target database and applies SQL changes. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream replay: Replay changes from the source to the target database, live
usage: pgcopydb stream replay
  \-\-source          Postgres URI to the source database
  \-\-target          Postgres URI to the target database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-endpos          LSN position where to stop receiving changes
  \-\-origin          Name of the Postgres replication origin
.ft P .fi .UNINDENT .UNINDENT .sp This command is equivalent to running the following script: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream receive \-\-to\-stdout | pgcopydb stream transform \- \- | pgcopydb stream apply \-
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM CREATE SLOT .sp pgcopydb stream create slot \- Create a replication slot in the source database .sp The command \fBpgcopydb stream create slot\fP connects to the source database and executes a SQL query to create a logical replication slot using the given logical decoding output plugin, either \fBtest_decoding\fP (the default) or \fBwal2json\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb create slot: Create a replication slot in the source database
usage: pgcopydb create slot
  \-\-source          Postgres URI to the source database
  \-\-snapshot        Use snapshot obtained with pg_export_snapshot
  \-\-plugin          Output plugin to use (test_decoding, wal2json)
  \-\-slot\-name       Use this Postgres replication slot name
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM CREATE ORIGIN .sp pgcopydb stream create origin \- Create a replication origin in the target database .sp The command \fBpgcopydb stream create origin\fP connects to the target database and executes a SQL query to create a logical replication origin. The starting LSN position \fB\-\-startpos\fP is required.
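.sp
A minimal sketch of creating both objects by hand might look like the following; the LSN is a placeholder standing for the consistent point of the freshly created replication slot, the connection strings are assumed to come from the environment, and the start position option is spelled here as shown in the synopsis below:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# create the replication slot on the source database
$ pgcopydb stream create slot \-\-slot\-name pgcopydb \-\-plugin test_decoding
# create the matching replication origin on the target database
$ pgcopydb stream create origin \-\-origin pgcopydb \-\-start\-pos 0/236D668
.ft P
.fi
.UNINDENT
.UNINDENT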
.INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream create origin: Create a replication origin in the target database
usage: pgcopydb stream create origin
  \-\-target          Postgres URI to the target database
  \-\-origin          Use this Postgres origin name
  \-\-start\-pos       LSN position from where to start applying changes
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM DROP SLOT .sp pgcopydb stream drop slot \- Drop a replication slot in the source database .sp The command \fBpgcopydb stream drop slot\fP connects to the source database and executes a SQL query to drop the logical replication slot with the given name (that defaults to \fBpgcopydb\fP). .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream drop slot: Drop a replication slot in the source database
usage: pgcopydb stream drop slot
  \-\-source          Postgres URI to the source database
  \-\-slot\-name       Use this Postgres replication slot name
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM DROP ORIGIN .sp pgcopydb stream drop origin \- Drop a replication origin in the target database .sp The command \fBpgcopydb stream drop origin\fP connects to the target database and executes a SQL query to drop the logical replication origin with the given name (that defaults to \fBpgcopydb\fP). .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
usage: pgcopydb stream drop origin
  \-\-target          Postgres URI to the target database
  \-\-origin          Use this Postgres origin name
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL CREATE .sp pgcopydb stream sentinel create \- Create the sentinel table on the source database .sp The \fBpgcopydb.sentinel\fP table allows remote control of the prefetch and catchup processes of the logical decoding implementation in pgcopydb. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel create: Create the sentinel table on the source database
usage: pgcopydb stream sentinel create
  \-\-source          Postgres URI to the source database
  \-\-startpos        Start replaying changes when reaching this LSN
  \-\-endpos          Stop replaying changes when reaching this LSN
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL DROP .sp pgcopydb stream sentinel drop \- Drop the sentinel table on the source database .sp The \fBpgcopydb.sentinel\fP table allows remote control of the prefetch and catchup processes of the logical decoding implementation in pgcopydb.
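.sp
As a hedged example of managing the sentinel by hand, creating it with explicit start and end positions, inspecting it, and finally dropping it could look like this; the LSN values are placeholders and the source connection string is assumed to come from \fBPGCOPYDB_SOURCE_PGURI\fP:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ pgcopydb stream sentinel create \-\-startpos 0/236D178 \-\-endpos 0/236D668
$ pgcopydb stream sentinel get \-\-json
$ pgcopydb stream sentinel drop
.ft P
.fi
.UNINDENT
.UNINDENT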
.INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel drop: Drop the sentinel table on the source database
usage: pgcopydb stream sentinel drop
  \-\-source          Postgres URI to the source database
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL GET .sp pgcopydb stream sentinel get \- Get the sentinel table values on the source database .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel get: Get the sentinel table values on the source database
usage: pgcopydb stream sentinel get
  \-\-source          Postgres URI to the source database
  \-\-json            Format the output using JSON
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL SET STARTPOS .sp pgcopydb stream sentinel set startpos \- Set the sentinel start position LSN on the source database .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel set startpos: Set the sentinel start position LSN on the source database
usage: pgcopydb stream sentinel set startpos
  \-\-source          Postgres URI to the source database
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL SET ENDPOS .sp pgcopydb stream sentinel set endpos \- Set the sentinel end position LSN on the source database .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel set endpos: Set the sentinel end position LSN on the source database
usage: pgcopydb stream sentinel set endpos
  \-\-source          Postgres URI to the source database
  \-\-current         Use pg_current_wal_flush_lsn() as the endpos
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL SET APPLY .sp pgcopydb stream sentinel set apply \- Set the sentinel apply mode on the source database .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel set apply: Set the sentinel apply mode on the source database
usage: pgcopydb stream sentinel set apply
  \-\-source          Postgres URI to the source database
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM SENTINEL SET PREFETCH .sp pgcopydb stream sentinel set prefetch \- Set the sentinel prefetch mode on the source database .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream sentinel set prefetch: Set the sentinel prefetch mode on the source database
usage: pgcopydb stream sentinel set prefetch
  \-\-source          Postgres URI to the source database
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM RECEIVE .sp pgcopydb stream receive \- Stream changes from the source database .sp The command \fBpgcopydb stream receive\fP connects to the source database using the logical replication protocol and the given replication slot. .sp The receive command receives the changes from the source database in a streaming fashion, and writes them in a series of JSON files named the same as their origin WAL filename (with the \fB\&.json\fP extension). .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream receive: Stream changes from the source database
usage: pgcopydb stream receive \-\-source ...
  \-\-source          Postgres URI to the source database
  \-\-dir             Work directory to use
  \-\-to\-stdout       Stream logical decoding messages to stdout
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-slot\-name       Stream changes recorded by this slot
  \-\-endpos          LSN position where to stop receiving changes
.ft P .fi .UNINDENT .UNINDENT .SH PGCOPYDB STREAM TRANSFORM .sp pgcopydb stream transform \- Transform changes from the source database into SQL commands .sp The command \fBpgcopydb stream transform\fP transforms a JSON file as received by the \fBpgcopydb stream receive\fP command into an SQL file with one query per line. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream transform: Transform changes from the source database into SQL commands
usage: pgcopydb stream transform
  \-\-source          Postgres URI to the source database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
.ft P .fi .UNINDENT .UNINDENT .sp The command supports using \fB\-\fP as the filename for either the JSON input or the SQL output, or both. In that case, the command reads from standard input and/or writes to standard output, in a streaming fashion. A classic use case is to use Unix pipes; see \fI\%pgcopydb stream replay\fP too. .SH PGCOPYDB STREAM APPLY .sp pgcopydb stream apply \- Apply changes from the source database into the target database .sp The command \fBpgcopydb stream apply\fP applies an SQL file, as prepared by the \fBpgcopydb stream transform\fP command, to the target database. The apply process tracks progress thanks to the Postgres API for \fI\%Replication Progress Tracking\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
pgcopydb stream apply: Apply changes from the source database into the target database
usage: pgcopydb stream apply
  \-\-target          Postgres URI to the target database
  \-\-dir             Work directory to use
  \-\-restart         Allow restarting when temp files exist already
  \-\-resume          Allow resuming operations after a failure
  \-\-not\-consistent  Allow taking a new snapshot on the source database
  \-\-origin          Name of the Postgres replication origin
.ft P .fi .UNINDENT .UNINDENT .sp This command supports using \fB\-\fP as the filename to read from, and in that case reads from the standard input in a streaming fashion instead. .SH OPTIONS .sp The following options are available to \fBpgcopydb stream\fP sub\-commands: .INDENT 0.0 .TP .B \-\-source Connection string to the source Postgres instance. See the Postgres documentation for \fI\%connection strings\fP for the details. In short, both the quoted form \fB\(dqhost=... dbname=...\(dq\fP and the URI form \fBpostgres://user@host:5432/dbname\fP are supported. .TP .B \-\-target Connection string to the target Postgres instance. .TP .B \-\-dir During normal operations pgcopydb creates a lot of temporary files to track the progress of its sub\-processes. Temporary files are created in the directory given by this option, defaulting to \fB${TMPDIR}/pgcopydb\fP when that environment variable is set, or to \fB/tmp/pgcopydb\fP otherwise\&. .sp Change Data Capture files are stored in the \fBcdc\fP sub\-directory of the \fB\-\-dir\fP option when provided, otherwise see the XDG_DATA_HOME environment variable below.
.TP .B \-\-restart When running the pgcopydb command again, if the work directory already contains information from a previous run, then the command refuses to proceed, so as not to delete information that might be used for diagnostics and forensics. .sp In that case, the \fB\-\-restart\fP option can be used to allow pgcopydb to delete traces from a previous run. .TP .B \-\-resume When the pgcopydb command was terminated before completion, either by an interrupt signal (such as C\-c or SIGTERM) or because it crashed, it is possible to resume the database migration. .sp To be able to resume a streaming operation in a consistent way, all that\(aqs required is re\-using the same replication slot as in previous run(s). .TP .B \-\-plugin Logical decoding output plugin to use. The default is \fI\%test_decoding\fP, which ships with Postgres core itself and is probably already available on your source server. .sp It is possible to use \fI\%wal2json\fP instead. Support for wal2json is mostly historical in pgcopydb; it should not make a user\-visible difference whether you use the default test_decoding or wal2json. .TP .B \-\-slot\-name Logical decoding slot name to use. .TP .B \-\-endpos Logical replication target LSN to use. Automatically stop replication and exit with normal exit status 0 when receiving reaches the specified LSN. If there is a record whose LSN is exactly equal to the specified position, that record will be output. .sp The \fB\-\-endpos\fP option is not aware of transaction boundaries and may truncate output partway through a transaction. Any partially output transaction will not be consumed and will be replayed again when the slot is next read from. Individual messages are never truncated. .sp See also documentation for \fI\%pg_recvlogical\fP\&. .TP .B \-\-origin The logical replication target system needs to track the transactions that have already been applied, so that if we get disconnected or need to resume operations we can skip already\-replayed transactions. .sp Postgres uses a notion of an origin node name as documented in \fI\%Replication Progress Tracking\fP\&. This option allows picking your own node name, and defaults to \(dqpgcopydb\(dq. Picking a different name is useful in some advanced scenarios like migrating several sources into the same target, where each source should have its own unique origin node name. .TP .B \-\-startpos The logical replication target system registers progress by assigning a current LSN to the \fB\-\-origin\fP node name. When creating an origin on the target database system, it is required to provide the current LSN from the source database system, in order to properly bootstrap pgcopydb logical decoding. .TP .B \-\-verbose Increase current verbosity. The default level of verbosity is INFO. In ascending order pgcopydb knows about the following verbosity levels: FATAL, ERROR, WARN, INFO, NOTICE, DEBUG, TRACE. .TP .B \-\-debug Set current verbosity to DEBUG level. .TP .B \-\-trace Set current verbosity to TRACE level. .TP .B \-\-quiet Set current verbosity to ERROR level. .UNINDENT .SH ENVIRONMENT .sp PGCOPYDB_SOURCE_PGURI .INDENT 0.0 .INDENT 3.5 Connection string to the source Postgres instance. When \fB\-\-source\fP is omitted from the command line, this environment variable is used. .UNINDENT .UNINDENT .sp PGCOPYDB_TARGET_PGURI .INDENT 0.0 .INDENT 3.5 Connection string to the target Postgres instance. When \fB\-\-target\fP is omitted from the command line, this environment variable is used.
.UNINDENT .UNINDENT .sp TMPDIR .INDENT 0.0 .INDENT 3.5 The pgcopydb command creates all its work files and directories in \fB${TMPDIR}/pgcopydb\fP, defaulting to \fB/tmp/pgcopydb\fP when TMPDIR is not set\&. .UNINDENT .UNINDENT .sp XDG_DATA_HOME .INDENT 0.0 .INDENT 3.5 The pgcopydb command creates Change Data Capture files in the standard place XDG_DATA_HOME, which defaults to \fB~/.local/share\fP\&. See the \fI\%XDG Base Directory Specification\fP\&. .UNINDENT .UNINDENT .SH EXAMPLES .sp As an example, here is the output generated from running the cdc test case, where a replication slot is created before the initial copy of the data, and then the following INSERT statement is executed: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
begin;
with r as (
  insert into rental(rental_date, inventory_id, customer_id, staff_id, last_update)
       select \(aq2022\-06\-01\(aq, 371, 291, 1, \(aq2022\-06\-01\(aq
    returning rental_id, customer_id, staff_id
)
insert into payment(customer_id, staff_id, rental_id, amount, payment_date)
     select customer_id, staff_id, rental_id, 5.99, \(aq2020\-06\-01\(aq
       from r;
commit;
.ft P .fi .UNINDENT .UNINDENT .sp The command then looks like the following, where the \fB\-\-endpos\fP has been extracted by calling the \fBpg_current_wal_lsn()\fP SQL function: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
$ pgcopydb stream receive \-\-slot\-name test_slot \-\-restart \-\-endpos 0/236D668 \-vv
16:01:57 157 INFO Running pgcopydb version 0.7 from \(dq/usr/local/bin/pgcopydb\(dq
16:01:57 157 DEBUG copydb.c:406 Change Data Capture data is managed at \(dq/var/lib/postgres/.local/share/pgcopydb\(dq
16:01:57 157 INFO copydb.c:73 Using work dir \(dq/tmp/pgcopydb\(dq
16:01:57 157 DEBUG pidfile.c:143 Failed to signal pid 34: No such process
16:01:57 157 DEBUG pidfile.c:146 Found a stale pidfile at \(dq/tmp/pgcopydb/pgcopydb.pid\(dq
16:01:57 157 INFO pidfile.c:147 Removing the stale pid file \(dq/tmp/pgcopydb/pgcopydb.pid\(dq
16:01:57 157 INFO copydb.c:254 Work directory \(dq/tmp/pgcopydb\(dq already exists
16:01:57 157 INFO copydb.c:258 A previous run has run through completion
16:01:57 157 INFO copydb.c:151 Removing directory \(dq/tmp/pgcopydb\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/tmp/pgcopydb\(dq && mkdir \-p \(dq/tmp/pgcopydb\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/tmp/pgcopydb/schema\(dq && mkdir \-p \(dq/tmp/pgcopydb/schema\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/tmp/pgcopydb/run\(dq && mkdir \-p \(dq/tmp/pgcopydb/run\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/tmp/pgcopydb/run/tables\(dq && mkdir \-p \(dq/tmp/pgcopydb/run/tables\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/tmp/pgcopydb/run/indexes\(dq && mkdir \-p \(dq/tmp/pgcopydb/run/indexes\(dq
16:01:57 157 DEBUG copydb.c:445 rm \-rf \(dq/var/lib/postgres/.local/share/pgcopydb\(dq && mkdir \-p \(dq/var/lib/postgres/.local/share/pgcopydb\(dq
16:01:57 157 DEBUG pgsql.c:2476 starting log streaming at 0/0 (slot test_slot)
16:01:57 157 DEBUG pgsql.c:485 Connecting to [source] \(dqpostgres://postgres@source:/postgres?password=****&replication=database\(dq
16:01:57 157 DEBUG pgsql.c:2009 IDENTIFY_SYSTEM: timeline 1, xlogpos 0/236D668, systemid 7104302452422938663
16:01:57 157 DEBUG pgsql.c:3188 RetrieveWalSegSize: 16777216
16:01:57 157 DEBUG pgsql.c:2547 streaming initiated
16:01:57 157 INFO stream.c:237 Now streaming changes to \(dq/var/lib/postgres/.local/share/pgcopydb/000000010000000000000002.json\(dq
16:01:57 157 DEBUG stream.c:341 Received action B for XID 488 in LSN 0/236D638
16:01:57 157 DEBUG stream.c:341 Received action I for XID 488 in LSN 0/236D178
16:01:57 157 DEBUG stream.c:341 Received action I for XID 488 in LSN 0/236D308
16:01:57 157 DEBUG stream.c:341 Received action C for XID 488 in LSN 0/236D638
16:01:57 157 DEBUG pgsql.c:2867 pgsql_stream_logical: endpos reached at 0/236D668
16:01:57 157 DEBUG stream.c:382 Flushed up to 0/236D668 in file \(dq/var/lib/postgres/.local/share/pgcopydb/000000010000000000000002.json\(dq
16:01:57 157 INFO pgsql.c:3030 Report write_lsn 0/236D668, flush_lsn 0/236D668
16:01:57 157 DEBUG pgsql.c:3107 end position 0/236D668 reached by WAL record at 0/236D668
16:01:57 157 DEBUG pgsql.c:408 Disconnecting from [source] \(dqpostgres://postgres@source:/postgres?password=****&replication=database\(dq
16:01:57 157 DEBUG stream.c:414 streamClose: closing file \(dq/var/lib/postgres/.local/share/pgcopydb/000000010000000000000002.json\(dq
16:01:57 157 INFO stream.c:171 Streaming is now finished after processing 4 messages
.ft P .fi .UNINDENT .UNINDENT .sp The JSON file then contains the following content, from the \fIwal2json\fP logical replication plugin. Note that you\(aqre seeing different LSNs here because each run produces different ones, and the captures have not all been made from the same run. .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
$ cat /var/lib/postgres/.local/share/pgcopydb/000000010000000000000002.json
{\(dqaction\(dq:\(dqB\(dq,\(dqxid\(dq:489,\(dqtimestamp\(dq:\(dq2022\-06\-27 13:24:31.460822+00\(dq,\(dqlsn\(dq:\(dq0/236F5A8\(dq,\(dqnextlsn\(dq:\(dq0/236F5D8\(dq}
{\(dqaction\(dq:\(dqI\(dq,\(dqxid\(dq:489,\(dqtimestamp\(dq:\(dq2022\-06\-27 13:24:31.460822+00\(dq,\(dqlsn\(dq:\(dq0/236F0E8\(dq,\(dqschema\(dq:\(dqpublic\(dq,\(dqtable\(dq:\(dqrental\(dq,\(dqcolumns\(dq:[{\(dqname\(dq:\(dqrental_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:16050},{\(dqname\(dq:\(dqrental_date\(dq,\(dqtype\(dq:\(dqtimestamp with time zone\(dq,\(dqvalue\(dq:\(dq2022\-06\-01 00:00:00+00\(dq},{\(dqname\(dq:\(dqinventory_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:371},{\(dqname\(dq:\(dqcustomer_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:291},{\(dqname\(dq:\(dqreturn_date\(dq,\(dqtype\(dq:\(dqtimestamp with time zone\(dq,\(dqvalue\(dq:null},{\(dqname\(dq:\(dqstaff_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:1},{\(dqname\(dq:\(dqlast_update\(dq,\(dqtype\(dq:\(dqtimestamp with time zone\(dq,\(dqvalue\(dq:\(dq2022\-06\-01 00:00:00+00\(dq}]}
{\(dqaction\(dq:\(dqI\(dq,\(dqxid\(dq:489,\(dqtimestamp\(dq:\(dq2022\-06\-27 13:24:31.460822+00\(dq,\(dqlsn\(dq:\(dq0/236F278\(dq,\(dqschema\(dq:\(dqpublic\(dq,\(dqtable\(dq:\(dqpayment_p2020_06\(dq,\(dqcolumns\(dq:[{\(dqname\(dq:\(dqpayment_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:32099},{\(dqname\(dq:\(dqcustomer_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:291},{\(dqname\(dq:\(dqstaff_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:1},{\(dqname\(dq:\(dqrental_id\(dq,\(dqtype\(dq:\(dqinteger\(dq,\(dqvalue\(dq:16050},{\(dqname\(dq:\(dqamount\(dq,\(dqtype\(dq:\(dqnumeric(5,2)\(dq,\(dqvalue\(dq:5.99},{\(dqname\(dq:\(dqpayment_date\(dq,\(dqtype\(dq:\(dqtimestamp with time zone\(dq,\(dqvalue\(dq:\(dq2020\-06\-01 00:00:00+00\(dq}]}
{\(dqaction\(dq:\(dqC\(dq,\(dqxid\(dq:489,\(dqtimestamp\(dq:\(dq2022\-06\-27 13:24:31.460822+00\(dq,\(dqlsn\(dq:\(dq0/236F5A8\(dq,\(dqnextlsn\(dq:\(dq0/236F5D8\(dq}
.ft P .fi .UNINDENT .UNINDENT .sp It\(aqs then possible to transform the JSON into SQL: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C
$ pgcopydb stream transform ./tests/cdc/000000010000000000000002.json /tmp/000000010000000000000002.sql
.ft P .fi .UNINDENT
.UNINDENT .sp And the SQL file obtained looks like this: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C $ cat /tmp/000000010000000000000002.sql BEGIN; \-\- {\(dqxid\(dq:489,\(dqlsn\(dq:\(dq0/236F5A8\(dq} INSERT INTO \(dqpublic\(dq.\(dqrental\(dq (rental_id, rental_date, inventory_id, customer_id, return_date, staff_id, last_update) VALUES (16050, \(aq2022\-06\-01 00:00:00+00\(aq, 371, 291, NULL, 1, \(aq2022\-06\-01 00:00:00+00\(aq); INSERT INTO \(dqpublic\(dq.\(dqpayment_p2020_06\(dq (payment_id, customer_id, staff_id, rental_id, amount, payment_date) VALUES (32099, 291, 1, 16050, 5.99, \(aq2020\-06\-01 00:00:00+00\(aq); COMMIT; \-\- {\(dqxid\(dq: 489,\(dqlsn\(dq:\(dq0/236F5A8\(dq} .ft P .fi .UNINDENT .UNINDENT .SH AUTHOR Dimitri Fontaine .SH COPYRIGHT 2023, Dimitri Fontaine .\" Generated by docutils manpage writer. .