diff --git a/src/current/_includes/v25.1/ldr/create_logically_replicated_stmt.html b/src/current/_includes/v25.1/ldr/create_logically_replicated_stmt.html new file mode 100644 index 00000000000..03bca7093e7 --- /dev/null +++ b/src/current/_includes/v25.1/ldr/create_logically_replicated_stmt.html @@ -0,0 +1,149 @@ + +
+ + + + + + CREATE + + + LOGICALLY + + + REPLICATED + + + TABLE + + + db_object_name + + + TABLES + + + ( + + + logical_replication_resources_list + + + ) + + + FROM + + + TABLE + + + db_object_name + + + TABLES + + + ( + + + logical_replication_resources_list + + + ) + + + ON + + + source_connection_string + + + WITH + + + logical_replication_create_table_options + + + , + + + + +
\ No newline at end of file 
diff --git a/src/current/_includes/v25.1/ldr/use-create-logically-replicated.md b/src/current/_includes/v25.1/ldr/use-create-logically-replicated.md new file mode 100644 index 00000000000..23ad644db4b --- /dev/null +++ b/src/current/_includes/v25.1/ldr/use-create-logically-replicated.md @@ -0,0 +1 @@ +If your table does not contain any user-defined types or [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, use the [`CREATE LOGICALLY REPLICATED`]({% link {{ page.version.version }}/create-logically-replicated.md %}) syntax to start the stream for a fast, offline initial scan and automatic destination table setup. \ No newline at end of file 
diff --git a/src/current/_includes/v25.1/sidebar-data/sql.json b/src/current/_includes/v25.1/sidebar-data/sql.json index 326d9a980a9..611a720c214 100644 --- a/src/current/_includes/v25.1/sidebar-data/sql.json +++ b/src/current/_includes/v25.1/sidebar-data/sql.json @@ -202,6 +202,12 @@ "/${VERSION}/create-index.html" ] }, + { + "title": "CREATE LOGICALLY REPLICATED", + "urls": [ + "/${VERSION}/create-logically-replicated.html" + ] + }, { "title": "CREATE LOGICAL REPLICATION STREAM", "urls": [ 
diff --git a/src/current/v25.1/create-logical-replication-stream.md b/src/current/v25.1/create-logical-replication-stream.md index be0ab3e63ae..12c60fc6b41 100644 --- a/src/current/v25.1/create-logical-replication-stream.md +++ b/src/current/v25.1/create-logical-replication-stream.md @@ -14,6 +14,10 @@ The `CREATE LOGICAL REPLICATION STREAM` statement starts [**logical data replica This page is a reference for the `CREATE LOGICAL REPLICATION STREAM` SQL statement, which includes information on its parameters and possible options. For a step-by-step guide to set up LDR, refer to the [Set Up Logical Data Replication]({% link {{ page.version.version }}/set-up-logical-data-replication.md %}) page. +{{site.data.alerts.callout_success}} +If the table you're replicating does not contain [user-defined types]({% link {{ page.version.version }}/enum.md %}) or [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, we recommend using the [`CREATE LOGICALLY REPLICATED`]({% link {{ page.version.version }}/create-logically-replicated.md %}) syntax, which provides a fast, offline initial scan and automatic table setup on the destination cluster. +{{site.data.alerts.end}} + ## Required privileges `CREATE LOGICAL REPLICATION STREAM` requires one of the following privileges: 
diff --git a/src/current/v25.1/create-logically-replicated.md b/src/current/v25.1/create-logically-replicated.md new file mode 100644 index 00000000000..3f1fe5a34b6 --- /dev/null +++ b/src/current/v25.1/create-logically-replicated.md @@ -0,0 +1,115 @@ +--- +title: CREATE LOGICALLY REPLICATED +summary: The CREATE LOGICALLY REPLICATED statement starts a new unidirectional or bidirectional LDR stream with a fast, offline scan. +toc: true +--- + +{{site.data.alerts.callout_info}} +{% include feature-phases/preview.md %} + +Logical data replication is only supported in CockroachDB {{ site.data.products.core }} clusters. +{{site.data.alerts.end}} + +{% include_cached new-in.html version="v25.1" %} The `CREATE LOGICALLY REPLICATED` statement starts [**logical data replication (LDR)**]({% link {{ page.version.version }}/logical-data-replication-overview.md %}) on one or more tables between a source and destination cluster in an active-active setup. 
`CREATE LOGICALLY REPLICATED` creates the new table on the destination cluster automatically and conducts a fast, offline initial scan. It accepts `unidirectional` or `bidirectional` as an option to create either one of the setups automatically. + +Once the offline initial scan completes, the new table will come online and is ready to serve queries. In a bidirectional setup, the second LDR stream will also initialize after the offline initial scan completes. + +{{site.data.alerts.callout_danger}} +If the table to be replicated contains [user-defined types]({% link {{ page.version.version }}/enum.md %}) or [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) dependencies, you must use the [`CREATE LOGICAL REPLICATION STREAM`]({% link {{ page.version.version }}/create-logical-replication-stream.md %}) statement instead. +{{site.data.alerts.end}} + +This page is a reference for the `CREATE LOGICALLY REPLICATED` SQL statement, which includes information on its parameters and options. For a step-by-step guide to set up LDR, refer to the [Set Up Logical Data Replication]({% link {{ page.version.version }}/set-up-logical-data-replication.md %}) page. + +## Required privileges + +`CREATE LOGICALLY REPLICATED` requires one of the following privileges: + +- The [`admin` role]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role). +- The [`REPLICATION` system privilege]({% link {{ page.version.version }}/security-reference/authorization.md %}#privileges). + +Use the [`GRANT SYSTEM`]({% link {{ page.version.version }}/grant.md %}) statement: + +{% include_cached copy-clipboard.html %} +~~~ sql +GRANT SYSTEM REPLICATION TO user; +~~~ + +## Synopsis + +
+{% include {{ page.version.version }}/ldr/create_logically_replicated_stmt.html %} +
+
+### Parameters
+
+Parameter | Description
+----------+------------
+`db_object_name` | The fully qualified name of the table on the source or destination cluster. Refer to [Examples](#examples).
+`logical_replication_resources_list` | A list of the fully qualified table names on the source or destination cluster to include in the LDR stream. Refer to the [LDR with multiple tables](#multiple-tables) example.
+`source_connection_string` | The connection string to the source cluster. Use an [external connection]({% link {{ page.version.version }}/create-external-connection.md %}) to store the source cluster's connection URI. To start LDR, you run `CREATE LOGICALLY REPLICATED` from the destination cluster.
+`logical_replication_create_table_options` | Options to modify the behavior of the LDR stream. For a list, refer to [Options](#options). **Note:** `bidirectional` or `unidirectional` is a required option.
+
+## Options
+
+Option | Description
+-------+------------
+`bidirectional` / `unidirectional` | (**Required**) Specifies whether the LDR stream will be unidirectional or bidirectional. With `bidirectional` specified, LDR will set up two LDR streams between the clusters.
+`label` | Tracks LDR metrics at the job level. Add a user-specified string with `label`. For more details, refer to [Metrics labels]({% link {{ page.version.version }}/logical-data-replication-monitoring.md %}#metrics-labels).
+`mode` | Determines how LDR replicates the data to the destination cluster. Possible values: `immediate`, `validated`. For more details, refer to [LDR modes](#ldr-modes).
+
+## LDR modes
+
+_Modes_ determine how LDR replicates the data to the destination cluster. There are two modes:
+
+- `immediate` (default): {% include {{ page.version.version }}/ldr/immediate-description.md %}
+- `validated`: {% include {{ page.version.version }}/ldr/validated-description.md %}
+
+## Examples
+
+`CREATE LOGICALLY REPLICATED` will automatically create the specified source tables on the destination cluster. For both unidirectional and bidirectional setups, you run the statement to start LDR from the destination cluster, which does not yet contain the tables.
+
+### Unidirectional
+
+From the destination cluster of the LDR stream, run:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE LOGICALLY REPLICATED TABLE {database.public.destination_table_name} FROM TABLE {database.public.source_table_name} ON 'external://source' WITH unidirectional, mode=validated;
+~~~
+
+Include the following:
+
+- Fully qualified destination table name.
+- Fully qualified source table name.
+- [External connection]({% link {{ page.version.version }}/create-external-connection.md %}) for the source cluster. For instructions on creating the external connection for LDR, refer to [Set Up Logical Data Replication]({% link {{ page.version.version }}/set-up-logical-data-replication.md %}#step-2-connect-from-the-destination-to-the-source).
+- `unidirectional` option.
+- Any other [options](#options).
+
+### Bidirectional
+
+Both clusters will act as a source and destination in bidirectional LDR setups. To start the LDR jobs, you must run this statement from the destination cluster that does not contain the tables:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE LOGICALLY REPLICATED TABLE {database.public.destination_table_name} FROM TABLE {database.public.source_table_name} ON 'external://source' WITH bidirectional ON 'external://destination', label=track_job;
+~~~
+
+Include the following:
+
+- Fully qualified destination table name.
+
+- Fully qualified source table name. +- [External connection]({% link {{ page.version.version }}/create-external-connection.md %}) for the source cluster. For instructions on creating the external connection for LDR, refer to [Set Up Logical Data Replication]({% link {{ page.version.version }}/set-up-logical-data-replication.md %}#step-2-connect-from-the-destination-to-the-source). +- `bidirectional` option defining the external connection for the destination cluster. +- Any other [options](#options). + +### Multiple tables + +To include multiple tables in an LDR stream, add the fully qualified table names in a list format. Ensure that the table name in the source table list and destination table list are in the same order: + +~~~ sql +CREATE LOGICALLY REPLICATED TABLE ({database.public.destination_table_name_1}, {database.public.destination_table_name_2}) FROM TABLE ({database.public.source_table_name_1}, {database.public.source_table_name_2}) ON 'external://source' WITH bidirectional ON 'external://destination', label=track_job; +~~~ + +## See more + +- [`SHOW LOGICAL REPLICATION JOBS`]({% link {{ page.version.version }}/show-logical-replication-jobs.md %}) \ No newline at end of file diff --git a/src/current/v25.1/manage-logical-data-replication.md b/src/current/v25.1/manage-logical-data-replication.md index eb43b0a1932..4256f9479eb 100644 --- a/src/current/v25.1/manage-logical-data-replication.md +++ b/src/current/v25.1/manage-logical-data-replication.md @@ -155,6 +155,10 @@ You have a bidirectional LDR setup with a stream between cluster A to cluster B, CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.table_name}; ~~~ + {{site.data.alerts.callout_info}} + {% include {{ page.version.version }}/ldr/use-create-logically-replicated.md %} + {{site.data.alerts.end}} + #### Coordinate schema changes for unidirectional LDR If you have a unidirectional LDR setup, you should cancel the running LDR stream and redirect all application traffic to the source cluster. @@ -174,6 +178,10 @@ If you have a unidirectional LDR setup, you should cancel the running LDR stream CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.table_name}; ~~~ + {{site.data.alerts.callout_info}} + {% include {{ page.version.version }}/ldr/use-create-logically-replicated.md %} + {{site.data.alerts.end}} + ## Jobs and LDR You can run changefeed and backup [jobs]({% link {{ page.version.version }}/show-jobs.md %}) on any cluster that is involved in an LDR job. Both source and destination clusters in LDR are active, which means they can both serve production reads and writes as well as run [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) and [changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}). diff --git a/src/current/v25.1/set-up-logical-data-replication.md b/src/current/v25.1/set-up-logical-data-replication.md index 538e37a04d8..191117386a0 100644 --- a/src/current/v25.1/set-up-logical-data-replication.md +++ b/src/current/v25.1/set-up-logical-data-replication.md @@ -10,20 +10,26 @@ toc: true Logical data replication is only supported in CockroachDB {{ site.data.products.core }} clusters. 
{{site.data.alerts.end}}
-In this tutorial, you will set up [**logical data replication (LDR)**]({% link {{ page.version.version }}/logical-data-replication-overview.md %}) streaming data from a source table to a destination table between two CockroachDB clusters. Both clusters are active and can serve traffic. You can apply the outlined steps to create _unidirectional_ LDR from a source table to a destination table (cluster A to cluster B) in one LDR job. Optionally, you can also create _bidirectional_ LDR from cluster B's table to cluster A's table by starting a second LDR job. In a bidirectional setup, each cluster operates as both a source and a destination in separate LDR jobs.
+In this tutorial, you will set up [**logical data replication (LDR)**]({% link {{ page.version.version }}/logical-data-replication-overview.md %}) streaming data from a source table to a destination table between two CockroachDB clusters. Both clusters are active and can serve traffic. You can apply the outlined steps to set up either:
-For more details on use cases, refer to the [Logical Data Replication Overview]({% link {{ page.version.version }}/logical-data-replication-overview.md %}).
+- _Unidirectional_ LDR from a source table to a destination table (cluster A to cluster B) in one LDR job.
+- _Bidirectional_ LDR for the same table from cluster A to cluster B and from cluster B to cluster A. In a bidirectional setup, each cluster operates as both a source and a destination in separate LDR jobs.
+
+{% include_cached new-in.html version="v25.1" %} Create the new table on the destination cluster automatically and conduct a fast, offline initial scan with the [`CREATE LOGICALLY REPLICATED`]({% link {{ page.version.version }}/create-logically-replicated.md %}) syntax. `CREATE LOGICALLY REPLICATED` accepts `unidirectional` or `bidirectional` as an option to create either setup automatically. [Step 3](#step-3-start-ldr) outlines when to use the `CREATE LOGICALLY REPLICATED` or the `CREATE LOGICAL REPLICATION STREAM` syntax to start LDR.
+
+In the following diagram, **LDR stream 1** creates a unidirectional LDR setup; introducing **LDR stream 2** extends the setup to bidirectional.

 Diagram showing bidirectional LDR from cluster A to B and back again from cluster B to A.

+For more details on use cases, refer to the [Logical Data Replication Overview]({% link {{ page.version.version }}/logical-data-replication-overview.md %}).
+
 ## Tutorial overview

-If you're setting up bidirectional LDR, both clusters will act as a source and a destination in the respective LDR jobs. The high-level steps are:
+If you're setting up bidirectional LDR, both clusters will act as a source and a destination in the respective LDR jobs. The high-level steps for setting up bidirectional or unidirectional LDR are:

-1. Prepare the tables on each cluster with the prerequisites for starting LDR.
-1. Set up an [external connection]({% link {{ page.version.version }}/create-external-connection.md %}) on cluster B (which will be the destination cluster initially) to hold the connection URI for cluster A.
-1. Start LDR from cluster B with your required modes.
-1. (Optional) Run Steps 1 to 3 again with cluster B as the source and A as the destination, which starts LDR streaming from cluster B to A.
+1. Prepare the clusters with the required settings, users, and privileges according to the LDR setup.
+1. 
Set up [external connection(s)]({% link {{ page.version.version }}/create-external-connection.md %}) on the destination to hold the connection URI for the source. +1. Start LDR from the destination cluster with your required modes and syntax. 1. Check the status of the LDR job in the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}). ## Before you begin @@ -36,12 +42,6 @@ You'll need: - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Step 2. Connect from the destination to the source](#step-2-connect-from-the-destination-to-the-source). - LDR replicates at the table level, which means clusters can contain other tables that are not part of the LDR job. If both clusters are empty, create the tables that you need to replicate with **identical** schema definitions (excluding indexes) on both clusters. If one cluster already has an existing table that you'll replicate, ensure the other cluster's table definition matches. For more details on the supported schemas, refer to [Schema Validation](#schema-validation). -{% comment %}To add later, after further dev work{{site.data.alerts.callout_info}} -If you need to run LDR through a load balancer, use the load balancer IP address as the SQL advertise address on each cluster. It is important to note that using a load balancer with LDR can impair performance. -{{site.data.alerts.end}}{% endcomment %} - -To create bidirectional LDR, you can complete the [optional step](#step-4-optional-set-up-bidirectional-ldr) to start the second LDR job that sends writes from the table on cluster B to the table on cluster A. - ### Schema validation Before you start LDR, you must ensure that all column names, types, constraints, and unique indexes on the destination table match with the source table. @@ -59,6 +59,12 @@ When you run LDR in [`immediate` mode](#modes), you cannot replicate a table wit ## Step 1. Prepare the cluster +In this step you'll prepare the required settings and privileges for LDR. + +{{site.data.alerts.callout_info}} +If you are setting up bidirectional LDR, you **must** run this step on both clusters. +{{site.data.alerts.end}} + 1. Enter the SQL shell for **both** clusters in separate terminal windows: {% include_cached copy-clipboard.html %} @@ -85,87 +91,148 @@ When you run LDR in [`immediate` mode](#modes), you cannot replicate a table wit GRANT SYSTEM REPLICATION TO {your username}; ~~~ - If you need to change the password later, refer to [`ALTER USER`]({% link {{ page.version.version }}/alter-user.md %}). + To change the password later, refer to [`ALTER USER`]({% link {{ page.version.version }}/alter-user.md %}). ## Step 2. Connect from the destination to the source -In this step, you'll set up an [external connection]({% link {{ page.version.version }}/create-external-connection.md %}) from the destination cluster to the source cluster. Depending on how you manage certificates, you must ensure that all nodes between the clusters have access to the certificate of the other cluster. +In this step, you'll set up [external connection(s)]({% link {{ page.version.version }}/create-external-connection.md %}) to store the connection string for one or both clusters. Depending on how you manage certificates, you must ensure that all nodes between the clusters have access to the certificate of the other cluster. You can use the `cockroach encode-uri` command to generate a connection string containing a cluster's certificate. -1. 
On the **source** cluster in a new terminal window, generate a connection string, by passing the replication user, node IP, and port, along with the directory to the source cluster's CA certificate: +1. On the **source** cluster in a new terminal window, generate a connection string, by passing the user, node IP, and port, along with the directory to the source cluster's CA certificate: {% include_cached copy-clipboard.html %} ~~~ shell - cockroach encode-uri {replication user}:{password}@{node IP}:26257 --ca-cert {path to CA certificate} --inline + cockroach encode-uri {user}:{password}@{node IP}:26257 --ca-cert {path to CA certificate} --inline ~~~ The connection string output contains the source cluster's certificate: ~~~ - {replication user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A + {user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A ~~~ 1. In the SQL shell on the **destination** cluster, create an [external connection]({% link {{ page.version.version }}/create-external-connection.md %}) using the source cluster's connection string. Prefix the `postgresql://` scheme to the connection string and replace `{source}` with your external connection name: {% include_cached copy-clipboard.html %} ~~~ sql - CREATE EXTERNAL CONNECTION {source} AS 'postgresql://{replication user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A'; + CREATE EXTERNAL CONNECTION {source} AS 'postgresql://{user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A'; + ~~~ + +### (Optional) Bidirectional: Create the connection for LDR stream 2 + +(Optional) For bidirectional LDR, you'll need to repeat creating the certificate output and the external connection for the opposite cluster. Both clusters will act as the source and destination. At this point, you've created an external connection for LDR stream 1, so cluster **A** (source) to **B** (destination). Now, create the same for LDR stream 2 cluster **B** (source) to cluster **A** (destination). + +1. On cluster **B**, run: + + {% include_cached copy-clipboard.html %} + ~~~ shell + cockroach encode-uri {user}:{password}@{node IP}:26257 --ca-cert {path to CA certificate} --inline + ~~~ + + The connection string output contains the source cluster's certificate: + + ~~~ + {user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A + ~~~ + +1. On cluster **A**, create an [external connection]({% link {{ page.version.version }}/create-external-connection.md %}) using cluster B's connection string (source in LDR stream 2). 
Prefix the `postgresql://` scheme to the connection string and replace `{source}` with your external connection name: + + {% include_cached copy-clipboard.html %} + ~~~ sql + CREATE EXTERNAL CONNECTION {source} AS 'postgresql://{user}:{password}@{node IP}:26257?options=-ccluster%3Dsystem&sslinline=true&sslmode=verify-full&sslrootcert=-----BEGIN+CERTIFICATE-----{encoded certificate}-----END+CERTIFICATE-----%0A'; ~~~ ## Step 3. Start LDR -In this step, you'll start the LDR job from the destination cluster. You can replicate one or multiple tables in a single LDR job. You cannot replicate system tables in LDR, which means that you must manually apply configurations and cluster settings, such as [row-level TTL]({% link {{ page.version.version }}/row-level-ttl.md %}) and user permissions on the destination cluster. +In this step, you'll start the LDR stream(s) from the destination cluster. You can replicate one or multiple tables in a single LDR job. You cannot replicate system tables in LDR, which means that you must manually apply configurations and cluster settings, such as [row-level TTL]({% link {{ page.version.version }}/row-level-ttl.md %}) and user permissions on the destination cluster. _Modes_ determine how LDR replicates the data to the destination cluster. There are two modes: - `immediate` (default): {% include {{ page.version.version }}/ldr/immediate-description.md %} - `validated`: {% include {{ page.version.version }}/ldr/validated-description.md %} -1. From the **destination** cluster, start LDR. Use the fully qualified table name for the source and destination tables: +### Syntax - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.source_table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.destination_table_name}; - ~~~ +There are two different SQL statements for starting an LDR stream, depending on your requirements: + +- {% include_cached new-in.html version="v25.1" %} [`CREATE LOGICALLY REPLICATED`]({% link {{ page.version.version }}/create-logically-replicated.md %}): Creates the new table on the destination cluster automatically, and conducts a fast, offline initial scan. `CREATE LOGICALLY REPLICATED` accepts `unidirectional` or `bidirectional` as an option in order to create either one of the setups automatically. **The table cannot contain a user-defined type or foregin key dependencies.** Follow [these steps](#create-logically-replicated) for setup instructions. +- [`CREATE LOGICAL REPLICATION STREAM`]({% link {{ page.version.version }}/create-logical-replication-stream.md %}): Starts the LDR stream after you've created the matching table on the destination cluster. **If the table contains user-defined types or foreign key dependencies, you must use this syntax.** Allows for manual creation of unidirectional or bidirectional LDR. Follow [these steps](#create-logical-replication-stream) for setup instructions. + +Also, for either syntax, note: - You can change the default `mode` using the `WITH mode = validated` syntax. +- It is necessary to use the fully qualified table name for the source table and destination table in the statement. 
+- {% include {{ page.version.version }}/ldr/multiple-tables.md %} - If you would like to add multiple tables to the LDR job, ensure that the table name in the source table list and destination table list are in the same order: +#### `CREATE LOGICALLY REPLICATED` - {% include_cached copy-clipboard.html %} +Use `CREATE LOGICALLY REPLICATED` to create either a unidirectional or bidirectional LDR stream automatically: + +- Unidirectional LDR: run the following from the **destination** cluster: + + {% include_cached copy-clipboard.html %} ~~~ sql - CREATE LOGICAL REPLICATION STREAM FROM TABLES ({database.public.source_table_name_1},{database.public.source_table_name_2},...) ON 'external://{source_external_connection}' INTO TABLES ({database.public.destination_table_name_1},{database.public.destination_table_name_2},...); + CREATE LOGICALLY REPLICATED TABLE {database.public.destination_table_name} FROM TABLE {database.public.source_table_name} ON 'external://source' WITH unidirectional; ~~~ - {{site.data.alerts.callout_info}} - {% include {{ page.version.version }}/ldr/multiple-tables.md %} - {{site.data.alerts.end}} +- Bidirectional LDR: This statement will first create the LDR jobs for the first stream. You must run it from the **destination** cluster that does not contain the table. Once the offline initial scan completes, the reverse stream will be initialized so that the original destination cluster can send changes to the original source. - Once LDR has started, an LDR job will start on the destination cluster. You can [pause]({% link {{ page.version.version }}/pause-job.md %}), [resume]({% link {{ page.version.version }}/resume-job.md %}), or [cancel]({% link {{ page.version.version }}/cancel-job.md %}) the LDR job with the job ID. Use [`SHOW LOGICAL REPLICATION JOBS`]({% link {{ page.version.version }}/show-logical-replication-jobs.md %}) to display the LDR job IDs: + Run the following from the **destination** cluster (i.e, the cluster that does not have the table currently): {% include_cached copy-clipboard.html %} ~~~ sql - SHOW LOGICAL REPLICATION JOBS; - ~~~ - ~~~ - job_id | status | targets | replicated_time - ----------------------+---------+---------------------------+------------------ - 1012877040439033857 | running | {database.public.table} | NULL - (1 row) + CREATE LOGICALLY REPLICATED TABLE {database.public.destination_table_name} FROM TABLE {database.public.source_table_name} ON 'external://source' WITH bidirectional ON 'external://destination'; ~~~ - If you're setting up bidirectional LDR, both clusters will have a history retention job and an LDR job running. +You can include multiple tables in the LDR stream for unidirectional or bidirectional setups. Ensure that the table name in the source table list and destination table list are in the same order: + +{% include_cached copy-clipboard.html %} +~~~ sql +CREATE LOGICALLY REPLICATED TABLE ({database.public.destination_table_name_1}, {database.public.destination_table_name_2}) FROM TABLE ({database.public.source_table_name_1}, {database.public.source_table_name_2}) ON 'external://source' WITH bidirectional ON 'external://destination', label=track_job; +~~~ + +With the LDR streams created, move to [Step 4](#step-4-manage-and-monitor-the-ldr-jobs) to manage and monitor the jobs. + +#### `CREATE LOGICAL REPLICATION STREAM` + +Ensure you've created the table on the destination cluster with a matching schema definition to the source cluster table. From the **destination** cluster, start LDR. 
Use the fully qualified table name for the source and destination tables: + +{% include_cached copy-clipboard.html %} +~~~ sql +CREATE LOGICAL REPLICATION STREAM FROM TABLE {database.public.source_table_name} ON 'external://{source_external_connection}' INTO TABLE {database.public.destination_table_name}; +~~~ + +You can change the default `mode` using the `WITH mode = validated` syntax. + +If you would like to add multiple tables to the LDR job, ensure that the table name in the source table list and destination table list are in the same order: + +{% include_cached copy-clipboard.html %} +~~~ sql +CREATE LOGICAL REPLICATION STREAM FROM TABLES ({database.public.source_table_name_1},{database.public.source_table_name_2},...) ON 'external://{source_external_connection}' INTO TABLES ({database.public.destination_table_name_1},{database.public.destination_table_name_2},...); +~~~ + +(**Optional**) At this point, you've set up one LDR stream from cluster A as the source to cluster B as the destination. To set up LDR streaming in the opposite direction using `CREATE LOGICAL REPLICATION STREAM`, run the statement again but cluster B will now be the source, and cluster A will be the destination. + +## Step 4. Manage and monitor the LDR jobs -1. Move on to [Step 4](#step-4-optional-set-up-bidirectional-ldr) to set up a second LDR job. Or, once you have set up your required LDR jobs, refer to [Step 5](#step-5-monitor-the-ldr-jobs) to monitor the jobs in the DB Console. +Once LDR has started, an LDR job will run on the destination cluster. You can [pause]({% link {{ page.version.version }}/pause-job.md %}), [resume]({% link {{ page.version.version }}/resume-job.md %}), or [cancel]({% link {{ page.version.version }}/cancel-job.md %}) the LDR job with the job ID. Use [`SHOW LOGICAL REPLICATION JOBS`]({% link {{ page.version.version }}/show-logical-replication-jobs.md %}) to display the LDR job IDs: -## Step 4. (Optional) Set up bidirectional LDR +{% include_cached copy-clipboard.html %} +~~~ sql +SHOW LOGICAL REPLICATION JOBS; +~~~ +~~~ + job_id | status | targets | replicated_time +----------------------+---------+---------------------------+------------------ +1012877040439033857 | running | {database.public.table} | NULL +(1 row) +~~~ -At this point, you've set up one LDR job from cluster A as the source to cluster B as the destination. To set up LDR streaming in the opposite direction, complete [Step 1](#step-1-prepare-the-cluster), [Step 2](#step-2-connect-from-the-destination-to-the-source), and [Step 3](#step-3-start-ldr) again. Cluster B will now be the source, and cluster A will be the destination. +If you're setting up bidirectional LDR, both clusters will have a history retention job and an LDR job running. -## Step 5. Monitor the LDR jobs +### DB Console -In this step, you'll access the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) and monitor the status and metrics for the created LDR jobs. Depending on which cluster you would like to view, follow the instructions for either the source or destination. +You'll access the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) and monitor the status and metrics for the created LDR jobs. Depending on which cluster you would like to view, follow the instructions for either the source or destination. 
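+As a minimal sketch, assuming the example job ID returned by `SHOW LOGICAL REPLICATION JOBS` above (substitute the ID from your own cluster), you can pause and resume the job from the SQL shell:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Temporarily pause the LDR job by its job ID.
+PAUSE JOB 1012877040439033857;
+
+-- Resume the paused LDR job.
+RESUME JOB 1012877040439033857;
+~~~
+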
{{site.data.alerts.callout_success}} You can use the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}), the SQL shell, [Metrics Export]({% link {{ page.version.version }}/datadog.md %}#enable-metrics-collection) with Prometheus and Datadog, and [labels with some LDR metrics]({% link {{ page.version.version }}/child-metrics.md %}) to monitor the job.