Merge branch 'latest' into ggodeke-patch-1
atovpeko authored Dec 13, 2024
2 parents 6024674 + dddf40a commit 2a571b5
Showing 9 changed files with 55 additions and 82 deletions.
17 changes: 12 additions & 5 deletions _partials/_cloud-intro.md
@@ -13,19 +13,26 @@ use as is, or extend with capabilities specific to your business needs. The avai
the pgai extension.
- **[PostgreSQL][create-service]**: the trusted industry-standard RDBMS. Ideal for applications requiring strong data
consistency, complex relationships, and advanced querying capabilities. Get ACID compliance, extensive SQL support,
JSON handling, and extensibility through custom functions, data types, and extensions.
JSON handling, and extensibility through custom functions, data types, and extensions. $CLOUD_LONG continuously
monitors your services and prevents common PostgreSQL out-of-memory crashes.

All $SERVICE_SHORTs include all the cloud tooling you'd expect for production use:
[automatic backups][automatic-backups], [high availability][high-availability], [read replicas][readreplica],
[data forking][operations-forking], [connection pooling][connection-pooling], [tiered storage][data-tiering],
[usage-based storage][how-plans-work], and much more.
All $SERVICE_LONGs include the tooling you expect for production and developer environments: [live migration][live-migration],
[automatic backups and PITR][automatic-backups], [high availability][high-availability], [read replicas][readreplica], [data forking][operations-forking], [connection pooling][connection-pooling], [tiered storage][data-tiering],
[usage-based storage][how-plans-work], secure in-Console [SQL editing][in-console-editors], service [metrics][metrics]
and [insights][insights], [streamlined maintenance][maintain-upgrade], and much more.

[what-is-time-series]: https://www.timescale.com/blog/what-is-a-time-series-database/#what-is-a-time-series-database
[create-service]: /getting-started/:currentVersion:/services/
[live-migration]: /migrate/:currentVersion:/live-migration/
[automatic-backups]: /use-timescale/:currentVersion:/backup-restore/
[high-availability]: /use-timescale/:currentVersion:/ha-replicas/high-availability/
[readreplica]: /use-timescale/:currentVersion:/ha-replicas/read-scaling/
[operations-forking]: /use-timescale/:currentVersion:/services/service-management/#fork-a-service
[connection-pooling]: /use-timescale/:currentVersion:/services/connection-pooling
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
[how-plans-work]: /about/:currentVersion:/pricing-and-account-management/#how-plans-work
[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/
[metrics]: /use-timescale/:currentVersion:/metrics-logging/service-metrics/
[insights]: /use-timescale/:currentVersion:/metrics-logging/insights/
[maintain-upgrade]: /use-timescale/:currentVersion:/upgrades/

37 changes: 33 additions & 4 deletions _partials/_migrate_live_migrate_faq_all.md
@@ -13,9 +13,7 @@ a table, index, view, or materialized view. When you see this error:
### FATAL: remaining connection slots are reserved for non-replication superuser connections

This may happen when the number of connections exhausts `max_connections` defined in your target Timescale Cloud
service. By default, live-migration needs ~10 connections on the source and ~20 connections on the target.
For information on tuning the number of connections during migration, see [Tune the target Timescale Cloud service][tune-connections].

service. By default, live-migration needs ~6 connections on the source and ~12 connections on the target.
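
Before retrying, you can check how much connection headroom the service has. A minimal query, using standard PostgreSQL catalog views:

```sql
-- Compare current backend connections with the configured ceiling.
SELECT count(*) AS current_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
```

If the headroom is smaller than what live-migration needs, terminate idle sessions or raise `max_connections` on the target before retrying.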

### Migration seems to be stuck with “x GB copied to Target DB (Source DB is y GB)”

@@ -38,6 +36,24 @@ To resolve this issue:
1. When `migrate` has finished, manually refresh the materialized views you excluded.


### Restart migration from scratch after a non-resumable failure

If the migration halts due to a failure, such as a misconfiguration of the source or target database, you may need to restart the migration from scratch. In such cases, you can reuse the original Timescale target instance created for the migration by passing the `--drop-if-exists` flag to the `migrate` command.

This flag drops the target objects created by the previous migration run, so the new run can proceed cleanly.

Here’s an example command to restart the migration:

```shell
docker run --rm -it --name live-migration-migrate \
-e PGCOPYDB_SOURCE_PGURI=$SOURCE \
-e PGCOPYDB_TARGET_PGURI=$TARGET \
--pid=host \
-v ~/live-migration:/opt/timescale/ts_cdc \
timescale/live-migration:latest migrate --drop-if-exists
```

This approach provides a clean slate for the migration process while reusing the existing target instance.

### Inactive or lagging replication slots

If you encounter an “Inactive or lagging replication slots” warning on your cloud provider console after using live-migration, it might be due to lingering replication slots created by the live-migration tool on your source database.
@@ -80,6 +96,20 @@ Live-migration does not migrate table privileges. After completing Live-migratio
2. If the query is an UPDATE or DELETE, make sure the columns used in the WHERE clause have the necessary indexes.
3. If the query is an UPDATE or DELETE on tables that have been converted to hypertables, make sure the REPLICA IDENTITY (defaults to the primary key) on the source is compatible with the target primary key. If not, create a UNIQUE index on the source database that includes the hypertable partition column and set it as the REPLICA IDENTITY. Also create the same UNIQUE index on the target.
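
As a sketch of step 3, assuming a hypothetical table `metrics` partitioned on `time` with a `device_id` column:

```sql
-- On the source: a unique index that includes the partition column,
-- then use it as the table's replica identity.
CREATE UNIQUE INDEX metrics_device_time_idx ON metrics (device_id, time);
ALTER TABLE metrics REPLICA IDENTITY USING INDEX metrics_device_time_idx;

-- On the target: create the same unique index on the hypertable.
CREATE UNIQUE INDEX metrics_device_time_idx ON metrics (device_id, time);
```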

### ERROR: out of memory (or) Failed on request of size xxx in memory context "yyy" on Timescale instance

This error occurs when the Out of Memory (OOM) guard is triggered due to memory allocations exceeding safe limits. It typically happens when multiple concurrent connections to the TimescaleDB instance are performing memory-intensive operations. For example, during live migrations, this error can occur when large indexes are being created simultaneously.

The live-migration tool includes a retry mechanism to handle such errors. However, frequent OOM crashes may significantly delay the migration process.

Use one of the following to avoid OOM errors:

1. **Upgrade to an instance with more memory**: to mitigate memory constraints, consider using a TimescaleDB instance with higher specifications, such as 8 CPUs and 32 GB RAM or more. Higher memory capacity can handle larger workloads and reduces the likelihood of OOM errors.

1. **Reduce concurrency**: if upgrading your instance is not feasible, reduce the concurrency of the index migration process using the `--index-jobs=<value>` flag in the migration command. By default, the value of `--index-jobs` matches the `max_parallel_workers` GUC. Lowering this value reduces memory usage during migration but may increase the total migration time.

By taking these steps, you can prevent OOM errors and ensure a smoother migration experience with TimescaleDB.
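
As an illustration of the second option, the restart command shown earlier on this page can be extended with `--index-jobs`. The value `4` is an arbitrary example; the command is built into a variable and echoed so you can review it before running:

```shell
# Build the migrate command with reduced index-build concurrency.
# INDEX_JOBS=4 is an arbitrary example value; lower it further if OOM persists.
INDEX_JOBS=4
CMD="docker run --rm -it --name live-migration-migrate \
  -e PGCOPYDB_SOURCE_PGURI=\$SOURCE \
  -e PGCOPYDB_TARGET_PGURI=\$TARGET \
  --pid=host \
  -v ~/live-migration:/opt/timescale/ts_cdc \
  timescale/live-migration:latest migrate --index-jobs=$INDEX_JOBS"

# Review the command before executing it.
echo "$CMD"
```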

### ERROR: invalid snapshot identifier: "xxxxxx" (or) SSL SYSCALL error: EOF detected on RDS

This rare phenomenon may happen when:
@@ -103,5 +133,4 @@ This rare phenomenon may happen when:
Upgrade to a better instance type until the migration completes.


[tune-connections]: /migrate/:currentVersion:/live-migration/#tune-the-target-timescale-cloud-service
[align-versions]: /migrate/:currentVersion:/live-migration/#align-the-version-of-timescaledb-on-the-source-and-target
4 changes: 4 additions & 0 deletions _partials/_migrate_live_setup_connection_strings.md
@@ -7,3 +7,7 @@ export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=requir
```
You find the connection information for your Timescale Cloud service in the configuration file you
downloaded when you created the service.

<Highlight type="important">
Avoid connection strings that route through a connection pooler such as PgBouncer. The live-migration tool requires a direct connection to the database to work correctly.
</Highlight>
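
A quick way to catch the common case: PgBouncer listens on port 6432 by default, so you can sanity-check your connection string before running the tool. The string below is a hypothetical example, and the check is only a heuristic, since a pooler can listen on any port:

```shell
# Hypothetical connection string that routes through a pooler on
# PgBouncer's default port (6432). Replace with your real $TARGET.
TARGET="postgres://tsdbadmin:secret@db.example.com:6432/tsdb?sslmode=require"

# Heuristic check: warn if the string uses PgBouncer's default port.
case "$TARGET" in
  *:6432/*) echo "warning: port 6432 is PgBouncer's default; this may be a pooled connection" ;;
  *)        echo "ok: no obvious pooler port detected" ;;
esac
```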
11 changes: 1 addition & 10 deletions _partials/_migrate_live_setup_environment.md
@@ -1,7 +1,6 @@
import SetupConnectionStrings from "versionContent/_partials/_migrate_live_setup_connection_strings.mdx";
import MigrationSetupDBConnectionTimescaleDB from "versionContent/_partials/_migrate_set_up_align_db_extensions_timescaledb.mdx";
import TuneSourceDatabase from "versionContent/_partials/_migrate_live_tune_source_database.mdx";
import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_live_setup_environment_target_config.mdx";


## Set your connection strings
@@ -22,15 +21,7 @@ import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_liv

</Procedure>

## Tune the target Timescale Cloud service

<Procedure>

<MigrateSetupTargetEnvironment />

</Procedure>

[modify-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#modify-basic-parameters
[mst-portal]: https://portal.managed.timescale.com/login
[tsc-portal]: https://console.cloud.timescale.com/
[configure-instance-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#configure-database-parameters
[configure-instance-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#configure-database-parameters
12 changes: 1 addition & 11 deletions _partials/_migrate_live_setup_environment_awsrds.md
@@ -1,7 +1,6 @@
import SetupConnectionStrings from "versionContent/_partials/_migrate_live_setup_connection_strings.mdx";
import MigrationSetupDBConnectionPostgresql from "versionContent/_partials/_migrate_set_up_align_db_extensions_postgres_based.mdx";
import TuneSourceDatabaseAWSRDS from "versionContent/_partials/_migrate_live_tune_source_database_awsrds.mdx";
import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_live_setup_environment_target_config.mdx";

## Set your connection strings

@@ -23,14 +22,5 @@ import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_liv
</Procedure>


## Tune the target Timescale Cloud service

<Procedure>

<MigrateSetupTargetEnvironment />

</Procedure>


[modify-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#modify-basic-parameters
[mst-portal]: https://portal.managed.timescale.com/login
[mst-portal]: https://portal.managed.timescale.com/login
11 changes: 1 addition & 10 deletions _partials/_migrate_live_setup_environment_mst.md
@@ -1,7 +1,6 @@
import SetupConnectionStrings from "versionContent/_partials/_migrate_live_setup_connection_strings.mdx";
import MigrationSetupDBConnectionTimescaleDB from "versionContent/_partials/_migrate_set_up_align_db_extensions_timescaledb.mdx";
import TuneSourceDatabaseMST from "versionContent/_partials/_migrate_live_tune_source_database_mst.mdx";
import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_live_setup_environment_target_config.mdx";

## Set your connection strings

@@ -22,14 +21,6 @@ import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_liv

</Procedure>

## Tune the target Timescale Cloud service

<Procedure>

<MigrateSetupTargetEnvironment />

</Procedure>


[modify-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#modify-basic-parameters
[mst-portal]: https://portal.managed.timescale.com/login
[mst-portal]: https://portal.managed.timescale.com/login
10 changes: 1 addition & 9 deletions _partials/_migrate_live_setup_environment_postgres.md
@@ -1,7 +1,6 @@
import SetupConnectionStrings from "versionContent/_partials/_migrate_live_setup_connection_strings.mdx";
import MigrationSetupDBConnectionPostgresql from "versionContent/_partials/_migrate_set_up_align_db_extensions_postgres_based.mdx";
import TuneSourceDatabasePostgres from "versionContent/_partials/_migrate_live_tune_source_database_postgres.mdx";
import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_live_setup_environment_target_config.mdx";


## Set your connection strings
@@ -23,15 +22,8 @@ import MigrateSetupTargetEnvironment from "versionContent/_partials/_migrate_liv

</Procedure>

## Tune the target Timescale Cloud service

<Procedure>

<MigrateSetupTargetEnvironment />

</Procedure>

[modify-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#modify-basic-parameters
[mst-portal]: https://portal.managed.timescale.com/login
[tsc-portal]: https://console.cloud.timescale.com/
[configure-instance-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#configure-database-parameters
[configure-instance-parameters]: /use-timescale/:currentVersion:/configuration/customize-configuration/#configure-database-parameters
31 changes: 0 additions & 31 deletions _partials/_migrate_live_setup_environment_target_config.md

This file was deleted.

4 changes: 2 additions & 2 deletions self-hosted/configuration/timescaledb-config.md
@@ -40,9 +40,9 @@ in this way.

### `timescaledb.enable_merge_on_cagg_refresh (bool)`

Set to `TRUE` to dramatically decrease the amount of data written on a continuous aggregate
Set to `ON` to dramatically decrease the amount of data written on a continuous aggregate
in the presence of a small number of changes, reduce the I/O cost of refreshing a
[continuous aggregate][continuous-aggregates], and generate fewer Write-Ahead Logs (WAL)
[continuous aggregate][continuous-aggregates], and generate fewer write-ahead log (WAL) records. This works only for continuous aggregates that do not have compression enabled.
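
For example, using standard PostgreSQL GUC mechanics (the database name `tsdb` is a placeholder):

```sql
-- Enable merge-on-refresh for the current session only.
SET timescaledb.enable_merge_on_cagg_refresh = 'on';

-- Or persist it for a database; takes effect on new connections.
ALTER DATABASE tsdb SET timescaledb.enable_merge_on_cagg_refresh = 'on';
```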

## Distributed hypertables

