feat: updates on review.
billy-the-fish committed Dec 20, 2024
1 parent 445b8b5 commit 415e0c1
Showing 2 changed files with 10 additions and 9 deletions.
15 changes: 8 additions & 7 deletions _partials/_migrate_prerequisites.md
@@ -1,22 +1,23 @@

 Best practice is to use an [Ubuntu EC2 instance][create-ec2-instance] hosted in the same region as your
-Timescale Cloud service as a migration machine. That is, the machine you run the commands on to move your
+Timescale Cloud service to move data. That is, the machine you run the commands on to move your
 data from your source database to your target Timescale Cloud service.
 
-Before you migrate your data:
+Before you move your data:
 
 - [Create a target Timescale Cloud service][created-a-database-service-in-timescale].
 
-  Each Timescale Cloud service [has a single database] that supports the
-  [most popular extensions][all available extensions]. Timescale Cloud services do not support [tablespaces],
-  and [there is no superuser associated with a Timescale service][no-superuser-for-timescale-instance].
-  We recommend creating a Timescale Cloud instance with at least 8 CPUs for a smoother migration experience. A higher-spec instance can significantly reduce the overall migration window.
+  Each Timescale Cloud service has a single database that supports the
+  [most popular extensions][all available extensions]. $SERVICE_LONGs do not support tablespaces,
+  and there is no superuser associated with a $SERVICE_SHORT.
+  Best practice is to create a $SERVICE_LONG with at least 8 CPUs for a smoother experience. A higher-spec instance
+  can significantly reduce the overall migration window.
 
 - To ensure that maintenance does not run while migration is in progress, best practice is to [adjust the maintenance window][adjust-maintenance-window].
 
 [created-a-database-service-in-timescale]: /getting-started/:currentVersion:/services/
 [has a single database]: /migrate/:currentVersion:/troubleshooting/#only-one-database-per-instance
-[all available extensions]: /migrate/:currentVersion:/troubleshooting/#extension-availability
+[all-available-extensions]: /use-timescale/:currentVersion:/extensions
 [tablespaces]: /migrate/:currentVersion:/troubleshooting/#tablespaces
 [no-superuser-for-timescale-instance]: /migrate/:currentVersion:/troubleshooting/#superuser-privileges
 [pg_hbaconf]: https://www.timescale.com/blog/5-common-connection-errors-in-postgresql-and-how-to-solve-them/#no-pg_hbaconf-entry-for-host
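Note: the prerequisites above assume you run all migration commands from the EC2 migration machine. As a minimal, illustrative sketch only, you would typically export the source and target connection strings there before running any tooling. The variable names, hosts, and credentials below are placeholders, not part of this change:

```bash
# Illustrative placeholders only.
# SOURCE points at the database you are migrating from,
# TARGET at the Timescale Cloud service created above.
export SOURCE="postgres://postgres:<password>@<source-host>:5432/postgres"
export TARGET="postgres://tsdbadmin:<password>@<host>.tsdb.cloud.timescale.com:<port>/tsdb"
```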
4 changes: 2 additions & 2 deletions migrate/livesync.md
@@ -12,7 +12,7 @@ import SetupConnectionStrings from "versionContent/_partials/_migrate_live_setup

 # Livesync from Postgres to Timescale Cloud
 
-You use the Livesync Docker image to synchronize all data in the database, or specific tables from a PostgreSQL database
+You use the Livesync Docker image to synchronize all data, or specific tables, from a PostgreSQL database
 instance to a $SERVICE_LONG in real-time. You run Livesync continuously, turning PostgreSQL into a primary database
 with a $SERVICE_LONG as a logical replica. This enables you to leverage $CLOUD_LONG’s real-time analytics capabilities
 on your replica data.
@@ -29,7 +29,7 @@ integrate.

 You use Livesync to:
 * Copy existing data from a Postgres instance to a $SERVICE_LONG:
-  - Copy data at up to 150 GB/hr. You need at least a 4 CPU/16GB source database, a 4 CPU/16GB target $SERVICE_SHORT.
+  - Copy data at up to 150 GB/hr. You need at least a 4 CPU/16GB source database, and a 4 CPU/16GB target $SERVICE_SHORT.
   - Copy the publication tables in parallel. However, large tables are still copied using a single connection.
     Parallel copying is in the backlog.
   - Forget foreign key relationships. Livesync disables foreign key validation during the sync. For example, if a
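Note: the hunk above refers to copying the "publication tables" from the source. As a minimal sketch of what that implies on a self-hosted Postgres source, using standard PostgreSQL commands rather than the documented Livesync procedure, and with a hypothetical publication name and table list:

```bash
# Assumes $SOURCE holds the source connection string (see the note above).
# Logical decoding must be enabled on the source; changing wal_level
# requires a restart of the source Postgres instance.
psql "$SOURCE" -c "ALTER SYSTEM SET wal_level = 'logical';"

# Group the tables you want Livesync to copy into a publication.
# 'livesync_pub', 'metrics', and 'sensors' are placeholder names.
psql "$SOURCE" -c "CREATE PUBLICATION livesync_pub FOR TABLE metrics, sensors;"
```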
