From c37140c21b73106777a0dfe924d658da56a4f6d0 Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Tue, 26 Nov 2024 10:09:12 +0200 Subject: [PATCH 01/18] I/O boost docs (#3610) I/O boost docs added --------- Co-authored-by: Iain Cox --- about/pricing-and-account-management.md | 8 ++--- use-timescale/page-index/page-index.js | 4 +++ use-timescale/services/i-o-boost.md | 40 +++++++++++++++++++++++++ 3 files changed, 48 insertions(+), 4 deletions(-) create mode 100644 use-timescale/services/i-o-boost.md diff --git a/about/pricing-and-account-management.md b/about/pricing-and-account-management.md index 83a4bfd9b2..8b146794e1 100644 --- a/about/pricing-and-account-management.md +++ b/about/pricing-and-account-management.md @@ -147,13 +147,13 @@ The features included in each [plan][pricing-plans] are: | Time-series | ✓ | ✓ | ✓ | | Vector search | ✓ | ✓ | ✓ | | AI workflows (coming soon) | ✓ | ✓ | ✓ | -| Cloud SQL editor | 3 seats | 10 seats | 20 seats | +| Cloud SQL editor | 3 seats | 10 seats | 20 seats | | Charts | ✓ | ✓ | ✓ | | Dashboards | 2 | Unlimited | Unlimited | | **Storage and performance** | | | | | IOPS (autoscales) | 3,000 - 5,000 | 5,000 - 8,000 | 5,000 - 8,000 | | Bandwidth (autoscales) | 125 - 250 Mbps | 250 - 500 Mbps | Up to 500 mbps | -| IO Boost | | Add-on:
16K IOPS, 1000 Mbps BW | Add-on:
16K IOPS, 1000 Mbps BW | +| I/O Boost | | Add-on:
16K IOPS, 1000 Mbps BW | Add-on:
16K IOPS, 1000 Mbps BW | | **Availability and monitoring** | | | | | High-availability replicas
(Automated multi-AZ failover) | ✓ | ✓ | ✓ | | Read replicas | | ✓ | ✓ | @@ -163,7 +163,7 @@ The features included in each [plan][pricing-plans] are: | **Security and compliance** | | | | | End-to-end encryption | ✓ | ✓ | ✓ | | Private Networking (VPC) | 1 multi-attach VPC | Unlimited multi-attach VPCs | Unlimited multi-attach VPCs | -| IP address allow list | 1 list with up to 10 IP addresses | Up to 10 lists with up to 10 IP addresses each | Up to 10 lists with up to 10 IP addresses each | +| IP address allow list | 1 list with up to 10 IP addresses | Up to 10 lists with up to 10 IP addresses each | Up to 10 lists with up to 10 IP addresses each | | Multi-factor authentication | ✓ | ✓ | ✓ | | Federated authentication (SAML) | | | ✓ | | SOC 2 Type 2 report | | ✓ | ✓ | @@ -176,7 +176,7 @@ The features included in each [plan][pricing-plans] are: | Email support | ✓ | ✓ | ✓ | | Production support | Add-on | Add-on | ✓ | | Named account manager | | | ✓ | -|JOIN services (Jumpstart Onboarding and INtegration)| | Available at minimum spend | ✓ | +| JOIN services (Jumpstart Onboarding and INtegration) | | Available at minimum spend | ✓ | If you want to estimate your costs ahead of the billing cycle, you can use the [pricing calculator](http://timescale.com/pricing/calculator). diff --git a/use-timescale/page-index/page-index.js b/use-timescale/page-index/page-index.js index e94eca2469..3018729912 100644 --- a/use-timescale/page-index/page-index.js +++ b/use-timescale/page-index/page-index.js @@ -44,6 +44,10 @@ module.exports = [ excerpt: "Using a connection pool with your Timescale services", }, + { + title: "I/O boost", + href: "i-o-boost", + }, { title: "Troubleshooting Timescale services", href: "troubleshooting", diff --git a/use-timescale/services/i-o-boost.md b/use-timescale/services/i-o-boost.md new file mode 100644 index 0000000000..a23259c30e --- /dev/null +++ b/use-timescale/services/i-o-boost.md @@ -0,0 +1,40 @@ +--- +title: I/O boost +excerpt: Increase I/O and throughput to avoid performance bottlenecks and enhance scalability +products: [cloud] +keywords: [io, io boost, performance] +--- + +# I/O boost + +You use I/O boost to increase I/O and throughput of a service's [high-performance storage][data-tiering]. You can enable it for the most demanding applications, while keeping costs under control. + +Enabling I/O boost increases I/O to 16,000 IOPS and throughput to 1,000 MBps. The boost also applies to any [high-availability][ha-replicas] replicas you might have running for a service, although for an additional fee. + +This feature is available under the Scale and Enterprise [pricing tiers][pricing-tiers]. + +## Enable I/O boost + +You enable I/O boost from the `Operations` tab in [$CONSOLE][console]. + + + +1. **In $CONSOLE, choose the $SERVICE_SHORT you want to enable I/O boost for**. + +1. **Open the `Operations` tab and toggle the I/O boost switch. Then click `Apply`**. + + ![Timescale I/O Boost](https://assets.timescale.com/docs/images/timescale-i-o-boost.png) + + + +I/O boost is now enabled for this service and its replicas. You can enable or disable it once every 24 hours. 
+ +[console]: https://console.cloud.timescale.com/dashboard/services +[ha-replicas]: /use-timescale/:currentVersion:/ha-replicas/high-availability/ +[read-replicas]: /use-timescale/:currentVersion:/ha-replicas/read-scaling/ +[pricing-tiers]: /about/:currentVersion:/pricing-and-account-management/ +[data-tiering]: /use-timescale/:currentVersion:/data-tiering/ + + + + From 5fd20c24abd934967231bd5d0a431f00cc2c798a Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Tue, 26 Nov 2024 10:51:25 +0200 Subject: [PATCH 02/18] Removed backfill historical data on compressed chunks --------- Co-authored-by: Iain Cox --- .../compression/backfill-historical-data.md | 130 ------------------ .../compression/compression-policy.md | 27 +++- .../compression/decompress-chunks.md | 5 +- use-timescale/page-index/page-index.js | 5 - 4 files changed, 23 insertions(+), 144 deletions(-) delete mode 100644 use-timescale/compression/backfill-historical-data.md diff --git a/use-timescale/compression/backfill-historical-data.md b/use-timescale/compression/backfill-historical-data.md deleted file mode 100644 index 141062dcc1..0000000000 --- a/use-timescale/compression/backfill-historical-data.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: Backfill historical data on compressed chunks -excerpt: How to backfill a batch of historical data on a compressed hypertable -products: [cloud, mst, self_hosted] -keywords: [compression, backfilling, hypertables] ---- - -# Backfill historical data on compressed chunks - -When you backfill data, you are inserting data into a chunk that has already -been compressed. As of version 2.10, this has been greatly simplified by running -insert commands on compressed chunks directly. When doing bulk backfilling, -it is recommended to pause the compression job until finished so the policy -doesn't compress the chunk you are working on. - -This section contains procedures for bulk backfilling, taking -you through these steps: - -1. Temporarily turning off any existing compression policy. This stops the - policy from trying to compress chunks that you are currently working on. -1. Performing the insertion or backfill. -1. Re-enabling the compression policy. This re-compresses the chunks you worked - on. - -Caveats: - * Backfilling compressed chunks with unique constraints is only supported in version 2.11 and above. - * In order to backfill the data and enforce unique constraints, it is possible that we end up decompressing some data. If we are backfilling larger amounts of data, it might be more performant to manully decompress the chunk that you are working on (as shown in the section [Backfilling manually][backfilling-manually] below). - - -## Backfill with a supplied function - -To make backfilling easier, you can use the -[backfilling functions][timescaledb-extras-backfill] in the -[TimescaleDB extras][timescaledb-extras] GitHub repository. In particular, the -`decompress_backfill` procedure automates many of the backfilling steps for you. - - -This section shows you how to bulk backfill data using a temporary table. -Temporary tables only exist for the duration of the database session, and then -are automatically dropped. If you backfill regularly, you might prefer to use a -regular table instead, so that multiple writers can insert into the table at the -same time. In this case, after you are done backfilling the data, clean up by -truncating your table in preparation for the next backfill. - - - - -### Backfilling with a supplied function - -1. 
At the psql prompt, create a temporary table with the same schema as the - hypertable you want to backfill into. In this example, the table is named - `example`, and the temporary table is named `cpu_temp`: - - ```sql - CREATE TEMPORARY TABLE cpu_temp AS SELECT * FROM example WITH NO DATA; - ``` - -1. Insert your data into the temporary table. -1. Call the `decompress_backfill` procedure. This procedure halts the - compression policy, identifies the compressed chunks that the backfilled - data corresponds to, decompresses the chunks, inserts data from the backfill - table into the main hypertable, and then re-enables the compression policy: - - ```sql - CALL decompress_backfill( - staging_table=>'cpu_temp', destination_hypertable=>'example' - ); - ``` - - - -## Backfill manually - -If you don't want to use a supplied function, you can perform the steps -manually. - - - -### Backfilling manually - -1. At the psql prompt, find the `job_id` of the policy: - - ```sql - SELECT j.job_id - FROM timescaledb_information.jobs j - WHERE j.proc_name = 'policy_compression' - AND j.hypertable_name = ; - ``` - -1. Pause compression, to prevent the policy from trying to compress chunks that - you are currently working on: - - ``` sql - SELECT alter_job(, scheduled => false); - ``` - -1. Decompress the chunks that you want to modify. - - ``` sql - SELECT decompress_chunk('_timescaledb_internal._hyper_2_2_chunk'); - ``` - - Repeat for each chunk. Alternatively, you can decompress a set of chunks - based on a time range using `show_chunks`: - - ``` sql - SELECT decompress_chunk(i) - FROM show_chunks('conditions', newer_than, older_than) i; - ``` - -1. When you have decompressed all the chunks you want to modify, perform the - `INSERT` or `UPDATE` commands to backfill the data. -1. Restart the compression policy job. The next time the job runs, it - recompresses any chunks that were decompressed. - - ``` sql - SELECT alter_job(, scheduled => true); - ``` - - Alternatively, to recompress chunks immediately, use the `run_job` command: - - ``` sql - CALL run_job(); - ``` - - - -[timescaledb-extras]: https://github.com/timescale/timescaledb-extras -[timescaledb-extras-backfill]: https://github.com/timescale/timescaledb-extras/blob/master/backfill.sql -[backfilling-manually]: /use-timescale/:currentVersion:/compression/backfill-historical-data/#backfilling-manually diff --git a/use-timescale/compression/compression-policy.md b/use-timescale/compression/compression-policy.md index 470b86b51e..36fd17ffca 100644 --- a/use-timescale/compression/compression-policy.md +++ b/use-timescale/compression/compression-policy.md @@ -14,7 +14,7 @@ you want to segment by. ## Enable a compression policy -This procedure uses an example table, called `example`, and segments it by the +This page uses an example table, called `example`, and segments it by the `device_id` column. Every chunk that is more than seven days old is then marked to be automatically compressed. The source data is organized like this: @@ -59,13 +59,30 @@ SELECT * FROM timescaledb_information.jobs For more information, see the API reference for [`timescaledb_information.jobs`][timescaledb_information-jobs]. 
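+
+For example, assuming the `example` hypertable used throughout this page, a quick way to inspect its
+compression policy job is a query like the following (a sketch; adjust the hypertable name to match your schema):
+
+```sql
+-- Find the compression policy job for the 'example' hypertable
+SELECT job_id, schedule_interval, config
+FROM timescaledb_information.jobs
+WHERE proc_name = 'policy_compression'
+  AND hypertable_name = 'example';
+```
+
+The returned `job_id` is the value you pass to `alter_job` to pause or resume the policy, as shown in the next section.
+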
+## Pause compression policy + +To disable a compression policy temporarily, find the corresponding job ID and then call `alter_job` to pause it: + +```sql +SELECT * FROM timescaledb_information.jobs where proc_name = 'policy_compression' AND relname = 'example' +``` + +```sql +SELECT alter_job(, scheduled => false); +``` + +To enable it again: + +``` sql +SELECT alter_job(, scheduled => true); +``` + ## Remove compression policy -To remove a compression policy, use `remove_compression_policy`. For example, to -remove a compression policy for a hypertable named `cpu`: +To remove a compression policy, use `remove_compression_policy`: ```sql -SELECT remove_compression_policy('cpu'); +SELECT remove_compression_policy('example'); ``` For more information, see the API reference for @@ -77,7 +94,7 @@ You can disable compression entirely on individual hypertables. This command works only if you don't currently have any compressed chunks: ```sql -ALTER TABLE SET (timescaledb.compress=false); +ALTER TABLE SET (timescaledb.compress=false); ``` If your hypertable contains compressed chunks, you need to diff --git a/use-timescale/compression/decompress-chunks.md b/use-timescale/compression/decompress-chunks.md index e7f89bab4c..dc09671ea6 100644 --- a/use-timescale/compression/decompress-chunks.md +++ b/use-timescale/compression/decompress-chunks.md @@ -22,8 +22,7 @@ for actions such as bulk inserts. This section describes commands to use for decompressing chunks. You can filter -by time to select the chunks you want to decompress. To learn how to backfill -data, see the [backfilling section][backfill]. +by time to select the chunks you want to decompress. ## Decompress chunks manually @@ -70,7 +69,5 @@ SELECT tableoid::regclass FROM metrics _timescaledb_internal._hyper_72_37_chunk ``` -[backfill]: /use-timescale/:currentVersion:/compression/backfill-historical-data/ [api-reference-decompress]: /api/:currentVersion:/compression/decompress_chunk/ [api-reference-alter-job]: /api/:currentVersion:/actions/alter_job/ - diff --git a/use-timescale/page-index/page-index.js b/use-timescale/page-index/page-index.js index 3018729912..4bfc0dde11 100644 --- a/use-timescale/page-index/page-index.js +++ b/use-timescale/page-index/page-index.js @@ -286,11 +286,6 @@ module.exports = [ href: "decompress-chunks", excerpt: "Decompress data chunks", }, - { - title: "Backfill historical data", - href: "backfill-historical-data", - excerpt: "Backfill historical data to compressed chunks", - }, { title: "Modify a schema", href: "modify-a-schema", From 84ef930775861e5c14ac4d89390e1e78831b4de5 Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:46:20 +0200 Subject: [PATCH 03/18] Removed the sample dataset (#3596) Co-authored-by: Iain Cox --- self-hosted/migration/migrate-influxdb.md | 73 ++++++++--------------- 1 file changed, 24 insertions(+), 49 deletions(-) diff --git a/self-hosted/migration/migrate-influxdb.md b/self-hosted/migration/migrate-influxdb.md index dc015750da..3193f7fd49 100644 --- a/self-hosted/migration/migrate-influxdb.md +++ b/self-hosted/migration/migrate-influxdb.md @@ -14,8 +14,10 @@ migrations. It pipes exported data directly to Timescale, and manages schema discovery, validation, and creation. + Outflux works with earlier versions of InfluxDB. It does not work with InfluxDB version 2 and later. + ## Prerequisites @@ -24,19 +26,15 @@ Before you start, make sure you have: * A running instance of InfluxDB and a means to connect to it. 
* An [installation of Timescale][install] and a means to connect to it. -* Data in your InfluxDB instance. If you need to import some sample data for a - test, see the instructions for [importing sample data][import-data]. +* Data in your InfluxDB instance. ## Procedures To import data from Outflux, follow these procedures: -1. [Install Outflux](#install-outflux) -1. [Import sample data](#import-sample-data-into-influxdb) to InfluxDB if you - don't have existing data. -1. [Discover, validate, and transfer - schema](#discover-validate-and-transfer-schema) to Timescale (optional) -1. [Migrate data to Timescale](#migrate-data-to-timescaledb) +1. [Install Outflux][install-outflux] +1. [Discover, validate, and transfer schema][discover-validate-and-transfer-schema] to Timescale (optional) +1. [Migrate data to Timescale][migrate-data-to-timescale] ## Install Outflux @@ -52,48 +50,17 @@ and MacOS. 1. Extract it to a preferred location. -If you prefer to build Outflux from source, see the [Outflux -README](https://github.com/timescale/outflux/blob/master/README.md) for + +If you prefer to build Outflux from source, see the [Outflux README][outflux-readme] for instructions. + -To get help with Outflux, you can run `./outflux --help` from the directory +To get help with Outflux, run `./outflux --help` from the directory where you installed it. -## Import sample data into InfluxDB - -If you don't have an existing InfluxDB database, or if you want to test on a -sample instance, you can try Outflux by importing sample data. We provide an -example file with data written in the Influx Line Protocol. - - - -### Importing sample data into InfluxDB - -1. Download the sample data: - - [Outflux taxi data](https://timescaledata.blob.core.windows.net/datasets/outflux_taxi.txt) - -1. Use the [Influx CLI client][influx-cmd] to load the data into InfluxDB. - - ```bash - influx -import -path=outflux_taxi.txt -database=outflux_tutorial - ``` - - This command imports the data into a new database named `outflux_tutorial`. - - -The sample data has no timestamp, so the time of the Influx server is used at -data insert. All data points belong to one measurement, `taxi`. The points are -tagged with `location`, `rating`, and `vendor`. Four fields are recorded: -`fare`, `mta_tax`, `tip`, and `tolls`. The Influx client assumes the server is -available at `http://localhost:8086`. - - - - ## Discover, validate, and transfer schema Outflux can: @@ -105,10 +72,12 @@ Outflux can: exists + Outflux's `migrate` command does schema transfer and data migration in one step. -For more information, see the [migrate](#migrate-data-to-timescaledb) section. +For more information, see the [migrate][migrate-data-to-timescale] section. Use this section if you want to validate and transfer your schema independently of data migration. + To transfer your schema from InfluxDB to Timescale, run `outflux @@ -124,9 +93,11 @@ To transfer all measurements from the database, leave out the measurement name argument. + This example uses the `postgres` user and database to connect to the Timescale database. For other connection options and configuration, see the [Outflux -Github repo](https://github.com/timescale/outflux#connection). +Github repo][outflux-gitbuh]. + ### Schema transfer options @@ -166,8 +137,8 @@ outflux migrate \ ``` The schema strategy and connection options are the same as for -`schema-transfer`. For more information, see the -[`schema-transfer`](#discover-validate-and-transfer-schema) section. +`schema-transfer`. 
For more information, see +[Discover, validate, and transfer schema][discover-validate-and-transfer-schema]. In addition, `outflux migrate` also takes the following flags: @@ -187,9 +158,13 @@ migrate`][outflux-migrate]. Alternatively, see the command line help: outflux migrate --help ``` -[import-data]: #import-sample-data-into-influxdb [influx-cmd]: https://docs.influxdata.com/influxdb/v1.7/tools/shell/ -[install]: /getting-started/latest/ +[install]: /getting-started/:currentVersion:/ [outflux-migrate]: https://github.com/timescale/outflux#migrate [outflux-releases]: https://github.com/timescale/outflux/releases [outflux]: https://github.com/timescale/outflux +[install-outflux]: /self-hosted/:currentVersion:/migration/migrate-influxdb/#install-outflux +[discover-validate-and-transfer-schema]: /self-hosted/:currentVersion:/migration/migrate-influxdb/#discover-validate-and-transfer-schema +[migrate-data-to-timescale]: /self-hosted/:currentVersion:/migration/migrate-influxdb/#migrate-data-to-timescale +[outflux-gitbuh]: https://github.com/timescale/outflux#connection +[outflux-readme]: https://github.com/timescale/outflux/blob/master/README.md \ No newline at end of file From 6faee36b32113aa5041487bdca06213891c56df6 Mon Sep 17 00:00:00 2001 From: Arunprasad Rajkumar Date: Tue, 26 Nov 2024 15:53:52 +0530 Subject: [PATCH 04/18] Fix typo in wal_sender_timeout recommendation (#3621) Signed-off-by: Arunprasad Rajkumar --- _partials/_migrate_live_migrate_faq_all.md | 8 ++++---- _partials/_migrate_live_tune_source_database.md | 10 +++++----- _partials/_migrate_live_tune_source_database_awsrds.md | 2 +- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/_partials/_migrate_live_migrate_faq_all.md b/_partials/_migrate_live_migrate_faq_all.md index 60afd8dab5..9de2535d6f 100644 --- a/_partials/_migrate_live_migrate_faq_all.md +++ b/_partials/_migrate_live_migrate_faq_all.md @@ -90,10 +90,10 @@ This rare phenomenon may happen when: following GUCs to the recommended values on the source RDS instance. ```shell - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_count=60' - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_idle=10' - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_interval=10' - psql -X -d $SOURCE -c 'alter system set wal_sender_timeout=30m' + psql -X -d $SOURCE -c "alter system set tcp_keepalives_count=60" + psql -X -d $SOURCE -c "alter system set tcp_keepalives_idle=10" + psql -X -d $SOURCE -c "alter system set tcp_keepalives_interval=10" + psql -X -d $SOURCE -c "alter system set wal_sender_timeout='30min'" ``` For more information, see [https://github.com/dimitri/pgcopydb/issues/773#issuecomment-2139093365](https://github.com/dimitri/pgcopydb/issues/773#issuecomment-2139093365) diff --git a/_partials/_migrate_live_tune_source_database.md b/_partials/_migrate_live_tune_source_database.md index d5642cddf0..556af7def8 100644 --- a/_partials/_migrate_live_tune_source_database.md +++ b/_partials/_migrate_live_tune_source_database.md @@ -18,10 +18,10 @@ a managed service, follow the instructions in the `From MST` tab on this page. 1. 
**Tune system messaging** ```shell - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_count=60' - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_idle=10' - psql -X -d $SOURCE -c 'alter system set tcp_keepalives_interval=10' - psql -X -d $SOURCE -c 'alter system set wal_sender_timeout=30m' + psql -X -d $SOURCE -c "alter system set tcp_keepalives_count=60" + psql -X -d $SOURCE -c "alter system set tcp_keepalives_idle=10" + psql -X -d $SOURCE -c "alter system set tcp_keepalives_interval=10" + psql -X -d $SOURCE -c "alter system set wal_sender_timeout='30min'" ``` 1. **Restart the source database** @@ -33,4 +33,4 @@ a managed service, follow the instructions in the `From MST` tab on this page. -[install-wal2json]: https://github.com/eulerto/wal2json \ No newline at end of file +[install-wal2json]: https://github.com/eulerto/wal2json diff --git a/_partials/_migrate_live_tune_source_database_awsrds.md b/_partials/_migrate_live_tune_source_database_awsrds.md index 7dded7988d..f60bf27c1d 100644 --- a/_partials/_migrate_live_tune_source_database_awsrds.md +++ b/_partials/_migrate_live_tune_source_database_awsrds.md @@ -25,7 +25,7 @@ Updating parameters on a PostgreSQL instance will cause an outage. Choose a time - `tcp_keepalives_count` set to `60`: the number of messages that can be lost before the client is considered dead. - `tcp_keepalives_idle` set to `10`: the amount of time with no network activity before the IS sends a TCP keepalive message to the client. - `tcp_keepalives_interval` set to `10`: the amount of time before a unacknowledged TCP keepalive message is restransmitted. - - `wal_sender_timeout` set to `30m`: the maximum time to wait for WAL replication. + - `wal_sender_timeout` set to `30min`: the maximum time to wait for WAL replication. 1. In RDS, navigate back to your [databases][databases], select the RDS instance to migrate and click `Modify`. From 155fa9c52fa7dcc7f6f2b4de5adc497891c39362 Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Wed, 27 Nov 2024 14:15:56 +0200 Subject: [PATCH 05/18] Maintenance ui update (#3620) --- use-timescale/upgrades.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/use-timescale/upgrades.md b/use-timescale/upgrades.md index bff4a8b494..f7691ed2af 100644 --- a/use-timescale/upgrades.md +++ b/use-timescale/upgrades.md @@ -93,20 +93,20 @@ system during the upgrade. 1. [Log in to your Timescale account][cloud-login]. Click the name of the service that you want to manage the maintenance window for. -2. In the `Operations` tab, navigate to the `Maintenance` section, and - click `Change maintenance window`. -3. In the `Maintenance` dialog, select the day of the week, the time, and the +2. In the `Operations` tab, navigate to the `Environment` > `Maintenance` and click `Change maintenance window`. +3. Select the day of the week, the time, and the timezone that you want the maintenance window to start. Maintenance windows can run for up to four hours. -4. Check `Apply new maintenance window to all services` if you want to use the - same maintenance window settings for all of your Timescale services. -5. Click `Apply Changes`. - + Timescale change maintenance window +4. Check `Apply new maintenance window to all services` if you want to use the + same maintenance window settings for all of your Timescale services. +5. Click `Apply`. 
+ ## Critical updates From 20262e0b5087f640e936ad4e63e2cfaa136a3f15 Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Wed, 27 Nov 2024 16:20:19 +0000 Subject: [PATCH 06/18] chore: Add HIPAA compliance to the pricing page. (#3623) * chore: Add HIPAA compliance to the pricing page. --- about/pricing-and-account-management.md | 4 +++- use-timescale/security/overview.md | 8 ++++++++ 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/about/pricing-and-account-management.md b/about/pricing-and-account-management.md index 8b146794e1..040cc067a2 100644 --- a/about/pricing-and-account-management.md +++ b/about/pricing-and-account-management.md @@ -163,6 +163,7 @@ The features included in each [plan][pricing-plans] are: | **Security and compliance** | | | | | End-to-end encryption | ✓ | ✓ | ✓ | | Private Networking (VPC) | 1 multi-attach VPC | Unlimited multi-attach VPCs | Unlimited multi-attach VPCs | +| [HIPAA compliance][hipaa-compliance] | | | ✓ | | IP address allow list | 1 list with up to 10 IP addresses | Up to 10 lists with up to 10 IP addresses each | Up to 10 lists with up to 10 IP addresses each | | Multi-factor authentication | ✓ | ✓ | ✓ | | Federated authentication (SAML) | | | ✓ | @@ -237,4 +238,5 @@ alt="Adding a payment method in Timescale"/> [pricing-plans]: https://www.timescale.com/pricing [plan-features]: /about/:currentVersion:/pricing-and-account-management/#features-included-in-each-plan [production-support]: https://www.timescale.com/support -[get-in-touch]: https://www.timescale.com/contact \ No newline at end of file +[get-in-touch]: https://www.timescale.com/contact +[hipaa-compliance]: https://www.hhs.gov/hipaa/for-professionals/index.html diff --git a/use-timescale/security/overview.md b/use-timescale/security/overview.md index 4bde359880..e96b152739 100644 --- a/use-timescale/security/overview.md +++ b/use-timescale/security/overview.md @@ -83,6 +83,13 @@ Timescale operators never access customer data, unless explicitly requested by the customer to troubleshoot a technical issue. The Timescale operations team has mandatory recurring training regarding the applicable policies. +## HIPAA compliance + +Timescale Cloud's [Enterprise plan][pricing-plan-features] is now Health Insurance Portability and Accountability Act +(HIPAA) compliant. This allows organizations to securely manage and analyze sensitive healthcare data, ensuring they +meet regulatory requirements while building compliant applications. + + [timescale-privacy-policy]: https://www.timescale.com/legal/privacy [tsc-tos]: https://www.timescale.com/legal/timescale-cloud-terms-of-service [tsc-data-processor-addendum]: https://www.timescale.com/legal/timescale-cloud-data-processing-addendum @@ -92,3 +99,4 @@ has mandatory recurring training regarding the applicable policies. [vpc-peering]: /use-timescale/:currentVersion:/security/vpc [security-at-timescale]: https://www.timescale.com/security [ip-allowlist]: /use-timescale/:currentVersion:/security/ip-allow-list/ +[pricing-plan-features]: /about/:currentVersion:/pricing-and-account-management/#features-included-in-each-plan From 0afb88f321410dc80ad0bb567fef4db602a6c675 Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Thu, 28 Nov 2024 08:58:47 +0000 Subject: [PATCH 07/18] chore: add first cookbook recipe. (#3550) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * chore: add first cookbook recipe. 
Co-authored-by: atovpeko <114177030+atovpeko@users.noreply.github.com> Co-authored-by: Jônatas Davi Paganini --- _partials/_cookbook-hypertables.md | 67 +++++++++++++++ _partials/_cookbook-iot.md | 131 +++++++++++++++++++++++++++++ tutorials/cookbook.md | 33 ++++++++ tutorials/page-index/page-index.js | 5 ++ 4 files changed, 236 insertions(+) create mode 100644 _partials/_cookbook-hypertables.md create mode 100644 _partials/_cookbook-iot.md create mode 100644 tutorials/cookbook.md diff --git a/_partials/_cookbook-hypertables.md b/_partials/_cookbook-hypertables.md new file mode 100644 index 0000000000..011e42f51f --- /dev/null +++ b/_partials/_cookbook-hypertables.md @@ -0,0 +1,67 @@ + +## Hypertable recipes + +This section contains recipes about hypertables. + +### Remove duplicates from an existing hypertable + +Looking to remove duplicates from an existing hypertable? One method is to run a `PARTITION BY` query to get +`ROW_NUMBER()` and then the `ctid` of rows where `row_number>1`. You then delete these rows. However, +you need to check `tableoid` and `ctid`. This is because `ctid` is not unique and might be duplicated in +different chunks. The following code example took 17 hours to process a table with 40 million rows: + +```sql +CREATE OR REPLACE FUNCTION deduplicate_chunks(ht_name TEXT, partition_columns TEXT, bot_id INT DEFAULT NULL) + RETURNS TABLE + ( + chunk_schema name, + chunk_name name, + deleted_count INT + ) +AS +$$ +DECLARE + chunk RECORD; + where_clause TEXT := ''; + deleted_count INT; +BEGIN + IF bot_id IS NOT NULL THEN + where_clause := FORMAT('WHERE bot_id = %s', bot_id); + END IF; + + FOR chunk IN + SELECT c.chunk_schema, c.chunk_name + FROM timescaledb_information.chunks c + WHERE c.hypertable_name = ht_name + LOOP + EXECUTE FORMAT(' + WITH cte AS ( + SELECT ctid, + ROW_NUMBER() OVER (PARTITION BY %s ORDER BY %s ASC) AS row_num, + * + FROM %I.%I + %s + ) + DELETE FROM %I.%I + WHERE ctid IN ( + SELECT ctid + FROM cte + WHERE row_num > 1 + ) + RETURNING 1; + ', partition_columns, partition_columns, chunk.chunk_schema, chunk.chunk_name, where_clause, chunk.chunk_schema, + chunk.chunk_name) + INTO deleted_count; + + RETURN QUERY SELECT chunk.chunk_schema, chunk.chunk_name, COALESCE(deleted_count, 0); + END LOOP; +END +$$ LANGUAGE plpgsql; + + +SELECT * +FROM deduplicate_chunks('nudge_events', 'bot_id, session_id, nudge_id, time', 2540); +``` + +Shoutout to **Mathias Ose** and **Christopher Piggott** for this recipe. + diff --git a/_partials/_cookbook-iot.md b/_partials/_cookbook-iot.md new file mode 100644 index 0000000000..5394d98f09 --- /dev/null +++ b/_partials/_cookbook-iot.md @@ -0,0 +1,131 @@ +## IoT recipes + +This section contains recipes for IoT issues: + +### Work with columnar IoT data + +Narrow and medium width tables are a great way to store IoT data. A lot of reasons are outlined in +[Designing Your Database Schema: Wide vs. Narrow Postgres Tables][blog-wide-vs-narrow]. + +One of the key advantages of narrow tables is that the schema does not have to change when you add new +sensors. Another big advantage is that each sensor can sample at different rates and times. This helps +support things like hysteresis, where new values are written infrequently unless the value changes by a +certain amount. + +#### Narrow table format example + +Working with narrow table data structures presents a few challenges. 
In the IoT world one concern is that +many data analysis approaches - including machine learning as well as more traditional data analysis - +require that your data is resampled and synchronized to a common time basis. Fortunately, TimescaleDB provides +you with [hyperfunctions][hyperfunctions] and other tools to help you work with this data. + +An example of a narrow table format is: + +| ts | sensor_id | value | +|-------------------------|-----------|-------| +| 2024-10-31 11:17:30.000 | 1007 | 23.45 | + +Typically you would couple this with a sensor table: + +| sensor_id | sensor_name | units | +|-----------|--------------|--------------------------| +| 1007 | temperature | degreesC | +| 1012 | heat_mode | on/off | +| 1013 | cooling_mode | on/off | +| 1041 | occupancy | number of people in room | + +A medium table retains the generic structure but adds columns of various types so that you can +use the same table to store float, int, bool, or even JSON (jsonb) data: + +| ts | sensor_id | d | i | b | t | j | +|-------------------------|-----------|-------|------|------|------|------| +| 2024-10-31 11:17:30.000 | 1007 | 23.45 | null | null | null | null | +| 2024-10-31 11:17:47.000 | 1012 | null | null | TRUE | null | null | +| 2024-10-31 11:18:01.000 | 1041 | null | 4 | null | null | null | + +To remove all-null entries, use an optional constraint such as: + +```sql + CONSTRAINT at_least_one_not_null + CHECK ((d IS NOT NULL) OR (i IS NOT NULL) OR (b IS NOT NULL) OR (j IS NOT NULL) OR (t IS NOT NULL)) +``` + +#### Get the last value of every sensor + +There are several ways to get the latest value of every sensor. The following examples use the +structure defined in [Narrow table format example][setup-a-narrow-table-format] as a reference: + +- [SELECT DISTINCT ON][select-distinct-on] +- [JOIN LATERAL][join-lateral] + +##### SELECT DISTINCT ON + +If you have a list of sensors, the easy way to get the latest value of every sensor is to use +`SELECT DISTINCT ON`: + +```sql +WITH latest_data AS ( + SELECT DISTINCT ON (sensor_id) ts, sensor_id, d + FROM iot_data + WHERE d is not null + AND ts > CURRENT_TIMESTAMP - INTERVAL '1 week' -- important + ORDER BY sensor_id, ts DESC +) +SELECT + sensor_id, sensors.name, ts, d +FROM latest_data +LEFT OUTER JOIN sensors ON latest_data.sensor_id = sensors.id +WHERE latest_data.d is not null +ORDER BY sensor_id, ts; -- Optional, for displaying results ordered by sensor_id +``` + +The common table expression (CTE) used above is not strictly necessary. However, it is an elegant way to join +to the sensor list to get a sensor name in the output. If this is not something you care about, +you can leave it out: + +```sql +SELECT DISTINCT ON (sensor_id) ts, sensor_id, d + FROM iot_data + WHERE d is not null + AND ts > CURRENT_TIMESTAMP - INTERVAL '1 week' -- important + ORDER BY sensor_id, ts DESC +``` + +It is important to take care when down-selecting this data. In the previous examples, +the time that the query would scan back was limited. However, if there any sensors that have either +not reported in a long time or in the worst case, never reported, this query devolves to a full table scan. +In a database with 1000+ sensors and 41 million rows, an unconstrained query takes over an hour. + +#### JOIN LATERAL + +An alternative to [SELECT DISTINCT ON][select-distinct-on] is to use a `JOIN LATERAL`. 
By selecting your entire +sensor list from the sensors table rather than pulling the IDs out using `SELECT DISTINCT`, `JOIN LATERAL` can offer +some improvements in performance: + +```sql +SELECT sensor_list.id, latest_data.ts, latest_data.d +FROM sensors sensor_list + -- Add a WHERE clause here to downselect the sensor list, if you wish +LEFT JOIN LATERAL ( + SELECT ts, d + FROM iot_data raw_data + WHERE sensor_id = sensor_list.id + ORDER BY ts DESC + LIMIT 1 +) latest_data ON true +WHERE latest_data.d is not null -- only pulling out float values ("d" column) in this example + AND latest_data.ts > CURRENT_TIMESTAMP - interval '1 week' -- important +ORDER BY sensor_list.id, latest_data.ts; +``` + +Limiting the time range is important, especially if you have a lot of data. Best practice is to use these +kinds of queries for dashboards and quick status checks. To query over a much larger time range, encapsulate +the previous example into a materialized query that refreshes infrequently, perhaps once a day. + +Shoutout to **Christopher Piggott** for this recipe. + +[blog-wide-vs-narrow]: https://www.timescale.com/learn/designing-your-database-schema-wide-vs-narrow-postgres-tables +[setup-a-narrow-table-format]: /tutorials/:currentVersion:/cookbook/#narrow-table-format-example +[select-distinct-on]: /tutorials/:currentVersion:/cookbook/#select-distinct-on +[join-lateral]: /tutorials/:currentVersion:/cookbook/#join-lateral +[hyperfunctions]: /use/:currentVersion:/hyperfunctions/ diff --git a/tutorials/cookbook.md b/tutorials/cookbook.md new file mode 100644 index 0000000000..d14aa04e2c --- /dev/null +++ b/tutorials/cookbook.md @@ -0,0 +1,33 @@ +--- +title: Timescale cookbook +excerpt: Code examples from the community that help you with loads of common conundrums. +product: [cloud, mst, self_hosted] +--- + +import Hypertables from "versionContent/_partials/_cookbook-hypertables.mdx"; +import IOT from "versionContent/_partials/_cookbook-iot.mdx"; + + +# Timescale community cookbook + +This page contains suggestions from the [TimescaleDB Community](https://timescaledb.slack.com/) about how to resolve +common issues. Use these code examples as guidance to work with your own data. 
+ + +## Prerequisites + +To follow the examples in this page, you need a: + +- [Target Timescale Cloud service][create-a-service] +- [Connection to your service][connect-to-service] + + + + + + + + + +[create-a-service]: /getting-started/:currentVersion:/services/#create-a-timescale-cloud-service +[connect-to-service]: /getting-started/:currentVersion:/run-queries-from-console/ diff --git a/tutorials/page-index/page-index.js b/tutorials/page-index/page-index.js index 1e9510f4c7..8b2281dca2 100644 --- a/tutorials/page-index/page-index.js +++ b/tutorials/page-index/page-index.js @@ -154,6 +154,11 @@ module.exports = [ href: "simulate-iot-sensor-data", excerpt: "Simulate and query an IoT sensor dataset", }, + { + title: "Timescale community cookbook", + href: "cookbook", + excerpt: "Code examples from the community that help you with loads of common conundrums.", + }, ], }, ]; From 99a545f1a06f2f92cc3a8c26de5d21f31b4c2111 Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Mon, 2 Dec 2024 11:03:55 +0200 Subject: [PATCH 08/18] fixed links (#3635) --- _partials/_cookbook-iot.md | 2 +- mlc_config.json | 3 +++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/_partials/_cookbook-iot.md b/_partials/_cookbook-iot.md index 5394d98f09..6320e85345 100644 --- a/_partials/_cookbook-iot.md +++ b/_partials/_cookbook-iot.md @@ -128,4 +128,4 @@ Shoutout to **Christopher Piggott** for this recipe. [setup-a-narrow-table-format]: /tutorials/:currentVersion:/cookbook/#narrow-table-format-example [select-distinct-on]: /tutorials/:currentVersion:/cookbook/#select-distinct-on [join-lateral]: /tutorials/:currentVersion:/cookbook/#join-lateral -[hyperfunctions]: /use/:currentVersion:/hyperfunctions/ +[hyperfunctions]: /use-timescale/:currentVersion:/hyperfunctions/ diff --git a/mlc_config.json b/mlc_config.json index de2bb9a782..8456b74d58 100644 --- a/mlc_config.json +++ b/mlc_config.json @@ -63,6 +63,9 @@ }, { "pattern": "https://www.cse.ust.hk/~raywong/comp5331/References/EfficientComputationOfFrequentAndTop-kElementsInDataStreams.pdf" + }, + { + "pattern": "^https://www.hhs.gov/hipaa/for-professionals/index.html" } ] } From 28ee7e92ee3d1ec7718d51c37f529c39873347d7 Mon Sep 17 00:00:00 2001 From: Aldo Fuster Turpin Date: Mon, 2 Dec 2024 20:31:51 +0100 Subject: [PATCH 09/18] add ap-south-1 with "coming soon" to list of regions (#3627) Co-authored-by: Iain Cox --- use-timescale/regions.md | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/use-timescale/regions.md b/use-timescale/regions.md index 051ffad539..0d72c252f7 100644 --- a/use-timescale/regions.md +++ b/use-timescale/regions.md @@ -14,19 +14,20 @@ We tune your database for performance and handle scalability, high availability, Timescale Cloud services run in the following Amazon Web Services (AWS) regions: -| Region | Zone | Location | -|------------------|---------------|----------------| -| `ap-southeast-1` | Asia Pacific | Singapore | -| `ap-southeast-2` | Asia Pacific | Sydney | -| `ap-northeast-1` | Asia Pacific | Tokyo | -| `ca-central-1` | Canada | Central | -| `eu-central-1` | Europe | Frankfurt | -| `eu-west-1` | Europe | Ireland | -| `eu-west-2` | Europe | London | -| `sa-east-1` | South America | São Paulo | -| `us-east-1` | United States | North Virginia | -| `us-east-2` | United States | Ohio | -| `us-west-2` | United States | Oregon | +| Region | Zone | Location | +|----------------------------|---------------|----------------| +| `ap-south-1` 
(coming soon) | Asia Pacific | Mumbai | +| `ap-southeast-1` | Asia Pacific | Singapore | +| `ap-southeast-2` | Asia Pacific | Sydney | +| `ap-northeast-1` | Asia Pacific | Tokyo | +| `ca-central-1` | Canada | Central | +| `eu-central-1` | Europe | Frankfurt | +| `eu-west-1` | Europe | Ireland | +| `eu-west-2` | Europe | London | +| `sa-east-1` | South America | São Paulo | +| `us-east-1` | United States | North Virginia | +| `us-east-2` | United States | Ohio | +| `us-west-2` | United States | Oregon | From bbdc929fcd0f4df7aef36e0de6eacbb40500a77c Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Tue, 3 Dec 2024 12:19:52 +0000 Subject: [PATCH 10/18] 3579 feedback page self hostedlatestupgradesminor upgrade (#3630) * chore: update the upgrade procedures. Co-authored-by: Brent Graveland Co-authored-by: atovpeko <114177030+atovpeko@users.noreply.github.com> --- .../_install-self-hosted-docker-based.mdx | 2 +- _partials/_migrate_dump_postgresql.md | 2 +- .../_migrate_install_psql_ec2_instance.md | 4 +- .../_migrate_live_setup_connection_strings.md | 6 +- _partials/_migrate_prerequisites.md | 1 - .../_migrate_self_postgres_check_versions.md | 36 ++++ ..._self_postgres_implement_migration_path.md | 37 ++++ ...grate_self_postgres_plan_migration_path.md | 23 ++ ...self_postgres_timescaledb_compatibility.md | 24 +++ .../_migrate_set_up_database_first_steps.md | 4 +- .../_migrate_set_up_source_and_target.md | 4 +- _partials/_migrate_to_upload_to_target.md | 4 +- _partials/_plan_upgrade.md | 16 +- self-hosted/install/installation-source.md | 2 +- self-hosted/page-index/page-index.js | 29 ++- self-hosted/tooling/install-toolkit.md | 2 +- self-hosted/upgrades/downgrade.md | 44 ++-- self-hosted/upgrades/index.md | 35 +-- self-hosted/upgrades/major-upgrade.md | 199 ++++++++++-------- self-hosted/upgrades/minor-upgrade.md | 64 ++---- self-hosted/upgrades/upgrade-docker.md | 31 +-- self-hosted/upgrades/upgrade-pg.md | 131 ++++++------ 22 files changed, 399 insertions(+), 301 deletions(-) create mode 100644 _partials/_migrate_self_postgres_check_versions.md create mode 100644 _partials/_migrate_self_postgres_implement_migration_path.md create mode 100644 _partials/_migrate_self_postgres_plan_migration_path.md create mode 100644 _partials/_migrate_self_postgres_timescaledb_compatibility.md diff --git a/_partials/_install-self-hosted-docker-based.mdx b/_partials/_install-self-hosted-docker-based.mdx index 00440a4d6c..4457fe5d2a 100644 --- a/_partials/_install-self-hosted-docker-based.mdx +++ b/_partials/_install-self-hosted-docker-based.mdx @@ -12,7 +12,7 @@ In Terminal: [TimescaleDB Toolkit](https://github.com/timescale/timescaledb-toolkit), and support for PostGIS and Patroni. The lighter-weight `timescale/timescaledb:latest-pg17` non-ha image uses [Alpine][alpine]. - TimescaleDB is pre-created in the default Postgres database in both the -ha and non-ha docker images. + TimescaleDB is pre-created in the default PostgreSQL database in both the -ha and non-ha docker images. By default, TimescaleDB is added to any new database you create in these images. 1. 
**Run the container** diff --git a/_partials/_migrate_dump_postgresql.md b/_partials/_migrate_dump_postgresql.md index c33c1a736c..8661f3cb25 100644 --- a/_partials/_migrate_dump_postgresql.md +++ b/_partials/_migrate_dump_postgresql.md @@ -1,6 +1,6 @@ import MigrationSetupFirstSteps from "versionContent/_partials/_migrate_set_up_database_first_steps.mdx"; import MigrationSetupDBConnectionPostgresql from "versionContent/_partials/_migrate_set_up_align_db_extensions_postgres_based.mdx"; -import MigrationProcedureDumpSchemaPostgres from "versionContent/_partials/_migrate_dump_roles_schema_data_postgres.mdx"; +import MigrationProcedureDumpSchemaPostgreSQL from "versionContent/_partials/_migrate_dump_roles_schema_data_postgres.mdx"; import MigrationValidateRestartApp from "versionContent/_partials/_migrate_validate_and_restart_app.mdx"; ## Prepare to migrate diff --git a/_partials/_migrate_install_psql_ec2_instance.md b/_partials/_migrate_install_psql_ec2_instance.md index e668e824c5..00f0d5326c 100644 --- a/_partials/_migrate_install_psql_ec2_instance.md +++ b/_partials/_migrate_install_psql_ec2_instance.md @@ -60,7 +60,7 @@ ```sh export SOURCE="postgres://:@:/" ``` - The value of `Master password` was supplied when this Postgres RDS instance was created. + The value of `Master password` was supplied when this PostgreSQL RDS instance was created. 1. Test your connection: ```sh @@ -71,4 +71,4 @@ [about-hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/ -[data-compression]: /use-timescale/:currentVersion:/compression/about-compression/ \ No newline at end of file +[data-compression]: /use-timescale/:currentVersion:/compression/about-compression/ diff --git a/_partials/_migrate_live_setup_connection_strings.md b/_partials/_migrate_live_setup_connection_strings.md index 6707b5ae2a..0f14a70fcb 100644 --- a/_partials/_migrate_live_setup_connection_strings.md +++ b/_partials/_migrate_live_setup_connection_strings.md @@ -2,8 +2,8 @@ These variables hold the connection information for the source database and targ In Terminal on your migration machine, set the following: ```bash -export SOURCE=postgres://:@:/ -export TARGET=postgres://tsdbadmin:@:/tsdb?sslmode=require +export SOURCE="postgres://:@:/" +export TARGET="postgres://tsdbadmin:@:/tsdb?sslmode=require" ``` You find the connection information for your Timescale Cloud service in the configuration file you -downloaded when you created the service. \ No newline at end of file +downloaded when you created the service. 
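+
+Before you start the migration, it is worth checking that both variables resolve to working connections.
+A minimal sanity check, assuming the PostgreSQL client tools are installed on the migration machine:
+
+```bash
+# Confirm that both the source database and the target service accept connections
+psql "$SOURCE" -c "SELECT 1;"
+psql "$TARGET" -c "SELECT 1;"
+```
+
+If either command fails, fix the corresponding connection string before moving on; every later step in the
+migration depends on these two variables.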
diff --git a/_partials/_migrate_prerequisites.md b/_partials/_migrate_prerequisites.md index ccb8d3e880..56d5c0bc74 100644 --- a/_partials/_migrate_prerequisites.md +++ b/_partials/_migrate_prerequisites.md @@ -19,7 +19,6 @@ Before you migrate your data: [all available extensions]: /migrate/:currentVersion:/troubleshooting/#extension-availability [tablespaces]: /migrate/:currentVersion:/troubleshooting/#tablespaces [no-superuser-for-timescale-instance]: /migrate/:currentVersion:/troubleshooting/#superuser-privileges -[upgrade instructions]: /self-hosted/:currentVersion:/upgrades/about-upgrades/ [pg_hbaconf]: https://www.timescale.com/blog/5-common-connection-errors-in-postgresql-and-how-to-solve-them/#no-pg_hbaconf-entry-for-host [create-ec2-instance]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance [adjust-maintenance-window]: /use-timescale/:currentVersion:/upgrades/#adjusting-your-maintenance-window diff --git a/_partials/_migrate_self_postgres_check_versions.md b/_partials/_migrate_self_postgres_check_versions.md new file mode 100644 index 0000000000..286750ec97 --- /dev/null +++ b/_partials/_migrate_self_postgres_check_versions.md @@ -0,0 +1,36 @@ + + +To see the versions of PostgreSQL and TimescaleDB running in a self-hosted database instance: + +1. **Set your connection string** + + This variable holds the connection information for the database to upgrade: + + ```bash + export SOURCE="postgres://:@:/" + ``` + +2. **Retrieve the version of PostgreSQL that you are running** + ```shell + psql -X -d $SOURCE -c "SELECT version();" + ``` + PostgreSQL returns something like: + ```shell + ----------------------------------------------------------------------------------------------------------------------------------------- + PostgreSQL 17.2 (Ubuntu 17.2-1.pgdg22.04+1) on aarch64-unknown-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit + (1 row) + ``` + +1. **Retrieve the version of TimescaleDB that you are running** + ```sql + psql -X -d $SOURCE -c "\dx timescaledb;" + ``` + PostgreSQL returns something like: + ```shell + Name | Version | Schema | Description + -------------+---------+------------+--------------------------------------------------------------------- + timescaledb | 2.17.2 | public | Enables scalable inserts and complex queries for time-series data + (1 row) + ``` + + diff --git a/_partials/_migrate_self_postgres_implement_migration_path.md b/_partials/_migrate_self_postgres_implement_migration_path.md new file mode 100644 index 0000000000..37973edae1 --- /dev/null +++ b/_partials/_migrate_self_postgres_implement_migration_path.md @@ -0,0 +1,37 @@ + + +You cannot upgrade TimescaleDB and PostgreSQL at the same time. You upgrade each product in +the following steps: + +1. **Upgrade TimescaleDB** + + ```sql + psql -X -d $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '';" + ``` + +1. **If your migration path dictates it, upgrade PostgreSQL** + + Follow the procedure in [Upgrade PostgreSQL][upgrade-pg]. The version of TimescaleDB installed + in your PostgreSQL deployment must be the same before and after the PostgreSQL upgrade. + +1. **If your migration path dictates it, upgrade TimescaleDB again** + + ```sql + psql -X -d $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '';" + ``` + +1. 
**Check that you have upgraded to the correct version of TimescaleDB**

   ```sql
   psql -X -d $SOURCE -c "\dx timescaledb;"
   ```
   PostgreSQL returns something like:
   ```shell
   Name | Version | Schema | Description
   -------------+---------+--------+---------------------------------------------------------------------------------------
   timescaledb | 2.17.2 | public | Enables scalable inserts and complex queries for time-series data (Community Edition)
   ```



[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#upgrade-your-postgresql-instance
diff --git a/_partials/_migrate_self_postgres_plan_migration_path.md b/_partials/_migrate_self_postgres_plan_migration_path.md
new file mode 100644
index 0000000000..0ecb649776
--- /dev/null
+++ b/_partials/_migrate_self_postgres_plan_migration_path.md
@@ -0,0 +1,23 @@
+
import SupportMatrix from "versionContent/_partials/_migrate_self_postgres_timescaledb_compatibility.mdx";

Best practice is to always use the latest version of TimescaleDB. Subscribe to our releases on GitHub, or use Timescale
Cloud and always get the latest updates without any hassle.

Check the following support matrix against the versions of TimescaleDB and PostgreSQL that you are running currently
and the versions you want to update to, then choose your upgrade path.

For example, to upgrade from TimescaleDB 2.13 on PostgreSQL 13 to TimescaleDB 2.17.2, you need to:
1. Upgrade TimescaleDB to 2.16
1. Upgrade PostgreSQL to 14 or higher
1. Upgrade TimescaleDB to 2.17.2.

You may need to [upgrade to the latest PostgreSQL version][upgrade-pg] before you upgrade TimescaleDB. Also,
if you use [Timescale Toolkit][toolkit-install], ensure the `timescaledb_toolkit` extension is >=
v1.6.0 before you upgrade the TimescaleDB extension.



[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#upgrade-your-postgresql-instance
[timescale-toolkit]: https://github.com/timescale/timescaledb-toolkit
[toolkit-install]: /self-hosted/:currentVersion:/tooling/install-toolkit/
diff --git a/_partials/_migrate_self_postgres_timescaledb_compatibility.md b/_partials/_migrate_self_postgres_timescaledb_compatibility.md
new file mode 100644
index 0000000000..671064db8a
--- /dev/null
+++ b/_partials/_migrate_self_postgres_timescaledb_compatibility.md
@@ -0,0 +1,24 @@
+
| Version number |PostgreSQL 17|PostgreSQL 16|PostgreSQL 15|PostgreSQL 14|PostgreSQL 13|PostgreSQL 12|PostgreSQL 11|PostgreSQL 10|PostgreSQL 9.6|
|---------------------------|-|-|-|-|-|-|-|-|-|
| TimescaleDB
2.17.x |✅|✅|✅|✅|❌|❌|❌|❌|❌| +| TimescaleDB
2.16.x |❌|✅|✅|✅|✅|❌|❌|❌|❌|
| TimescaleDB
2.15.x |❌|✅|✅|✅|✅|❌|❌|❌|❌|
| TimescaleDB
2.14.x |❌|✅|✅|✅|✅|❌|❌|❌|❌|
| TimescaleDB
2.13.x |❌|✅|✅|✅|✅|❌|❌|❌|❌| +| TimescaleDB
2.12.x |❌|❌|✅|✅|✅|❌|❌|❌|❌| +| TimescaleDB
2.10.x |❌|❌|✅|✅|✅|✅|❌|❌|❌| +| TimescaleDB
2.5 - 2.9 |❌|❌|❌|✅|✅|✅|❌|❌|❌| +| TimescaleDB
2.4 |❌|❌|❌|❌|✅|✅|❌|❌|❌| +| TimescaleDB
2.1 - 2.3 |❌|❌|❌|❌|✅|✅|✅|❌|❌| +| TimescaleDB
2.0 |❌|❌|❌|❌|❌|✅|✅|❌|❌|
| TimescaleDB
1.7 |❌|❌|❌|❌|❌|✅|✅|✅|✅| + +We recommend not using TimescaleDB with PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, 12.21. +These minor versions [introduced a breaking binary interface change][postgres-breaking-change] that, +once identified, was reverted in subsequent minor PostgreSQL versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. +When you build from source, best practice is to build with PostgreSQL 17.2, 16.6, etc and higher. +Users of [Timescale Cloud](https://console.cloud.timescale.com/) and platform packages for Linux, Windows, MacOS, +Docker, and Kubernetes are unaffected. + +[postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/ diff --git a/_partials/_migrate_set_up_database_first_steps.md b/_partials/_migrate_set_up_database_first_steps.md index 13a36eecfc..19562516e4 100644 --- a/_partials/_migrate_set_up_database_first_steps.md +++ b/_partials/_migrate_set_up_database_first_steps.md @@ -8,8 +8,8 @@ These variables hold the connection information for the source database and target Timescale Cloud service: ```bash - export SOURCE=postgres://:@:/ - export TARGET=postgres://tsdbadmin:@:/tsdb?sslmode=require + export SOURCE="postgres://:@:/" + export TARGET="postgres://tsdbadmin:@:/tsdb?sslmode=require" ``` You find the connection information for your Timescale Cloud Service in the configuration file you downloaded when you created the service. diff --git a/_partials/_migrate_set_up_source_and_target.md b/_partials/_migrate_set_up_source_and_target.md index 3fa5e5c8ac..13a09236c9 100644 --- a/_partials/_migrate_set_up_source_and_target.md +++ b/_partials/_migrate_set_up_source_and_target.md @@ -5,7 +5,7 @@ databases are referred to as `$SOURCE` and `$TARGET` throughout this guide. This can be set in your shell, for example: ```bash -export SOURCE=postgres://:@:/ -export TARGET=postgres://:@:/ +export SOURCE="postgres://:@:/" +export TARGET="postgres://:@:/" ```
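+
+The quotes around the URIs are not cosmetic: a password or parameter that contains shell metacharacters can
+break an unquoted assignment or its later use. One way to verify that both variables point at reachable
+databases, assuming the PostgreSQL client tools are installed:
+
+```bash
+# pg_isready accepts a full connection URI and reports whether the server accepts connections
+pg_isready -d "$SOURCE"
+pg_isready -d "$TARGET"
+```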
diff --git a/_partials/_migrate_to_upload_to_target.md b/_partials/_migrate_to_upload_to_target.md index 426ff16755..33ea3e121d 100644 --- a/_partials/_migrate_to_upload_to_target.md +++ b/_partials/_migrate_to_upload_to_target.md @@ -10,8 +10,8 @@ These variables hold the connection information for the source database and target Timescale Cloud service: ```bash - export SOURCE=postgres://:@:/ - export TARGET=postgres://tsdbadmin:@:/tsdb?sslmode=require + export SOURCE="postgres://:@:/" + export TARGET="postgres://tsdbadmin:@:/tsdb?sslmode=require" ``` You find the connection information for your Timescale Cloud Service in the configuration file you downloaded when you created the service. diff --git a/_partials/_plan_upgrade.md b/_partials/_plan_upgrade.md index e579123a00..83bc96aa9d 100644 --- a/_partials/_plan_upgrade.md +++ b/_partials/_plan_upgrade.md @@ -1,19 +1,11 @@ -You can upgrade your self-hosted Timescale installation in-place. This means -that you do not need to dump and restore your data. However, it is still -important that you plan for your upgrade ahead of time. -Before you upgrade: - -* Read [the release notes][relnotes] for the Timescale version you are - upgrading to. -* Check which PostgreSQL version you are currently running. You might need to - [upgrade to the latest PostgreSQL version][upgrade-pg] - before you begin your Timescale upgrade. -* [Perform a backup][backup] of your database. While Timescale +- Install the PostgreSQL client tools on your migration machine. This includes `psql`, and `pg_dump`. +- Read [the release notes][relnotes] for the version of TimescaleDB that you are upgrading to. +- [Perform a backup][backup] of your database. While Timescale upgrades are performed in-place, upgrading is an intrusive operation. Always make sure you have a backup on hand, and that the backup is readable in the case of disaster. [relnotes]: https://github.com/timescale/timescaledb/releases -[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/ +[upgrade-pg]: /self-hosted/:currentVersion:/upgrade-pg/#upgrade-your-postgresql-instance [backup]: /self-hosted/:currentVersion:/backup-and-restore/ diff --git a/self-hosted/install/installation-source.md b/self-hosted/install/installation-source.md index bff5757ba0..374efd6bda 100644 --- a/self-hosted/install/installation-source.md +++ b/self-hosted/install/installation-source.md @@ -77,5 +77,5 @@ And that is it! 
You have TimescaleDB running on a database on a self-hosted inst [config]: /self-hosted/:currentVersion:/configuration/ [postgres-download]: https://www.postgresql.org/download/ [cmake-download]: https://cmake.org/download/ -[compatibility-matrix]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/ +[compatibility-matrix]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#plan-your-upgrade-path [postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/ diff --git a/self-hosted/page-index/page-index.js b/self-hosted/page-index/page-index.js index 1cbaeba80c..80b601a3b0 100644 --- a/self-hosted/page-index/page-index.js +++ b/self-hosted/page-index/page-index.js @@ -270,37 +270,32 @@ module.exports = [ href: "upgrades", children: [ { - title: "About upgrades", - href: "about-upgrades", - excerpt: "Learn about upgrading self-hosted TimescaleDB", - }, - { - title: "Minor upgrades", + title: "Upgrade TimescaleDB to a minor version", href: "minor-upgrade", excerpt: - "Upgrade to a new minor version of self-hosted TimescaleDB", + "Upgrade self-hosted TimescaleDB to a new minor version", }, { - title: "Major upgrades", + title: "Upgrade TimescaleDB to a major version", href: "major-upgrade", excerpt: - "Upgrade to a new major version of self-hosted TimescaleDB", - }, - { - title: "Downgrade self-hosted TimescaleDB", - href: "downgrade", - excerpt: "Downgrade a self-hosted TimescaleDB version", + "Upgrade self-hosted TimescaleDB to a new major version", }, { - title: "Upgrade within Docker", + title: "Upgrade TimescaleDB running in Docker", href: "upgrade-docker", excerpt: - "Upgrade to a new minor version of self-hosted TimescaleDB within a Docker container", + "Upgrade self-hosted TimescaleDB running in a Docker container to a new minor version", }, { title: "Upgrade PostgreSQL", href: "upgrade-pg", - excerpt: "Upgrade to a new version of PostgreSQL", + excerpt: "Upgrade PostgreSQL to a new version", + }, + { + title: "Downgrade TimescaleDB to a minor version", + href: "downgrade", + excerpt: "Downgrade self-hosted TimescaleDB to the previous minor version", }, ], }, diff --git a/self-hosted/tooling/install-toolkit.md b/self-hosted/tooling/install-toolkit.md index bcf48ab8d7..852888ebda 100644 --- a/self-hosted/tooling/install-toolkit.md +++ b/self-hosted/tooling/install-toolkit.md @@ -44,7 +44,7 @@ The recommended way to install the Toolkit is to use the To get Toolkit, use the high availability image, `timescaledb-ha`: ```bash -docker pull timescale/timescaledb-ha:pg16 +docker pull timescale/timescaledb-ha:pg17 ``` For more information on running TimescaleDB using Docker, see the section on diff --git a/self-hosted/upgrades/downgrade.md b/self-hosted/upgrades/downgrade.md index 4e32bb8705..5d69c93d9d 100644 --- a/self-hosted/upgrades/downgrade.md +++ b/self-hosted/upgrades/downgrade.md @@ -1,13 +1,13 @@ --- title: Downgrade to a previous version of TimescaleDB -excerpt: Roll back to an older version of TimescaleDB +excerpt: Downgrade self-hosted TimescaleDB to the previous minor version products: [self_hosted] keywords: [upgrades] --- import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx"; -# Downgrade to a previous version of TimescaleDB +# Downgrade TimescaleDB to a minor version If you upgrade to a new TimescaleDB version and encounter problems, you can roll back to a previously installed version. 
This works in the same way as a minor
@@ -42,28 +42,41 @@ Before you downgrade:
 
 ## Downgrade TimescaleDB to a previous minor version
 
 This downgrade uses the PostgreSQL `ALTER EXTENSION` function to downgrade to
-the latest version of the TimescaleDB extension. TimescaleDB supports having
+a previous version of the TimescaleDB extension. TimescaleDB supports having
 different extension versions on different databases within the same PostgreSQL
 instance. This allows you to upgrade and downgrade extensions independently on
 different databases. Run the `ALTER EXTENSION` function on each database to
 downgrade them individually.
 
+
 The downgrade script is tested and supported for single-step downgrades. That
 is, downgrading from the current version, to the previous minor version.
 Downgrading might not work if you have made changes to your database between
 upgrading and downgrading.
+
 
-### Downgrading the TimescaleDB extension
+1. **Set your connection string**
+
+   This variable holds the connection information for the database to downgrade:
+
+   ```bash
+   export SOURCE="postgres://:@:/"
+   ```
 
-1. Connect to psql using the `-X` flag. This prevents any `.psqlrc` commands
-   from accidentally triggering the load of a previous TimescaleDB version on
-   session startup.
-1. At the psql prompt, downgrade the TimescaleDB extension. This must be the
-   first command you execute in the current session:
+1. **Connect to your database instance**
+   ```shell
+   psql -X -d $SOURCE
+   ```
+
+   The `-X` flag prevents any `.psqlrc` commands from accidentally triggering the load of a
+   previous TimescaleDB version on session startup.
+
+1. **Downgrade the TimescaleDB extension**
 
+   This must be the first command you execute in the current session:
 
    ```sql
    ALTER EXTENSION timescaledb UPDATE TO '';
@@ -72,14 +85,19 @@
 
    For example:
 
    ```sql
-   ALTER EXTENSION timescaledb UPDATE TO '2.5.1';
+   ALTER EXTENSION timescaledb UPDATE TO '2.17.0';
    ```
 
-1. Check that you have downgraded to the correct version of the extension with
-   the `\dx` command. The output should show the downgraded version number.
+1. **Check that you have downgraded to the correct version of TimescaleDB**
 
    ```sql
-   \dx timescaledb
+   \dx timescaledb
+   ```
+   Postgres returns something like:
+   ```shell
    Name        | Version | Schema | Description
   -------------+---------+--------+---------------------------------------------------------------------------------------
    timescaledb | 2.17.0  | public | Enables scalable inserts and complex queries for time-series data (Community Edition)
    ```
 
diff --git a/self-hosted/upgrades/index.md b/self-hosted/upgrades/index.md
index b80ca9fbd6..3c1e88c576 100644
--- a/self-hosted/upgrades/index.md
+++ b/self-hosted/upgrades/index.md
@@ -9,35 +9,20 @@ import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx";
 
 # Upgrade TimescaleDB
 
-You can upgrade your on-premise TimescaleDB installation in-place.
+A major upgrade is when you update from TimescaleDB `X.` to `Y.`.
+A minor upgrade is when you update from TimescaleDB `.x` to TimescaleDB `.y`.
+You upgrade your self-hosted TimescaleDB installation in-place.
 
-A major upgrade is when you upgrade from one major version of TimescaleDB, to
-the next major version. For example, when you upgrade from TimescaleDB 1,
-to TimescaleDB 2.
-
-A minor upgrade is when you upgrade within your current major version of
-TimescaleDB. For example, when you upgrade from TimescaleDB 2.5, to
-TimescaleDB 2.6.
-
-If you originally installed TimescaleDB using Docker, you can upgrade from
-within the Docker container. For more information, and instructions, see the
-[Upgrading with Docker section][upgrade-docker].
-
-You can also downgrade your TimescaleDB installation to a previous version, if
-you need to.
+
 
-* [Learn about upgrades][about-upgrades] to understand how it works
-  before you begin your upgrade.
-* Upgrade to the next [minor version][upgrade-minor] of TimescaleDB.
-* Upgrade to the next [major version][upgrade-major] of TimescaleDB.
-* [Downgrade][downgrade] to a previous version of TimescaleDB.
-* Upgrade TimescaleDB using [Docker][upgrade-docker].
-* Upgrade the version of [PostgreSQL][upgrade-pg] your TimescaleDB
-  installation uses.
+This section shows you how to:
 
-
+* Upgrade self-hosted TimescaleDB to a new [minor version][upgrade-minor].
+* Upgrade self-hosted TimescaleDB to a new [major version][upgrade-major].
+* Upgrade self-hosted TimescaleDB running in a [Docker container][upgrade-docker] to a new minor version.
+* Upgrade [PostgreSQL][upgrade-pg] to a new version.
+* Downgrade self-hosted TimescaleDB to the [previous minor version][downgrade].
 
-[about-upgrades]: /self-hosted/:currentVersion:/upgrades/about-upgrades/
 [downgrade]: /self-hosted/:currentVersion:/upgrades/downgrade/
 [upgrade-docker]: /self-hosted/:currentVersion:/upgrades/upgrade-docker/
 [upgrade-major]: /self-hosted/:currentVersion:/upgrades/major-upgrade/
diff --git a/self-hosted/upgrades/major-upgrade.md b/self-hosted/upgrades/major-upgrade.md
index 818522e5f4..98da5552d2 100644
--- a/self-hosted/upgrades/major-upgrade.md
+++ b/self-hosted/upgrades/major-upgrade.md
@@ -1,37 +1,67 @@
 ---
 title: Major TimescaleDB upgrades
-excerpt: Upgrade from one major of TimescaleDB to the next major version
+excerpt: Upgrade self-hosted TimescaleDB to a new major version
 products: [self_hosted]
 keywords: [upgrades]
 ---
 
 import PlanUpgrade from "versionContent/_partials/_plan_upgrade.mdx";
 import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx";
+import CheckVersions from "versionContent/_partials/_migrate_self_postgres_check_versions.mdx";
+import SupportMatrix from "versionContent/_partials/_migrate_self_postgres_timescaledb_compatibility.mdx";
+import ImplementMigrationPath from "versionContent/_partials/_migrate_self_postgres_implement_migration_path.mdx";
 
-# Major TimescaleDB upgrades
+# Upgrade TimescaleDB to a major version
 
-A major upgrade is when you upgrade from one major version of TimescaleDB, to
-the next major version. For example, when you upgrade from TimescaleDB 1,
-to TimescaleDB 2.
+A major upgrade is when you update from TimescaleDB `X.` to `Y.`.
+A minor upgrade is when you update from TimescaleDB `.x` to TimescaleDB `.y`.
+You can run different versions of TimescaleDB on different databases within the same PostgreSQL instance.
+This process uses the PostgreSQL `ALTER EXTENSION` function to upgrade TimescaleDB independently on different
+databases.
 
-For upgrading within your current major version, for example upgrading from
-TimescaleDB 2.5 to TimescaleDB 2.6, see the
-[minor upgrades section][upgrade-minor].
+When you perform a major upgrade, new policies are automatically configured based on your current
+configuration. So that you can verify your policies after the upgrade, this procedure has you export
+your policy settings before upgrading.
 
-## Plan your upgrade
+This page shows you how to perform a major upgrade.
For minor upgrades, see
+[Upgrade TimescaleDB to a minor version][upgrade-minor].
+
+## Prerequisites
 
-## Breaking changes
+
+
+## Check the TimescaleDB and PostgreSQL versions
+
+
+
+## Plan your upgrade path
+
+Best practice is to always use the latest version of TimescaleDB. Subscribe to our releases on GitHub, or use Timescale
+Cloud and always get the latest updates without any hassle.
+
+Check the following support matrix against the versions of TimescaleDB and PostgreSQL that you are
+currently running and the versions you want to update to, then choose your upgrade path.
+
+For example, to upgrade from TimescaleDB 1.7 on PostgreSQL 12 to TimescaleDB 2.17.2 on PostgreSQL 15 you
+need to:
+1. Upgrade TimescaleDB to 2.10
+1. Upgrade PostgreSQL to 15
+1. Upgrade TimescaleDB to 2.17.2
+
+You may need to [upgrade to the latest PostgreSQL version][upgrade-pg] before you upgrade TimescaleDB.
+
 
-When you upgrade from TimescaleDB 1, to TimescaleDB 2, scripts
+## Check for failed retention policies
+
+When you upgrade from TimescaleDB 1 to TimescaleDB 2, scripts
 automatically configure updated features to work as expected with the new
 version. However, not everything works in exactly the same way as previously.
 Before you begin this major upgrade, check the database log for errors related
-to failed retention policies that could have occurred in TimescaleDB 1. You
+to failed retention policies that could have occurred in TimescaleDB 1. You
 can either remove the failing policies entirely, or update them to be
 compatible with your existing continuous aggregates.
 
@@ -39,103 +69,83 @@ If incompatible retention policies are present when you perform the upgrade,
 the `ignore_invalidation_older_than` setting is automatically turned off, and
 a notice is shown.
 
-## Upgrade TimescaleDB to the next major version
-
-To perform this major upgrade:
+## Export your policy settings
 
-1. Export your TimescaleDB 1 policy settings
-1. Upgrade the TimescaleDB extension
-1. Verify updated policy settings and jobs
+
 
-When you perform the upgrade, new policies are automatically configured based on
-your current configuration. This upgrade process allows you to export your
-policy settings before performing the upgrade, so that you can verify them after
-the upgrade is complete.
+1. **Set your connection string**
 
-This upgrade uses the PostgreSQL `ALTER EXTENSION` function to upgrade to the
-latest version of the TimescaleDB extension. TimescaleDB supports having
-different extension versions on different databases within the same PostgreSQL
-instance. This allows you to upgrade extensions independently on different
-databases. Run the `ALTER EXTENSION` function on each database to upgrade them
-individually.
+   This variable holds the connection information for the database to upgrade:
 
-
+   ```bash
+   export SOURCE="postgres://:@:/"
+   ```
 
-### Exporting TimescaleDB 1 policy settings
+1. **Connect to your PostgreSQL deployment**
+   ```bash
+   psql -d $SOURCE
+   ```
 
-1. At the psql prompt, use this command to save the current settings for your
-   policy statistics to a `.csv` file:
+1. **Save your policy statistics settings to a `.csv` file**
 
    ```sql
    COPY (SELECT * FROM timescaledb_information.policy_stats) TO policy_stats.csv csv header
    ```
 
-1. Use this command to save the current settings for your continuous aggregates
-   to a `.csv` file:
+1.
**Save your continuous aggregates settings to a `.csv` file**
 
    ```sql
    COPY (SELECT * FROM timescaledb_information.continuous_aggregate_stats) TO continuous_aggregate_stats.csv csv header
    ```
 
-1. Use this command to save the current settings for your drop chunk policies to
-   a `.csv` file:
+1. **Save your drop chunk policies to a `.csv` file**
 
    ```sql
    COPY (SELECT * FROM timescaledb_information.drop_chunks_policies) TO drop_chunk_policies.csv csv header
    ```
 
-1. Use this command to save the current settings for your reorder policies
-   to a `.csv` file:
+1. **Save your reorder policies to a `.csv` file**
 
    ```sql
    COPY (SELECT * FROM timescaledb_information.reorder_policies) TO reorder_policies.csv csv header
    ```
 
-
+1. **Exit your psql session**
+   ```sql
+   \q
+   ```
 
-
 
-### Upgrading the TimescaleDB extension
 
-1. Connect to psql using the `-X` flag. This prevents any `.psqlrc` commands
-   from accidentally triggering the load of a previous TimescaleDB version on
-   session startup.
-1. At the psql prompt, upgrade the TimescaleDB extension. This must be the first
-   command you execute in the current session:
 
-   ```sql
-   ALTER EXTENSION timescaledb UPDATE;
-   ```
+## Implement your upgrade path
 
-1. Check that you have upgraded to the latest version of the extension with the
-   `\dx` command. The output should show the upgraded version number.
+
 
-   ```sql
-   \dx timescaledb
-   ```
-
-   To upgrade TimescaleDB in a Docker container, see the
-   [Docker container upgrades](/self-hosted/latest/upgrades/upgrade-docker)
-   section.
-
+
+To upgrade TimescaleDB in a Docker container, see the
+[Docker container upgrades](/self-hosted/latest/upgrades/upgrade-docker)
+section.
+
 
-
+## Verify the updated policy settings and jobs
 
-### Verifying updated policy settings and jobs
-
-1. Use this query to verify the continuous aggregate policy jobs:
+1. **Verify the continuous aggregate policy jobs**
 
    ```sql
    SELECT * FROM timescaledb_information.jobs
      WHERE application_name LIKE 'Refresh Continuous%';
-
+   ```
+   Postgres returns something like:
+   ```shell
    -[ RECORD 1 ]-----+--------------------------------------------------
    job_id            | 1001
    application_name  | Refresh Continuous Aggregate Policy [1001]
@@ -154,35 +164,40 @@ individually.
    hypertable_name   | _materialized_hypertable_2
    ```
 
-1. Verify the information for each policy type that you exported before you
-   upgraded. For continuous aggregates, take note of the `config` information to
-   verify that all settings were converted correctly.
-1. Verify that all jobs are scheduled and running as expected using the new
-   `timescaledb_information.job_stats` view:
-
-```sql
-SELECT * FROM timescaledb_information.job_stats
-  WHERE job_id = 1001;
-```
-
-The output looks like this:
-
-```sql
--[ RECORD 1 ]----------+------------------------------
-hypertable_schema      | _timescaledb_internal
-hypertable_name        | _materialized_hypertable_2
-job_id                 | 1001
-last_run_started_at    | 2020-10-02 09:38:06.871953-04
-last_successful_finish | 2020-10-02 09:38:06.932675-04
-last_run_status        | Success
-job_status             | Scheduled
-last_run_duration      | 00:00:00.060722
-next_scheduled_run     | 2020-10-02 10:38:06.932675-04
-total_runs             | 1
-total_successes        | 1
-total_failures         | 0
-```
+1. **Verify the information for each policy type that you exported before you upgraded.**
+
+   For continuous aggregates, take note of the `config` information to
+   verify that all settings were converted correctly.
+
+1.
**Verify that all jobs are scheduled and running as expected**
+
+   ```sql
+   SELECT * FROM timescaledb_information.job_stats
+     WHERE job_id = 1001;
+   ```
+   Postgres returns something like:
+   ```sql
+   -[ RECORD 1 ]----------+------------------------------
+   hypertable_schema      | _timescaledb_internal
+   hypertable_name        | _materialized_hypertable_2
+   job_id                 | 1001
+   last_run_started_at    | 2020-10-02 09:38:06.871953-04
+   last_successful_finish | 2020-10-02 09:38:06.932675-04
+   last_run_status        | Success
+   job_status             | Scheduled
+   last_run_duration      | 00:00:00.060722
+   next_scheduled_run     | 2020-10-02 10:38:06.932675-04
+   total_runs             | 1
+   total_successes        | 1
+   total_failures         | 0
+   ```
 
+You are running a shiny new version of TimescaleDB.
+
 [upgrade-minor]: /self-hosted/:currentVersion:/upgrades/minor-upgrade/
+[relnotes]: https://github.com/timescale/timescaledb/releases
+[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#upgrade-postgresql
+[backup]: /self-hosted/:currentVersion:/backup-and-restore/
+[export-policy-settings]: /self-hosted/:currentVersion:/upgrades/major-upgrade/#export-your-policy-settings
diff --git a/self-hosted/upgrades/minor-upgrade.md b/self-hosted/upgrades/minor-upgrade.md
index 388ab55f84..02da60f6c8 100644
--- a/self-hosted/upgrades/minor-upgrade.md
+++ b/self-hosted/upgrades/minor-upgrade.md
@@ -1,66 +1,48 @@
 ---
 title: Minor TimescaleDB upgrades
-excerpt: Upgrade within the same major version of TimescaleDB
+excerpt: Upgrade self-hosted TimescaleDB to a new minor version
 products: [self_hosted]
 keywords: [upgrades]
 ---
 
 import PlanUpgrade from "versionContent/_partials/_plan_upgrade.mdx";
 import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx";
+import CheckVersions from "versionContent/_partials/_migrate_self_postgres_check_versions.mdx";
+import PlanMigrationPath from "versionContent/_partials/_migrate_self_postgres_plan_migration_path.mdx";
+import ImplementMigrationPath from "versionContent/_partials/_migrate_self_postgres_implement_migration_path.mdx";
 
-# Minor TimescaleDB upgrades
+# Upgrade TimescaleDB to a new minor version
 
-A minor upgrade is when you upgrade within your current major version of
-TimescaleDB. For example, when you upgrade from TimescaleDB 2.5, to
-TimescaleDB 2.6.
-
-For upgrading to a new major version, for example upgrading from
-TimescaleDB 1 to TimescaleDB 2, see the
-[major upgrades section][upgrade-major].
+A minor upgrade is when you update from TimescaleDB `.x` to TimescaleDB `.y`.
+A major upgrade is when you update from TimescaleDB `X.` to `Y.`.
+You can run different versions of TimescaleDB on different databases within the same PostgreSQL instance.
+This process uses the PostgreSQL `ALTER EXTENSION` function to upgrade TimescaleDB independently on different
+databases.
 
-## Plan your upgrade
-
-
+This page shows you how to perform a minor upgrade. For major upgrades, see [Upgrade TimescaleDB to a major version][upgrade-major].
 
-## Upgrade TimescaleDB to the next minor version
+## Prerequisites
 
-This upgrade uses the PostgreSQL `ALTER EXTENSION` function to upgrade to the
-latest version of the TimescaleDB extension. TimescaleDB supports having
-different extension versions on different databases within the same PostgreSQL
-instance. This allows you to upgrade extensions independently on different
-databases. Run the `ALTER EXTENSION` function on each database to upgrade them
-individually.
+
 
-
+## Check the TimescaleDB and PostgreSQL versions
 
-### Upgrading the TimescaleDB extension
+
 
-1.
Connect to psql using the `-X` flag. This prevents any `.psqlrc` commands
-   from accidentally triggering the load of a previous TimescaleDB version on
-   session startup.
-1. At the psql prompt, upgrade the TimescaleDB extension. This must be the first
-   command you execute in the current session:
+## Plan your upgrade path
 
-   ```sql
-   ALTER EXTENSION timescaledb UPDATE;
-   ```
+
 
-1. Check that you have upgraded to the latest version of the extension with the
-   `\dx` command. The output should show the upgraded version number.
 
-   ```sql
-   \dx timescaledb
-   ```
+## Implement your upgrade path
 
-
-   To upgrade TimescaleDB in a Docker container, see the
-   [Docker container upgrades](/self-hosted/latest/upgrades/upgrade-docker)
-   section.
-
+
 
-
+You are running a shiny new version of TimescaleDB.
 
+[relnotes]: https://github.com/timescale/timescaledb/releases
+[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#upgrade-postgresql
 [upgrade-major]: /self-hosted/:currentVersion:/upgrades/major-upgrade/
-
+[backup]: /self-hosted/:currentVersion:/backup-and-restore/
diff --git a/self-hosted/upgrades/upgrade-docker.md b/self-hosted/upgrades/upgrade-docker.md
index 8598ae0a9a..ed1c5ead52 100644
--- a/self-hosted/upgrades/upgrade-docker.md
+++ b/self-hosted/upgrades/upgrade-docker.md
@@ -1,26 +1,30 @@
 ---
 title: Upgrades within a Docker container
-excerpt: Upgrade TimescaleDB within a Docker container
+excerpt: Upgrade self-hosted TimescaleDB running in a Docker container to a new minor version
 products: [self_hosted]
 keywords: [upgrades, Docker]
 ---
 
 import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx";
 
-# Upgrades within a Docker container
+# Upgrade TimescaleDB running in Docker
 
-If you originally installed TimescaleDB using Docker, you can upgrade from
-within the Docker container. This allows you to upgrade to the latest
-TimescaleDB version, while retaining your data.
+If you originally installed TimescaleDB using Docker, you can upgrade from within the Docker
+container. This allows you to upgrade to the latest TimescaleDB version while retaining your data.
 
-
-In this section, the examples use a Docker instance called `timescaledb`. If you
-have given your Docker instance a different name, replace it when you issue the
-commands.
-
+The `timescale/timescaledb-ha*` images have the files necessary to run previous versions. Patch releases
+contain only bug fixes, so they are generally safe to apply. Non-patch releases occasionally require extra steps.
+These steps are mentioned in the [release notes][relnotes] for the version of TimescaleDB
+that you are upgrading to.
+
+After you upgrade the Docker image, you run `ALTER EXTENSION` for all databases using TimescaleDB.
+The examples in this page use a Docker instance called `timescaledb`. If you
+have given your Docker instance a different name, replace it when you issue the
+commands.
+
 ## Determine the mount point type
 
 When you start your upgraded Docker container, you need to be able to point the
@@ -79,12 +83,12 @@ data.
 
 ### Upgrading TimescaleDB within Docker
 
 1. Pull the latest TimescaleDB image. This command pulls the image for
-   PostgreSQL 14. If you're using another PostgreSQL version, look for the
-   relevant tag in the
+   TimescaleDB 2.17.x running on PostgreSQL 17. If you're using another PostgreSQL version,
+   look for the relevant tag in the
    [TimescaleDB HA Docker Hub repository](https://hub.docker.com/r/timescale/timescaledb-ha/tags).
```bash - docker pull timescale/timescaledb-ha:pg16 + docker pull timescale/timescaledb-ha:pg17 ``` 1. Stop the old container, and remove it: @@ -150,3 +154,4 @@ If you have multiple databases, you need to update each database separately. [toolkit]: /self-hosted/:currentVersion:/tooling/install-toolkit/ +[relnotes]: https://github.com/timescale/timescaledb/releases diff --git a/self-hosted/upgrades/upgrade-pg.md b/self-hosted/upgrades/upgrade-pg.md index 4e9ffa6481..d1ebf79494 100644 --- a/self-hosted/upgrades/upgrade-pg.md +++ b/self-hosted/upgrades/upgrade-pg.md @@ -1,100 +1,87 @@ --- title: Upgrade PostgreSQL -excerpt: Upgrade the PostgreSQL installation associated with TimescaleDB +excerpt: Upgrade PostgreSQL to a new version products: [self_hosted] keywords: [upgrades, PostgreSQL, versions, compatibility] --- +import PlanUpgrade from "versionContent/_partials/_plan_upgrade.mdx"; import ConsiderCloud from "versionContent/_partials/_consider-cloud.mdx"; +import PlanMigrationPath from "versionContent/_partials/_migrate_self_postgres_plan_migration_path.mdx"; # Upgrade PostgreSQL -Because TimescaleDB is a PostgreSQL extension, you need to ensure you keep your -underlying PostgreSQL installation up to date. When you upgrade your TimescaleDB -extension to a new version, always take the time to check if you need to also -upgrade your PostgreSQL version. If you are running an older version of -PostgreSQL, you need to upgrade it first, before you upgrade your TimescaleDB -extension. +TimescaleDB is a PostgreSQL extension. Ensure that you upgrade to compatible versions of TimescaleDB and PostgreSQL. -TimescaleDB supports these PostgreSQL releases. If you are not running a -compatible PostgreSQL version, make sure you upgrade PostgreSQL before you -upgrade TimescaleDB: - -||PostgreSQL 17|PostgreSQL 16|PostgreSQL 15|PostgreSQL 14|PostgreSQL 13|PostgreSQL 12|PostgreSQL 11|PostgreSQL 10|PostgreSQL 9.6| -|-|-|-|-|-|-|-|-|-| -|TimescaleDB 2.17 and higher|✅|✅|✅|✅|❌|❌|❌|❌|❌| -|TimescaleDB 2.16 and higher|❌|✅|✅|✅|✅|❌|❌|❌|❌|❌| -|TimescaleDB 2.15 and higher|❌|✅|✅|✅|✅|❌|❌|❌|❌|❌| -|TimescaleDB 2.14 and higher|❌|✅|✅|✅|✅|❌|❌|❌|❌|❌| -|TimescaleDB 2.13 and higher|❌|✅|✅|✅|✅|❌|❌|❌|❌| -|TimescaleDB 2.12 and higher|❌|❌|✅|✅|✅|❌|❌|❌|❌| -|TimescaleDB 2.10 and higher|❌|❌|✅|✅|✅|✅|❌|❌|❌| -|TimescaleDB 2.5 to 2.9|❌|❌|❌|✅|✅|✅|❌|❌|❌| -|TimescaleDB 2.4|❌|❌|❌|❌|✅|✅|❌|❌|❌| -|TimescaleDB 2.1 to 2.3|❌|❌|❌|❌|✅|✅|✅|❌|❌| -|TimescaleDB 2.0|❌|❌|❌|❌|❌|✅|✅|❌|❌ -|TimescaleDB 1.7|❌|❌|❌|❌|❌|✅|✅|✅|✅| - -We recommend not using TimescaleDB with PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, 12.21. -These minor versions [introduced a breaking binary interface change][postgres-breaking-change] that, -once identified, was reverted in subsequent minor PostgreSQL versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. -When you build from source, best practice is to build with PostgreSQL 17.2, 16.6, etc and higher. -Users of [Timescale Cloud](https://console.cloud.timescale.com/) and platform packages for Linux, Windows, MacOS, -Docker, and Kubernetes are unaffected. - -You cannot upgrade TimescaleDB and PostgreSQL at the same time. You upgrade each product in -the following steps: - -1. Upgrade TimescaleDB to the desired version in your current PostgreSQL installation. -2. Upgrade PostgreSQL to the desired version. - -The version of TimescaleDB installed in your PostgreSQL deployment must be the same before -and after the PostgreSQL upgrade. - -## Plan your upgrade - -You can upgrade your PostgreSQL installation in-place. 
This means
-that you do not need to dump and restore your data. However, it is still
-important that you plan for your upgrade ahead of time.
-
-Before you upgrade:
-
-* Read [the release notes][pg-relnotes] for the PostgreSQL version you are
-  upgrading to.
-* [Perform a backup][backup] of your database. While PostgreSQL and
-  TimescaleDB upgrades are performed in-place, upgrading is an intrusive
-  operation. Always make sure you have a backup on hand, and that the backup is
-  readable in the case of disaster.
-
-## Upgrade PostgreSQL
-
-You can use the [`pg_upgrade`][pg_upgrade] tool to upgrade PostgreSQL in-place.
-This means that you do not need to dump and restore your data. Instead,
-`pg_upgrade` allows you to retain the data files of your current PostgreSQL
-installation while binding the new PostgreSQL binary runtime to them. This is
-supported for PostgreSQL 8.4 and higher.
+
+## Prerequisites
+
+
+
+## Plan your upgrade path
+
+
+
+## Upgrade your PostgreSQL instance
+
+You use [`pg_upgrade`][pg_upgrade] to upgrade PostgreSQL in-place. `pg_upgrade` allows you to retain
+the data files of your current PostgreSQL installation while binding the new PostgreSQL binary runtime
+to them.
 
-### Upgrading PostgreSQL
+1. **Find the location of the PostgreSQL binary**
 
-1. Before you begin, determine the location of the PostgreSQL binary and your
-   data directory on your local system.
-1. At the psql prompt, perform the upgrade:
+   Set the `OLD_BIN_DIR` environment variable to the folder holding the `postgres` binary.
+   For example, `which postgres` returns something like `/usr/lib/postgresql/16/bin/postgres`.
+   ```bash
+   export OLD_BIN_DIR=/usr/lib/postgresql/16/bin
+   ```
+
+1. **Set your connection string**
+
+   This variable holds the connection information for the database to upgrade:
+
+   ```bash
+   export SOURCE="postgres://:@:/"
+   ```
+
+1. **Retrieve the location of the PostgreSQL data folder**
+
+   Set the `OLD_DATA_DIR` environment variable to the value returned by the following:
+   ```shell
+   psql -d "$SOURCE" -c "SHOW data_directory;"
+   ```
+   PostgreSQL returns something like:
+   ```shell
+   ----------------------------
+    /home/postgres/pgdata/data
+   (1 row)
+   ```
+
+1. **Choose the new locations for the PostgreSQL binary and data folders**
+
+   For example:
+   ```shell
+   export NEW_BIN_DIR=/usr/lib/postgresql/17/bin
+   export NEW_DATA_DIR=/home/postgres/pgdata/data-17
+   ```
+
+1. **Perform the upgrade**
+
+   `pg_upgrade` is a command-line tool: run it from your shell, not from psql:
 
-   ```sql
-   pg_upgrade -b -B -d -D
+   ```bash
+   pg_upgrade -b $OLD_BIN_DIR -B $NEW_BIN_DIR -d $OLD_DATA_DIR -D $NEW_DATA_DIR
    ```
 
-If you are moving data to a new physical instance of PostgreSQL, you can use the
-`pg_dump` and `pg_restore` tools to dump your data from the old database, and
-then restore it into the new, upgraded, database. For more information, see the [backup and restore section][backup].
+If you are moving data to a new physical instance of PostgreSQL, you can use `pg_dump` and `pg_restore`
+to dump your data from the old database, and then restore it into the new, upgraded, database. For more
+information, see the [backup and restore section][backup].
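A cautious run might look like the following sketch, which assumes the `OLD_*` and `NEW_*` variables set above. The new data directory must be initialized before `pg_upgrade` can bind to it, and the `--check` flag performs a dry run that reports incompatibilities without modifying either cluster:

```bash
# Initialize the new cluster (the target data directory must be empty):
$NEW_BIN_DIR/initdb -D $NEW_DATA_DIR

# Dry run first: --check only reports problems, it changes nothing.
pg_upgrade -b $OLD_BIN_DIR -B $NEW_BIN_DIR -d $OLD_DATA_DIR -D $NEW_DATA_DIR --check

# If the check passes, run the same command again without --check.
```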
 [backup]: /self-hosted/:currentVersion:/backup-and-restore/
 [pg-relnotes]: https://www.postgresql.org/docs/release/
 [pg_upgrade]: https://www.postgresql.org/docs/current/static/pgupgrade.html
 [postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/
+[upgrade-pg]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#upgrade-postgresql
From ee87dc01369ab6660cc27ae2b498d93a747b2215 Mon Sep 17 00:00:00 2001
From: Iain Cox
Date: Tue, 3 Dec 2024 12:26:44 +0000
Subject: [PATCH 11/18] chore: update the hypertable links. (#3636)

* chore: update the hypertable links.
---
 api/create_hypertable_old.md        |  8 ++++++
 use-timescale/hypertables/create.md | 40 +++++++++++++----------------
 2 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/api/create_hypertable_old.md b/api/create_hypertable_old.md
index 4530de6503..c0ee41070b 100644
--- a/api/create_hypertable_old.md
+++ b/api/create_hypertable_old.md
@@ -10,6 +10,13 @@ api:
 
 # create_hypertable() (old interface)
 
+
+
+This page describes the hypertable API supported prior to TimescaleDB v2.13. Best practice is to use the new
+[`create_hypertable`][api-create-hypertable] interface.
+
+
+
 Creates a TimescaleDB hypertable from a PostgreSQL table (replacing the
 latter), partitioned on time and with the option to partition
 on one or more other columns. The PostgreSQL table cannot be an already
 partitioned table
@@ -183,3 +190,4 @@ SELECT create_hypertable('events', 'event', time_partitioning_func => 'event_sta
 [create_distributed_hypertable]: /api/:currentVersion:/distributed-hypertables/create_distributed_hypertable
 [hash-partitions]: /use-timescale/:currentVersion:/hypertables/about-hypertables/#hypertable-partitioning
 [hypertable-docs]: /use-timescale/:currentVersion:/hypertables/
+[api-create-hypertable]: /api/:currentVersion:/hypertable/create_hypertable/
diff --git a/use-timescale/hypertables/create.md b/use-timescale/hypertables/create.md
index 17ccf93a96..f77b154c91 100644
--- a/use-timescale/hypertables/create.md
+++ b/use-timescale/hypertables/create.md
@@ -7,29 +7,22 @@ keywords: [hypertables, create]
 
 # Create hypertables
 
-After [creating a Timescale database][install], you're ready to create your
-first hypertable. Creating a hypertable is a two-step process:
-1. Create a PostgreSQL table as usual
-1. Convert it to a hypertable
+Hypertables are designed for real-time analytics: they are PostgreSQL tables that automatically partition your data by
+time. Typically, you partition hypertables on columns that hold time values. These partitioning columns can be of
+the `timestamptz`, `date`, or `integer` types. While `timestamp` is also supported,
+[best practice is to use `timestamptz`][timestamps-best-practice].
 
-
-This code uses the new generalized hypertable API introduced in
-TimescaleDB 2.13. The [old interface for `create_hypertable` is also
-available](/api/:currentVersion:/hypertable/create_hypertable_old/).
-
+This code uses the best practice [`create_hypertable`][api-create-hypertable] API introduced in TimescaleDB 2.13.
+You can also use the [old interface](/api/:currentVersion:/hypertable/create_hypertable_old/).
 
-## Create a hypertable
 
-To create a hypertable, you create a standard PostgreSQL table and then
-convert it into a hypertable.
+## Create a hypertable
 
-Hypertables are designed for real-time analytics and typically partitioned by columns that hold time values. These can be of the `timestamptz`, `date`, or `integer` types.
While `timestamp` is also supported, best practice is to use `timestamptz` instead. [PostgreSQL timestamp](https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_timestamp_.28without_time_zone.29) explains why using `timestamp` is discouraged. +After you have [created a Timescale Cloud service][install], you're ready to create your first hypertable: -### Creating a hypertable - 1. Create a standard [PostgreSQL table][postgres-createtable]: ```sql @@ -42,8 +35,9 @@ Hypertables are designed for real-time analytics and typically partitioned by co ); ``` -1. Convert the table to a hypertable. Specify the name of the table you want to - convert, and the column that holds its time values. +1. [Convert the table to a hypertable][api-create-hypertable]. + + Specify the name of the table to convert and the column that holds its time values. For example: ```sql SELECT create_hypertable('conditions', by_range('time')); @@ -51,18 +45,20 @@ Hypertables are designed for real-time analytics and typically partitioned by co -If your table already has data, you can migrate the data when creating the -hypertable. Set the `migrate_data` argument to true when you call the -`create_hypertable` function. This might take a long time if you have a lot of -data. For more information about migrating data, see the +If your table already has data, set [`migrate_data` to `true`][api-create-hypertable-arguments] when +you create the hypertable. + +However, if you have a lot of data, this may take a long time. For more information about migrating data, see [Migrate your data to Timescale Cloud][data-migration]. -[create-distributed-hypertable]: /self-hosted/:currentVersion:/distributed-hypertables/create-distributed-hypertables/ [install]: /getting-started/:currentVersion:/ [postgres-createtable]: https://www.postgresql.org/docs/current/sql-createtable.html [postgresql-timestamp]: https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_timestamp_.28without_time_zone.29 [data-migration]: /migrate/:currentVersion:/ +[api-create-hypertable]: /api/:currentVersion:/hypertable/create_hypertable/ +[api-create-hypertable-arguments]: /api/:currentVersion:/hypertable/create_hypertable/#arguments +[timestamps-best-practice]: https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_timestamp_.28without_time_zone.29 From 1c2e811d7ac7e94f888d7f49a0c744566a7697d8 Mon Sep 17 00:00:00 2001 From: atovpeko <114177030+atovpeko@users.noreply.github.com> Date: Wed, 4 Dec 2024 19:41:13 +0200 Subject: [PATCH 12/18] Updated UI for service creation --- getting-started/services.md | 23 ++++++++++------------- 1 file changed, 10 insertions(+), 13 deletions(-) diff --git a/getting-started/services.md b/getting-started/services.md index 9dedc3b91a..105f80f60f 100644 --- a/getting-started/services.md +++ b/getting-started/services.md @@ -16,7 +16,6 @@ import CloudIntro from "versionContent/_partials/_cloud-intro.mdx"; - To start using $CLOUD_LONG for your data: @@ -32,26 +31,24 @@ To start using $CLOUD_LONG for your data: ## Create a $SERVICE_LONG -Now that you have an active $COMPANY account, you create and manage your services in $CONSOLE. When you create a service, you give a structure for your future data, which you then add manually or migrate from other services. All relevant $CLOUD_LONG features under your pricing plan are automatically available when you create a service. +Now that you have an active $COMPANY account, you create and manage your $SERVICE_SHORTs in $CONSOLE. 
When you create a service, you give a structure for your future data, which you then add manually or migrate from other services. All relevant $CLOUD_LONG features under your pricing plan are automatically available when you create a service. + + - +1. In the [service creation page][create-service], choose the PostgreSQL service or add `Time-series and analytics` and `AI and Vector` capabilities. Click `Save and continue`. -1. In the [service creation page][create-service], choose **Time Series and Analytics**. - ![Create Timescale Cloud service](https://assets.timescale.com/docs/images/console-create-service.png) + ![Create Timescale Cloud service](https://assets.timescale.com/docs/images/create-timescale-service.png) -1. In **Create a service**, configure your service, then click **Create service**. +1. Follow the next steps in `Create a service` to configure the compute size, environment, availability, region, and service name. Then click `Create service`. Your service is constructed immediately and is ready to use. -1. Click **Download the config** and store the configuration information you need to connect to this service in a - secure location. +1. Click `Download the config` and store the configuration information you need to connect to this service in a secure location. This file contains the passwords and configuration information you need to connect to your service using the - $CONSOLE Cloud SQL editors, from the command line, or using third party database administration tools. + $CONSOLE data mode, from the command line, or using third-party database administration tools. -1. Follow the service creation wizard. - -If you choose to go directly to the service overview, [Connect to your service][connect-to-your-service] +If you choose to go directly to the service overview, [Check your service and connect to it][connect-to-your-service] shows you how to connect. @@ -71,13 +68,13 @@ And that is it, you are up and running. Enjoy developing with $COMPANY. [tsc-portal]: https://console.cloud.timescale.com/ [services-how-to]: /use-timescale/:currentVersion:/services/ [install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/ - [create-an-account]: /getting-started/:currentVersion:/services/#create-a-timescale-cloud-account [create-a-service]: /getting-started/:currentVersion:/services/#create-a-timescale-cloud-service [connect-to-your-service]: /getting-started/:currentVersion:/services/#connect-to-your-service [create-a-hypertable]: /getting-started/:currentVersion:/services/#create-a-hypertable [create-service]: https://console.cloud.timescale.com/dashboard/create_services [what-is-time-series]: https://www.timescale.com/blog/what-is-a-time-series-database/#what-is-a-time-series-database +[what-is-dynamic-postgres]: https://www.timescale.com/dynamic-postgresql [hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/#hypertable-partitioning [timescaledb]: https://docs.timescale.com/#TimescaleDB From 20aff079db922b3865b736ce49913de829a90ac0 Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Thu, 5 Dec 2024 10:24:40 +0000 Subject: [PATCH 13/18] chore: add a note about chunk size. 
(#3639)
---
 .../hypertables/about-hypertables.md | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/use-timescale/hypertables/about-hypertables.md b/use-timescale/hypertables/about-hypertables.md
index 1b9a5c7ed7..e48bc298dc 100644
--- a/use-timescale/hypertables/about-hypertables.md
+++ b/use-timescale/hypertables/about-hypertables.md
@@ -60,21 +60,22 @@ the number of chunks you see when inspecting it.
 
 ### Best practices for time partitioning
 
 Chunk size affects insert and query performance. You want a chunk small enough
-to fit into memory. This allows you to insert and query recent data without
-reading from disk. But you don't want too many small and sparsely filled chunks.
-This can affect query planning time and compression.
+to fit into memory so you can insert and query recent data without
+reading from disk. However, having too many small and sparsely filled chunks can
+affect query planning time and compression.
 
-We recommend setting the `chunk_time_interval` so that 25% of main memory can
-store one chunk, including its indexes, from each active hypertable. You can
-estimate the required interval from your data rate. For example, if you write
-approximately 2 GB of data per day and have 64 GB of memory, set the
-interval to 1 week. If you write approximately 10 GB of data per day on the
-same machine, set the time interval to 1 day.
+Best practice is to set `chunk_time_interval` so that, prior to processing, one chunk of data
+takes up 25% of main memory, including the indexes from each active hypertable.
+For example, if you write approximately 2 GB of data per day to a database with 64 GB of
+memory, set `chunk_time_interval` to 1 week. If you write approximately 10 GB of data per day
+on the same machine, set the time interval to 1 day.
 
+
 If you use expensive index types, such as some PostGIS geospatial indexes, take
 care to check the total size of the chunk and its index. You can do so using the
 [`chunks_detailed_size`](/api/latest/hypertable/chunks_detailed_size)
 function.
+
 
 For a detailed analysis of how to optimize your chunk sizes, see the
From f15f314af5a0dec989f2d802ba046c975e03e724 Mon Sep 17 00:00:00 2001
From: Iain Cox
Date: Fri, 6 Dec 2024 15:22:58 +0100
Subject: [PATCH 14/18] chore: add a note about timescale.enable_chunk_skipping.
 (#3638)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* chore: add a note about timescale.enable_chunk_skipping.

* chore: add a note about recompression after migration from an older
version of TimescaleDB.

Co-authored-by: Jônatas Davi Paganini
---
 api/enable_chunk_skipping.md | 79 ++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 43 deletions(-)

diff --git a/api/enable_chunk_skipping.md b/api/enable_chunk_skipping.md
index e5f5df140d..587aaeb7d8 100644
--- a/api/enable_chunk_skipping.md
+++ b/api/enable_chunk_skipping.md
@@ -13,34 +13,36 @@ api:
 
 Enable range statistics for a specific column in a **compressed** hypertable.
 This tracks a range of values for that column per chunk. Used for chunk pruning
 during query optimization.
 
-### Required arguments
+Best practice is to enable range tracking on columns that are correlated to the
+partitioning column. In other words, enable tracking on secondary columns which are
+referenced in the `WHERE` clauses in your queries.
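For example, here is a sketch of the kind of query this helps, using the hypothetical `conditions` hypertable and `device_id` column that also appear in the sample later on this page:

```sql
-- Assumes range tracking was enabled on the secondary column, for example:
-- SELECT enable_chunk_skipping('conditions', 'device_id');
-- Once chunks are compressed, a min/max range for device_id is stored per
-- chunk, so a filter on that column lets the planner skip chunks whose
-- stored range cannot contain matching rows:
SELECT *
FROM conditions
WHERE device_id BETWEEN 100 AND 200
  AND time > now() - INTERVAL '30 days';
```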
-|Name|Type|Description| -|-|-|-| -|`hypertable`|REGCLASS|Hypertable that the column belongs to| -|`column_name`|TEXT|Column to track range statistics for| - -### Optional arguments +TimescaleDB supports min/max range tracking for the `smallint`, `int`, +`bigint`, `serial`, `bigserial`, `date`, `timestamp`, and `timestamptz` data types. The +min/max ranges are calculated when a chunk belonging to +this hypertable is compressed using the [compress_chunk][compress_chunk] function. +The range is stored in start (inclusive) and end (exclusive) form in the +`chunk_column_stats` catalog table. -|Name|Type|Description| -|-|-|-| -|`if_not_exists`|BOOLEAN|Set to `true` so that a notice is sent when ranges are not being tracked for a column. By default, an error is thrown| +This way you store the min/max values for such columns in this catalog +table at the per-chunk level. These min/max range values do +not participate in partitioning of the data. These ranges are +used for chunk pruning when the `WHERE` clause of an SQL query specifies +ranges on the column. -### Returns +A [DROP COLUMN](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-DESC-DROP-COLUMN) +on a column with statistics tracking enabled on it ends up removing all relevant entries +from the catalog table. -|Column|Type|Description| -|-|-|-| -|`column_stats_id`|INTEGER|ID of the entry in the TimescaleDB internal catalog| -|`enabled`|BOOLEAN|Returns `true` when tracking is enabled, `if_not_exists` is `true`, and when a new entry is not -added| +A [decompress_chunk][decompress_chunk] invocation on a compressed chunk resets its entries +from the `chunk_column_stats` catalog table since now it's available for DML and the +min/max range values can change on any further data manipulation in the chunk. - - TimescaleDB supports min/max range tracking for the `smallint`, `int`, - `bigint`, `serial`, `bigserial`, `date`, `timestamp`, and `timestamptz` data types. +By default, this feature is disabled. To enable chunk skipping, set `timescale.enable_chunk_skipping = on` in +`postgresql.conf`. When you upgrade from a database instance that uses compression but does not support chunk +skipping, you need to recompress the previously compressed chunks for chunk skipping to work. - - -### Sample use +## Samples In this sample, you convert the `conditions` table to a hypertable with partitioning on the `time` column. You then specify and enable additional columns to track ranges for. @@ -50,31 +52,22 @@ SELECT create_hypertable('conditions', 'time'); SELECT enable_chunk_skipping('conditions', 'device_id'); ``` - - Best practice is to enable range tracking on columns that are correlated to the - partitioning column. In other words, enable tracking on secondary columns which are - referenced in the `WHERE` clauses in your queries. - - The min/max ranges are calculated when a chunk belonging to - this hypertable is compressed using the [compress_chunk][compress_chunk] function. - The range is stored in start (inclusive) and end (exclusive) form in the - `chunk_column_stats` catalog table. +## Arguments - This way you store the min/max values for such columns in this catalog - table at the per-chunk level. These min/max range values do - not participate in partitioning of the data. These ranges are - used for chunk pruning when the `WHERE` clause of an SQL query specifies - ranges on the column. 
+| Name | Type | Default | Required | Description | +|-------------|------------------|---------|-|----------------------------------------| +|`column_name`| `TEXT` | - | ✔ | Column to track range statistics for | +|`hypertable`| `REGCLASS` | - | ✔ | Hypertable that the column belongs to | +|`if_not_exists`| `BOOLEAN` | `false` | ✖ | Set to `true` so that a notice is sent when ranges are not being tracked for a column. By default, an error is thrown | - A [DROP COLUMN](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-DESC-DROP-COLUMN) - on a column with statistics tracking enabled on it ends up removing all relevant entries - from the catalog table. - A [decompress_chunk][decompress_chunk] invocation on a compressed chunk resets its entries - from the `chunk_column_stats` catalog table since now it's available for DML and the - min/max range values can change on any further data manipulation in the chunk. +## Returns - +|Column|Type|Description| +|-|-|-| +|`column_stats_id`|INTEGER|ID of the entry in the TimescaleDB internal catalog| +|`enabled`|BOOLEAN|Returns `true` when tracking is enabled, `if_not_exists` is `true`, and when a new entry is not +added| [compress_chunk]: /api/:currentVersion:/compression/compress_chunk/ [decompress_chunk]: /api/:currentVersion:/compression/decompress_chunk/ From 1934b042246494b8f1ccdb9ad9492ec911690a73 Mon Sep 17 00:00:00 2001 From: Willie Tran Date: Fri, 6 Dec 2024 09:47:35 -0600 Subject: [PATCH 15/18] Update changelog.md (#3641) * Update changelog.md Signed-off-by: Willie Tran * review --------- Signed-off-by: Willie Tran Co-authored-by: atovpeko Co-authored-by: Iain Cox --- about/changelog.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/about/changelog.md b/about/changelog.md index 68cd93865a..8bd45a13f5 100644 --- a/about/changelog.md +++ b/about/changelog.md @@ -8,6 +8,19 @@ keywords: [changelog, upgrades, updates, releases] All the latest features and updates to Timescale products. +## 🛝 New service creation flow + + +- **AI and Vector:** the UI now lets you choose an option for creating AI and Vector-ready services right from the start. You no longer need to add the pgai, pgvector, and pgvectorscale extensions manually. You can combine this with time-series capabilities as well! + + ![Create Timescale Cloud service](https://assets.timescale.com/docs/images/create-timescale-service.png) + +- **Compute size recommendations:** new (and old) users were sometimes unsure about what compute size to use for their workload. We now offer compute size recommendations based on how much data you plan to have in your service. + + ![Service compute recommendation](https://assets.timescale.com/docs/images/timescale-service-compute-size.png) + +- **More information about configuration options:** we've made it clearer what each configuration option does, so that you can make more informed choices about how you want your service to be set up. + ## 🗝️ IP Allow Lists! From 8708fd93ec848e2ac1c2af13770806ce559bcd12 Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Mon, 9 Dec 2024 15:12:10 +0100 Subject: [PATCH 16/18] 3130 site bug issue with the page self hostedlatestbackup and restore (#3626) chore: play with the logical backup page. 
Co-authored-by: Arunprasad Rajkumar
---
 self-hosted/backup-and-restore/index.md      |   5 +-
 .../backup-and-restore/logical-backup.md     | 185 ++++++++++++++++++
 self-hosted/page-index/page-index.js         |   5 +
 3 files changed, 193 insertions(+), 2 deletions(-)
 create mode 100644 self-hosted/backup-and-restore/logical-backup.md

diff --git a/self-hosted/backup-and-restore/index.md b/self-hosted/backup-and-restore/index.md
index ef311ad61a..dcf55fb714 100644
--- a/self-hosted/backup-and-restore/index.md
+++ b/self-hosted/backup-and-restore/index.md
@@ -14,8 +14,8 @@ TimescaleDB takes advantage of the reliable backup and restore functionality
 provided by PostgreSQL. There are a few different mechanisms you can use to
 backup your self-hosted TimescaleDB database:
 
-* Logical backups with pg_dump and pg_restore.
-* [Physical backups][physical-backups] with `pg_basebackup` or another tool.
+* [Logical backup][logical-backups] with pg_dump and pg_restore.
+* [Physical backup][physical-backups] with `pg_basebackup` or another tool.
 * _DEPRECATED_ [Ongoing physical backups][ongoing-physical-backups] using
   write-ahead log (WAL) archiving.
 
 [ongoing-physical-backups]: /self-hosted/:currentVersion:/backup-and-restore/docker-and-wale/
 [physical-backups]: /self-hosted/:currentVersion:/backup-and-restore/physical/
+[logical-backups]: /self-hosted/:currentVersion:/backup-and-restore/logical-backup/
diff --git a/self-hosted/backup-and-restore/logical-backup.md b/self-hosted/backup-and-restore/logical-backup.md
new file mode 100644
index 0000000000..e903b7f46c
--- /dev/null
+++ b/self-hosted/backup-and-restore/logical-backup.md
@@ -0,0 +1,185 @@
+---
+title: Logical backup with pg_dump and pg_restore
+excerpt: Back up and restore a hypertable or an entire database using native PostgreSQL commands
+keywords: [backups, restore]
+tags: [recovery, logical backup, pg_dump, pg_restore]
+---
+
+# Logical backup with `pg_dump` and `pg_restore`
+
+You back up and restore each self-hosted PostgreSQL database with TimescaleDB enabled using the native
+PostgreSQL [`pg_dump`][pg_dump] and [`pg_restore`][pg_restore] commands. This also works for compressed hypertables:
+you don't have to decompress the chunks before you begin.
+
+If you are using `pg_dump` to back up regularly, make sure you keep
+track of the versions of PostgreSQL and TimescaleDB you are running. For more
+information, see [Versions are mismatched when dumping and restoring a database][troubleshooting-version-mismatch].
+
+This page shows you how to:
+
+- [Back up and restore an entire database][backup-entire-database]
+- [Back up and restore individual hypertables][backup-individual-tables]
+
+You can also [upgrade between different versions of TimescaleDB][timescaledb-upgrade].
+
+## Prerequisites
+
+- A source database to back up from, and a target database to restore to.
+- Install the `psql` and `pg_dump` PostgreSQL client tools on your migration machine.
+
+## Back up and restore an entire database
+
+You back up and restore an entire database using `pg_dump` and `psql`.
+
+
+
+In your terminal:
+
+1. **Set your connection strings**
+
+   These variables hold the connection information for the source database to back up from and
+   the target database to restore to:
+
+   ```bash
+   export SOURCE="postgres://:@:/"
+   export TARGET="postgres://:@:"
+   ```
+
+1. **Back up your database**
+
+   ```bash
+   pg_dump -d "$SOURCE" \
+   -Fc -f .bak
+   ```
+   You may see some errors while `pg_dump` is running.
See [Troubleshooting self-hosted TimescaleDB][troubleshooting]
+   to check if they can be safely ignored.
+
+1. **Restore your database from the backup**
+
+   1. Connect to your target database:
+      ```bash
+      psql -d "$TARGET"
+      ```
+
+   1. Create a new database and enable TimescaleDB:
+
+      ```sql
+      CREATE DATABASE ;
+      \c
+      CREATE EXTENSION IF NOT EXISTS timescaledb;
+      ```
+
+   1. Put your database in the right state for restoring:
+
+      ```sql
+      SELECT timescaledb_pre_restore();
+      ```
+
+   1. Restore the database. `pg_restore` is a command-line tool, so run it from your shell rather than from psql:
+
+      ```bash
+      pg_restore -Fc -d .bak
+      ```
+
+   1. Return your database to normal operations:
+
+      ```sql
+      SELECT timescaledb_post_restore();
+      ```
+   Do not use `pg_restore` with the `-j` option. This option does not correctly restore the
+   TimescaleDB catalogs.
+
+
+
+## Back up and restore individual hypertables
+
+`pg_dump` provides flags that allow you to specify tables or schemas
+to back up. However, using these flags means that the dump lacks necessary
+information that TimescaleDB requires to understand the relationship between
+them. Even if you explicitly specify both the hypertable and all of its
+constituent chunks, the dump would still not contain all the information it
+needs to recreate the hypertable on restore.
+
+To back up individual hypertables, back up the database schema, then back up only the tables
+you need. You can also use this method to back up individual plain tables.
+
+In your terminal:
+
+1. **Set your connection strings**
+
+   These variables hold the connection information for the source database to back up from and
+   the target database to restore to:
+
+   ```bash
+   export SOURCE="postgres://:@:/"
+   export TARGET="postgres://:@:/"
+   ```
+
+1. **Back up the database schema and individual tables**
+
+   1. Back up the hypertable schema:
+
+      ```bash
+      pg_dump -s -d $SOURCE --table > schema.sql
+      ```
+
+   1. Back up hypertable data to a CSV file:
+
+      For each hypertable to back up:
+      ```bash
+      psql -d $SOURCE \
+      -c "\COPY (SELECT * FROM ) TO .csv DELIMITER ',' CSV"
+      ```
+
+1. **Restore the schema to the target database**
+
+   ```bash
+   psql -d $TARGET < schema.sql
+   ```
+
+1. **Restore hypertables from the backup**
+
+   For each hypertable to restore:
+   1. Recreate the hypertable:
+
+      ```bash
+      psql -d $TARGET -c "SELECT create_hypertable(, )"
+      ```
+      When you [create the new hypertable][create_hypertable], you do not need to use the
+      same parameters as existed in the source database. This
+      can provide a good opportunity for you to re-organize your hypertables if
+      you need to. For example, you can change the partitioning key, the number of
+      partitions, or the chunk interval sizes.
+
+   1. Restore the data:
+
+      ```bash
+      psql -d $TARGET -c "\COPY FROM .csv CSV"
+      ```
+
+      The standard `COPY` command in PostgreSQL is single threaded. If you have a
+      lot of data, you can speed up the copy using the [timescaledb-parallel-copy][parallel importer].
+
+
+
+Best practice is to back up and restore one database at a time. However, if you have superuser access to
+a PostgreSQL instance with TimescaleDB installed, you can use `pg_dumpall` to back up all PostgreSQL databases in a
+cluster, including global objects that are common to all databases, namely database roles, tablespaces,
+and privilege grants. You restore the PostgreSQL instance using `psql`. For more information, see the
+[PostgreSQL documentation][postgres-docs].
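As a sketch of this cluster-wide approach, assuming superuser access and that `$SOURCE` and `$TARGET` point at the old and new instances: `pg_dumpall` emits plain SQL, so the restore goes through `psql` rather than `pg_restore`. Note that the `timescaledb_pre_restore()` and `timescaledb_post_restore()` steps described above still apply to each database that uses TimescaleDB, which is why one database at a time remains the best practice.

```bash
# Dump every database in the cluster, plus roles and tablespaces:
pg_dumpall -d "$SOURCE" -f cluster_backup.sql

# Restore by replaying the SQL against the new instance:
psql -d "$TARGET" -f cluster_backup.sql
```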
+
+[parallel importer]: https://github.com/timescale/timescaledb-parallel-copy
+[pg_dump]: https://www.postgresql.org/docs/current/static/app-pgdump.html
+[pg_restore]: https://www.postgresql.org/docs/current/static/app-pgrestore.html
+[timescaledb_pre_restore]: /api/:currentVersion:/administration/#timescaledb_pre_restore
+[timescaledb_post_restore]: /api/:currentVersion:/administration/#timescaledb_post_restore
+[timescaledb-upgrade]: /self-hosted/:currentVersion:/upgrades/
+[troubleshooting]: /self-hosted/:currentVersion:/troubleshooting/
+[troubleshooting-version-mismatch]: /self-hosted/:currentVersion:/troubleshooting/#versions-are-mismatched-when-dumping-and-restoring-a-database
+[postgres-docs]: https://www.postgresql.org/docs/17/backup-dump.html#BACKUP-DUMP-ALL
+[backup-entire-database]: /self-hosted/:currentVersion:/backup-and-restore/logical-backup/#back-up-and-restore-an-entire-database
+[backup-individual-tables]: /self-hosted/:currentVersion:/backup-and-restore/logical-backup/#back-up-and-restore-individual-hypertables
+[create_hypertable]: /api/:currentVersion:/hypertable/create_hypertable/
diff --git a/self-hosted/page-index/page-index.js b/self-hosted/page-index/page-index.js
index 80b601a3b0..f1947f3816 100644
--- a/self-hosted/page-index/page-index.js
+++ b/self-hosted/page-index/page-index.js
@@ -101,6 +101,11 @@ module.exports = [
         title: "Backup and restore",
         href: "backup-and-restore",
         children: [
+          {
+            title: "Logical backup",
+            href: "logical-backup",
+            excerpt: "Back up and restore a hypertable or an entire database using native PostgreSQL commands",
+          },
           {
             title: "Docker & WAL-E",
             href: "docker-and-wale",

From 0dfcd48288684ff89632540dad1c5b12b7c815ec Mon Sep 17 00:00:00 2001
From: Iain Cox
Date: Thu, 12 Dec 2024 15:26:15 +0100
Subject: [PATCH 17/18] chore: make self-hosted configuration easier to
 understand. (#3624)

---
 self-hosted/configuration/postgres-config.md | 38 +++++++++++++------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/self-hosted/configuration/postgres-config.md b/self-hosted/configuration/postgres-config.md
index 887a296888..483df9c1a1 100644
--- a/self-hosted/configuration/postgres-config.md
+++ b/self-hosted/configuration/postgres-config.md
@@ -18,23 +18,37 @@
 For some common configuration settings you might want to adjust, see the
 [...]
 For more information about the PostgreSQL configuration page, see the
 [PostgreSQL documentation][pg-config].
 
-## Editing the PostgreSQL configuration file
+## Edit the PostgreSQL configuration file
 
 The location of the PostgreSQL configuration file depends on your operating
-system and installation. You can find the location by querying the database as
-the `postgres` user, from the psql prompt:
+system and installation.
 
-```sql
-SHOW config_file;
-```
+1. **Find the location of the config file for your Postgres instance**
+   1. Connect to your database:
+      ```shell
+      psql -d "postgres://<user>:<password>@<host>:<port>/<db_name>"
+      ```
+   1. Retrieve the location of the configuration file from the database's internal settings:
+      ```sql
+      SHOW config_file;
+      ```
+      Postgres returns the path to your configuration file. For example:
+      ```sql
+      --------------------------------------------
+       /home/postgres/pgdata/data/postgresql.conf
+      (1 row)
+      ```
 
-The configuration file requires one parameter per line. Blank lines are ignored,
-and you can use a `#` symbol at the beginning of a line to denote a comment.
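+   You can also retrieve the path non-interactively. A minimal sketch, assuming the same
+   placeholder connection string as the previous step; `-t` prints tuples only:
+   ```shell
+   # Output is just the path, suitable for use in scripts
+   psql -d "postgres://<user>:<password>@<host>:<port>/<db_name>" -t -c "SHOW config_file;"
+   ```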
+1. **Open the config file, then [edit your PostgreSQL configuration][pg-config]**
+   ```shell
+   vi /home/postgres/pgdata/data/postgresql.conf
+   ```
+
+1. **Save your updated configuration**
 
-When you have made changes to the configuration file, the new configuration is
-not applied immediately. The configuration file is reloaded whenever the server
-receives a `SIGHUP` signal, or you can manually reload the file uses the
-`pg_ctl` command.
+   When you save changes to the configuration file, the new configuration is
+   not applied immediately. The configuration file is reloaded automatically when the server
+   receives a `SIGHUP` signal. To reload the file manually, use the `pg_ctl` command.
 
 ## Setting parameters at the command prompt

From 5dd7ed70ad33320ee02962dec84dfdb740f95bb4 Mon Sep 17 00:00:00 2001
From: atovpeko <114177030+atovpeko@users.noreply.github.com>
Date: Fri, 13 Dec 2024 13:49:26 +0200
Subject: [PATCH 18/18] merge clarification (#3649)

Co-authored-by: Iain Cox

---
 self-hosted/configuration/timescaledb-config.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/self-hosted/configuration/timescaledb-config.md b/self-hosted/configuration/timescaledb-config.md
index 7f3e0af993..4f92fb621d 100644
--- a/self-hosted/configuration/timescaledb-config.md
+++ b/self-hosted/configuration/timescaledb-config.md
@@ -40,9 +40,9 @@ in this way.
 
 ### `timescaledb.enable_merge_on_cagg_refresh (bool)`
 
-Set to `TRUE` to dramatically decrease the amount of data written on a continuous aggregate
+Set to `ON` to dramatically decrease the amount of data written on a continuous aggregate
 in the presence of a small number of changes, reduce the i/o cost of refreshing a
-[continuous aggregate][continuous-aggregates], and generate fewer Write-Ahead Logs (WAL)
+[continuous aggregate][continuous-aggregates], and generate fewer write-ahead log (WAL) records.
 Only works for continuous aggregates that don't have compression enabled.
 
 ## Distributed hypertables