feat: put the structure in for hypercore.

1 parent 20aff07 · commit 9caf890

Showing 14 changed files with 765 additions and 37 deletions.
@@ -0,0 +1,91 @@
---
api_name: add_compression_policy()
excerpt: Add policy to schedule automatic compression of chunks
topics: [compression, jobs]
keywords: [compression, policies]
tags: [scheduled jobs, background jobs, automation framework]
api:
  license: community
  type: function
---

# add_compression_policy() <Tag type="community" content="community" />

Set a policy to automatically compress chunks in the background once they
reach a given age.

Compression policies can only be created on hypertables or continuous aggregates
that already have compression enabled. To set `timescaledb.compress` and other
configuration parameters for hypertables, use the
[`ALTER TABLE`][compression_alter-table] command. To enable compression on
continuous aggregates, use the
[`ALTER MATERIALIZED VIEW`][compression_continuous-aggregate] command. To view
the policies that you set or the policies that already exist, see
[informational views][informational-views].
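
For example, a minimal sketch of enabling compression before adding a policy
(the `metrics` hypertable, `metrics_daily` continuous aggregate, and
`device_id` column are illustrative names, not part of this API):

```sql
-- Enable compression on a hypertable, segmenting compressed rows by device_id
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);

-- Enable compression on a continuous aggregate
ALTER MATERIALIZED VIEW metrics_daily SET (timescaledb.compress = true);
```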

## Required arguments

|Name|Type|Description|
|-|-|-|
|`hypertable`|REGCLASS|Name of the hypertable or continuous aggregate|
|`compress_after`|INTERVAL or INTEGER|The age after which the policy job compresses chunks. `compress_after` is calculated relative to the current time, so chunks containing data older than `now - {compress_after}::interval` are compressed. This argument is mutually exclusive with `compress_created_before`.|
|`compress_created_before`|INTERVAL|Chunks with a creation time older than this cut-off point are compressed. The cut-off point is computed as `now() - compress_created_before`. Defaults to `NULL`. Not yet supported for continuous aggregates. This argument is mutually exclusive with `compress_after`.|

The `compress_after` parameter should be specified differently depending
on the type of the time column of the hypertable or continuous aggregate:

* For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time
  interval should be an INTERVAL type.
* For hypertables with integer-based timestamps: the time interval should be
  an integer type (this requires the [integer_now_func][set_integer_now_func]
  to be set; a sketch follows this list).
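
A minimal sketch of that prerequisite (the `current_epoch` function name and
the assumption that the time column stores epoch seconds are illustrative):

```sql
-- integer_now_func must return "now" in the same units as the time column
CREATE FUNCTION current_epoch() RETURNS BIGINT
LANGUAGE SQL STABLE AS $$
  SELECT extract(epoch FROM now())::BIGINT
$$;

SELECT set_integer_now_func('table_with_bigint_time', 'current_epoch');
```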

## Optional arguments

<!-- vale Google.Acronyms = NO -->
<!-- vale Vale.Spelling = NO -->

|Name|Type|Description|
|-|-|-|
|`schedule_interval`|INTERVAL|The interval between the finish time of the last execution and the next start. Defaults to 12 hours for hypertables with a `chunk_time_interval` >= 1 day and to `chunk_time_interval / 2` for all other hypertables.|
|`initial_start`|TIMESTAMPTZ|Time the policy is first run. Defaults to `NULL`. If omitted, the schedule interval is the interval from the finish time of the last execution to the next start. If provided, it serves as the origin from which the next start is calculated.|
|`timezone`|TEXT|A valid time zone. If `initial_start` is also specified, subsequent executions of the compression policy are aligned on its initial start. However, daylight saving time (DST) changes may shift this alignment. Set to a valid time zone if this is an issue you want to mitigate. If omitted, UTC bucketing is performed. Defaults to `NULL`.|
|`if_not_exists`|BOOLEAN|Setting to `true` causes the command to fail with a warning instead of an error if a compression policy already exists on the hypertable. Defaults to `false`.|
<!-- vale Google.Acronyms = YES -->
<!-- vale Vale.Spelling = YES -->

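For example, a sketch that combines several of the optional arguments (the
`cpu` hypertable and all values are illustrative):

```sql
SELECT add_compression_policy(
  'cpu',
  compress_after    => INTERVAL '60d',
  schedule_interval => INTERVAL '1 day',
  initial_start     => '2025-01-01 00:00:00+00',
  timezone          => 'UTC',
  if_not_exists     => true
);
```
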
## Sample usage

Add a policy to compress chunks older than 60 days on the `cpu` hypertable.

```sql
SELECT add_compression_policy('cpu', compress_after => INTERVAL '60d');
```

Add a policy to compress chunks created more than 3 months ago on the `cpu` hypertable.

```sql
SELECT add_compression_policy('cpu', compress_created_before => INTERVAL '3 months');
```

Note that when `compress_after` is used, the target chunks are selected based
on the data range in the partitioning time column, whereas when
`compress_created_before` is used, chunks are selected based on their creation
time.

Add a compression policy to a hypertable with an integer-based time column:

```sql
SELECT add_compression_policy('table_with_bigint_time', BIGINT '600000');
```

Add a policy to compress chunks of a continuous aggregate named `cpu_weekly`
that are older than eight weeks:

```sql
SELECT add_compression_policy('cpu_weekly', INTERVAL '8 weeks');
```

[compression_alter-table]: /api/:currentVersion:/compression/alter_table_compression/
[compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/
[set_integer_now_func]: /api/:currentVersion:/hypertable/set_integer_now_func
[informational-views]: /api/:currentVersion:/informational-views/jobs/
@@ -0,0 +1,46 @@
---
api_name: timescaledb_information.chunk_compression_settings
excerpt: Get information about compression settings for all chunks
topics: [information, compression, chunk]
keywords: [compression, chunk, information]
tags: [chunk compression, compression settings]
api:
  license: community
  type: view
---

# timescaledb_information.chunk_compression_settings

Show the compression settings for each chunk that has compression enabled.

### Arguments

|Name|Type|Description|
|-|-|-|
|`hypertable`|`REGCLASS`|Hypertable which has compression enabled|
|`chunk`|`REGCLASS`|Chunk which has compression enabled|
|`segmentby`|`TEXT`|List of columns used for segmenting the compressed data|
|`orderby`|`TEXT`|List of columns used for ordering compressed data, along with ordering and NULL-ordering information|

### Sample use

Show compression settings for all chunks:

```sql
SELECT * FROM timescaledb_information.chunk_compression_settings;

hypertable | measurements
chunk      | _timescaledb_internal._hyper_1_1_chunk
segmentby  |
orderby    | "time" DESC
```

Find all chunk compression settings for a specific hypertable:

```sql
SELECT * FROM timescaledb_information.chunk_compression_settings WHERE hypertable::TEXT LIKE 'metrics';

hypertable | metrics
chunk      | _timescaledb_internal._hyper_2_3_chunk
segmentby  | metric_id
orderby    | "time"
```
@@ -0,0 +1,90 @@
---
api_name: chunk_compression_stats()
excerpt: Get compression-related statistics for chunks
topics: [compression]
keywords: [compression, statistics, chunks, information]
tags: [disk space, schemas, size]
api:
  license: community
  type: function
---

# chunk_compression_stats() <Tag type="community">Community</Tag>

Get chunk-specific statistics related to hypertable compression.
All sizes are in bytes.

This function shows the compressed size of chunks, computed when
`compress_chunk` is manually executed, or when a compression policy processes
the chunk. An insert into a compressed chunk does not update the compressed
sizes. For more information about how to compute chunk sizes, see the
`chunks_detailed_size` section.

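As a quick sketch of the latter (assuming the same `conditions` hypertable
used in the samples below):

```sql
-- Current per-chunk sizes, independent of recorded compression statistics
SELECT chunk_name, pg_size_pretty(total_bytes) AS total
FROM chunks_detailed_size('conditions')
ORDER BY chunk_name;
```
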
### Required arguments

|Name|Type|Description|
|-|-|-|
|`hypertable`|REGCLASS|Name of the hypertable|

### Returns

|Column|Type|Description|
|-|-|-|
|`chunk_schema`|TEXT|Schema name of the chunk|
|`chunk_name`|TEXT|Name of the chunk|
|`compression_status`|TEXT|The current compression status of the chunk|
|`before_compression_table_bytes`|BIGINT|Size of the heap before compression (NULL if currently uncompressed)|
|`before_compression_index_bytes`|BIGINT|Size of all the indexes before compression (NULL if currently uncompressed)|
|`before_compression_toast_bytes`|BIGINT|Size of the TOAST table before compression (NULL if currently uncompressed)|
|`before_compression_total_bytes`|BIGINT|Size of the entire chunk table (table + indexes + TOAST) before compression (NULL if currently uncompressed)|
|`after_compression_table_bytes`|BIGINT|Size of the heap after compression (NULL if currently uncompressed)|
|`after_compression_index_bytes`|BIGINT|Size of all the indexes after compression (NULL if currently uncompressed)|
|`after_compression_toast_bytes`|BIGINT|Size of the TOAST table after compression (NULL if currently uncompressed)|
|`after_compression_total_bytes`|BIGINT|Size of the entire chunk table (table + indexes + TOAST) after compression (NULL if currently uncompressed)|
|`node_name`|TEXT|Nodes on which the chunk is located; applicable only to distributed hypertables|

### Sample usage

```sql
SELECT * FROM chunk_compression_stats('conditions')
  ORDER BY chunk_name LIMIT 2;

-[ RECORD 1 ]------------------+----------------------
chunk_schema                   | _timescaledb_internal
chunk_name                     | _hyper_1_1_chunk
compression_status             | Uncompressed
before_compression_table_bytes |
before_compression_index_bytes |
before_compression_toast_bytes |
before_compression_total_bytes |
after_compression_table_bytes  |
after_compression_index_bytes  |
after_compression_toast_bytes  |
after_compression_total_bytes  |
node_name                      |
-[ RECORD 2 ]------------------+----------------------
chunk_schema                   | _timescaledb_internal
chunk_name                     | _hyper_1_2_chunk
compression_status             | Compressed
before_compression_table_bytes | 8192
before_compression_index_bytes | 32768
before_compression_toast_bytes | 0
before_compression_total_bytes | 40960
after_compression_table_bytes  | 8192
after_compression_index_bytes  | 32768
after_compression_toast_bytes  | 8192
after_compression_total_bytes  | 49152
node_name                      |
```

Use `pg_size_pretty` to get the output in a more human-friendly format.

```sql
SELECT pg_size_pretty(after_compression_total_bytes) AS total
FROM chunk_compression_stats('conditions')
WHERE compression_status = 'Compressed';

-[ RECORD 1 ]--+------
total          | 48 kB
```
@@ -0,0 +1,97 @@
---
api_name: timescaledb_information.compression_settings
excerpt: Get information about compression settings for hypertables
topics: [information, compression]
keywords: [compression]
tags: [informational, settings, hypertables, schemas, indexes]
api:
  license: community
  type: view
---

import DeprecationNotice from "versionContent/_partials/_deprecated.mdx";

# timescaledb_information.compression_settings

This view exists for backwards compatibility. The supported views to retrieve information about compression are:

- [timescaledb_information.hypertable_compression_settings][hypertable_compression_settings]
- [timescaledb_information.chunk_compression_settings][chunk_compression_settings]

<DeprecationNotice />

Get information about compression-related settings for hypertables.
Each row of the view provides information about individual `orderby`
and `segmentby` columns used by compression.

How you use `segmentby` is the single most important choice for compression. It
affects compression rates, query performance, and what is compressed or
decompressed by mutable compression.

### Available columns

|Name|Type|Description|
|---|---|---|
| `hypertable_schema` | TEXT | Schema name of the hypertable |
| `hypertable_name` | TEXT | Table name of the hypertable |
| `attname` | TEXT | Name of the column used in the compression settings |
| `segmentby_column_index` | SMALLINT | Position of `attname` in the `compress_segmentby` list |
| `orderby_column_index` | SMALLINT | Position of `attname` in the `compress_orderby` list |
| `orderby_asc` | BOOLEAN | `true` if this column is ordered ASC, `false` if ordered DESC |
| `orderby_nullsfirst` | BOOLEAN | `true` if nulls are ordered first for this column, `false` if nulls are ordered last |

### Sample usage

```sql
CREATE TABLE hypertab (a_col integer, b_col integer, c_col integer, d_col integer, e_col integer);
SELECT table_name FROM create_hypertable('hypertab', by_range('a_col', 864000000));

ALTER TABLE hypertab SET (timescaledb.compress, timescaledb.compress_segmentby = 'a_col,b_col',
  timescaledb.compress_orderby = 'c_col desc, d_col asc nulls last');

SELECT * FROM timescaledb_information.compression_settings WHERE hypertable_name = 'hypertab';

-[ RECORD 1 ]----------+---------
hypertable_schema      | public
hypertable_name        | hypertab
attname                | a_col
segmentby_column_index | 1
orderby_column_index   |
orderby_asc            |
orderby_nullsfirst     |
-[ RECORD 2 ]----------+---------
hypertable_schema      | public
hypertable_name        | hypertab
attname                | b_col
segmentby_column_index | 2
orderby_column_index   |
orderby_asc            |
orderby_nullsfirst     |
-[ RECORD 3 ]----------+---------
hypertable_schema      | public
hypertable_name        | hypertab
attname                | c_col
segmentby_column_index |
orderby_column_index   | 1
orderby_asc            | f
orderby_nullsfirst     | t
-[ RECORD 4 ]----------+---------
hypertable_schema      | public
hypertable_name        | hypertab
attname                | d_col
segmentby_column_index |
orderby_column_index   | 2
orderby_asc            | t
orderby_nullsfirst     | f
```

<Highlight type="note">
The `by_range` dimension builder was added in TimescaleDB 2.13.
</Highlight>

[chunk_compression_settings]: /api/:currentVersion:/informational-views/chunk_compression_settings/
[hypertable_compression_settings]: /api/:currentVersion:/informational-views/hypertable_compression_settings/
@@ -0,0 +1,57 @@
---
api_name: compress_chunk()
excerpt: Manually compress a given chunk
topics: [compression]
keywords: [compression]
tags: [chunks]
api:
  license: community
  type: function
---

# compress_chunk() <Tag type="community">Community</Tag>

The `compress_chunk` function is used to compress (or recompress, if necessary)
a specific chunk. It is most often used instead of the
[`add_compression_policy`][add_compression_policy] function when a user
wants more control over the scheduling of compression. For most users, we
suggest using the policy framework instead.

You can also compress chunks by
[running the job associated with your compression policy][run-job].
`compress_chunk` gives you more fine-grained control by
allowing you to target a specific chunk that needs compressing.

<Highlight type="tip">
You can get a list of chunks belonging to a hypertable using the
[`show_chunks` function](/api/latest/hypertable/show_chunks/).
</Highlight>

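For example, a minimal sketch that compresses every chunk older than one week
in a single statement (the `conditions` hypertable and the cut-off interval
are illustrative):

```sql
-- if_not_compressed => true skips chunks that are already compressed
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('conditions', older_than => INTERVAL '1 week') AS c;
```
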
### Required arguments

|Name|Type|Description|
|---|---|---|
| `chunk_name` | REGCLASS | Name of the chunk to be compressed |

### Optional arguments

|Name|Type|Description|
|---|---|---|
| `if_not_compressed` | BOOLEAN | Setting to `false` causes the function to raise an error on chunks that are already compressed. Defaults to `true`. |

### Returns

|Column|Description|
|---|---|
| `compress_chunk` | (REGCLASS) Name of the chunk that was compressed |

### Sample usage

Compress a single chunk.

```sql
SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
```

[add_compression_policy]: /api/:currentVersion:/compression/add_compression_policy/
[run-job]: /api/:currentVersion:/actions/run_job/