diff --git a/docs/cn/developer/index.md b/docs/cn/developer/index.md
index 7ddbfa2da9..e4cae875f3 100644
--- a/docs/cn/developer/index.md
+++ b/docs/cn/developer/index.md
@@ -11,27 +11,27 @@ sidebar_position: -2
 
 使用适用于主流编程语言的原生驱动连接 Databend。所有驱动均支持 Databend 自托管部署和 Databend Cloud 部署。
 
-| 语言 | 包 | 主要特性 | 文档 |
-| ----------- | --------------------------------------------------------------------------- | ------------------------------------ | ----------------------------------- |
-| **Go** | [databend-go](https://github.com/databendlabs/databend-go) | 标准 database/sql 接口,连接池 | [查看指南](00-drivers/00-golang.md) |
-| **Python** | [databend-driver](https://pypi.org/project/databend-driver/) | 同步/异步支持,提供 SQLAlchemy 方言 | [查看指南](00-drivers/01-python.md) |
-| **Node.js** | [databend-driver](https://www.npmjs.com/package/databend-driver) | TypeScript 支持,基于 Promise 的 API | [查看指南](00-drivers/02-nodejs.md) |
-| **Java** | [databend-jdbc](https://github.com/databendcloud/databend-jdbc) | JDBC 4.0 兼容,预处理语句 | [查看指南](00-drivers/03-jdbc.md) |
-| **Rust** | [databend-driver](https://github.com/databendlabs/BendSQL/tree/main/driver) | Async/await 支持,类型安全查询 | [查看指南](00-drivers/04-rust.md) |
+| 语言 | 包 | 主要特性 | 文档 |
+|----------|---------|-------------|---------------|
+| **Go** | [databend-go](https://github.com/databendlabs/databend-go) | 标准 database/sql 接口,连接池 | [查看指南](00-drivers/00-golang.md) |
+| **Python** | [databend-driver](https://pypi.org/project/databend-driver/) | 同步/异步支持,提供 SQLAlchemy 方言 | [查看指南](00-drivers/01-python.md) |
+| **Node.js** | [databend-driver](https://www.npmjs.com/package/databend-driver) | TypeScript 支持,基于 Promise 的 API | [查看指南](00-drivers/02-nodejs.md) |
+| **Java** | [databend-jdbc](https://github.com/databendcloud/databend-jdbc) | JDBC 4.0 兼容,预处理语句 | [查看指南](00-drivers/03-jdbc.md) |
+| **Rust** | [databend-driver](https://github.com/databendlabs/BendSQL/tree/main/driver) | Async/await 支持,类型安全查询 | [查看指南](00-drivers/04-rust.md) |
 
 ## API
 
 Databend 提供 REST API,用于直接集成和自定义应用程序。
 
-| API | 描述 | 使用场景 |
-| --------------------------- | -------------------------------------- | ------------------------ |
+| API | 描述 | 使用场景 |
+|-----|-------------|----------|
 | [HTTP API](10-apis/http.md) | 用于 SQL 执行和数据操作的 RESTful 接口 | 自定义集成,直接执行 SQL |
 
 ## 开发工具
 
-- **[BendSQL CLI](/tutorials/getting-started/connect-to-databend-bendsql)** - Databend 的命令行界面
+- **[BendSQL CLI](/tutorials/connect/connect-to-databendcloud-bendsql)** - Databend 的命令行界面
 - **[Databend Cloud Console](/guides/cloud/using-databend-cloud/worksheet)** - 基于 Web 的管理界面
 
 ## 其他资源
 
-- **[社区](https://github.com/databendlabs/databend)** - 获取帮助并分享知识
+- **[社区](https://github.com/databendlabs/databend)** - 获取帮助并分享知识
\ No newline at end of file
diff --git a/docs/cn/guides/20-cloud/10-using-databend-cloud/02-dashboard.md b/docs/cn/guides/20-cloud/10-using-databend-cloud/02-dashboard.md
index 42d4c98bd2..d33d415517 100644
--- a/docs/cn/guides/20-cloud/10-using-databend-cloud/02-dashboard.md
+++ b/docs/cn/guides/20-cloud/10-using-databend-cloud/02-dashboard.md
@@ -1,7 +1,6 @@
 ---
 title: 仪表盘
 ---
-
 import StepsWrap from '@site/src/components/StepsWrap';
 import StepContent from '@site/src/components/Steps/step-content';
 import EllipsisSVG from '@site/static/img/icon/ellipsis.svg';
@@ -26,16 +25,17 @@ import EllipsisSVG from '@site/static/img/icon/ellipsis.svg';
 
 请注意,这些聚合函数有助于汇总和揭示查询结果中原始数据的有价值模式。可用的聚合函数根据您选择的不同数据类型和图表类型而有所不同。
 
-| 函数 | 描述 |
-| ------- | --------------------------------------------------------------- |
-| None | 不对数据进行任何更改。 |
-| Count | 计算查询结果中该字段的记录数 (不包括包含 NULL 和 '' 值的记录)。 |
-| Min | 计算查询结果中的最小值。 |
-| Max | 计算查询结果中的最大值。 |
-| Median | 计算查询结果中的中位数。 |
-| Sum | 计算查询结果中数值的总和。 |
-| Average | 计算查询结果中数值数据的平均值。 |
-| Mode | 识别查询结果中出现频率最高的值。 |
+
+| 函数 | 描述 |
+|----------------------|----------------------------------------------------------------|
+| None | 不对数据进行任何更改。 |
+| Count | 计算查询结果中该字段的记录数 (不包括包含 NULL 和 '' 值的记录)。 |
+| Min | 计算查询结果中的最小值。 |
+| Max | 计算查询结果中的最大值。 |
+| Median | 计算查询结果中的中位数。 |
+| Sum | 计算查询结果中数值的总和。 |
+| Average | 计算查询结果中数值数据的平均值。 |
+| Mode | 识别查询结果中出现频率最高的值。 |
 
 4. 返回 Databend Cloud 主页,在左侧导航菜单中选择**仪表盘**,然后点击**新建仪表盘**。
 
@@ -61,4 +61,4 @@ import EllipsisSVG from '@site/static/img/icon/ellipsis.svg';
 
 ## 教程
 
-- [COVID-19 数据仪表盘制作](/tutorials/cloud-ops/dashboard)
+- [COVID-19 数据仪表盘制作](/tutorials/databend-cloud/dashboard)
\ No newline at end of file
diff --git a/docs/cn/guides/20-cloud/index.md b/docs/cn/guides/20-cloud/index.md
index 8701cb659d..426e7e2496 100644
--- a/docs/cn/guides/20-cloud/index.md
+++ b/docs/cn/guides/20-cloud/index.md
@@ -8,27 +8,27 @@ Databend Cloud 是一个完全托管的云数仓服务,为您的数据分析
 
 ## 快速导航
 
-| 类别 | 资源 | 描述 |
-| ------------ | ------------------------------------------------------------- | ---------------------------------- |
-| **入门指南** | [创建新账户](/guides/cloud/new-account) | 注册 Databend Cloud 并创建您的组织 |
-| **基础知识** | [组织与成员](/guides/cloud/using-databend-cloud/organization) | 了解组织的工作原理并管理团队成员 |
-| | [计算集群](/guides/cloud/using-databend-cloud/warehouses) | 了解计算资源、规模和最佳实践 |
-| | [工作区](/guides/cloud/using-databend-cloud/worksheet) | 执行 SQL 查询并分析数据 |
-| | [仪表盘](/tutorials/cloud-ops/dashboard) | 通过可视化监控您的数据分析 |
-| **管理** | [成本管理](/guides/cloud/manage/costs) | 设置支出限制并控制您的费用 |
-| | [监控](/guides/cloud/manage/monitor) | 跟踪使用情况和性能 |
-| | [AI 功能](/guides/cloud/manage/ai-features) | 利用 AI 功能进行数据分析 |
-| | [指标](/guides/cloud/manage/metrics) | 分析性能指标 |
+| 类别 | 资源 | 描述 |
+|----------|----------|-------------|
+| **入门指南** | [创建新账户](/guides/cloud/new-account) | 注册 Databend Cloud 并创建您的组织 |
+| **基础知识** | [组织与成员](/guides/cloud/using-databend-cloud/organization) | 了解组织的工作原理并管理团队成员 |
+| | [计算集群](/guides/cloud/using-databend-cloud/warehouses) | 了解计算资源、规模和最佳实践 |
+| | [工作区](/guides/cloud/using-databend-cloud/worksheet) | 执行 SQL 查询并分析数据 |
+| | [仪表盘](/guides/cloud/using-databend-cloud/dashboard) | 通过可视化监控您的数据分析 |
+| **管理** | [成本管理](/guides/cloud/manage/costs) | 设置支出限制并控制您的费用 |
+| | [监控](/guides/cloud/manage/monitor) | 跟踪使用情况和性能 |
+| | [AI 功能](/guides/cloud/manage/ai-features) | 利用 AI 功能进行数据分析 |
+| | [指标](/guides/cloud/manage/metrics) | 分析性能指标 |
 
 ## 🔗 连接选项
 
-| 客户端类型 | 选项 | 使用场景 |
-| -------------- | ------------------------------------------------ | --------------------------------------- |
-| **SQL 客户端** | [BendSQL](/guides/sql-clients/bendsql) | 面向开发者和脚本的命令行界面 |
-| | [DBeaver](/guides/sql-clients/jdbc) | 用于数据分析和可视化查询的 GUI 应用程序 |
-| **编程语言** | [Python](/guides/sql-clients/developers/python) | 数据科学、分析和机器学习 |
-| | [Go](/guides/sql-clients/developers/golang) | 后端应用程序和微服务 |
-| | [Node.js](/guides/sql-clients/developers/nodejs) | Web 应用程序和服务 |
-| | [Java](/guides/sql-clients/developers/jdbc) | 企业应用程序 |
+| 客户端类型 | 选项 | 使用场景 |
+|-------------|---------|----------|
+| **SQL 客户端** | [BendSQL](/guides/sql-clients/bendsql) | 面向开发者和脚本的命令行界面 |
+| | [DBeaver](/guides/sql-clients/jdbc) | 用于数据分析和可视化查询的 GUI 应用程序 |
+| **编程语言** | [Python](/guides/sql-clients/developers/python) | 数据科学、分析和机器学习 |
+| | [Go](/guides/sql-clients/developers/golang) | 后端应用程序和微服务 |
+| | [Node.js](/guides/sql-clients/developers/nodejs) | Web 应用程序和服务 |
+| | [Java](/guides/sql-clients/developers/jdbc) | 企业应用程序 |
 
-有关详细的连接说明和更多选项,请参阅 [SQL 客户端](/guides/sql-clients/) 部分。
+有关详细的连接说明和更多选项,请参阅 [SQL 客户端](/guides/sql-clients/) 部分。
\ No newline at end of file
diff --git a/docs/cn/guides/30-sql-clients/00-bendsql/index.md b/docs/cn/guides/30-sql-clients/00-bendsql/index.md
index 101a323787..50e88c61a9 100644
--- a/docs/cn/guides/30-sql-clients/00-bendsql/index.md
+++ b/docs/cn/guides/30-sql-clients/00-bendsql/index.md
@@ -161,30 +161,30 @@ DSN(数据源名称)是一种简单而强大的方式,可以使用单个 U
 databend[+flight]://user[:password]@host[:port]/[database][?sslmode=disable][&arg1=value1]
 ```
 
-| 通用 DSN 参数 | 描述 |
-| ----------------- | ------------------------------------ |
-| `tenant` | 租户 ID,仅限 Databend Cloud。 |
-| `warehouse` | 计算集群名称,仅限 Databend Cloud。 |
-| `sslmode` | 如果不使用 TLS,则设置为 `disable`。 |
-| `tls_ca_file` | 自定义根 CA 证书路径。 |
-| `connect_timeout` | 连接超时时间(秒)。 |
-
-| RestAPI 客户端参数 | 描述 |
-| --------------------------- | ------------------------------------------------------------------------------------------------------ |
-| `wait_time_secs` | 页面请求等待时间,默认为 `1`。 |
-| `max_rows_in_buffer` | 页面缓冲区中的最大行数。 |
-| `max_rows_per_page` | 单个页面的最大响应行数。 |
-| `page_request_timeout_secs` | 单个页面请求的超时时间,默认为 `30`。 |
-| `presign` | 启用数据加载的预签名。选项:`auto`、`detect`、`on`、`off`。默认为 `auto`(仅对 Databend Cloud 启用)。 |
-
-| FlightSQL 客户端参数 | 描述 |
-| --------------------------- | -------------------------------------------------------------- |
-| `query_timeout` | 查询超时时间(秒)。 |
-| `tcp_nodelay` | 默认为 `true`。 |
-| `tcp_keepalive` | TCP keepalive 时间(秒)(默认为 `3600`,设置为 `0` 以禁用)。 |
-| `http2_keep_alive_interval` | Keep-alive 间隔时间(秒),默认为 `300`。 |
-| `keep_alive_timeout` | Keep-alive 超时时间(秒),默认为 `20`。 |
-| `keep_alive_while_idle` | 默认为 `true`。 |
+| 通用 DSN 参数 | 描述 |
+|-----------------------|--------------------------------------|
+| `tenant` | 租户 ID,仅限 Databend Cloud。 |
+| `warehouse` | 计算集群名称,仅限 Databend Cloud。 |
+| `sslmode` | 如果不使用 TLS,则设置为 `disable`。 |
+| `tls_ca_file` | 自定义根 CA 证书路径。 |
+| `connect_timeout` | 连接超时时间(秒)。 |
+
+| RestAPI 客户端参数 | 描述 |
+|-----------------------------|-------------------------------------------------------------------------------------------------------------------------------|
+| `wait_time_secs` | 页面请求等待时间,默认为 `1`。 |
+| `max_rows_in_buffer` | 页面缓冲区中的最大行数。 |
+| `max_rows_per_page` | 单个页面的最大响应行数。 |
+| `page_request_timeout_secs` | 单个页面请求的超时时间,默认为 `30`。 |
+| `presign` | 启用数据加载的预签名。选项:`auto`、`detect`、`on`、`off`。默认为 `auto`(仅对 Databend Cloud 启用)。 |
+
+| FlightSQL 客户端参数 | 描述 |
+|-----------------------------|----------------------------------------------------------------------|
+| `query_timeout` | 查询超时时间(秒)。 |
+| `tcp_nodelay` | 默认为 `true`。 |
+| `tcp_keepalive` | TCP keepalive 时间(秒)(默认为 `3600`,设置为 `0` 以禁用)。 |
+| `http2_keep_alive_interval` | Keep-alive 间隔时间(秒),默认为 `300`。 |
+| `keep_alive_timeout` | Keep-alive 超时时间(秒),默认为 `20`。 |
+| `keep_alive_while_idle` | 默认为 `true`。 |
 
 #### DSN 示例
 
@@ -209,10 +209,10 @@ databend+flight://root:@localhost:8900/database1?connect_timeout=10
 
 3. 您的 DSN 将在 **Examples** 部分中自动生成。在 DSN 下方,您会找到一个 BendSQL 代码段,该代码段将 DSN 导出为名为 `BENDSQL_DSN` 的环境变量,并使用正确的配置启动 BendSQL。您可以直接将其复制并粘贴到您的终端中。
 
-```bash title='Example'
-export BENDSQL_DSN="databend://cloudapp:******@tn3ftqihs.gw.aws-us-east-2.default.databend.com:443/information_schema?warehouse=small-xy2t"
-bendsql
-```
+   ```bash title='Example'
+   export BENDSQL_DSN="databend://cloudapp:******@tn3ftqihs.gw.aws-us-east-2.default.databend.com:443/information_schema?warehouse=small-xy2t"
+   bendsql
+   ```
 
 ### 连接到私有化部署的 Databend
 
@@ -242,26 +242,27 @@ bendsql
 
 ## 教程
 
 - [使用 BendSQL 连接到私有化部署的 Databend](/tutorials/)
-- [使用 BendSQL 连接到 Databend Cloud](/tutorials/getting-started/connect-to-databend-bendsql)
+- [使用 BendSQL 连接到 Databend Cloud](/tutorials/connect/connect-to-databendcloud-bendsql)
 
 ## BendSQL 设置
 
 BendSQL 提供了一系列设置,允许您定义查询结果的呈现方式:
 
-| 设置项 | 描述 |
-| -------------------- | ----------------------------------------------------------------------------------------------- |
-| `display_pretty_sql` | 设置为 `true` 时,SQL 查询将以视觉上吸引人的方式进行格式化,使其更易于阅读和理解。 |
-| `prompt` | 命令行界面中显示的提示符,通常指示正在访问的用户、计算集群和数据库。 |
-| `progress_color` | 指定用于进度指示器的颜色,例如在执行需要一些时间才能完成的查询时。 |
-| `show_progress` | 设置为 `true` 时,将显示进度指示器以显示长时间运行的查询或操作的进度。 |
-| `show_stats` | 如果为 `true`,则在执行每个查询后,将显示查询统计信息,例如执行时间、读取的行数和处理的字节数。 |
-| `max_display_rows` | 设置查询结果输出中将显示的最大行数。 |
-| `max_col_width` | 设置每列显示渲染的最大字符宽度。小于 3 的值将禁用此限制。 |
-| `max_width` | 设置整个显示输出的最大字符宽度。值为 0 时,默认为终端窗口的宽度。 |
-| `output_format` | 设置用于显示查询结果的格式 (`table`、`csv`、`tsv`、`null`)。 |
-| `expand` | 控制查询的输出是显示为单独的记录还是以表格格式显示。可用值:`on`、`off` 和 `auto`。 |
-| `multi_line` | 确定是否允许多行输入 SQL 查询。设置为 `true` 时,查询可以跨越多行以提高可读性。 |
-| `replace_newline` | 指定是否应将查询结果输出中的换行符替换为空格。这可以防止显示中出现意外的换行。 |
+
+| 设置项 | 描述 |
+| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `display_pretty_sql` | 设置为 `true` 时,SQL 查询将以视觉上吸引人的方式进行格式化,使其更易于阅读和理解。 |
+| `prompt` | 命令行界面中显示的提示符,通常指示正在访问的用户、计算集群和数据库。 |
+| `progress_color` | 指定用于进度指示器的颜色,例如在执行需要一些时间才能完成的查询时。 |
+| `show_progress` | 设置为 `true` 时,将显示进度指示器以显示长时间运行的查询或操作的进度。 |
+| `show_stats` | 如果为 `true`,则在执行每个查询后,将显示查询统计信息,例如执行时间、读取的行数和处理的字节数。 |
+| `max_display_rows` | 设置查询结果输出中将显示的最大行数。 |
+| `max_col_width` | 设置每列显示渲染的最大字符宽度。小于 3 的值将禁用此限制。 |
+| `max_width` | 设置整个显示输出的最大字符宽度。值为 0 时,默认为终端窗口的宽度。 |
+| `output_format` | 设置用于显示查询结果的格式 (`table`、`csv`、`tsv`、`null`)。 |
+| `expand` | 控制查询的输出是显示为单独的记录还是以表格格式显示。可用值:`on`、`off` 和 `auto`。 |
+| `multi_line` | 确定是否允许多行输入 SQL 查询。设置为 `true` 时,查询可以跨越多行以提高可读性。 |
+| `replace_newline` | 指定是否应将查询结果输出中的换行符替换为空格。这可以防止显示中出现意外的换行。 |
 
 有关每个设置的详细信息,请参阅以下参考信息:
 
@@ -408,6 +409,7 @@ root@localhost:8000/default> SELECT * FROM system.configs;
 
 `max_col_width` 和 `max_width` 设置分别指定单个列和整个显示输出中允许的最大字符宽度。以下示例将列显示宽度设置为 10 个字符,并将整个显示宽度设置为 100 个字符:
 
+
 ```sql title='Example:'
 // highlight-next-line
 root@localhost:8000/default> .max_col_width 10
@@ -597,13 +599,13 @@ root@localhost:8000/default> .max_width 100
 
 BendSQL 为用户提供了各种命令,以简化其工作流程并自定义其体验。以下是 BendSQL 中可用命令的概述:
 
-| 命令 | 描述 |
-| ------------------------ | ------------------------- |
-| `!exit` | 退出 BendSQL。 |
-| `!quit` | 退出 BendSQL。 |
-| `!configs` | 显示当前的 BendSQL 设置。 |
-| `!set ` | 修改 BendSQL 设置。 |
-| `!source ` | 执行 SQL 文件。 |
+| 命令 | 描述 |
+| ------------------------ | ---------------------------------- |
+| `!exit` | 退出 BendSQL。 |
+| `!quit` | 退出 BendSQL。 |
+| `!configs` | 显示当前的 BendSQL 设置。 |
+| `!set ` | 修改 BendSQL 设置。 |
+| `!source ` | 执行 SQL 文件。 |
 
 有关每个命令的示例,请参阅下面的参考信息:
 
@@ -709,4 +711,4 @@ FROM
 │ 3 │ Charlie │
 └────────────────────────────────────┘
 3 rows read in 0.064 sec. Processed 3 rows, 81 B (46.79 rows/s, 1.23 KiB/s)
-```
+```
\ No newline at end of file
diff --git a/docs/cn/guides/40-load-data/02-load-db/kafka.md b/docs/cn/guides/40-load-data/02-load-db/kafka.md
index ef00bb2e32..1701aee5e6 100644
--- a/docs/cn/guides/40-load-data/02-load-db/kafka.md
+++ b/docs/cn/guides/40-load-data/02-load-db/kafka.md
@@ -30,5 +30,5 @@ Databend 提供了以下插件和工具,用于从 Kafka 主题中摄取数据
 
 ## 教程
 
-- [使用 bend-ingest-kafka 从 Kafka 加载](/tutorials/ingest-and-stream/kafka-databend-kafka-connect)
-- [使用 databend-kafka-connect 从 Kafka 加载](/tutorials/migrate/migrating-from-mysql-with-kafka-connect)
+- [使用 bend-ingest-kafka 从 Kafka 加载](/tutorials/load/kafka-bend-ingest-kafka)
+- [使用 databend-kafka-connect 从 Kafka 加载](/tutorials/load/kafka-databend-kafka-connect)
\ No newline at end of file
diff --git a/docs/cn/release-notes/databend.md b/docs/cn/release-notes/databend.md
index 014bed449f..3700c9c965 100644
--- a/docs/cn/release-notes/databend.md
+++ b/docs/cn/release-notes/databend.md
@@ -12,244 +12,7 @@ This page provides information about recent features, enhancements, and bug fixe
-
-
-## Nov 24, 2025 (v1.2.848-nightly)
-
-## What's Changed
-### Thoughtful Bug Fix 🔧
-* fix: unable to get field on rank limit when rule_eager_aggregation applied by **@KKould** in [#19007](https://github.com/databendlabs/databend/pull/19007)
-* fix: pivot extra columns on projection by **@KKould** in [#18994](https://github.com/databendlabs/databend/pull/18994)
-### Code Refactor 🎉
-* refactor: bump crates arrow* and parquet to version 56 by **@dantengsky** in [#18997](https://github.com/databendlabs/databend/pull/18997)
-### Others 📒
-* chore(ut): support for const columns as input to function unit tests by **@forsaken628** in [#19009](https://github.com/databendlabs/databend/pull/19009)
-* chore(query): enable to cache the previous python import directory for python udf by **@sundy-li** in [#19003](https://github.com/databendlabs/databend/pull/19003)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.848-nightly
-
-
-
-
-
-## Nov 21, 2025 (v1.2.847-nightly)
-
-## What's Changed
-### Others 📒
-* chore: make query service start after meta by **@everpcpc** in [#19002](https://github.com/databendlabs/databend/pull/19002)
-* chore(query): Refresh virtual column support limit and selection by **@b41sh** in [#19001](https://github.com/databendlabs/databend/pull/19001)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.847-nightly
-
-
-
-
-
-## Nov 21, 2025 (v1.2.846-nightly)
-
-## What's Changed
-### Thoughtful Bug Fix 🔧
-* fix: Block::to_record_batch fail when a column is array of NULLs. by **@youngsofun** in [#18989](https://github.com/databendlabs/databend/pull/18989)
-* fix: `desc password policy ` column types must match schema types. by **@youngsofun** in [#18990](https://github.com/databendlabs/databend/pull/18990)
-### Code Refactor 🎉
-* refactor(query): pass timezone by reference to avoid Arc churn by **@TCeason** in [#18998](https://github.com/databendlabs/databend/pull/18998)
-* refactor(query): remove potential performance hotspots caused by fetch_add by **@zhang2014** in [#18995](https://github.com/databendlabs/databend/pull/18995)
-### Others 📒
-* chore(query): Accelerate vector index quantization score calculation with SIMD by **@b41sh** in [#18957](https://github.com/databendlabs/databend/pull/18957)
-* chore(query): clamp timestamps to jiff range before timezone conversion by **@TCeason** in [#18996](https://github.com/databendlabs/databend/pull/18996)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.846-nightly
-
-
-
-
-
-## Nov 20, 2025 (v1.2.845-nightly)
-
-## What's Changed
-### Exciting New Features ✨
-* feat: impl UDTF Server by **@KKould** in [#18947](https://github.com/databendlabs/databend/pull/18947)
-* feat(query):masking policy support rbac by **@TCeason** in [#18982](https://github.com/databendlabs/databend/pull/18982)
-* feat: improve runtime filter [Part 2] by **@SkyFan2002** in [#18955](https://github.com/databendlabs/databend/pull/18955)
-### Build/Testing/CI Infra Changes 🔌
-* ci: upgrade k3s for meta chaos by **@everpcpc** in [#18983](https://github.com/databendlabs/databend/pull/18983)
-### Others 📒
-* chore: bump opendal to 0.54.1 by **@dqhl76** in [#18970](https://github.com/databendlabs/databend/pull/18970)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.845-nightly
-
-
-
-
-
-## Nov 18, 2025 (v1.2.844-nightly)
-
-## What's Changed
-### Others 📒
-* chore: adjust the storage method of timestamp_tz so that the timestamp value is retrieved directly. by **@KKould** in [#18974](https://github.com/databendlabs/databend/pull/18974)
-* chore: add more logs to cover aggregate spill by **@dqhl76** in [#18980](https://github.com/databendlabs/databend/pull/18980)
-* chore(query): Virtual column support external table by **@b41sh** in [#18981](https://github.com/databendlabs/databend/pull/18981)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.844-nightly
-
-
-
-
-
-## Nov 18, 2025 (v1.2.843-nightly)
-
-## What's Changed
-### Thoughtful Bug Fix 🔧
-* fix(query): count_distinct needs to handle nullable correctly by **@forsaken628** in [#18973](https://github.com/databendlabs/databend/pull/18973)
-### Build/Testing/CI Infra Changes 🔌
-* ci: fix dependency for test cloud control server by **@everpcpc** in [#18978](https://github.com/databendlabs/databend/pull/18978)
-### Others 📒
-* chore(query): improve python udf script by **@sundy-li** in [#18960](https://github.com/databendlabs/databend/pull/18960)
-* chore(query): delete replace masking/row access policy by **@TCeason** in [#18972](https://github.com/databendlabs/databend/pull/18972)
-* chore(query): Optimize Optimizer Performance by Reducing Redundant Computations by **@b41sh** in [#18979](https://github.com/databendlabs/databend/pull/18979)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.843-nightly
-
-
-
-
-
-## Nov 17, 2025 (v1.2.842-nightly)
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.842-nightly
-
-
-
-
-
-## Nov 14, 2025 (v1.2.841-nightly)
-
-## What's Changed
-### Exciting New Features ✨
-* feat: http handler return geometry_output_format with data. by **@youngsofun** in [#18963](https://github.com/databendlabs/databend/pull/18963)
-* feat(query): add table statistics admin api by **@zhang2014** in [#18967](https://github.com/databendlabs/databend/pull/18967)
-* feat: upgrade nom to version 8.0.0 and accelerate expr_element using the first token. by **@KKould** in [#18935](https://github.com/databendlabs/databend/pull/18935)
-### Thoughtful Bug Fix 🔧
-* fix(query): or_filter get incorrectly result by **@zhyass** in [#18965](https://github.com/databendlabs/databend/pull/18965)
-* fix(query): Fix copy into Variant field panic with infinite number by **@b41sh** in [#18962](https://github.com/databendlabs/databend/pull/18962)
-### Code Refactor 🎉
-* refactor: stream spill triggering for partial aggregation by **@dqhl76** in [#18943](https://github.com/databendlabs/databend/pull/18943)
-* chore: optimize ExprBloomFilter to use references instead of clones by **@dantengsky** in [#18157](https://github.com/databendlabs/databend/pull/18157)
-### Others 📒
-* chore(query): adjust the default Bloom filter enable setting by **@zhang2014** in [#18966](https://github.com/databendlabs/databend/pull/18966)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.841-nightly
-
-
-
-
-
-## Nov 14, 2025 (v1.2.840-nightly)
-
-## What's Changed
-### Exciting New Features ✨
-* feat: new fuse table option `enable_parquet_dictionary` by **@dantengsky** in [#17675](https://github.com/databendlabs/databend/pull/17675)
-### Thoughtful Bug Fix 🔧
-* fix: timestamp_tz display by **@KKould** in [#18958](https://github.com/databendlabs/databend/pull/18958)
-### Others 📒
-* chore: flaky test by **@zhyass** in [#18959](https://github.com/databendlabs/databend/pull/18959)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.840-nightly
-
-
-
-
-
-## Nov 13, 2025 (v1.2.839-nightly)
-
-## What's Changed
-### Thoughtful Bug Fix 🔧
-* fix: return timezone when set in query level. by **@youngsofun** in [#18952](https://github.com/databendlabs/databend/pull/18952)
-* fix(query): Skip sequence lookups when re-binding stored defaults by **@TCeason** in [#18946](https://github.com/databendlabs/databend/pull/18946)
-* fix(query): build mysql tls config by **@everpcpc** in [#18953](https://github.com/databendlabs/databend/pull/18953)
-* fix(query): defer MySQL session creation until the handshake completes by **@everpcpc** in [#18956](https://github.com/databendlabs/databend/pull/18956)
-### Code Refactor 🎉
-* refactor(query): prevent masking/row access policy name conflicts by **@TCeason** in [#18937](https://github.com/databendlabs/databend/pull/18937)
-* refactor(query): optimize visibility checker for large-scale deployments improved 10x by **@TCeason** in [#18954](https://github.com/databendlabs/databend/pull/18954)
-### Others 📒
-* chore(query): improve resolve large array by **@sundy-li** in [#18949](https://github.com/databendlabs/databend/pull/18949)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.839-nightly
-
-
-
-
-
-## Nov 12, 2025 (v1.2.838-nightly)
-
-## What's Changed
-### Exciting New Features ✨
-* feat(query): support policy_reference table function by **@TCeason** in [#18944](https://github.com/databendlabs/databend/pull/18944)
-* feat: improve runtime filter [Part 1] by **@SkyFan2002** in [#18893](https://github.com/databendlabs/databend/pull/18893)
-### Thoughtful Bug Fix 🔧
-* fix(query): fix query function parsing nested conditions by **@b41sh** in [#18940](https://github.com/databendlabs/databend/pull/18940)
-* fix(query): handle complex types in procedure argument parsing by **@TCeason** in [#18929](https://github.com/databendlabs/databend/pull/18929)
-* fix: error in multi statement transaction retry by **@SkyFan2002** in [#18934](https://github.com/databendlabs/databend/pull/18934)
-* fix: flaky test progress not updated in real time in cluster mode by **@youngsofun** in [#18945](https://github.com/databendlabs/databend/pull/18945)
-### Code Refactor 🎉
-* refactor(binder): move the rewrite of ASOF JOIN to the logical plan and remove scalar_expr from `DerivedColumn` by **@forsaken628** in [#18938](https://github.com/databendlabs/databend/pull/18938)
-* refactor(query): optimized `UnaryState` design and simplified `string_agg` implementation by **@forsaken628** in [#18941](https://github.com/databendlabs/databend/pull/18941)
-* refactor(query): rename exchange hash to node to node hash by **@zhang2014** in [#18948](https://github.com/databendlabs/databend/pull/18948)
-### Others 📒
-* chore(query): ignore assert const in memo logical test by **@zhang2014** in [#18950](https://github.com/databendlabs/databend/pull/18950)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.838-nightly
-
-
-
-
-
-## Nov 10, 2025 (v1.2.837-nightly)
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.837-nightly
-
-
-
-
-
-## Nov 8, 2025 (v1.2.836-nightly)
-
-## What's Changed
-### Exciting New Features ✨
-* feat(query): Support `bitmap_to_array` function by **@b41sh** in [#18927](https://github.com/databendlabs/databend/pull/18927)
-* feat(query): prevent dropping in-use security policies by **@TCeason** in [#18918](https://github.com/databendlabs/databend/pull/18918)
-* feat(mysql): add JDBC healthcheck regex to support SELECT 1 FROM DUAL by **@yufan022** in [#18933](https://github.com/databendlabs/databend/pull/18933)
-* feat: return timezone in HTTP handler. by **@youngsofun** in [#18936](https://github.com/databendlabs/databend/pull/18936)
-### Thoughtful Bug Fix 🔧
-* fix: FilterExecutor needs to handle projections when `enable_selector_executor` is turned off. by **@forsaken628** in [#18921](https://github.com/databendlabs/databend/pull/18921)
-* fix(query): fix Inverted/Vector index panic with Native Storage Format by **@b41sh** in [#18932](https://github.com/databendlabs/databend/pull/18932)
-* fix(query): optimize the cost estimation of some query plans by **@zhang2014** in [#18926](https://github.com/databendlabs/databend/pull/18926)
-* fix: alter column with specify approx distinct by **@zhyass** in [#18928](https://github.com/databendlabs/databend/pull/18928)
-### Code Refactor 🎉
-* refactor: refine experimental final aggregate spill by **@dqhl76** in [#18907](https://github.com/databendlabs/databend/pull/18907)
-* refactor(query): AccessType downcasts now return Result so mismatches surface useful diagnostics by **@forsaken628** in [#18923](https://github.com/databendlabs/databend/pull/18923)
-* refactor(query): merge pipeline core, sources and sinks crate by **@zhang2014** in [#18939](https://github.com/databendlabs/databend/pull/18939)
-### Others 📒
-* chore: remove fixeme on TimestampTz by **@KKould** in [#18924](https://github.com/databendlabs/databend/pull/18924)
-* chore: fixed time zone on shanghai to fix flasky 02_0079_function_interval.test by **@KKould** in [#18930](https://github.com/databendlabs/databend/pull/18930)
-* chore: DataType::TimestampTz display: "TimestampTz" -> "Timestamp_Tz" by **@KKould** in [#18931](https://github.com/databendlabs/databend/pull/18931)
-
-
-**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.836-nightly
-
-
-
-
+ 
 
 ## Nov 4, 2025 (v1.2.835-nightly)
 
@@ -272,7 +35,7 @@
 
-
+ 
 
 ## Nov 3, 2025 (v1.2.834-nightly)
 
@@ -594,4 +357,283 @@ This page provides information about recent features, enhancements, and bug fixe
+
+
+## Sep 24, 2025 (v1.2.818-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat(meta): add member-list subcommand to databend-metactl by **@drmingdrmer** in [#18760](https://github.com/databendlabs/databend/pull/18760)
+* feat(meta-service): add snapshot V004 streaming protocol by **@drmingdrmer** in [#18763](https://github.com/databendlabs/databend/pull/18763)
+### Thoughtful Bug Fix 🔧
+* fix: auto commit of ddl not work when calling procedure in transaction by **@SkyFan2002** in [#18753](https://github.com/databendlabs/databend/pull/18753)
+* fix: vacuum tables that are dropped by `create or replace` statement by **@dantengsky** in [#18751](https://github.com/databendlabs/databend/pull/18751)
+* fix(query): fix data lost caused by nullable in spill by **@zhang2014** in [#18766](https://github.com/databendlabs/databend/pull/18766)
+### Code Refactor 🎉
+* refactor(query): improve the readability of aggregate function hash table by **@forsaken628** in [#18747](https://github.com/databendlabs/databend/pull/18747)
+* refactor(query): Optimize Virtual Column Write Performance by **@b41sh** in [#18752](https://github.com/databendlabs/databend/pull/18752)
+### Others 📒
+* chore: resolve post-merge compilation failure after KvApi refactoring by **@dantengsky** in [#18761](https://github.com/databendlabs/databend/pull/18761)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.818-nightly
+
+
+
+
+
+## Sep 22, 2025 (v1.2.817-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat: databend-metabench: benchmark list by **@drmingdrmer** in [#18745](https://github.com/databendlabs/databend/pull/18745)
+* feat: /v1/status include last_query_request_at. by **@youngsofun** in [#18750](https://github.com/databendlabs/databend/pull/18750)
+### Thoughtful Bug Fix 🔧
+* fix: query dropped table in fuse_time_travel_size() report error by **@SkyFan2002** in [#18748](https://github.com/databendlabs/databend/pull/18748)
+### Code Refactor 🎉
+* refactor(meta-service): separate raft-log-store and raft-state-machine store by **@drmingdrmer** in [#18746](https://github.com/databendlabs/databend/pull/18746)
+* refactor: meta-service: simplify raft store and state machine by **@drmingdrmer** in [#18749](https://github.com/databendlabs/databend/pull/18749)
+* refactor(query): stream style block writer for hash join spill by **@zhang2014** in [#18742](https://github.com/databendlabs/databend/pull/18742)
+* refactor(native): preallocate zero offsets before compression by **@BohuTANG** in [#18756](https://github.com/databendlabs/databend/pull/18756)
+* refactor: meta-service: compact immutable levels periodically by **@drmingdrmer** in [#18757](https://github.com/databendlabs/databend/pull/18757)
+* refactor(query): add async buffer for spill data by **@zhang2014** in [#18758](https://github.com/databendlabs/databend/pull/18758)
+### Build/Testing/CI Infra Changes 🔌
+* ci: add compat test for databend-go. by **@youngsofun** in [#18734](https://github.com/databendlabs/databend/pull/18734)
+### Others 📒
+* chore: move auto implemented KvApi methods to Ext trait by **@drmingdrmer** in [#18759](https://github.com/databendlabs/databend/pull/18759)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.817-nightly
+
+
+
+
+
+## Sep 19, 2025 (v1.2.816-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat(rbac): procedure object support rbac by **@TCeason** in [#18730](https://github.com/databendlabs/databend/pull/18730)
+### Thoughtful Bug Fix 🔧
+* fix(query): reduce redundant result-set-spill logs during query waits by **@BohuTANG** in [#18741](https://github.com/databendlabs/databend/pull/18741)
+* fix: fuse_vacuum2 panic while vauuming empty table with data_retentio… by **@dantengsky** in [#18744](https://github.com/databendlabs/databend/pull/18744)
+### Code Refactor 🎉
+* refactor: compactor internal structure by **@drmingdrmer** in [#18738](https://github.com/databendlabs/databend/pull/18738)
+* refactor(query): refactor the join partition to reduce memory amplification by **@zhang2014** in [#18732](https://github.com/databendlabs/databend/pull/18732)
+* refactor: Make the ownership key deletion and table/database replace in the same transaction by **@TCeason** in [#18739](https://github.com/databendlabs/databend/pull/18739)
+### Others 📒
+* chore(meta-service): re-organize tests for raft-store by **@drmingdrmer** in [#18740](https://github.com/databendlabs/databend/pull/18740)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.816-nightly
+
+
+
+
+
+## Sep 18, 2025 (v1.2.815-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat: add ANY_VALUE as alias for ANY aggregate function by **@BohuTANG** in [#18728](https://github.com/databendlabs/databend/pull/18728)
+* feat: add Immutable::compact to merge two level by **@drmingdrmer** in [#18731](https://github.com/databendlabs/databend/pull/18731)
+### Thoughtful Bug Fix 🔧
+* fix: last query id not only contain those cached. by **@youngsofun** in [#18727](https://github.com/databendlabs/databend/pull/18727)
+### Code Refactor 🎉
+* refactor: raft-store: in-memory readonly level compaction by **@drmingdrmer** in [#18736](https://github.com/databendlabs/databend/pull/18736)
+* refactor: new setting `max_vacuum_threads` by **@dantengsky** in [#18737](https://github.com/databendlabs/databend/pull/18737)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.815-nightly
+
+
+
+
+
+## Sep 17, 2025 (v1.2.814-nightly)
+
+## What's Changed
+### Thoughtful Bug Fix 🔧
+* fix(query): ensure jwt roles to user if not exists by **@everpcpc** in [#18720](https://github.com/databendlabs/databend/pull/18720)
+* fix(query): Set Parquet default encoding to `PLAIN` to ensure data compatibility by **@b41sh** in [#18724](https://github.com/databendlabs/databend/pull/18724)
+### Others 📒
+* chore: replace Arc<Mutex<SysData>> with SysData by **@drmingdrmer** in [#18723](https://github.com/databendlabs/databend/pull/18723)
+* chore: add error check on private task test script by **@KKould** in [#18698](https://github.com/databendlabs/databend/pull/18698)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.814-nightly
+
+
+
+
+
+## Sep 16, 2025 (v1.2.813-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat(query): support result set spilling by **@forsaken628** in [#18679](https://github.com/databendlabs/databend/pull/18679)
+### Thoughtful Bug Fix 🔧
+* fix(meta-service): detach the SysData to avoid race condition by **@drmingdrmer** in [#18722](https://github.com/databendlabs/databend/pull/18722)
+### Code Refactor 🎉
+* refactor(raft-store): update trait interfaces and restructure leveled map by **@drmingdrmer** in [#18719](https://github.com/databendlabs/databend/pull/18719)
+### Documentation 📔
+* docs(raft-store): enhance documentation across all modules by **@drmingdrmer** in [#18721](https://github.com/databendlabs/databend/pull/18721)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.813-nightly
+
+
+
+
+
+## Sep 15, 2025 (v1.2.812-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat: `infer_schema` expands csv and ndjson support by **@KKould** in [#18552](https://github.com/databendlabs/databend/pull/18552)
+### Thoughtful Bug Fix 🔧
+* fix(query): column default expr should not cause seq.nextval modify by **@b41sh** in [#18694](https://github.com/databendlabs/databend/pull/18694)
+* fix: `vacuum2` all should ignore SYSTEM dbs by **@dantengsky** in [#18712](https://github.com/databendlabs/databend/pull/18712)
+* fix(meta-service): snapshot key count should be reset by **@drmingdrmer** in [#18718](https://github.com/databendlabs/databend/pull/18718)
+### Code Refactor 🎉
+* refactor(meta-service): respond mget items in stream instead of in a vector by **@drmingdrmer** in [#18716](https://github.com/databendlabs/databend/pull/18716)
+* refactor(meta-service0): rotbl: use `spawn_blocking()` instead `blocking_in_place()` by **@drmingdrmer** in [#18717](https://github.com/databendlabs/databend/pull/18717)
+### Build/Testing/CI Infra Changes 🔌
+* ci: migration `09_http_handler` to pytest by **@forsaken628** in [#18714](https://github.com/databendlabs/databend/pull/18714)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.812-nightly
+
+
+
+
+
+## Sep 11, 2025 (v1.2.811-nightly)
+
+## What's Changed
+### Thoughtful Bug Fix 🔧
+* fix: error occurred when retrying transaction on empty table by **@SkyFan2002** in [#18703](https://github.com/databendlabs/databend/pull/18703)
+
+
+**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.811-nightly
+
+
+
+
+
+## Sep 10, 2025 (v1.2.810-nightly)
+
+## What's Changed
+### Exciting New Features ✨
+* feat: impl Date & 
Timestamp on `RANGE BETWEEN` by **@KKould** in [#18696](https://github.com/databendlabs/databend/pull/18696) +* feat: add pybend Python binding with S3 connection and stage support by **@BohuTANG** in [#18704](https://github.com/databendlabs/databend/pull/18704) +* feat(query): add api to list stream by **@everpcpc** in [#18701](https://github.com/databendlabs/databend/pull/18701) +### Thoughtful Bug Fix 🔧 +* fix: collected profiles lost in cluster mode by **@dqhl76** in [#18680](https://github.com/databendlabs/databend/pull/18680) +* fix(python-binding): complete Python binding CI configuration by **@BohuTANG** in [#18686](https://github.com/databendlabs/databend/pull/18686) +* fix(python-binding): resolve virtual environment permission conflicts in CI by **@BohuTANG** in [#18708](https://github.com/databendlabs/databend/pull/18708) +* fix: error when using materialized CTE in multi-statement transactions by **@SkyFan2002** in [#18707](https://github.com/databendlabs/databend/pull/18707) +* fix(query): add config to the embed mode to clarify this mode by **@zhang2014** in [#18710](https://github.com/databendlabs/databend/pull/18710) +### Build/Testing/CI Infra Changes 🔌 +* ci: run behave test of bendsql for compact. by **@youngsofun** in [#18697](https://github.com/databendlabs/databend/pull/18697) +* ci: Temporarily disable warehouse testing of private tasks by **@KKould** in [#18709](https://github.com/databendlabs/databend/pull/18709) +### Others 📒 +* chore(python-binding): documentation and PyPI metadata by **@BohuTANG** in [#18711](https://github.com/databendlabs/databend/pull/18711) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.810-nightly + + + + + +## Sep 8, 2025 (v1.2.809-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: support reset of worksheet session. 
by **@youngsofun** in [#18688](https://github.com/databendlabs/databend/pull/18688) +### Thoughtful Bug Fix 🔧 +* fix(query): fix unable cast Variant Nullable type to Int32 type in MERGE INTO by **@b41sh** in [#18687](https://github.com/databendlabs/databend/pull/18687) +* fix: meta-semaphore: re-connect when no event recevied by **@drmingdrmer** in [#18690](https://github.com/databendlabs/databend/pull/18690) +### Code Refactor 🎉 +* refactor(meta-semaphore): handle error occurs during new-stream, lease-extend by **@drmingdrmer** in [#18695](https://github.com/databendlabs/databend/pull/18695) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.809-nightly + + + + + +## Sep 8, 2025 (v1.2.808-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: support Check Constraint by **@KKould** in [#18661](https://github.com/databendlabs/databend/pull/18661) +* feat(parser): add intelligent SQL error suggestion system by **@BohuTANG** in [#18670](https://github.com/databendlabs/databend/pull/18670) +* feat: enhance resource scheduling logs with clear status and configuration details by **@BohuTANG** in [#18684](https://github.com/databendlabs/databend/pull/18684) +* feat(meta-semaphore): allows to specify timestamp as semaphore seq by **@drmingdrmer** in [#18685](https://github.com/databendlabs/databend/pull/18685) +### Thoughtful Bug Fix 🔧 +* fix: clean `db_id_table_name` during vacuuming dropped tables by **@dantengsky** in [#18665](https://github.com/databendlabs/databend/pull/18665) +* fix: forbid transform with where clause. 
by **@youngsofun** in [#18681](https://github.com/databendlabs/databend/pull/18681) +* fix(query): fix incorrect order of group by items with CTE or subquery by **@sundy-li** in [#18692](https://github.com/databendlabs/databend/pull/18692) +### Code Refactor 🎉 +* refactor(meta): extract utilities from monolithic util.rs by **@drmingdrmer** in [#18678](https://github.com/databendlabs/databend/pull/18678) +* refactor(query): split Spiller to provide more scalability by **@forsaken628** in [#18691](https://github.com/databendlabs/databend/pull/18691) +### Build/Testing/CI Infra Changes 🔌 +* ci: compat test for JDBC use test from main. by **@youngsofun** in [#18668](https://github.com/databendlabs/databend/pull/18668) +### Others 📒 +* chore: add test about create sequence to keep old version by **@TCeason** in [#18673](https://github.com/databendlabs/databend/pull/18673) +* chore: add some log for runtime filter by **@SkyFan2002** in [#18674](https://github.com/databendlabs/databend/pull/18674) +* chore: add profile for runtime filter by **@SkyFan2002** in [#18675](https://github.com/databendlabs/databend/pull/18675) +* chore: catch `to_date`/`to_timestamp` unwrap by **@KKould** in [#18677](https://github.com/databendlabs/databend/pull/18677) +* chore(query): add retry for semaphore queue by **@zhang2014** in [#18689](https://github.com/databendlabs/databend/pull/18689) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.808-nightly + + + + + +## Sep 3, 2025 (v1.2.807-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat(query): Add SecureFilter for Row Access Policies and Stats Privacy by **@TCeason** in [#18623](https://github.com/databendlabs/databend/pull/18623) +* feat(query): support `start` and `increment` options for sequence creation by **@TCeason** in [#18659](https://github.com/databendlabs/databend/pull/18659) +### Thoughtful Bug Fix 🔧 +* fix(rbac): create or replace ownership_object should delete the old 
ownership key by **@TCeason** in [#18667](https://github.com/databendlabs/databend/pull/18667) +* fix(history-table): stop heartbeat when another node starts by **@dqhl76** in [#18664](https://github.com/databendlabs/databend/pull/18664) +### Code Refactor 🎉 +* refactor: extract garbage collection api to garbage_collection_api.rs by **@drmingdrmer** in [#18663](https://github.com/databendlabs/databend/pull/18663) +* refactor(meta): complete SchemaApi trait decomposition by **@drmingdrmer** in [#18669](https://github.com/databendlabs/databend/pull/18669) +### Others 📒 +* chore: enable distributed recluster by **@zhyass** in [#18644](https://github.com/databendlabs/databend/pull/18644) +* chore(ci): make ci success by **@TCeason** in [#18672](https://github.com/databendlabs/databend/pull/18672) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.807-nightly + + + + + +## Sep 2, 2025 (v1.2.806-nightly) + +## What's Changed +### Thoughtful Bug Fix 🔧 +* fix(query): try fix hang for cluster aggregate by **@zhang2014** in [#18655](https://github.com/databendlabs/databend/pull/18655) +### Code Refactor 🎉 +* refactor(schema-api): extract SecurityApi trait by **@drmingdrmer** in [#18658](https://github.com/databendlabs/databend/pull/18658) +* refactor(query): remove useless ee feature by **@zhang2014** in [#18660](https://github.com/databendlabs/databend/pull/18660) +### Build/Testing/CI Infra Changes 🔌 +* ci: fix download artifact for sqlsmith by **@everpcpc** in [#18662](https://github.com/databendlabs/databend/pull/18662) +* ci: ttc test with nginx and minio. 
by **@youngsofun** in [#18657](https://github.com/databendlabs/databend/pull/18657) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.806-nightly + + + diff --git a/docs/cn/sql-reference/10-sql-commands/00-ddl/01-table/92-attach-table.md b/docs/cn/sql-reference/10-sql-commands/00-ddl/01-table/92-attach-table.md index 8ece8790ac..a4bb6447a8 100644 --- a/docs/cn/sql-reference/10-sql-commands/00-ddl/01-table/92-attach-table.md +++ b/docs/cn/sql-reference/10-sql-commands/00-ddl/01-table/92-attach-table.md @@ -32,13 +32,11 @@ CONNECTION = ( CONNECTION_NAME = '' ) - **``**:新建附加表的名称 - **``**:可选列清单(从源表选择) - - 缺省时包含所有列 - 提供列级安全与访问控制 - 示例:`(customer_id, product, amount)` - **``**:对象存储中的源表数据路径 - - 格式:`s3://///` - 示例:`s3://databend-toronto/1/23351/` @@ -73,13 +71,13 @@ SELECT snapshot_location FROM FUSE_SNAPSHOT('default', 'employees'); ### 核心优势 -| 传统方法 | Databend ATTACH TABLE | -| -------------------- | --------------------- | -| 多份数据副本 | 单副本全局共享 | -| ETL 延迟与同步问题 | 实时更新永不滞后 | -| 复杂维护流程 | 零维护成本 | -| 副本增加安全风险 | 细粒度列级访问 | -| 数据移动导致性能下降 | 基于原始数据全面优化 | +| 传统方法 | Databend ATTACH TABLE | +|---------------------|----------------------| +| 多份数据副本 | 单副本全局共享 | +| ETL 延迟与同步问题 | 实时更新永不滞后 | +| 复杂维护流程 | 零维护成本 | +| 副本增加安全风险 | 细粒度列级访问 | +| 数据移动导致性能下降 | 基于原始数据全面优化 | ### 安全与性能 @@ -94,13 +92,13 @@ SELECT snapshot_location FROM FUSE_SNAPSHOT('default', 'employees'); ```sql -- 1. 创建存储连接 -CREATE CONNECTION my_s3_connection - STORAGE_TYPE = 's3' +CREATE CONNECTION my_s3_connection + STORAGE_TYPE = 's3' ACCESS_KEY_ID = '' SECRET_ACCESS_KEY = ''; -- 2. 
附加全列数据表 -ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' +ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' CONNECTION = (CONNECTION_NAME = 'my_s3_connection'); ``` @@ -108,7 +106,7 @@ ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' ```sql -- 附加选定列保障数据安全 -ATTACH TABLE population_selected (city, population) 's3://databend-doc/1/16/' +ATTACH TABLE population_selected (city, population) 's3://databend-doc/1/16/' CONNECTION = (CONNECTION_NAME = 'my_s3_connection'); ``` @@ -116,12 +114,12 @@ ATTACH TABLE population_selected (city, population) 's3://databend-doc/1/16/' ```sql -- 创建 IAM 角色连接(比密钥更安全) -CREATE CONNECTION s3_role_connection - STORAGE_TYPE = 's3' +CREATE CONNECTION s3_role_connection + STORAGE_TYPE = 's3' ROLE_ARN = 'arn:aws:iam::123456789012:role/databend-role'; -- 通过 IAM 角色附加表 -ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' +ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' CONNECTION = (CONNECTION_NAME = 's3_role_connection'); ``` @@ -129,16 +127,16 @@ ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' ```sql -- 市场分析视图 -ATTACH TABLE marketing_view (customer_id, product, amount, order_date) -'s3://your-bucket/1/23351/' +ATTACH TABLE marketing_view (customer_id, product, amount, order_date) +'s3://your-bucket/1/23351/' CONNECTION = (CONNECTION_NAME = 'my_s3_connection'); -- 财务分析视图(不同列) -ATTACH TABLE finance_view (order_id, amount, profit, order_date) -'s3://your-bucket/1/23351/' +ATTACH TABLE finance_view (order_id, amount, profit, order_date) +'s3://your-bucket/1/23351/' CONNECTION = (CONNECTION_NAME = 'my_s3_connection'); ``` ## 扩展阅读 -- [使用 ATTACH TABLE 链接表](/tutorials/cloud-ops/link-tables) +- [使用 ATTACH TABLE 链接表](/tutorials/databend-cloud/link-tables) \ No newline at end of file diff --git a/docs/cn/tutorials/01-taobao.md b/docs/cn/tutorials/01-taobao.md index 51f4882cfd..57837e0acc 100644 --- a/docs/cn/tutorials/01-taobao.md +++ b/docs/cn/tutorials/01-taobao.md @@ -185,7 
+185,7 @@ ORDER BY day; ![Alt text](@site/static/public/img/usecase/taobao-2.png) -也可以通过 [使用仪表盘](/tutorials/cloud-ops/dashboard) 功能,生成折线图: +也可以通过 [使用仪表盘](/guides/cloud/using-databend-cloud/dashboard) 功能,生成折线图: ![Alt text](@site/static/public/img/usecase/taobao-3.png) @@ -292,7 +292,7 @@ order by hour; ![Alt text](@site/static/public/img/usecase/taobao-7.png) -也可以通过 [使用仪表盘](/tutorials/cloud-ops/dashboard) 功能,生成折线图: +也可以通过 [使用仪表盘](/guides/cloud/using-databend-cloud/dashboard) 功能,生成折线图: ![Alt text](@site/static/public/img/usecase/taobao-8.png) @@ -316,7 +316,7 @@ order by weekday; ![Alt text](@site/static/public/img/usecase/taobao-9.png) -也可以通过 [使用仪表盘](/tutorials/cloud-ops/dashboard) 功能,生成柱状图: +也可以通过 [使用仪表盘](/guides/cloud/using-databend-cloud/dashboard) 功能,生成柱状图: ![Alt text](@site/static/public/img/usecase/taobao-10.png) diff --git a/docs/cn/tutorials/_category_.json b/docs/cn/tutorials/_category_.json index 858b322c2a..993d0b543f 100644 --- a/docs/cn/tutorials/_category_.json +++ b/docs/cn/tutorials/_category_.json @@ -1,5 +1,3 @@ { - "label": "教程", - "link": { "type": "doc", "id": "index" }, - "position": 0 -} + "label": "教程" +} \ No newline at end of file diff --git a/docs/cn/tutorials/cloud-ops/_category_.json b/docs/cn/tutorials/cloud-ops/_category_.json deleted file mode 100644 index b68ae079a6..0000000000 --- a/docs/cn/tutorials/cloud-ops/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "云上运维", - "position": 6 -} diff --git a/docs/cn/tutorials/cloud-ops/dashboard.md b/docs/cn/tutorials/cloud-ops/dashboard.md deleted file mode 100644 index 6998bf1c40..0000000000 --- a/docs/cn/tutorials/cloud-ops/dashboard.md +++ /dev/null @@ -1,183 +0,0 @@ ---- -title: "Databend Cloud:仪表盘导览" -sidebar_label: "Dashboard" ---- -import StepsWrap from '@site/src/components/StepsWrap'; -import StepContent from '@site/src/components/Steps/step-content'; - -本教程将加载、分析并为数据集“纽约时报 Covid-19 数据”创建一个 Dashboard。该数据集每日更新美国全国的病例、死亡和其他相关指标,可从国家、州、县等不同维度展现 2022 年疫情的全貌。 - -| 字段 | 说明 | 
-|--------|------------------------------| -| date | 已累积 Covid-19 数据的日期。 | -| county | 数据所属的县。 | -| state | 数据所属的州。 | -| fips | 对应地区的 FIPS 代码。 | -| cases | 已确认病例的累计数量。 | -| deaths | 因 Covid-19 去世的累计数量。 | - -### 步骤 1:准备数据 - -“纽约时报 Covid-19 数据”是一个内置示例数据集,只需几次点击即可加载。Databend Cloud 会自动创建目标表,无需事先建表。 - - - - -### 加载数据集 - -1. 在 Databend Cloud 的 **Overview** 页面点击 **Load Data**。 -2. 在弹出的向导中选择 **A new table**,然后在 **Load sample data** 下拉列表中选择 **Covid-19 Data from New York Times.CSV**: - -![Alt text](@site/static/public/img/cloud/dashboard-1.png) - -3. 在下一页中,选择数据库并为要创建的目标表命名。 - -![Alt text](@site/static/public/img/cloud/dashboard-2.png) - -4. 点击 **Confirm**。Databend Cloud 会创建目标表并加载数据,过程可能需要几秒钟。 - - - - - -### 处理 NULL - -在分析前建议检查并处理表中的 NULL 与重复值,以免影响结果。 - -1. 新建 Worksheet,运行以下 SQL 检查表内是否存在 NULL: - -```sql -SELECT COUNT(*) -FROM covid_19_us_2022_3812 -WHERE date IS NULL OR country IS NULL OR state IS NULL OR fips IS NULL OR cases IS NULL OR deaths IS NULL; -``` - -返回的 `41571` 表示至少包含一个 NULL 的行数。 - -2. 删除所有包含 NULL 的行: - -```sql -DELETE FROM covid_19_us_2022_3812 -WHERE date IS NULL OR country IS NULL OR state IS NULL OR fips IS NULL OR cases IS NULL OR deaths IS NULL; -``` - - - - - -### 处理重复行 - -1. 在同一个 Worksheet 中运行以下 SQL 检查重复记录: - -```sql -SELECT date, country, state, fips, cases, deaths, COUNT(*) -FROM covid_19_us_2022_3812 -GROUP BY date, country, state, fips, cases, deaths -HAVING COUNT(*) > 1; -``` - -该查询返回 `0`,表示没有重复记录,数据可以用于分析。 - - - - -### 步骤 2:基于查询结果创建图表 - -此步骤将运行四条查询,并将结果可视化为计分卡、饼图、柱状图与折线图。**请为每条查询创建单独的 Worksheet**。 - - - - -### 2022 年全美死亡总数 - -1. 在 Worksheet 中运行以下 SQL: - -```sql --- 统计 2022-12-31 当天美国的累积死亡总数 -SELECT SUM(deaths) -FROM covid_19_us_2022_3812 -WHERE date = '2022-12-31'; -``` - -2. 基于查询结果创建计分卡: - -![Alt text](@site/static/public/img/cloud/dashboard-3.gif) - - - - - -### 各州死亡总数(2022) - -1. 
在 Worksheet 中运行以下 SQL: - -```sql --- 统计 2022-12-31 当天各州的累积死亡人数 -SELECT state, SUM(deaths) -FROM covid_19_us_2022_3812 -WHERE date = '2022-12-31' -GROUP BY state; -``` - -2. 使用查询结果创建饼图: - -![Alt text](@site/static/public/img/cloud/dashboard-4.gif) - - - - - -### 维京群岛的病例与死亡 - -1. 在 Worksheet 中运行以下 SQL: - -```sql --- 查询 2022-12-31 维京群岛的全部数据 -SELECT * FROM covid_19_us_2022_3812 -WHERE date = '2022-12-31' AND state = 'Virgin Islands'; -``` - -2. 基于结果创建柱状图: - -![Alt text](@site/static/public/img/cloud/dashboard-5.gif) - - - - - -### 圣约翰各月的病例与死亡 - -1. 在 Worksheet 中运行以下 SQL: - -```sql --- 获取 2022 年每月底圣约翰的数据 -SELECT * FROM covid_19_us_2022_3812 -WHERE - (date = '2022-01-31' - OR date = '2022-02-28' - OR date = '2022-03-31' - OR date = '2022-04-30' - OR date = '2022-05-31' - OR date = '2022-06-30' - OR date = '2022-07-31' - OR date = '2022-08-31' - OR date = '2022-09-30' - OR date = '2022-10-31' - OR date = '2022-11-30' - OR date = '2022-12-31') - AND country = 'St. John' ORDER BY date; -``` - -2. 创建折线图展示结果: - -![Alt text](@site/static/public/img/cloud/dashboard-6.gif) - - - - -### 步骤 3:将图表添加到 Dashboard - -1. 在 Databend Cloud 中访问 **Dashboards** > **New Dashboard** 创建一个新的 Dashboard,并点击 **Add Chart**。 -2. 
将左侧的图表拖放到 Dashboard,可以根据需要调整尺寸与位置。 - -![Alt text](@site/static/public/img/cloud/dashboard-7.gif) diff --git a/docs/cn/tutorials/connect/_category_.json b/docs/cn/tutorials/connect/_category_.json new file mode 100644 index 0000000000..0075ec0047 --- /dev/null +++ b/docs/cn/tutorials/connect/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "连接" +} \ No newline at end of file diff --git a/docs/cn/tutorials/getting-started/connect-to-databend-bendsql.md b/docs/cn/tutorials/connect/connect-to-databend-bendsql.md similarity index 59% rename from docs/cn/tutorials/getting-started/connect-to-databend-bendsql.md rename to docs/cn/tutorials/connect/connect-to-databend-bendsql.md index fd6fee77ca..bb20afae76 100644 --- a/docs/cn/tutorials/getting-started/connect-to-databend-bendsql.md +++ b/docs/cn/tutorials/connect/connect-to-databend-bendsql.md @@ -1,27 +1,28 @@ --- -title: "使用 BendSQL 连接(自建版)" -sidebar_label: "BendSQL(自建版)" +title: "连接 Databend (BendSQL)" +sidebar_label: "BendSQL" +slug: / --- import StepsWrap from '@site/src/components/StepsWrap'; import StepContent from '@site/src/components/Steps/step-content'; -本教程将指导你如何使用 BendSQL 连接自建 Databend 实例。 +在本教程中,我们将指导你如何使用 BendSQL 连接到自托管的 Databend 实例。 ### 开始之前 -- 请先在本地安装 [Docker](https://www.docker.com/),用于启动 Databend。 -- 请先安装 BendSQL,参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 +- 确保本地已安装 [Docker](https://www.docker.com/),我们将用它启动 Databend。 +- 确保本地已安装 BendSQL。安装方法请参考 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 ### 启动 Databend -在终端运行以下命令启动 Databend: +在终端运行以下命令启动 Databend 实例: ```bash docker run -d --name databend \ @@ -33,23 +34,23 @@ docker run -d --name databend \ 该命令会在本地 Docker 容器中启动 Databend,连接信息如下: -- Host:`127.0.0.1` -- Port:`8000` -- User:`eric` -- Password:`abc123` +- 主机:`127.0.0.1` +- 端口:`8000` +- 用户:`eric` +- 密码:`abc123` ### 启动 BendSQL -Databend 成功运行后,即可通过 BendSQL 连接。在终端执行: +Databend 实例运行后,即可用 BendSQL 连接。打开终端并执行: ```bash bendsql --host 127.0.0.1 --port 8000 --user eric --password 
abc123 ``` -该命令会使用 HTTP API 连接到 `127.0.0.1:8000`,用户名 `eric`、密码 `abc123`。成功连接后会看到类似输出: +该命令通过 `127.0.0.1:8000` 的 HTTP API,以用户 `eric` 和密码 `abc123` 连接 Databend。成功后可见如下提示: ```bash Welcome to BendSQL 0.24.7-ff9563a(2024-12-27T03:23:17.723492000Z). @@ -64,7 +65,7 @@ Started web server at 127.0.0.1:8080 ### 执行查询 -连接成功后即可在 BendSQL shell 内执行 SQL。例如输入 `SELECT NOW();` 查询当前时间: +连接成功后,可在 BendSQL Shell 中执行 SQL。例如输入 `SELECT NOW();` 查看当前时间: ```bash eric@127.0.0.1:8000/default> SELECT NOW(); @@ -73,7 +74,7 @@ SELECT NOW() ┌────────────────────────────┐ │ now() │ -│ Timestamp │ +│ Timestamp │ ├────────────────────────────┤ │ 2025-04-24 13:24:06.640616 │ └────────────────────────────┘ @@ -85,7 +86,7 @@ SELECT NOW() ### 退出 BendSQL -输入 `quit` 即可退出。 +输入 `quit` 即可退出 BendSQL。 ```bash eric@127.0.0.1:8000/default> quit @@ -94,11 +95,11 @@ Bye~ ``` ### BendSQL UI -使用 `--ui` 选项时,BendSQL 会启动一个 Web Server 并打开浏览器展示 UI,可在浏览器中执行 SQL、分析查询性能,也可以复制 URL 与他人分享结果。 +使用 `--ui` 选项,BendSQL 会启动 Web 服务器并自动打开浏览器展示图形界面。你可以在浏览器中执行 SQL、分析查询性能,还可通过复制 URL 与他人共享结果。 ```bash -❯ Bendsql -h 127.0.0.1 --port 8000 --ui +❯ bendsql -h 127.0.0.1 --port 8000 --ui ``` - + \ No newline at end of file diff --git a/docs/cn/tutorials/connect/connect-to-databend-dbeaver.md b/docs/cn/tutorials/connect/connect-to-databend-dbeaver.md new file mode 100644 index 0000000000..eaa3cd931b --- /dev/null +++ b/docs/cn/tutorials/connect/connect-to-databend-dbeaver.md @@ -0,0 +1,58 @@ +--- +title: "连接 Databend (DBeaver)" +sidebar_label: "DBeaver" +--- + +import StepsWrap from '@site/src/components/StepsWrap'; +import StepContent from '@site/src/components/Steps/step-content'; + +在本教程中,我们将指导您完成使用 DBeaver 连接到私有化部署 Databend 实例的过程。 + + + + +### 开始之前 + +- 确保您的本地机器上安装了 [Docker](https://www.docker.com/),因为它将用于启动 Databend。 +- 确认您的本地机器上安装了 DBeaver 24.3.1 或更高版本。 + + + + +### 启动 Databend + +在您的终端中运行以下命令以启动 Databend 实例: + +:::note +如果在启动容器时没有为 `QUERY_DEFAULT_USER` 或 `QUERY_DEFAULT_PASSWORD` 指定自定义值,则将创建一个默认的 `root` 用户,且没有密码。 +::: + +```bash +docker run -d 
--name databend \ + -p 3307:3307 -p 8000:8000 -p 8124:8124 -p 8900:8900 \ + datafuselabs/databend:nightly +``` + + + + +### 设置连接 + +1. 在 DBeaver 中,转到 **Database** > **New Database Connection** 以打开连接向导,然后在 **Analytical** 类别下选择 **Databend**。 + +![alt text](@site/static/img/connect/dbeaver-analytical.png) + +2. 为 **Username** 输入 `root`。 + +![alt text](@site/static/img/connect/dbeaver-user-root.png) + +3. 点击 **Test Connection** 以验证连接。如果这是您第一次连接到 Databend,系统将提示您下载驱动程序。点击 **Download** 继续。 + +![alt text](@site/static/img/connect/dbeaver-download-driver.png) + +下载完成后,测试连接应该成功,如下所示: + +![alt text](../../../../static/img/connect/dbeaver-success.png) + + + \ No newline at end of file diff --git a/docs/cn/tutorials/connect/connect-to-databendcloud-bendsql.md b/docs/cn/tutorials/connect/connect-to-databendcloud-bendsql.md new file mode 100644 index 0000000000..db8e05b1de --- /dev/null +++ b/docs/cn/tutorials/connect/connect-to-databendcloud-bendsql.md @@ -0,0 +1,62 @@ +--- +title: "连接 Databend Cloud (BendSQL)" +sidebar_label: "Cloud (BendSQL)" +--- + +import StepsWrap from '@site/src/components/StepsWrap'; +import StepContent from '@site/src/components/Steps/step-content'; + +在本教程中,我们将指导您完成使用 BendSQL 连接到 Databend Cloud 的过程。 + + + + +### 开始之前 + +- 确保您的机器上已安装 BendSQL。有关如何使用各种包管理器安装 BendSQL 的说明,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 +- 确保您已经拥有 Databend Cloud 帐户并且可以成功登录。 + + + + + +### 获取连接信息 + +1. 登录到 Databend Cloud,然后单击 **Connect**。 + +![Alt text](/img/connect/bendsql-4.gif) + +2. 选择要连接的数据库,例如“default”;然后选择一个计算集群。如果您忘记了密码,请重置它。 + +3. 
您可以在 **Examples** 部分找到当前计算集群的 DSN 详细信息以及用于通过 BendSQL 连接到 Databend Cloud 的连接字符串。对于此步骤,只需复制 **BendSQL** 选项卡中提供的内容。 + +![Alt text](/img/connect/bendsql-5.png) + + + + +### 启动 BendSQL + +要启动 BendSQL,请将复制的内容粘贴到您的终端或命令提示符中。如果您复制的密码显示为“**\*\***”,请将其替换为您的实际密码。 + +![Alt text](/img/connect/bendsql-6.png) + + + + + +### 执行查询 + +连接后,您可以在 BendSQL shell 中执行 SQL 查询。例如,键入 `SELECT NOW();` 以返回当前时间。 + +![Alt text](/img/connect/bendsql-7.png) + + + + +### 退出 BendSQL + +要退出 BendSQL,请键入 `quit`。 + + + \ No newline at end of file diff --git a/docs/cn/tutorials/connect/connect-to-databendcloud-dbeaver.md b/docs/cn/tutorials/connect/connect-to-databendcloud-dbeaver.md new file mode 100644 index 0000000000..5d80aea74e --- /dev/null +++ b/docs/cn/tutorials/connect/connect-to-databendcloud-dbeaver.md @@ -0,0 +1,54 @@ +--- +title: "连接 Databend Cloud (DBeaver)" +sidebar_label: "Cloud (DBeaver)" +--- +import StepsWrap from '@site/src/components/StepsWrap'; +import StepContent from '@site/src/components/Steps/step-content'; + +在本教程中,我们将指导您完成使用 DBeaver 连接到 Databend Cloud 的过程。 + + + + +### 开始之前 + +- 确认您的本地机器上已安装 DBeaver 24.3.1 或更高版本。 + + + + +### 获取连接信息 + +在创建与 Databend Cloud 的连接之前,您需要登录到 Databend Cloud 以获取连接信息。有关更多信息,请参见 [连接到计算集群](/guides/cloud/using-databend-cloud/warehouses#connecting)。在本教程中,我们将使用以下连接信息: + +![alt text](@site/static/img/connect/dbeaver-connect-info.png) +> **Note**: +> 如果您的 `user` 或 `password` 包含特殊字符,您需要在相应的字段中单独提供它们(例如,DBeaver 中的 `Username` 和 `Password` 字段)。在这种情况下,Databend 将为您处理必要的编码。但是,如果您将凭据一起提供(例如,作为 `user:password`),则必须确保在使用前对整个字符串进行正确编码。 + + + + +### 设置连接 + +1. 在 DBeaver 中,转到 **Database** > **New Database Connection** 以打开连接向导,然后在 **Analytical** 类别下选择 **Databend**。 + +![alt text](@site/static/img/connect/dbeaver-analytical.png) + +2. 在 **Main** 选项卡中,根据上一步中获得的连接信息输入 **Host**、**Port**、**Username** 和 **Password**。 + +![alt text](@site/static/img/connect/dbeaver-main-tab.png) + +3. 
在 **Driver properties** 选项卡中,根据上一步中获得的连接信息输入 **Warehouse** 名称。 + +![alt text](@site/static/img/connect/dbeaver-driver-properties.png) + +4. 在 **SSL** 选项卡中,选中 **Use SSL** 复选框。 + +![alt text](@site/static/img/connect/dbeaver-use-ssl.png) + +5. 单击 **Test Connection** 以验证连接。如果这是您第一次连接到 Databend,系统将提示您下载驱动程序。单击 **Download** 继续。下载完成后,测试连接应该成功,如下所示: + +![alt text](@site/static/img/connect/dbeaver-cloud-success.png) + + + \ No newline at end of file diff --git a/docs/cn/tutorials/databend-cloud/_category_.json b/docs/cn/tutorials/databend-cloud/_category_.json new file mode 100644 index 0000000000..f71cf7dac6 --- /dev/null +++ b/docs/cn/tutorials/databend-cloud/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Databend Cloud", + "key": "tutorials-databend-cloud-cn" +} diff --git a/docs/cn/tutorials/cloud-ops/aws-billing.md b/docs/cn/tutorials/databend-cloud/aws-billing.md similarity index 77% rename from docs/cn/tutorials/cloud-ops/aws-billing.md rename to docs/cn/tutorials/databend-cloud/aws-billing.md index 36d5db770a..fd25029507 100644 --- a/docs/cn/tutorials/cloud-ops/aws-billing.md +++ b/docs/cn/tutorials/databend-cloud/aws-billing.md @@ -1,15 +1,14 @@ --- -title: "Databend Cloud:AWS 账单" -sidebar_label: "AWS 账单" +title: 分析 AWS 账单 --- -在本教程中,我们将演示如何导入 AWS 账单数据,并通过 SQL 进行成本分析。你会学习如何把 AWS 账单数据加载进 Databend Cloud、使用查询找出主要成本驱动因素,并洞察 AWS 的使用方式。 +在本教程中,我们将介绍如何导入 AWS 账单数据并使用 SQL 进行成本分析。你将学习如何将 AWS 账单数据加载到 Databend Cloud 中,查询它以查找关键的成本驱动因素,并深入了解你的 AWS 使用情况。 -AWS 账单数据详细记录了云服务的用量及对应费用,可直接在 AWS Billing Console 的 Cost and Usage Reports (CUR) 服务中以 Parquet 格式导出。本教程所用的数据集位于 [https://datasets.databend.com/aws-billing.parquet](https://datasets.databend.com/aws-billing.parquet),遵循 CUR 规范,包含服务名称、用量类型、定价等字段。完整的字段释义请参考 [AWS Cost and Usage Report Data Dictionary](https://docs.aws.amazon.com/cur/latest/userguide/data-dictionary.html)。 +AWS 账单数据提供了你的云服务使用情况和相关成本的全面细分,可以直接从 AWS Billing Console 中的 AWS Cost and Usage Reports (CUR) 服务中以 Parquet 格式导出。在本教程中,我们将使用 Parquet 格式的示例数据集,该数据集可在 
[https://datasets.databend.com/aws-billing.parquet](https://datasets.databend.com/aws-billing.parquet) 获得。该数据集遵循 CUR 标准,其中包括服务名称、使用类型和定价详细信息等字段。有关完整的架构参考,你可以参考 [AWS Cost and Usage Report Data Dictionary](https://docs.aws.amazon.com/cur/latest/userguide/data-dictionary.html)。 -## 步骤 1:创建目标表 +## Step 1: 创建目标表 -打开 Worksheet,创建名为 `doc` 的数据库,并创建 `aws_billing` 表: +打开一个 worksheet,创建一个名为 `doc` 的数据库,然后创建一个名为 `aws_billing` 的表: ```sql CREATE DATABASE doc; @@ -177,26 +176,27 @@ CREATE TABLE aws_billing ( ); ``` -## 步骤 2:加载 AWS 账单数据集 +## Step 2: 加载 AWS 账单数据集 -本步骤将在 Databend Cloud 中通过几次点击完成数据加载。 +在此步骤中,你只需点击几下即可将 AWS 账单数据集加载到 Databend Cloud 中。 -1. 在 Databend Cloud 内,选择 **Overview** > **Load Data** 打开数据导入向导。 -2. 选择 **An existing table** 作为目标表,点击 **Load from a URL** 并输入数据集地址 `https://datasets.databend.com/aws-billing.parquet`。 +1. 在 Databend Cloud 中,选择 **Overview** > **Load Data** 以启动数据加载向导。 + +2. 选择将数据加载到 **An existing table**,然后选择 **Load from a URL** 并输入数据集 URL:`https://datasets.databend.com/aws-billing.parquet`。 ![alt text](../../../../static/img/documents/tutorials/aws-billing-1.png) -3. 选择刚刚创建的数据库及表,并指定要使用的 Warehouse。 +3. 选择你创建的数据库和表,然后选择一个计算集群。 ![alt text](../../../../static/img/documents/tutorials/aws-billing-2.png) -4. 点击 **Confirm** 开始加载。 +4. 
单击 **Confirm** 开始数据加载。 -## 步骤 3:使用 SQL 分析成本 +## Step 3: 使用 SQL 分析成本 -账单数据加载完毕后,就可以用 SQL 查询来分析 AWS 账单。本节提供了一些示例,帮助你快速识别花费最多的部分。 +现在你的账单数据已就绪,你可以使用 SQL 查询来分析 AWS 账单信息。此步骤提供了一些示例,可以帮助你了解支出并发现关键见解。 -以下查询会找出花费最高的服务: +以下查询标识了你使用过的最昂贵的服务: ```sql SELECT @@ -211,7 +211,7 @@ ORDER BY Total_Cost DESC LIMIT 25; ``` -以下查询会标出成本最高的 AWS EC2 资源: +以下查询标识了最昂贵的 AWS EC2 资源: ```sql SELECT @@ -227,7 +227,7 @@ ORDER BY Total_Cost DESC LIMIT 25; ``` -以下查询会找出花费最高的 S3 Bucket: +以下查询标识了最昂贵的 S3 存储桶: ```sql SELECT @@ -242,7 +242,7 @@ ORDER BY Cost DESC LIMIT 25; ``` -以下查询会根据综合成本找出最贵的 25 个 Region: +以下查询根据混合成本标识了前 25 个最昂贵的区域: ```sql SELECT @@ -257,7 +257,7 @@ ORDER BY Total_Cost DESC LIMIT 25; ``` -以下查询会把成本按照实例类型(Reserved Instances 与 On-Demand)分类,方便了解各类型的支出贡献: +以下查询将你的成本分为预留实例和按需实例,以帮助你了解每种类型对总支出的贡献: ```sql SELECT @@ -271,4 +271,4 @@ WHERE line_item_blended_cost IS NOT NULL GROUP BY Instance_Type ORDER BY Total_Cost DESC; -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/databend-cloud/dashboard.md b/docs/cn/tutorials/databend-cloud/dashboard.md new file mode 100644 index 0000000000..2ac1866fe4 --- /dev/null +++ b/docs/cn/tutorials/databend-cloud/dashboard.md @@ -0,0 +1,187 @@ +--- +title: COVID-19 仪表盘 +--- +import StepsWrap from '@site/src/components/StepsWrap'; +import StepContent from '@site/src/components/Steps/step-content'; + +本教程演示如何加载和分析 Covid-19 数据集,并为其创建仪表盘。该数据集包含美国全境每日更新的 Covid-19 病例、死亡及相关统计信息,可全面展示 2022 年全年疫情在全国、州、县各级的影响与细节。 + +| 字段 | 描述 | +|---------|----------------------------------------------| +| date | 报告的 Covid-19 累计数据日期。 | +| county | 该条数据对应的县名称。 | +| state | 该条数据对应的州名称。 | +| fips | 与该地点关联的 FIPS 代码。 | +| cases | Covid-19 确诊病例的累计数量。 | +| deaths | 因 Covid-19 导致的累计死亡人数。 | + +### 步骤 1:准备数据 + +数据集“Covid-19 Data from New York Times”为内置示例,只需几次点击即可加载。目标表会自动创建,无需提前手动建表。 + + + + +### 加载数据集 + +1. 在 Databend Cloud 的**概览**页面点击 **Load Data** 按钮。 +2. 
在打开的页面中,选择 **A new table** 单选按钮,然后在 **Load sample data** 下拉菜单中选择 **Covid-19 Data from New York Times.CSV**: + +![Alt text](@site/static/public/img/cloud/dashboard-1.png) + +3. 在下一页面选择数据库,并为即将创建的目标表命名。 + +![Alt text](@site/static/public/img/cloud/dashboard-2.png) + +4. 点击 **Confirm**。Databend Cloud 开始创建目标表并加载数据集,此过程可能需要几秒钟。 + + + + + + +### 处理 NULL 值 + +分析前建议检查表中的 NULL 与重复值,以免影响结果。 + +1. 新建工作区,使用以下 SQL 检查是否存在 NULL 值: + +```sql +SELECT COUNT(*) +FROM covid_19_us_2022_3812 +WHERE date IS NULL OR country IS NULL OR state IS NULL OR fips IS NULL OR cases IS NULL OR deaths IS NULL; +``` + +该语句返回 `41571`,表示有 41571 行存在至少一个 NULL 值。 + +2. 删除这些含 NULL 的行: + +```sql +DELETE FROM covid_19_us_2022_3812 +WHERE date IS NULL OR country IS NULL OR state IS NULL OR fips IS NULL OR cases IS NULL OR deaths IS NULL; +``` + + + + + + +### 处理重复值 + +1. 在同一工作区使用以下 SQL 检查重复行: + +```sql +SELECT date, country, state, fips, cases, deaths, COUNT(*) +FROM covid_19_us_2022_3812 +GROUP BY date, country, state, fips, cases, deaths +HAVING COUNT(*) > 1; +``` + +该语句返回 `0`,表示无重复行,数据已可用于分析。 + + + + +### 步骤 2:用查询结果创建图表 + +我们将运行四条查询以获取洞察,并通过记分卡、饼图、柱状图和折线图进行可视化。**请为每条查询单独创建工作区**。 + + + + +### 2022 年美国死亡总数 + +1. 在工作区运行以下 SQL: + +```sql +-- 计算 2022 年 12 月 31 日美国累计死亡数 +SELECT SUM(deaths) +FROM covid_19_us_2022_3812 +WHERE date = '2022-12-31'; +``` + +2. 利用查询结果在工作区内创建记分卡: + +![Alt text](@site/static/public/img/cloud/dashboard-3.gif) + + + + + + +### 2022 年各州死亡总数 + +1. 在工作区运行以下 SQL: + +```sql +-- 计算 2022 年 12 月 31 日各州累计死亡数 +SELECT state, SUM(deaths) +FROM covid_19_us_2022_3812 +WHERE date = '2022-12-31' +GROUP BY state; +``` + +2. 利用查询结果在工作区内创建饼图: + +![Alt text](@site/static/public/img/cloud/dashboard-4.gif) + + + + + +### 维尔京群岛病例与死亡 + +1. 在工作区运行以下 SQL: + +```sql +-- 获取 2022 年 12 月 31 日维尔京群岛的全部数据 +SELECT * FROM covid_19_us_2022_3812 +WHERE date = '2022-12-31' AND state = 'Virgin Islands'; +``` + +2. 
利用查询结果在工作区内创建柱状图: + +![Alt text](@site/static/public/img/cloud/dashboard-5.gif) + + + + + + +### 圣约翰每月累计病例与死亡 + +1. 在工作区运行以下 SQL: + +```sql +-- 获取圣约翰每月末的数据 +SELECT * FROM covid_19_us_2022_3812 +WHERE + (date = '2022-01-31' + OR date = '2022-02-28' + OR date = '2022-03-31' + OR date = '2022-04-30' + OR date = '2022-05-31' + OR date = '2022-06-30' + OR date = '2022-07-31' + OR date = '2022-08-31' + OR date = '2022-09-30' + OR date = '2022-10-31' + OR date = '2022-11-30' + OR date = '2022-12-31') + AND country = 'St. John' ORDER BY date; +``` + +2. 利用查询结果在工作区内创建折线图: + +![Alt text](@site/static/public/img/cloud/dashboard-6.gif) + + + + +### 步骤 3:将图表添加到仪表盘 + +1. 在 Databend Cloud 通过 **Dashboards** > **New Dashboard** 创建仪表盘,然后点击 **Add Chart**。 + +2. 将左侧图表拖至仪表盘,可自由调整大小与位置。 + +![Alt text](@site/static/public/img/cloud/dashboard-7.gif) \ No newline at end of file diff --git a/docs/cn/tutorials/cloud-ops/link-tables.md b/docs/cn/tutorials/databend-cloud/link-tables.md similarity index 57% rename from docs/cn/tutorials/cloud-ops/link-tables.md rename to docs/cn/tutorials/databend-cloud/link-tables.md index dfd3f930f0..7683315a20 100644 --- a/docs/cn/tutorials/cloud-ops/link-tables.md +++ b/docs/cn/tutorials/databend-cloud/link-tables.md @@ -1,22 +1,21 @@ --- -title: "Databend Cloud:通过 ATTACH TABLE 共享数据" -sidebar_label: "数据共享" +title: 使用 ATTACH TABLE --- -本教程将演示如何在 Databend Cloud 中使用 [ATTACH TABLE](/sql/sql-commands/ddl/table/attach-table) 命令,将一张 Databend Cloud 表链接到存放在 S3 Bucket 中的自建 Databend 表。 +本教程介绍如何使用 [ATTACH TABLE](/sql/sql-commands/ddl/table/attach-table) 命令将 Databend Cloud 中的表链接到 S3 中的现有表。 -## 开始之前 +## 准备工作 -请确保已经满足以下前提条件: +在开始之前,请确保您已准备好以下先决条件: -- 本地已安装 [Docker](https://www.docker.com/),用于启动自建 Databend。 -- 已有一个供自建 Databend 使用的 AWS S3 Bucket。参见 [创建 S3 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。 -- 拥有具备目标 Bucket 访问权限的 AWS Access Key ID 与 Secret Access Key。参见 [管理 AWS 
凭证](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。
-- 本地已安装 BendSQL。安装方法请见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。
+- 您的本地机器上已安装 [Docker](https://www.docker.com/),因为它将用于启动私有化部署的 Databend。
+- 一个 AWS S3 bucket,用作您的私有化部署 Databend 的存储。[了解如何创建 S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。
+- 具有足够权限访问您的 S3 bucket 的 AWS Access Key ID 和 Secret Access Key。[管理您的 AWS 凭证](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。
+- 您的本地机器上已安装 BendSQL。有关如何使用各种包管理器安装 BendSQL 的说明,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。

## 步骤 1:在 Docker 中启动 Databend

-1. 在本地启动 Databend 容器。以下命令以 `databend-doc` 作为存储 Bucket,并填写了 S3 Endpoint 与访问凭证:
+1. 在您的本地机器上启动一个 Databend 容器。以下命令以 S3 作为存储后端启动一个 Databend 容器,使用 `databend-doc` bucket 以及指定的 S3 endpoint 和身份验证凭证:

```bash
docker run \
@@ -29,7 +28,7 @@ docker run \
datafuselabs/databend:v1.2.699-nightly
```

-2. 创建名为 `population` 的表来保存城市、省份与人口数据,并插入示例记录:
+2. 创建一个名为 `population` 的表来存储城市、省份和人口数据,并插入如下示例数据:

```sql
CREATE TABLE population (
@@ -44,7 +43,7 @@ INSERT INTO population (city, province, population) VALUES
    ('Vancouver', 'British Columbia', 631486);
```

-3. 运行以下语句获取该表在 S3 上的位置。下列结果显示表的 S3 URI 为 `s3://databend-doc/1/16/`:
+3. 运行以下语句以检索表在 S3 中的位置。如下面的结果所示,本教程中该表的 S3 URI 为 `s3://databend-doc/1/16/`。

```sql
SELECT snapshot_location FROM FUSE_SNAPSHOT('default', 'population');
@@ -56,31 +55,33 @@ SELECT snapshot_location FROM FUSE_SNAPSHOT('default', 'population');
└──────────────────────────────────────────────────┘
```

-## 步骤 2:在 Databend Cloud 中创建附加表
+## 步骤 2:在 Databend Cloud 中设置 Attached Tables

-1. 使用 BendSQL 连接 Databend Cloud。如需了解 BendSQL 连接方法,可参考教程:[使用 BendSQL 连接 Databend Cloud](../getting-started/connect-to-databendcloud-bendsql.md)。
+1. 
使用 BendSQL 连接到 Databend Cloud。如果您不熟悉 BendSQL,请参阅本教程:[使用 BendSQL 连接到 Databend Cloud](../connect/connect-to-databendcloud-bendsql.md)。 -2. 执行以下语句创建两张附加表: - - `population_all_columns`:包含来源表的全部列。 - - `population_only`:仅包含 `city` 与 `population` 两列。 +2. 执行以下语句以创建两个 attached tables: + - 第一个表 `population_all_columns` 包含源数据中的所有列。 + - 第二个表 `population_only` 仅包含选定的列(`city` 和 `population`)。 ```sql --- 附加包含所有列的表 +-- 创建一个包含源数据中所有列的 attached table ATTACH TABLE population_all_columns 's3://databend-doc/1/16/' CONNECTION = ( - ACCESS_KEY_ID = '', - SECRET_ACCESS_KEY = '' + REGION='us-east-2', + AWS_KEY_ID = '', + AWS_SECRET_KEY = '' ); --- 附加只保留 city 与 population 的表 +-- 创建一个仅包含源数据中选定列(city 和 population)的 attached table ATTACH TABLE population_only (city, population) 's3://databend-doc/1/16/' CONNECTION = ( - ACCESS_KEY_ID = '', - SECRET_ACCESS_KEY = '' + REGION='us-east-2', + AWS_KEY_ID = '', + AWS_SECRET_KEY = '' ); ``` -## 步骤 3:验证附加表 +## 步骤 3:验证 Attached Tables -1. 查询两张附加表,确认数据一致: +1. 查询两个 attached tables 以验证其内容: ```sql SELECT * FROM population_all_columns; @@ -104,7 +105,7 @@ SELECT * FROM population_only; └────────────────────────────────────┘ ``` -2. 如果在自建 Databend 中更新原表(例如把 Toronto 的人口改为 2,371,571),附加表也会反映同样的变更: +2. 如果您更新 Databend 中的源表,您可以在 Databend Cloud 上的 attached table 中观察到相同的更改。例如,如果您将源表中 Toronto 的人口更改为 2,371,571: ```sql UPDATE population @@ -112,17 +113,17 @@ SET population = 2371571 WHERE city = 'Toronto'; ``` -随后再次查询即可看到变化: +执行更新后,您可以查询两个 attached tables 以验证是否反映了更改: ```sql --- 查询包含全部列的附加表 +-- 检查包含所有列的 attached table 中更新后的人口 SELECT population FROM population_all_columns WHERE city = 'Toronto'; --- 查询仅包含 population 列的附加表 +-- 检查仅包含人口列的 attached table 中更新后的人口 SELECT population FROM population_only WHERE city = 'Toronto'; ``` -预期输出: +上述两个查询的预期输出: ```sql ┌─────────────────┐ @@ -132,15 +133,15 @@ SELECT population FROM population_only WHERE city = 'Toronto'; └─────────────────┘ ``` -3. 如果在原表中删除 `province` 列,附加表中同样无法再查询该列: +3. 
如果您从源表中删除 `province` 列,则该列将不再在 attached table 中可用于查询。 ```sql ALTER TABLE population DROP province; ``` -之后任何引用 `province` 的查询都会报错,而其他列仍可正常使用。 +删除列后,任何引用它的查询都将导致错误。但是,仍然可以成功查询剩余的列。 -示例:查询已删除的列会失败: +例如,尝试查询删除的 `province` 列将失败: ```sql SELECT province FROM population_all_columns; @@ -151,7 +152,7 @@ error: APIError: QueryFailed: [1065]error: | ^^^^^^^^ column province doesn't exist ``` -但 `city`、`population` 仍可照常查询: +但是,您仍然可以检索 `city` 和 `population` 列: ```sql SELECT city, population FROM population_all_columns; @@ -163,4 +164,4 @@ SELECT city, population FROM population_all_columns; │ Montreal │ 1704694 │ │ Vancouver │ 631486 │ └────────────────────────────────────┘ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/develop/_category_.json b/docs/cn/tutorials/develop/_category_.json deleted file mode 100644 index 8badcdcc3b..0000000000 --- a/docs/cn/tutorials/develop/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "Databend 开发", - "position": 4 -} diff --git a/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-driver.md b/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-driver.md deleted file mode 100644 index f5c23bdcda..0000000000 --- a/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-driver.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -title: "Python:使用 databend-driver 连接 Databend Cloud" ---- - -本教程将演示如何使用 `databend-driver` 连接 Databend Cloud,并通过 Python 创建表、插入数据与查询结果。 - -## 开始之前 - -请确保已创建 Warehouse 并获取连接信息,参考 [连接计算集群](/guides/cloud/using-databend-cloud/warehouses#connecting)。 - -## 步骤 1:使用 pip 安装依赖 - -```shell -pip install databend-driver -``` - -## 步骤 2:用 databend-driver 建立连接 - -1. 
将以下代码保存为 `main.py`: - -```python -from databend_driver import BlockingDatabendClient - -# 使用你的凭证连接 Databend Cloud(替换 PASSWORD、HOST、DATABASE 与 WAREHOUSE_NAME) -client = BlockingDatabendClient(f"databend://cloudapp:{PASSWORD}@{HOST}:443/{DATABASE}?warehouse={WAREHOUSE_NAME}") - -# 获取 cursor 执行查询 -cursor = client.cursor() - -# 如果存在,则先删除表 -cursor.execute('DROP TABLE IF EXISTS data') - -# 创建表(若不存在) -cursor.execute('CREATE TABLE IF NOT EXISTS data (x Int32, y String)') - -# 插入数据 -cursor.execute("INSERT INTO data (x, y) VALUES (1, 'yy'), (2, 'xx')") - -# 查询全表 -cursor.execute('SELECT * FROM data') - -# 读取所有结果 -rows = cursor.fetchall() - -# 打印结果 -for row in rows: - print(row.values()) -``` - -2. 执行 `python main.py`: - -```bash -python main.py -(1, 'yy') -(2, 'xx') -``` diff --git a/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md b/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md deleted file mode 100644 index 900a3cc9c6..0000000000 --- a/docs/cn/tutorials/develop/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "Python:使用 SQLAlchemy 连接 Databend Cloud" ---- - -本教程将演示如何借助 `databend-sqlalchemy` 连接 Databend Cloud,并使用 Python 创建表、插入数据与查询结果。 - -## 开始之前 - -请确保已创建 Warehouse 并获取连接信息,参考 [连接计算集群](/guides/cloud/using-databend-cloud/warehouses#connecting)。 - -## 步骤 1:使用 pip 安装依赖 - -```shell -pip install databend-sqlalchemy -``` - -## 步骤 2:通过 databend_sqlalchemy 连接 - -1. 
将以下代码保存为 `main.py`: - -```python -from sqlalchemy import create_engine, text -from sqlalchemy.engine.base import Connection, Engine - -# 使用你的凭证连接 Databend Cloud(替换 PASSWORD、HOST、DATABASE 与 WAREHOUSE_NAME) -engine = create_engine( - f"databend://{username}:{password}@{host_port_name}/{database_name}?sslmode=disable" -) -cursor = engine.connect() -cursor.execute(text('DROP TABLE IF EXISTS data')) -cursor.execute(text('CREATE TABLE IF NOT EXISTS data( Col1 TINYINT, Col2 VARCHAR )')) -cursor.execute(text("INSERT INTO data VALUES (1,'zz')")) -res = cursor.execute(text("SELECT * FROM data")) -print(res.fetchall()) -``` - -2. 执行 `python main.py`: - -```bash -python main.py -[(1, 'zz')] -``` diff --git a/docs/cn/tutorials/getting-started/_category_.json b/docs/cn/tutorials/getting-started/_category_.json deleted file mode 100644 index 6bb350b618..0000000000 --- a/docs/cn/tutorials/getting-started/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "连接 Databend", - "position": 1 -} diff --git a/docs/cn/tutorials/getting-started/connect-to-databend-dbeaver.md b/docs/cn/tutorials/getting-started/connect-to-databend-dbeaver.md deleted file mode 100644 index 0b7ba63755..0000000000 --- a/docs/cn/tutorials/getting-started/connect-to-databend-dbeaver.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: "使用 DBeaver 连接(自建版)" -sidebar_label: "DBeaver(自建版)" ---- - -import StepsWrap from '@site/src/components/StepsWrap'; -import StepContent from '@site/src/components/Steps/step-content'; - -本教程将指导你如何使用 DBeaver 连接自建 Databend 实例。 - - - - -### 开始之前 - -- 请先在本地安装 [Docker](https://www.docker.com/),用于启动 Databend。 -- 请确保本地已安装 DBeaver 24.3.1 或更高版本。 - - - - -### 启动 Databend - -在终端运行以下命令启动 Databend: - -:::note -如果启动容器时未设置 `QUERY_DEFAULT_USER` 或 `QUERY_DEFAULT_PASSWORD`,系统会默认创建没有密码的 `root` 用户。 -::: - -```bash -docker run -d --name databend \ - -p 3307:3307 -p 8000:8000 -p 8124:8124 -p 8900:8900 \ - datafuselabs/databend:nightly -``` - - - - -### 建立连接 - -1. 
在 DBeaver 中依次点击 **Database** > **New Database Connection** 打开连接向导,在 **Analytical** 分类下选择 **Databend**。 - -![alt text](@site/static/img/connect/dbeaver-analytical.png) - -2. 将 **Username** 设置为 `root`。 - -![alt text](@site/static/img/connect/dbeaver-user-root.png) - -3. 点击 **Test Connection** 进行测试。如果是首次连接 Databend,DBeaver 会提示下载驱动,点击 **Download**。下载完成后连接测试应成功,如下图: - -![alt text](../../../../static/img/connect/dbeaver-success.png) - - - diff --git a/docs/cn/tutorials/getting-started/connect-to-databendcloud-bendsql.md b/docs/cn/tutorials/getting-started/connect-to-databendcloud-bendsql.md deleted file mode 100644 index 212e053d7a..0000000000 --- a/docs/cn/tutorials/getting-started/connect-to-databendcloud-bendsql.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: "使用 BendSQL 连接 Databend Cloud" -sidebar_label: "Databend Cloud + BendSQL" ---- - -import StepsWrap from '@site/src/components/StepsWrap'; -import StepContent from '@site/src/components/Steps/step-content'; - -本教程将指导你如何通过 BendSQL 连接 Databend Cloud。 - - - - -### 开始之前 - -- 请先安装 BendSQL,参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 -- 请确认你已拥有 Databend Cloud 账号并可成功登录。 - - - - - -### 获取连接信息 - -1. 登录 Databend Cloud,点击 **Connect**。 - -![Alt text](/img/connect/bendsql-4.gif) - -2. 选择要连接的数据库(如 "default"),再选择 Warehouse。如忘记密码可以直接在此重置。 - -3. 
在 **Examples** 区域可以看到当前 Warehouse 的 DSN 详情以及 BendSQL 连接示例。本教程只需复制 **BendSQL** 选项卡中的内容。 - -![Alt text](/img/connect/bendsql-5.png) - - - - -### 启动 BendSQL - -将刚复制的内容粘贴到终端中即可启动 BendSQL。如果复制出来的密码显示为 `***`,请记得替换为真实密码。 - -![Alt text](/img/connect/bendsql-6.png) - - - - - -### 执行查询 - -连接成功后即可在 BendSQL shell 中执行 SQL,例如输入 `SELECT NOW();` 查询当前时间。 - -![Alt text](/img/connect/bendsql-7.png) - - - - -### 退出 BendSQL - -输入 `quit` 即可退出 BendSQL。 - - - diff --git a/docs/cn/tutorials/getting-started/connect-to-databendcloud-dbeaver.md b/docs/cn/tutorials/getting-started/connect-to-databendcloud-dbeaver.md deleted file mode 100644 index c6d9cf0e4a..0000000000 --- a/docs/cn/tutorials/getting-started/connect-to-databendcloud-dbeaver.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: '使用 DBeaver 连接 Databend Cloud' -sidebar_label: 'Databend Cloud + DBeaver' ---- -import StepsWrap from '@site/src/components/StepsWrap'; -import StepContent from '@site/src/components/Steps/step-content'; - -本教程将指导你如何通过 DBeaver 连接 Databend Cloud。 - - - - -### 开始之前 - -- 请确保本地已安装 DBeaver 24.3.1 或更高版本。 - - - - -### 获取连接信息 - -在 DBeaver 建立连接前,需要先登录 Databend Cloud 获取连接详情。参见 [连接计算集群](/guides/cloud/using-databend-cloud/warehouses#connecting)。本教程示例使用如下信息: - -![alt text](@site/static/img/connect/dbeaver-connect-info.png) -> **注意**: -> 如果 `user` 或 `password` 中包含特殊字符,请在 DBeaver 的对应输入框(如 Username、Password)分别填写,Databend 会自动处理编码。如果你使用 `user:password` 这种组合形式,需要自行确保整段字符串已正确编码。 - - - - -### 建立连接 - -1. 在 DBeaver 中依次点击 **Database** > **New Database Connection** 打开连接向导,在 **Analytical** 分类下选择 **Databend**。 - -![alt text](@site/static/img/connect/dbeaver-analytical.png) - -2. 在 **Main** 页签中,根据上一节的连接信息填写 **Host**、**Port**、**Username** 与 **Password**。 - -![alt text](@site/static/img/connect/dbeaver-main-tab.png) - -3. 在 **Driver properties** 页签中,填写 **Warehouse** 名称。 - -![alt text](@site/static/img/connect/dbeaver-driver-properties.png) - -4. 在 **SSL** 页签中勾选 **Use SSL**。 - -![alt text](@site/static/img/connect/dbeaver-use-ssl.png) - -5. 
点击 **Test Connection** 验证连接。如果是首次连接 Databend,系统会提示下载驱动,点击 **Download**。下载完成后应出现成功提示: - -![alt text](@site/static/img/connect/dbeaver-cloud-success.png) - - - diff --git a/docs/cn/tutorials/index.md b/docs/cn/tutorials/index.md deleted file mode 100644 index 06f2211667..0000000000 --- a/docs/cn/tutorials/index.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: index -title: 教程 -slug: / -sidebar_label: 总览 -sidebar_position: 0 -description: 查找覆盖连接、摄取、迁移、开发与运维 Databend 的动手指南。 ---- - -挑选一个任务开始上手: - -## 连接 Databend -- [BendSQL(自建版)](/tutorials/getting-started/connect-to-databend-bendsql) -- [DBeaver(自建版)](/tutorials/getting-started/connect-to-databend-dbeaver) -- [BendSQL(Databend Cloud)](/tutorials/getting-started/connect-to-databendcloud-bendsql) -- [DBeaver(Databend Cloud)](/tutorials/getting-started/connect-to-databendcloud-dbeaver) - -## 数据摄取与流式写入 -- [借助 Bend Ingest 将 Kafka 写入 Databend](/tutorials/ingest-and-stream/kafka-bend-ingest-kafka) -- [Kafka Connect Sink](/tutorials/ingest-and-stream/kafka-databend-kafka-connect) -- [使用 Vector 将日志自动导入 Databend Cloud](/tutorials/ingest-and-stream/automating-json-log-loading-with-vector) -- [访问 MySQL/Redis 字典](/tutorials/ingest-and-stream/access-mysql-and-redis) -- [查询系统元数据](/tutorials/ingest-and-stream/query-metadata) - -## 迁移数据库 -- [如何选择 MySQL 迁移路径](/tutorials/migrate/) -- [借助 Debezium 的 MySQL CDC](/tutorials/migrate/migrating-from-mysql-with-debezium) -- [借助 Flink CDC 的 MySQL CDC](/tutorials/migrate/migrating-from-mysql-with-flink-cdc) -- [MySQL 批量:db-archiver / DataX / Addax](/tutorials/migrate/migrating-from-mysql-with-db-archiver) -- [Snowflake 迁移至 Databend](/tutorials/migrate/migrating-from-snowflake) - -## Databend 开发 -- [Python + Databend Cloud(databend-driver)](/tutorials/develop/python/integrating-with-databend-cloud-using-databend-driver) -- [Python + Databend Cloud(SQLAlchemy)](/tutorials/develop/python/integrating-with-databend-cloud-using-databend-sqlalchemy) -- [Python + 自建 
Databend](/tutorials/develop/python/integrating-with-self-hosted-databend) - -## 运维与恢复 -- [使用 BendSave 实现容灾恢复](/tutorials/operate-and-recover/bendsave) - -## 云上运维 -- [了解 AWS 账单](/tutorials/cloud-ops/aws-billing) -- [使用 Databend Cloud 仪表盘](/tutorials/cloud-ops/dashboard) -- [跨库关联表](/tutorials/cloud-ops/link-tables) diff --git a/docs/cn/tutorials/ingest-and-stream/_category_.json b/docs/cn/tutorials/ingest-and-stream/_category_.json deleted file mode 100644 index 0d6b75c1a7..0000000000 --- a/docs/cn/tutorials/ingest-and-stream/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "数据摄取与流式", - "position": 2 -} diff --git a/docs/cn/tutorials/ingest-and-stream/kafka-bend-ingest-kafka.md b/docs/cn/tutorials/ingest-and-stream/kafka-bend-ingest-kafka.md deleted file mode 100644 index 4d5a30130a..0000000000 --- a/docs/cn/tutorials/ingest-and-stream/kafka-bend-ingest-kafka.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: 使用 Bend Ingest 接入 Kafka ---- - -本教程将指导你通过 Docker 搭建 Kafka 环境,并使用 [bend-ingest-kafka](https://github.com/databendcloud/bend-ingest-kafka) 将 Kafka 消息加载到 Databend Cloud。 - -### 步骤 1:搭建 Kafka 环境 - -在 9092 端口运行 Apache Kafka Docker 容器: - -```shell -MacBook-Air:~ eric$ docker run -d \ -> --name kafka \ -> -p 9092:9092 \ -> apache/kafka:latest -Unable to find image 'apache/kafka:latest' locally -latest: Pulling from apache/kafka -... -``` - -### 步骤 2:创建 Topic 并生产消息 - -1. 进入 Kafka 容器: - -```shell -MacBook-Air:~ eric$ docker exec --workdir /opt/kafka/bin/ -it kafka sh -``` - -2. 创建名为 `test-topic` 的 Topic: - -```shell -/opt/kafka/bin $ ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic -Created topic test-topic. -``` - -3. 使用控制台 Producer 向 `test-topic` 推送消息: - -```shell -/opt/kafka/bin $ ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic -``` - -4. 输入 JSON 消息: - -```json -{"id": 1, "name": "Alice", "age": 30} -{"id": 2, "name": "Bob", "age": 25} -``` - -5. 
输入完成后按 Ctrl+C 停止 Producer。 - -### 步骤 3:在 Databend Cloud 中创建表 - -```sql -CREATE DATABASE doc; - -CREATE TABLE databend_topic ( - id INT NOT NULL, - name VARCHAR NOT NULL, - age INT NOT NULL - ) ENGINE=FUSE; -``` - -### 步骤 4:安装并运行 bend-ingest-kafka - -1. 安装 bend-ingest-kafka: - -```shell -go install github.com/databendcloud/bend-ingest-kafka@latest -``` - -2. 执行以下命令,将 `test-topic` 中的消息写入 Databend Cloud 目标表: - -```shell -MacBook-Air:~ eric$ bend-ingest-kafka \ -> --kafka-bootstrap-servers="localhost:9092" \ -> --kafka-topic="test-topic" \ -> --databend-dsn="" \ -> --databend-table="doc.databend_topic" \ -> --data-format="json" -INFO[0000] Starting worker worker-0 -... -``` - -3. 使用 BendSQL 连接 Databend Cloud 验证数据: - -```bash -cloudapp@(eric)/doc> SELECT * FROM databend_topic; - --[ RECORD 1 ]----------------------------------- - id: 1 -name: Alice - age: 30 --[ RECORD 2 ]----------------------------------- - id: 2 -name: Bob - age: 25 -``` - -4. 如需以 RAW 模式加载消息,请运行: - -```bash -bend-ingest-kafka \ - --kafka-bootstrap-servers="localhost:9092" \ - --kafka-topic="test-topic" \ - --databend-dsn="" \ - --is-json-transform=false -``` - -会在 `doc` 数据库生成新表 `test_ingest`,示例数据如下: - -```bash -cloudapp@(eric)/doc> SELECT * FROM test_ingest; - --[ RECORD 1 ]----------------------------------- - uuid: 17f9e56e-19ba-4d42-88a0-e16b27815d04 - koffset: 0 - kpartition: 0 - raw_data: {"age":30,"id":1,"name":"Alice"} -record_metadata: {"create_time":"2024-08-27T19:10:45.888Z",...} - add_time: 2024-08-27 19:12:55.081444 --[ RECORD 2 ]----------------------------------- - uuid: 0f57f71a-32ee-4df3-b75e-d123b9a91543 - koffset: 1 - kpartition: 0 - raw_data: {"age":25,"id":2,"name":"Bob"} -record_metadata: {"create_time":"2024-08-27T19:10:52.946Z",...} - add_time: 2024-08-27 19:12:55.081470 -``` diff --git a/docs/cn/tutorials/ingest-and-stream/kafka-databend-kafka-connect.md b/docs/cn/tutorials/ingest-and-stream/kafka-databend-kafka-connect.md deleted file mode 100644 index ce4f64b8f7..0000000000 
--- a/docs/cn/tutorials/ingest-and-stream/kafka-databend-kafka-connect.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: 使用 Kafka Connect 接入 Kafka ---- - -本教程将演示如何在 Confluent Cloud 中的 Kafka 与 Databend Cloud 之间搭建 Kafka Connect Sink 流水线,使用 [databend-kafka-connect](https://github.com/databendcloud/databend-kafka-connect) 插件生产消息并写入 Databend Cloud。 - -### 步骤 1:搭建 Kafka 环境 - -首先在 Confluent Cloud 中准备 Kafka 环境。 - -1. 注册并登录免费的 Confluent Cloud 账号。 -2. 参考 [Confluent Quick Start](https://docs.confluent.io/cloud/current/get-started/index.html#step-1-create-a-ak-cluster-in-ccloud) 在默认环境中创建基础 Kafka 集群。 -3. 按照 [Install Confluent CLI](https://docs.confluent.io/confluent-cli/current/install.html) 在本地安装 CLI,并登录: - -```shell -confluent login --save -``` - -4. 使用 CLI 创建 API Key,并将其设置为当前 Key: - -```shell -confluent kafka cluster list - -confluent api-key create --resource lkc-jr57j2 -... -confluent api-key use --resource lkc-jr57j2 -``` - -### 步骤 2:上传自定义 Connector 插件 - -本步骤将 databend-kafka-connect Sink 插件上传到 Confluent Cloud。 - -1. 在 [GitHub Releases](https://github.com/databendcloud/databend-kafka-connect/releases) 下载最新版 databend-kafka-connect。 -2. 在 Confluent Cloud 中依次点击 **Connectors** > **Add Connector** > **Add plugin**。 -3. 填写以下信息并上传插件包: - -| 参数 | 说明 | -|------|------| -| Connector plugin name | 例如 `databend_plugin` | -| Custom plugin description | 例如 `Kafka Connect sink connector for Databend` | -| Connector class | `com.databend.kafka.connect.DatabendSinkConnector` | -| Connector type | `Sink` | - -### 步骤 3:创建 Kafka Topic - -1. 在 Confluent Cloud 中点击 **Topics** > **Add topic**。 -2. 设置 Topic 名称(如 `databend_topic`),继续下一步。 -3. 选择 **Create a schema for message values**,点击 **Create Schema**。 - -![alt text](../../../../static/img/documents/tutorials/kafka-2.png) - -4. 
在 **Add new schema** 页面选择 **Avro** 标签页,并粘贴以下 Schema: - -```json -{ - "doc": "Sample schema to help you get started.", - "fields": [ - { - "doc": "The int type is a 32-bit signed integer.", - "name": "id", - "type": "int" - }, - { - "doc": "The string is a unicode character sequence.", - "name": "name", - "type": "string" - }, - { - "doc": "The string is a unicode character sequence.", - "name": "age", - "type": "int" - } - ], - "name": "sampleRecord", - "type": "record" -} -``` - -![alt text](../../../../static/img/documents/tutorials/kafka-1.png) - -### 步骤 4:添加 Connector - -1. 在 Confluent Cloud 中点击 **Connectors** > **Add Connector**,选择刚上传的插件。 - -![alt text](../../../../static/img/documents/tutorials/kafka-3.png) - -2. 在 **Kafka credentials** 步骤中选择 **Use an existing API key**,输入之前创建的 API key 与 secret。 - -![alt text](../../../../static/img/documents/tutorials/kafka-4.png) - -3. 在 **Configuration** 步骤中切换到 **JSON** 标签页,粘贴以下配置并替换占位符: - -```json -{ - "auto.create": "true", - "auto.evolve": "true", - "batch.size": "1", - "confluent.custom.schema.registry.auto": "true", - "connection.attempts": "3", - "connection.backoff.ms": "10000", - "connection.database": "", - "connection.password": "", - "connection.url": "jdbc:databend://", - "connection.user": "cloudapp", - "errors.tolerance": "none", - "insert.mode": "upsert", - "key.converter": "org.apache.kafka.connect.storage.StringConverter", - "max.retries": "10", - "pk.fields": "id", - "pk.mode": "record_value", - "table.name.format": ".${topic}", - "topics": "databend_topic", - "value.converter": "io.confluent.connect.avro.AvroConverter" -} -``` - -4. 在 **Networking** 步骤中填写 Databend Cloud Warehouse Endpoint,例如 `xxxxxxxxx--xxx.gw.aws-us-east-2.default.databend.com`。 -5. 在 **Sizing** 步骤中设为 **1 task**。 -6. 在 **Review and launch** 中为 Connector 命名,例如 `databend_connector`。 - -### 步骤 5:生产消息 - -1. 将用于 Topic 的 Schema 保存为本地 `schema.json` 文件。 - -```json -{ - "doc": "Sample schema to help you get started.", - ... -} -``` - -2. 
使用 Confluent CLI 执行 `confluent kafka topic produce `,向 Kafka Topic 发送消息: - -```shell -confluent kafka topic produce databend_topic --value-format avro --schema schema.json -Successfully registered schema with ID "100001". -Starting Kafka Producer. Use Ctrl-C or Ctrl-D to exit. - -{"id":1, "name":"Alice", "age":30} -{"id":2, "name":"Bob", "age":25} -{"id":3, "name":"Charlie", "age":35} -``` - -3. 在 Databend Cloud 中查看数据,确认写入成功: - -![alt text](../../../../static/img/documents/tutorials/kafka-5.png) diff --git a/docs/cn/tutorials/integrate/_category_.json b/docs/cn/tutorials/integrate/_category_.json new file mode 100644 index 0000000000..acff8e216f --- /dev/null +++ b/docs/cn/tutorials/integrate/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "数据集成" +} \ No newline at end of file diff --git a/docs/cn/tutorials/ingest-and-stream/access-mysql-and-redis.md b/docs/cn/tutorials/integrate/access-mysql-and-redis.md similarity index 52% rename from docs/cn/tutorials/ingest-and-stream/access-mysql-and-redis.md rename to docs/cn/tutorials/integrate/access-mysql-and-redis.md index b0851c13db..6c06cddf14 100644 --- a/docs/cn/tutorials/ingest-and-stream/access-mysql-and-redis.md +++ b/docs/cn/tutorials/integrate/access-mysql-and-redis.md @@ -1,24 +1,24 @@ --- -title: 使用 Dictionary 访问 MySQL 与 Redis +title: 访问 MySQL 和 Redis --- -本教程将演示如何在 Databend 中通过 Dictionary 访问 MySQL 与 Redis 数据。你将学习如何为外部数据源创建 Dictionary,并像查询本地表一样无缝读取这些数据。 +本教程介绍如何使用 Databend 字典访问 MySQL 和 Redis 数据,实现无缝的数据查询和集成。 ## 开始之前 -请在本地安装 [Docker](https://www.docker.com/),用于启动 Databend、MySQL 与 Redis 容器。同时需要一个连接 MySQL 的 SQL 客户端,推荐使用 [BendSQL](/guides/sql-clients/bendsql/) 连接 Databend。 +在开始之前,请确保您的本地机器上安装了 [Docker](https://www.docker.com/)。我们需要 Docker 来为 Databend、MySQL 和 Redis 设置必要的容器。您还需要一个 SQL 客户端来连接到 MySQL;我们建议使用 [BendSQL](/guides/sql-clients/bendsql/) 连接到 Databend。 -## 步骤 1:搭建环境 +## 步骤 1:设置环境 -本步骤会在本地通过 Docker 启动 Databend、MySQL 与 Redis。 +在这一步中,我们将在您的本地机器上使用 Docker 启动 Databend、MySQL 和 Redis 的实例。 -1. 
创建名为 `mynetwork` 的 Docker 网络,供各容器互通: +1. 创建一个名为 `mynetwork` 的 Docker 网络,以启用您的 Databend、MySQL 和 Redis 容器之间的通信: ```bash docker network create mynetwork ``` -2. 在该网络内启动名为 `mysql` 的 MySQL 容器: +2. 运行以下命令以在 `mynetwork` 网络中启动一个名为 `mysql` 的 MySQL 容器: ```bash docker run -d \ @@ -29,7 +29,7 @@ docker run -d \ mysql:latest ``` -3. 启动名为 `databend` 的 Databend 容器: +3. 运行以下命令以在 `mynetwork` 网络中启动一个名为 `databend` 的 Databend 容器: ```bash docker run -d \ @@ -42,7 +42,7 @@ docker run -d \ datafuselabs/databend:nightly ``` -4. 启动名为 `redis` 的 Redis 容器: +4. 运行以下命令以在 `mynetwork` 网络中启动一个名为 `redis` 的 Redis 容器: ```bash docker run -d \ @@ -52,43 +52,70 @@ docker run -d \ redis:latest ``` -5. 检查 `mynetwork`,确认三个容器都在同一网络: +5. 通过检查 `mynetwork` Docker 网络,验证 Databend、MySQL 和 Redis 容器是否连接到同一网络: ```bash docker network inspect mynetwork -``` - -输出示例: -```bash [ { "Name": "mynetwork", - ... - "Containers": { - "14d50cc4d075158a6d5fa4e6c8b7db60960f8ba1f64d6bceff0692c7e99f37b5": { - "Name": "redis", - ... - }, - "276bc1023f0ea999afc41e063f1f3fe7404cb6fbaaf421005d5c05be343ce5e5": { - "Name": "databend", - ... - }, - "95c21de94d27edc5e6fa8e335e0fd5bff12557fa30889786de9f483b8d111dbc": { - "Name": "mysql", - ... + "Id": "ba8984e9ca07f49dd6493fd7c8be9831bda91c44595fc54305fc6bc241a77485", + "Created": "2024-09-23T21:24:34.59324771Z", + "Scope": "local", + "Driver": "bridge", + "EnableIPv6": false, + "IPAM": { + "Driver": "default", + "Options": {}, + "Config": [ + { + "Subnet": "172.18.0.0/16", + "Gateway": "172.18.0.1" } + ] + }, + "Internal": false, + "Attachable": false, + "Ingress": false, + "ConfigFrom": { + "Network": "" + }, + "ConfigOnly": false, + "Containers": { + "14d50cc4d075158a6d5fa4e6c8b7db60960f8ba1f64d6bceff0692c7e99f37b5": { + "Name": "redis", + "EndpointID": "e1d1015fea745bbbb34c6a9fb11010b6960a139914b7cc2c6a20fbca4f3b77d8", + "MacAddress": "02:42:ac:12:00:04", + "IPv4Address": "172.18.0.4/16", + "IPv6Address": "" }, - ... 
+ "276bc1023f0ea999afc41e063f1f3fe7404cb6fbaaf421005d5c05be343ce5e5": { + "Name": "databend", + "EndpointID": "ac915b9df2fef69c5743bf16b8f07e0bb8c481ca7122b171d63fb9dc2239f873", + "MacAddress": "02:42:ac:12:00:03", + "IPv4Address": "172.18.0.3/16", + "IPv6Address": "" + }, + "95c21de94d27edc5e6fa8e335e0fd5bff12557fa30889786de9f483b8d111dbc": { + "Name": "mysql", + "EndpointID": "44fdf40de8c3d4c8fec39eb03ef1219c9cf1548e9320891694a9758dd0540ce3", + "MacAddress": "02:42:ac:12:00:02", + "IPv4Address": "172.18.0.2/16", + "IPv6Address": "" + } + }, + "Options": {}, + "Labels": {} } ] ``` -## 步骤 2:准备示例数据 +## 步骤 2:填充示例数据 -本步骤将在 Databend、MySQL 与 Redis 中写入示例数据。 +在这一步中,我们将向 MySQL 和 Redis 以及 Databend 添加示例数据。 -1. 在 Databend 中创建 `users_databend` 表并插入示例数据: +1. 在 Databend 中,创建一个名为 `users_databend` 的表,并插入示例用户数据: ```sql CREATE TABLE users_databend ( @@ -102,7 +129,7 @@ INSERT INTO users_databend (id, name) VALUES (3, 'Charlie'); ``` -2. 在 MySQL 中创建 `dict` 数据库与 `users` 表,并插入示例数据: +2. 在 MySQL 中,创建一个名为 `dict` 的数据库,创建一个 `users` 表,并插入示例数据: ```sql CREATE DATABASE dict; @@ -120,17 +147,17 @@ INSERT INTO users (name, email) VALUES ('Charlie', 'charlie@example.com'); ``` -3. 通过 Docker Desktop 或运行 `docker ps` 找到 Redis 容器 ID: +3. 在 Docker Desktop 上或通过在终端中运行 `docker ps` 找到您的 Redis 容器 ID: ![alt text](../../../../static/img/documents/tutorials/redis-container-id.png) -4. 使用容器 ID 进入 Redis CLI(将 `14d50cc4d075` 替换为实际 ID): +4. 使用您的 Redis 容器 ID 访问 Redis CLI(将 `14d50cc4d075` 替换为您实际的容器 ID): ```bash docker exec -it 14d50cc4d075 redis-cli ``` -5. 在 Redis CLI 中插入示例数据: +5. 通过在 Redis CLI 中运行以下命令,将示例用户数据插入到 Redis 中: ```bash SET user:1 '{"notifications": "enabled", "theme": "dark"}' @@ -138,11 +165,11 @@ SET user:2 '{"notifications": "disabled", "theme": "light"}' SET user:3 '{"notifications": "enabled", "theme": "dark"}' ``` -## 步骤 3:创建 Dictionary +## 步骤 3:创建字典 -本步骤将在 Databend 中为 MySQL 与 Redis 创建 Dictionary,并通过查询提取外部数据。 +在这一步中,我们将在 Databend 中为 MySQL 和 Redis 创建字典,然后从这些外部源查询数据。 -1. 
在 Databend 中创建名为 `mysql_users` 的 Dictionary 指向 MySQL:
+1. 在 Databend 中,创建一个名为 `mysql_users` 的字典,该字典连接到 MySQL 实例:

```sql
CREATE DICTIONARY mysql_users
@@ -162,7 +189,7 @@ SOURCE(MySQL(
));
```

-2. 创建名为 `redis_user_preferences` 的 Dictionary 指向 Redis:
+2. 在 Databend 中,创建一个名为 `redis_user_preferences` 的字典,该字典连接到 Redis 实例:

```sql
CREATE DICTIONARY redis_user_preferences
@@ -177,7 +204,7 @@ SOURCE(Redis(
));
```

-3. 查询两个 Dictionary:
+3. 查询我们之前创建的 MySQL 和 Redis 字典中的数据:

```sql
SELECT
@@ -189,7 +216,7 @@ FROM
    users_databend AS u;
```

-该查询会返回用户的 ID、姓名,同时通过 MySQL Dictionary 获取 email,通过 Redis Dictionary 获取偏好设置。
+上面的查询检索用户信息,包括来自 `users_databend` 表的 ID 和姓名,以及来自 MySQL 字典的电子邮件和来自 Redis 字典的用户偏好设置。

```sql title='Result:'
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ ... │
│ 2 │ Bob │ bob@example.com │ {"notifications": "disabled", "theme": "light"} │
│ 3 │ Charlie │ charlie@example.com │ {"notifications": "enabled", "theme": "dark"} │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
-```
+``` \ No newline at end of file
diff --git a/docs/cn/tutorials/load/_category_.json b/docs/cn/tutorials/load/_category_.json
new file mode 100644
index 0000000000..b630adfeb2
--- /dev/null
+++ b/docs/cn/tutorials/load/_category_.json
@@ -0,0 +1,3 @@
+{
+    "label": "数据导入"
+} \ No newline at end of file
diff --git a/docs/cn/tutorials/ingest-and-stream/automating-json-log-loading-with-vector.md b/docs/cn/tutorials/load/automating-json-log-loading-with-vector.md
similarity index 67%
rename from docs/cn/tutorials/ingest-and-stream/automating-json-log-loading-with-vector.md
rename to docs/cn/tutorials/load/automating-json-log-loading-with-vector.md
index 84e4bfb7b9..fb1b63f034 100644
--- 
a/docs/cn/tutorials/ingest-and-stream/automating-json-log-loading-with-vector.md +++ b/docs/cn/tutorials/load/automating-json-log-loading-with-vector.md @@ -1,42 +1,44 @@ --- -title: 使用 Vector 摄取 JSON 日志(Cloud) +title: 自动导入 JSON 日志 --- -本教程将模拟本地生成日志,借助 [Vector](https://vector.dev/) 收集后写入 S3,并通过定时任务在 Databend Cloud 中自动加载。 +在本教程中,我们将模拟在本地生成日志,使用 [Vector](https://vector.dev/) 收集日志,将其存储到 S3,并通过定时任务自动将其摄取到 Databend Cloud。 -![Automating JSON Log Loading with Vector](@site/static/img/documents/tutorials/vector-tutorial.png) +![使用 Vector 自动加载 JSON 日志](@site/static/img/documents/tutorials/vector-tutorial.png) -## 开始之前 +## 准备工作 -请准备以下资源: +开始前,请确保已准备好以下先决条件: -- **Amazon S3 Bucket**:用于存放 Vector 收集的日志。[了解如何创建 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。 -- **AWS 凭证**:具备目标 Bucket 访问权限的 AWS Access Key ID 与 Secret Access Key。[更多信息](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。 -- **AWS CLI**:已安装并配置好访问上述 Bucket 的权限。[下载 AWS CLI](https://aws.amazon.com/cli/)。 -- **Docker**:本地安装 [Docker](https://www.docker.com/),用于运行 Vector。 +- **Amazon S3 存储桶**:用于存放 Vector 收集的日志。 [了解如何创建 S3 存储桶](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。 +- **AWS 凭证**:具备访问 S3 存储桶权限的 AWS Access Key ID 和 Secret Access Key。 [管理 AWS 凭证](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。 +- **AWS CLI**:确保已安装 [AWS CLI](https://aws.amazon.com/cli/) 并配置好访问 S3 存储桶所需的权限。 +- **Docker**:确保本地已安装 [Docker](https://www.docker.com/),用于部署 Vector。 -## 步骤 1:在 S3 Bucket 中创建目标文件夹 +## 第一步:在 S3 存储桶中创建目标文件夹 -为了存放由 Vector 同步的日志,在 Bucket 内创建名为 `logs` 的文件夹。本教程使用 `s3://databend-doc/logs/`。 +为存放 Vector 收集的日志,请在 S3 存储桶中创建一个名为 logs 的文件夹。本教程使用 `s3://databend-doc/logs/` 作为目标路径。 + +以下命令在 databend-doc 存储桶中创建名为 logs 的空文件夹: ```bash aws s3api put-object --bucket databend-doc --key logs/ ``` -## 步骤 2:创建本地日志文件 +## 第二步:创建本地日志文件 -通过创建本地文件模拟日志生成。示例路径为 
`/Users/eric/Documents/logs/app.log`。 +通过创建本地日志文件来模拟日志生成。本教程使用 `/Users/eric/Documents/logs/app.log` 作为文件路径。 -添加以下 JSON 行表示示例事件: +将以下 JSON 行添加到文件中,作为示例日志事件: ```json title='app.log' {"user_id": 1, "event": "login", "timestamp": "2024-12-08T10:00:00Z"} {"user_id": 2, "event": "purchase", "timestamp": "2024-12-08T10:05:00Z"} ``` -## 步骤 3:配置并运行 Vector +## 第三步:配置并运行 Vector -1. 创建 Vector 配置文件 `vector.yaml`(示例路径 `/Users/eric/Documents/vector.yaml`): +1. 在本地创建名为 `vector.yaml` 的 Vector 配置文件。本教程将其放在 `/Users/eric/Documents/vector.yaml`,内容如下: ```yaml title='vector.yaml' sources: @@ -70,7 +72,7 @@ sinks: secret_access_key: "" ``` -2. 使用 Docker 启动 Vector,并挂载配置文件与日志目录: +2. 使用 Docker 启动 Vector,并映射配置文件和本地日志目录: ```bash docker run \ @@ -82,35 +84,35 @@ docker run \ timberio/vector:nightly-alpine ``` -3. 稍等片刻,并检查 `logs` 文件夹是否已有同步文件: +3. 稍等片刻,然后检查日志是否已同步到 S3 的 logs 文件夹: ```bash aws s3 ls s3://databend-doc/logs/ ``` -若同步成功,输出类似: +若日志文件已成功同步到 S3,将看到类似以下输出: ```bash 2024-12-10 15:22:13 0 2024-12-10 17:52:42 112 1733871161-7b89e50a-6eb4-4531-8479-dd46981e4674.log.gz ``` -可将文件下载到本地查看: +现在可从存储桶下载已同步的日志文件: ```bash aws s3 cp s3://databend-doc/logs/1733871161-7b89e50a-6eb4-4531-8479-dd46981e4674.log.gz ~/Documents/ ``` -与原始日志相比,同步后的日志是 NDJSON 格式,每条记录被包裹在 `log` 字段内: +与原始日志相比,同步后的日志为 NDJSON 格式,每条记录被包裹在外层 `log` 字段中: ```json {"log":{"event":"login","timestamp":"2024-12-08T10:00:00Z","user_id":1}} {"log":{"event":"purchase","timestamp":"2024-12-08T10:05:00Z","user_id":2}} ``` -## 步骤 4:在 Databend Cloud 创建 Task +## 第四步:在 Databend Cloud 中创建任务 -1. 打开 Worksheet,创建关联 `logs` 文件夹的 External Stage: +1. 打开工作表,创建一个指向存储桶中 logs 文件夹的外部 Stage: ```sql CREATE STAGE mylog 's3://databend-doc/logs/' CONNECTION=( @@ -119,7 +121,7 @@ CREATE STAGE mylog 's3://databend-doc/logs/' CONNECTION=( ); ``` -创建成功后可以列出 Stage 内的文件: +Stage 创建成功后,可列出其中的文件: ```sql LIST @mylog; @@ -141,7 +143,7 @@ CREATE TABLE logs ( ); ``` -3. 创建定时任务,从 Stage 加载日志到 `logs` 表: +3. 
创建定时任务,将日志从外部 Stage 加载到 logs 表: ```sql CREATE TASK IF NOT EXISTS myvectortask @@ -165,7 +167,7 @@ PURGE = TRUE; ALTER TASK myvectortask RESUME; ``` -稍等片刻并查询表,确认日志已写入: +稍等片刻,检查日志是否已加载到表中: ```sql SELECT * FROM logs; @@ -178,16 +180,16 @@ SELECT * FROM logs; └──────────────────────────────────────────────────────────┘ ``` -再次执行 `LIST @mylog;` 会发现 Stage 中已无文件,因为任务设置了 `PURGE = TRUE`,加载完成后会自动删除源文件。 +此时若运行 `LIST @mylog;`,将看不到任何文件。这是因为任务配置了 `PURGE = TRUE`,加载日志后会从 S3 删除已同步的文件。 -现在在本地 `app.log` 中追加两条日志: +现在,让我们在本地日志文件 `app.log` 中再模拟生成两条日志: ```bash echo '{"user_id": 3, "event": "logout", "timestamp": "2024-12-08T10:10:00Z"}' >> /Users/eric/Documents/logs/app.log echo '{"user_id": 4, "event": "login", "timestamp": "2024-12-08T10:15:00Z"}' >> /Users/eric/Documents/logs/app.log ``` -等待新文件同步至 S3 后,定时任务会自动加载这些记录。再次查询表即可看到新增日志: +稍等片刻,日志将同步到 S3(logs 文件夹中会出现新文件)。随后定时任务会把新日志加载到表中。再次查询表,即可看到这些日志: ```sql SELECT * FROM logs; @@ -200,4 +202,4 @@ SELECT * FROM logs; │ login │ 2024-12-08 10:00:00 │ 1 │ │ purchase │ 2024-12-08 10:05:00 │ 2 │ └──────────────────────────────────────────────────────────┘ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/load/kafka-bend-ingest-kafka.md b/docs/cn/tutorials/load/kafka-bend-ingest-kafka.md new file mode 100644 index 0000000000..077a87d823 --- /dev/null +++ b/docs/cn/tutorials/load/kafka-bend-ingest-kafka.md @@ -0,0 +1,151 @@ +--- +title: 从 Kafka 导入 (bend-ingest-kafka) +--- + +在本教程中,我们将指导您使用 Docker 设置 Kafka 环境,并使用 [bend-ingest-kafka](https://github.com/databendcloud/bend-ingest-kafka) 将消息从 Kafka 加载到 Databend Cloud。 + +### 步骤 1:设置 Kafka 环境 + +在端口 9092 上运行 Apache Kafka Docker 容器: + +```shell +MacBook-Air:~ eric$ docker run -d \ +> --name kafka \ +> -p 9092:9092 \ +> apache/kafka:latest +Unable to find image 'apache/kafka:latest' locally +latest: Pulling from apache/kafka +690e87867337: Pull complete +5dddb19fae62: Pull complete +86caa4220d9f: Pull complete +7802c028acb4: Pull complete +16a3d1421c02: Pull complete +ab648c7f18ee: Pull 
complete +a917a90b7df6: Pull complete +4e446fc89158: Pull complete +f800ce0fc22f: Pull complete +a2e5e46262c3: Pull complete +Digest: sha256:c89f315cff967322c5d2021434b32271393cb193aa7ec1d43e97341924e57069 +Status: Downloaded newer image for apache/kafka:latest +0261b8f3d5fde74f5f20340b58cb85d29d9b40ee4f48f1df2c41a68b616d22dc +``` + +### 步骤 2:创建 Topic 并生产消息 + +1. 访问 Kafka 容器: + +```shell +MacBook-Air:~ eric$ docker exec --workdir /opt/kafka/bin/ -it kafka sh +``` + +2. 创建一个名为 `test-topic` 的新 Kafka topic: + +```shell +/opt/kafka/bin $ ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic +Created topic test-topic. +``` + +3. 使用 Kafka 控制台生产者将消息发送到 test-topic: + +```shell +/opt/kafka/bin $ ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic +``` + +4. 输入 JSON 格式的消息: + +```json +{"id": 1, "name": "Alice", "age": 30} +{"id": 2, "name": "Bob", "age": 25} +``` + +5. 完成后,使用 Ctrl+C 停止生产者。 + +### 步骤 3:在 Databend Cloud 中创建表 + +在 Databend Cloud 中创建目标表: + +```sql +CREATE DATABASE doc; + +CREATE TABLE databend_topic ( + id INT NOT NULL, + name VARCHAR NOT NULL, + age INT NOT NULL + ) ENGINE=FUSE; +``` + +### 步骤 4:安装并运行 bend-ingest-kafka + +1. 运行以下命令安装 bend-ingest-kafka 工具: + +```shell +go install github.com/databendcloud/bend-ingest-kafka@latest +``` + +2. 运行以下命令,将来自 `test-topic` Kafka topic 的消息提取到 Databend Cloud 中的目标表: + +```shell +MacBook-Air:~ eric$ bend-ingest-kafka \ +> --kafka-bootstrap-servers="localhost:9092" \ +> --kafka-topic="test-topic" \ +> --databend-dsn="" \ +> --databend-table="doc.databend_topic" \ +> --data-format="json" +INFO[0000] Starting worker worker-0 +WARN[0072] Failed to read message from Kafka: context deadline exceeded kafka_batch_reader=ReadBatch +2024/08/20 15:10:15 ingest 2 rows (1.225576 rows/s), 75 bytes (45.959100 bytes/s) +``` + +3. 使用 BendSQL 连接到 Databend Cloud,并验证数据是否已成功加载: + +```bash +Welcome to BendSQL 0.19.2-1e338e1(2024-07-17T09:02:28.323121000Z). 
+Connecting to tn3ftqihs--eric.gw.aws-us-east-2.default.databend.com:443 with warehouse eric as user cloudapp +Connected to Databend Query v1.2.626-nightly-a055124b65(rust-1.81.0-nightly-2024-08-27T15:49:08.376336236Z) + +cloudapp@(eric)/doc> SELECT * FROM databend_topic; + +SELECT * FROM databend_topic + +-[ RECORD 1 ]----------------------------------- + id: 1 +name: Alice + age: 30 +-[ RECORD 2 ]----------------------------------- + id: 2 +name: Bob + age: 25 +``` + +4. 要以 RAW 模式加载消息,只需运行以下命令: + +```bash +bend-ingest-kafka \ + --kafka-bootstrap-servers="localhost:9092" \ + --kafka-topic="test-topic" \ + --databend-dsn="" \ + --is-json-transform=false +``` + +您将在 `doc` 数据库中获得一个新表,其中包含以下行: + +```bash +cloudapp@(eric)/doc> SELECT * FROM test_ingest; + +SELECT * FROM test_ingest + +-[ RECORD 1 ]----------------------------------- + uuid: 17f9e56e-19ba-4d42-88a0-e16b27815d04 + koffset: 0 + kpartition: 0 + raw_data: {"age":30,"id":1,"name":"Alice"} +record_metadata: {"create_time":"2024-08-27T19:10:45.888Z","key":"","offset":0,"partition":0,"topic":"test-topic"} + add_time: 2024-08-27 19:12:55.081444 +-[ RECORD 2 ]----------------------------------- + uuid: 0f57f71a-32ee-4df3-b75e-d123b9a91543 + koffset: 1 + kpartition: 0 + raw_data: {"age":25,"id":2,"name":"Bob"} +record_metadata: {"create_time":"2024-08-27T19:10:52.946Z","key":"","offset":1,"partition":0,"topic":"test-topic"} + add_time: 2024-08-27 19:12:55.081470 +``` diff --git a/docs/cn/tutorials/load/kafka-databend-kafka-connect.md b/docs/cn/tutorials/load/kafka-databend-kafka-connect.md new file mode 100644 index 0000000000..78f107addf --- /dev/null +++ b/docs/cn/tutorials/load/kafka-databend-kafka-connect.md @@ -0,0 +1,188 @@ +--- +title: 从 Kafka 导入 (Kafka Connect) +--- + +在本教程中,我们将使用 Kafka Connect sink connector 插件 [databend-kafka-connect](https://github.com/databendcloud/databend-kafka-connect) 建立 Confluent Cloud 中 Kafka 和 Databend Cloud 之间的连接。然后,我们将演示如何生成消息并将其加载到 Databend Cloud 中。 + +### Step 1: Setting up 
Kafka Environment + +在开始之前,请确保您的 Kafka 环境已在 Confluent Cloud 中正确设置。 + +1. 注册一个免费的 Confluent Cloud 帐户。注册并创建帐户后,[登录](https://confluent.cloud/login)到您的 Confluent Cloud 帐户。 + +2. 按照 [Confluent Quick Start](https://docs.confluent.io/cloud/current/get-started/index.html#step-1-create-a-ak-cluster-in-ccloud) 在您的默认环境中创建并启动一个基本的 Kafka 集群。 + +3. 按照 [Install Confluent CLI](https://docs.confluent.io/confluent-cli/current/install.html) 指南在您的本地机器上安装 Confluent CLI。安装完成后,登录到您的 Confluent Cloud 帐户以连接到 Confluent Cloud: + +```shell +confluent login --save +``` + +4. 使用 Confluent CLI 创建一个 API 密钥,并将其设置为活动的 API 密钥。 + +```shell +confluent kafka cluster list + + Current | ID | Name | Type | Cloud | Region | Availability | Network | Status +----------+------------+-----------+-------+-------+-----------+--------------+---------+--------- + * | lkc-jr57j2 | cluster_0 | BASIC | aws | us-east-2 | | | UP + +confluent api-key create --resource lkc-jr57j2 +It may take a couple of minutes for the API key to be ready. +Save the API key and secret. The secret is not retrievable later. ++------------+------------------------------------------------------------------+ +| API Key | | +| API Secret | | ++------------+------------------------------------------------------------------+ + +confluent api-key use --resource lkc-jr57j2 +``` + +### Step 2: Add Custom Connector Plugin + +在此步骤中,您将 Kafka Connect sink connector 插件 databend-kafka-connect 上传到 Confluent Cloud。 + +1. 从 [GitHub repository](https://github.com/databendcloud/databend-kafka-connect/releases) 下载最新版本的 databend-kafka-connect。 + +2. 在 Confluent Cloud 中,从导航菜单中,单击 **Connectors** > **Add Connector** > **Add plugin**。 + +3. 
填写插件详细信息,如下所示,然后上传 databend-kafka-connect 包。 + +| Parameter | Description | +| ------------------------- | ---------------------------------------------------------- | +| Connector plugin name | 设置一个名称,例如 `databend_plugin` | +| Custom plugin description | 描述插件,例如 `Kafka Connect sink connector for Databend` | +| Connector class | `com.databend.kafka.connect.DatabendSinkConnector` | +| Connector type | `Sink` | + +### Step 3: Create a Kafka Topic + +在此步骤中,您将在 Confluent Cloud 中创建一个 Kafka topic。 + +1. 在 Confluent Cloud 中,从导航菜单中,单击 **Topics** > **Add topic**。 + +2. 设置 topic 名称,例如 `databend_topic`,然后继续下一步。 + +3. 选择 **Create a schema for message values**,然后单击 **Create Schema**。 + +![alt text](../../../../static/img/documents/tutorials/kafka-2.png) + +4. 在 **Add new schema** 页面上,选择 **Avro** 选项卡,然后复制以下 schema 并将其粘贴到编辑器中: + +```json +{ + "doc": "Sample schema to help you get started.", + "fields": [ + { + "doc": "The int type is a 32-bit signed integer.", + "name": "id", + "type": "int" + }, + { + "doc": "The string is a unicode character sequence.", + "name": "name", + "type": "string" + }, + { + "doc": "The string is a unicode character sequence.", + "name": "age", + "type": "int" + } + ], + "name": "sampleRecord", + "type": "record" +} +``` + +![alt text](../../../../static/img/documents/tutorials/kafka-1.png) + +### Step 4: Add a Connector + +在此步骤中,您将设置一个连接到 Databend Cloud 的 connector。 + +1. 在 Confluent Cloud 中,从导航菜单中,单击 **Connectors** > **Add Connector**。搜索然后选择您上传的插件。 + +![alt text](../../../../static/img/documents/tutorials/kafka-3.png) + +2. 在 **Kafka credentials** 步骤中,选择 **Use an existing API key**,然后输入您使用 Confluent CLI 创建的 API 密钥和 secret。 + +![alt text](../../../../static/img/documents/tutorials/kafka-4.png) + +3. 
在 **Configuration** 步骤中,选择 **JSON** 选项卡,然后复制以下配置并将其粘贴到编辑器中,将占位符替换为您的实际值: + +```json +{ + "auto.create": "true", + "auto.evolve": "true", + "batch.size": "1", + "confluent.custom.schema.registry.auto": "true", + "connection.attempts": "3", + "connection.backoff.ms": "10000", + "connection.database": "", + "connection.password": "", + "connection.url": "jdbc:databend://", + "connection.user": "cloudapp", + "errors.tolerance": "none", + "insert.mode": "upsert", + "key.converter": "org.apache.kafka.connect.storage.StringConverter", + "max.retries": "10", + "pk.fields": "id", + "pk.mode": "record_value", + "table.name.format": ".${topic}", + "topics": "databend_topic", + "value.converter": "io.confluent.connect.avro.AvroConverter" +} +``` + +4. 在 **Networking** 步骤中,输入您的 Databend Cloud 计算集群 endpoint,例如 `xxxxxxxxx--xxx.gw.aws-us-east-2.default.databend.com`。 + +5. 在 **Sizing** 步骤中,将其设置为 **1 task**。 + +6. 在 **Review and launch** 步骤中,设置一个名称,例如 `databend_connector`。 + +### Step 5: Produce Messages + +在此步骤中,您将使用 Confluent CLI 生成消息,并验证它们是否已加载到 Databend Cloud 中。 + +1. 在您的本地机器上,将用于创建 topic 的 schema 保存为 JSON 文件,例如 `schema.json`。 + +```json +{ + "doc": "Sample schema to help you get started.", + "fields": [ + { + "doc": "The int type is a 32-bit signed integer.", + "name": "id", + "type": "int" + }, + { + "doc": "The string is a unicode character sequence.", + "name": "name", + "type": "string" + }, + { + "doc": "The string is a unicode character sequence.", + "name": "age", + "type": "int" + } + ], + "name": "sampleRecord", + "type": "record" +} +``` + +2. 在 Confluent CLI 中,使用 `confluent kafka topic produce ` 命令启动 Kafka producer,以将消息发送到您的 Kafka topic。 + +```shell +confluent kafka topic produce databend_topic --value-format avro --schema schema.json +Successfully registered schema with ID "100001". +Starting Kafka Producer. Use Ctrl-C or Ctrl-D to exit. + +{"id":1, "name":"Alice", "age":30} +{"id":2, "name":"Bob", "age":25} +{"id":3, "name":"Charlie", "age":35} +``` + +3. 
在 Databend Cloud 中,验证数据是否已成功加载: + +![alt text](../../../../static/img/documents/tutorials/kafka-5.png) diff --git a/docs/cn/tutorials/ingest-and-stream/query-metadata.md b/docs/cn/tutorials/load/query-metadata.md similarity index 85% rename from docs/cn/tutorials/ingest-and-stream/query-metadata.md rename to docs/cn/tutorials/load/query-metadata.md index f91510e998..06b53444b6 100644 --- a/docs/cn/tutorials/ingest-and-stream/query-metadata.md +++ b/docs/cn/tutorials/load/query-metadata.md @@ -1,25 +1,25 @@ --- -title: 检查 Databend 元数据 +title: 查询元数据 --- -本教程将演示如何把示例 Parquet 文件上传到 Internal Stage、推断其列定义,并创建带有文件级元数据字段的表,以便追踪每行数据来自哪个文件、对应的行号等。 +在本教程中,我们将引导您完成以下步骤:将示例 Parquet 文件上传到内部 Stage,推断列定义,并创建一个包含文件级别元数据字段的表。当您想要跟踪每一行的来源或在数据集中包含文件名和行号等元数据时,这将非常有用。 ### 开始之前 -请先完成以下准备: +在开始之前,请确保您已准备好以下先决条件: -- [下载示例数据集](https://datasets.databend.com/iris.parquet) 并保存到本地。 -- 在本地安装 BendSQL。参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 +- [下载示例数据集](https://datasets.databend.com/iris.parquet) 并将其保存到您的本地文件夹。 +- BendSQL 已安装在您的本地机器上。有关如何使用各种包管理器安装 BendSQL 的说明,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 -### 步骤 1:创建 Internal Stage +### 步骤 1:创建一个内部 Stage ```sql CREATE STAGE my_internal_stage; ``` -### 步骤 2:通过 BendSQL 上传文件 +### 步骤 2:使用 BendSQL 上传示例文件 -假设示例文件位于 `/Users/eric/Documents/iris.parquet`,可在 BendSQL 中运行: +假设您的示例数据集位于 `/Users/eric/Documents/iris.parquet`,请在 BendSQL 中运行以下命令将其上传到 Stage: ```sql PUT fs:///Users/eric/Documents/iris.parquet @my_internal_stage; @@ -33,10 +33,11 @@ PUT fs:///Users/eric/Documents/iris.parquet @my_internal_stage; └───────────────────────────────────────────────────────┘ ``` -### 步骤 3:从 Stage 文件推断列定义 - +### 步骤 3:从暂存文件中查询列定义 :::caution -`infer_schema` 目前仅支持 Parquet 文件。 + +`infer_schema` 目前仅支持 parquet 文件格式。 + ::: ```sql @@ -56,9 +57,9 @@ SELECT * FROM INFER_SCHEMA(location => '@my_internal_stage/iris.parquet'); └──────────────────────────────────────────────┘ ``` -### 步骤 4:带元数据字段的预览 +### 步骤 4:使用元数据字段预览文件内容 -可以使用 
`metadata$filename`、`metadata$file_row_number` 等字段查看文件级信息: +您可以使用 `metadata$filename` 和 `metadata$file_row_number` 等元数据字段来检查文件级别的信息: ```sql SELECT @@ -81,7 +82,9 @@ LIMIT 5; └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ ``` -### 步骤 5:创建包含元数据字段的表 +### 步骤 5:创建一个包含元数据字段的表 + +让我们创建一个表,其中包含推断的列以及文件名和行号等元数据字段: ```sql CREATE TABLE iris_with_meta AS @@ -96,7 +99,7 @@ SELECT FROM @my_internal_stage/iris.parquet; ``` -### 步骤 6:查询带元数据的数据 +### 步骤 6:查询带有元数据的数据 ```sql SELECT * FROM iris_with_meta LIMIT 5; @@ -112,4 +115,4 @@ SELECT * FROM iris_with_meta LIMIT 5; │ iris.parquet │ 3 │ 4.6 │ 3.1 │ 1.5 │ 0.2 │ setosa │ │ iris.parquet │ 4 │ 5 │ 3.6 │ 1.4 │ 0.2 │ setosa │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/_category_.json b/docs/cn/tutorials/migrate/_category_.json index fe39b1c626..b31a3fc068 100644 --- a/docs/cn/tutorials/migrate/_category_.json +++ b/docs/cn/tutorials/migrate/_category_.json @@ -1,4 +1,3 @@ { - "label": "数据库迁移", - "position": 3 -} + "label": "数据迁移" +} \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/index.md b/docs/cn/tutorials/migrate/index.md index 07e13d10df..a54b2111fe 100644 --- a/docs/cn/tutorials/migrate/index.md +++ b/docs/cn/tutorials/migrate/index.md @@ -1,62 +1,62 @@ --- -title: 规划向 Databend 的迁移 +title: 数据迁移到 Databend --- # 数据迁移到 Databend -请选择源数据库与迁移需求,找到最适合的 Databend 迁移方案。 +选择您的源数据库和迁移需求,找到最适合迁移到 Databend 的方法。 -## MySQL → Databend +## MySQL 到 Databend -Databend 支持两类主要迁移方式: +Databend 支持从 MySQL 迁移的两种主要方法: -| 迁移方式 | 推荐工具 | 支持的 MySQL 版本 | -|----------|----------|-------------------| -| 批量加载 | db-archiver | 所有 MySQL 版本 | -| 以 CDC 实时同步 | Debezium | 所有 MySQL 版本 | +| 迁移方法 | 推荐工具 | 支持的 MySQL 版本 | 
+|--------------------------|------------------------------|--------------------------| +| 批量加载 | db-archiver | 所有 MySQL 版本 | +| CDC 持续同步 | Debezium | 所有 MySQL 版本 | -### 何时选择实时迁移(CDC) +### 何时选择实时迁移 (CDC) -> **推荐**:实时迁移优先选择 **Debezium**。 +> **推荐**:对于实时迁移,我们推荐 **Debezium** 作为默认选择。 -- 需要持续同步,延迟尽量低 -- 需要捕获所有数据变更(插入、更新、删除) +- 您需要最小延迟的持续数据同步 +- 您需要捕获所有数据变更 (插入、更新、删除) -| 工具 | 能力 | 最适合场景 | 适用情形 | -|------|------|------------|----------| -| [Debezium](/tutorials/migrate/migrating-from-mysql-with-debezium) | CDC、全量 | 以极低延迟捕获行级变更 | 需要完整的 INSERT/UPDATE/DELETE CDC;希望基于 binlog 的复制以降低源库压力 | -| [Flink CDC](/tutorials/migrate/migrating-from-mysql-with-flink-cdc) | CDC、全量、转换 | 复杂 ETL + 实时转换 | 迁移过程中需要过滤/转换;需要可扩展的计算框架;希望使用 SQL 完成转换 | -| [Kafka Connect](/tutorials/migrate/migrating-from-mysql-with-kafka-connect) | CDC、增量、全量 | 已有 Kafka 基础设施 | 已经使用 Kafka;需要简单配置;可以依赖时间戳或自增字段做增量同步 | +| 工具 | 功能 | 最适合 | 选择条件 | +|------|------------|----------|-------------| +| [Debezium](/tutorials/migrate/migrating-from-mysql-with-debezium) | CDC、全量加载 | 以最小延迟捕获行级变更 | 您需要完整的 CDC 以及所有 DML 操作 (INSERT/UPDATE/DELETE);您希望基于 binlog 的复制对源数据库影响最小 | +| [Flink CDC](/tutorials/migrate/migrating-from-mysql-with-flink-cdc) | CDC、全量加载、转换 | 具有实时转换的复杂 ETL | 您需要在迁移过程中过滤或转换数据;您需要可扩展的处理框架;您希望基于 SQL 的转换功能 | +| [Kafka Connect](/tutorials/migrate/migrating-from-mysql-with-kafka-connect) | CDC、增量、全量加载 | 现有的 Kafka 基础设施 | 您已经在使用 Kafka;您需要简单的配置;您可以使用时间戳或自增列进行增量同步 | ### 何时选择批量迁移 -> **推荐**:批量迁移优先选择 **db-archiver**。 +> **推荐**:对于批量迁移,我们推荐 **db-archiver** 作为默认选择。 -- 需要一次性或定期批量迁移 -- 需要迁移大量历史数据 -- 对实时性没有要求 +- 您需要一次性或定时数据传输 +- 您有大量历史数据需要迁移 +- 您不需要实时同步 -| 工具 | 能力 | 最适合场景 | 适用情形 | -|------|------|------------|----------| -| [db-archiver](/tutorials/migrate/migrating-from-mysql-with-db-archiver) | 全量、增量 | 高效归档历史数据 | 数据按时间分区;需要归档历史;希望轻量化工具 | -| [DataX](/tutorials/migrate/migrating-from-mysql-with-datax) | 全量、增量 | 大规模数据高速迁移 | 需要高吞吐;希望并行处理;需要成熟广泛使用的工具 | -| [Addax](/tutorials/migrate/migrating-from-mysql-with-addax) | 全量、增量 | DataX 增强版,更高性能 | 相比 
DataX 需要更好的错误处理;想要监控增强;希望使用更新的功能 | +| 工具 | 功能 | 最适合 | 选择条件 | +|------|------------|----------|-------------| +| [db-archiver](/tutorials/migrate/migrating-from-mysql-with-db-archiver) | 全量加载、增量 | 高效的历史数据归档 | 您有按时间分区的数据;您需要归档历史数据;您希望使用轻量级、专注的工具 | +| [DataX](/tutorials/migrate/migrating-from-mysql-with-datax) | 全量加载、增量 | 大数据集的高性能传输 | 您需要大数据集的高吞吐量;您希望并行处理能力;您需要成熟、广泛使用的工具 | +| [Addax](/tutorials/migrate/migrating-from-mysql-with-addax) | 全量加载、增量 | 性能更好的增强版 DataX | 您需要比 DataX 更好的错误处理;您希望改进的监控功能;您需要更新的特性和功能 | -## Snowflake → Databend +## Snowflake 到 Databend -Snowflake 迁移 Databend 需要三步: +从 Snowflake 迁移到 Databend 包含三个步骤: -1. **为 Amazon S3 配置 Snowflake Storage Integration**:建立 Snowflake 与 S3 的安全访问 -2. **准备并导出数据到 S3**:将 Snowflake 数据导出为 Parquet -3. **加载数据到 Databend**:从 S3 导入 Databend +1. **为 Amazon S3 配置 Snowflake Storage Integration**:在 Snowflake 和 S3 之间建立安全访问 +2. **准备并导出数据到 Amazon S3**:将您的 Snowflake 数据以 Parquet 格式导出到 S3 +3. **将数据加载到 Databend**:从 S3 将数据导入到 Databend ### 何时选择 Snowflake 迁移 -| 工具 | 能力 | 最适合场景 | 适用情形 | -|------|------|------------|----------| -| [Snowflake 迁移](/tutorials/migrate/migrating-from-snowflake) | 全量 | 整体数据仓库迁移 | 需要迁出整个 Snowflake 仓库;希望通过 Parquet 高效传输;需要保持两边的 schema 兼容 | +| 工具 | 功能 | 最适合 | 选择条件 | +|------|------------|----------|-------------| +| [Snowflake 迁移](/tutorials/migrate/migrating-from-snowflake) | 全量加载 | 完整的数仓转换 | 您需要迁移整个 Snowflake 数仓;您希望使用 Parquet 格式进行高效数据传输;您需要在系统间保持 schema 兼容性 | ## 相关主题 - [加载数据](/guides/load-data/) -- [导出数据](/guides/unload-data/) +- [卸载数据](/guides/unload-data/) \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-addax.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-addax.md index 961db637be..f61ebdde04 100644 --- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-addax.md +++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-addax.md @@ -1,15 +1,15 @@ --- -title: 使用 Addax 迁移 MySQL(批量) -sidebar_label: 'MySQL → Databend:Addax(批量)' +title: 使用 Addax 迁移 MySQL 
+sidebar_label: 'Addax' --- -> **能力**:全量、增量 +> **功能**: 全量导入, 增量导入 -本教程演示如何使用 Addax 将 MySQL 数据加载到 Databend。请提前部署 Databend、MySQL 与 Addax。 +在本教程中,您将使用 Addax 将数据从 MySQL 加载到 Databend。在开始之前,请确保您已在环境中成功设置 Databend、MySQL 和 Addax。 -1. 在 MySQL 中创建数据迁移账号及示例数据。 +1. 在 MySQL 中,创建一个 SQL 用户,您将使用该用户进行数据加载,然后创建一个表并使用示例数据填充它。 -```sql title='MySQL' +```sql title='In MySQL:' mysql> create user 'mysqlu1'@'%' identified by '123'; mysql> grant all on *.* to 'mysqlu1'@'%'; mysql> create database db; @@ -17,17 +17,17 @@ mysql> create table db.tb01(id int, col1 varchar(10)); mysql> insert into db.tb01 values(1, 'test1'), (2, 'test2'), (3, 'test3'); ``` -2. 在 Databend 中创建目标表。 +2. 在 Databend 中,创建相应的目标表。 -```sql title='Databend' +```sql title='In Databend:' databend> create database migrated_db; databend> create table migrated_db.tb01(id int null, col1 String null); ``` -3. 将以下内容保存为 _mysql_demo.json_: +3. 将以下代码复制并粘贴到文件中,并将该文件命名为 _mysql_demo.json_: :::note -参数说明参见 https://wgzhao.github.io/Addax/develop/writer/databendwriter/#_2 +有关可用参数及其说明,请参阅以下链接提供的文档:https://wgzhao.github.io/Addax/develop/writer/databendwriter/#_2 ::: ```json title='mysql_demo.json' @@ -76,14 +76,14 @@ databend> create table migrated_db.tb01(id int null, col1 String null); } ``` -4. 运行 Addax: +4. 
运行 Addax: ```shell cd {YOUR_ADDAX_DIR_BIN} ./addax.sh -L debug ./mysql_demo.json ``` -完成后即可在 Databend 验证: +一切就绪!要验证数据加载,请在 Databend 中执行查询: ```sql databend> select * from migrated_db.tb01; @@ -94,4 +94,4 @@ databend> select * from migrated_db.tb01; | 2 | test2 | | 3 | test3 | +------+-------+ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-datax.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-datax.md index 113a7fa5ec..7c2ce8e2b5 100644 --- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-datax.md +++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-datax.md @@ -1,15 +1,15 @@ --- -title: 使用 DataX 迁移 MySQL(批量) -sidebar_label: 'MySQL → Databend:DataX(批量)' +title: 使用 DataX 迁移 MySQL +sidebar_label: 'DataX' --- -> **能力**:全量、增量 +> **功能**: 全量导入, 增量导入 -本教程演示如何使用 DataX 将 MySQL 数据加载到 Databend。请提前在环境中部署好 Databend、MySQL 与 DataX。 +在本教程中,您将使用 DataX 将数据从 MySQL 加载到 Databend。在开始之前,请确保您已在您的环境中成功设置了 Databend、MySQL 和 DataX。 -1. 在 MySQL 中创建用于数据迁移的 SQL 用户,并准备示例表及数据。 +1. 在 MySQL 中,创建一个 SQL 用户,您将使用该用户进行数据加载,然后创建一个表并使用示例数据填充它。 -```sql title='MySQL' +```sql title='在 MySQL 中:' mysql> create user 'mysqlu1'@'%' identified by 'databend'; mysql> grant all on *.* to 'mysqlu1'@'%'; mysql> create database db; @@ -17,18 +17,18 @@ mysql> create table db.tb01(id int, d double, t TIMESTAMP, col1 varchar(10)); mysql> insert into db.tb01 values(1, 3.1,now(), 'test1'), (1, 4.1,now(), 'test2'), (1, 4.1,now(), 'test2'); ``` -2. 在 Databend 中创建对应目标表。 +2. 
在 Databend 中,创建一个对应的目标表。 :::note -DataX 会自动将数据类型映射到 Databend 类型,详情见 https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#33-type-convert +DataX 数据类型在加载到 Databend 时可以转换为 Databend 的数据类型。有关 DataX 数据类型与 Databend 数据类型之间的具体对应关系,请参阅以下链接中提供的文档:https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#33-type-convert ::: -```sql title='Databend' +```sql title='在 Databend 中:' databend> create database migrated_db; databend> create table migrated_db.tb01(id int null, d double null, t TIMESTAMP null, col1 varchar(10) null); ``` -3. 将以下内容保存为 *mysql_demo.json*。更多参数请参考 https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#32-configuration-description +3. 将以下代码复制并粘贴到一个文件中,并将该文件命名为 *mysql_demo.json*。有关可用参数及其描述,请参阅以下链接中提供的文档:https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#32-configuration-description ```json title='mysql_demo.json' { @@ -90,7 +90,7 @@ databend> create table migrated_db.tb01(id int null, d double null, t TIMESTAMP ``` :::tip -上述配置默认以 INSERT 模式写入 Databend。若需启用 REPLACE 模式,请添加 `writeMode` 与 `onConflictColumn`,如: +上面提供的代码配置 DatabendWriter 在 INSERT 模式下运行。要切换到 REPLACE 模式,您必须包含 writeMode 和 onConflictColumn 参数。例如: ```json title='mysql_demo.json' ... @@ -103,14 +103,14 @@ databend> create table migrated_db.tb01(id int null, d double null, t TIMESTAMP ``` ::: -4. 运行 DataX: +4. 
运行 DataX: ```shell cd {YOUR_DATAX_DIR_BIN} python datax.py ./mysql_demo.json ``` -完成后即可在 Databend 中验证: +一切就绪!要验证数据加载,请在 Databend 中执行查询: ```sql databend> select * from migrated_db.tb01; @@ -121,4 +121,4 @@ databend> select * from migrated_db.tb01; | 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 | | 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 | +------+------+----------------------------+-------+ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-db-archiver.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-db-archiver.md index f2d972c602..aa4f0a4bd6 100644 --- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-db-archiver.md +++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-db-archiver.md @@ -1,24 +1,24 @@ --- -title: 使用 db-archiver 迁移 MySQL(批量) -sidebar_label: 'MySQL → Databend:db-archiver(批量)' +title: 使用 db-archiver 迁移 MySQL +sidebar_label: 'db-archiver' --- -> **能力**:全量、增量 -> **✅ 推荐**:批量迁移历史数据 +> **功能**:全量导入 (Full Load)、增量导入 (Incremental) +> **✅ 推荐**用于历史数据的批量迁移 -本教程将演示如何通过 db-archiver 将 MySQL 迁移到 Databend Cloud。 +在本教程中,我们将引导你完成使用 db-archiver 从 MySQL 迁移到 Databend Cloud 的过程。 -## 开始之前 +## 准备工作 -请准备以下环境: +在开始之前,请确保你已满足以下先决条件: -- 本地安装 [Docker](https://www.docker.com/),用于启动 MySQL。 -- 本地安装 [Go](https://go.dev/dl/),用于安装 db-archiver。 -- 本地安装 BendSQL,参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 +- [Docker](https://www.docker.com/) 已安装在本地计算机,将用于启动 MySQL。 +- [Go](https://go.dev/dl/) 已安装在本地计算机,安装 db-archiver 需要它。 +- BendSQL 已安装在本地计算机。关于如何使用各种包管理器安装 BendSQL,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 -## 步骤 1:在 Docker 中启动 MySQL +## 第 1 步:在 Docker 中启动 MySQL -1. 运行以下命令启动名为 **mysql-server** 的 MySQL 容器,创建 `mydb` 数据库,root 密码为 `root`: +1. 在本地计算机启动 MySQL 容器。以下命令启动名为 **mysql-server** 的容器,创建 **mydb** 数据库,并将 root 密码设为 `root`: ```bash docker run \ @@ -29,34 +29,50 @@ docker run \ -d mysql:8 ``` -2. 验证容器运行状态: +2. 
验证 MySQL 运行状态: ```bash docker ps ``` -输出中应包含 **mysql-server**: +检查输出中名为 **mysql-server** 的容器: -```bash -CONTAINER ID IMAGE ... NAMES -1a8f8d7d0e1a mysql:8 ... mysql-server +```bash +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +1a8f8d7d0e1a mysql:8 "docker-entrypoint.s…" 10 hours ago Up About an hour 0.0.0.0:3306->3306/tcp, 33060/tcp mysql-server ``` -## 步骤 2:写入示例数据 +## 第 2 步:向 MySQL 填充示例数据 -1. 登录 MySQL,密码为 `root`: +1. 登录 MySQL 容器,提示时输入密码 `root`: ```bash docker exec -it mysql-server mysql -u root -p ``` -2. 切换到 `mydb`: +``` +Enter password: +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 8 +Server version: 8.4.4 MySQL Community Server - GPL + +Copyright (c) 2000, 2025, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. +``` + +2. 切换到 **mydb** 数据库: ```bash mysql> USE mydb; +Database changed ``` -3. 创建表 `my_table` 并插入数据: +3. 复制粘贴以下 SQL,创建 **my_table** 表并插入数据: ```sql CREATE TABLE my_table ( @@ -70,7 +86,7 @@ INSERT INTO my_table (name, value) VALUES ('Charlie', 30); ``` -4. 查询确认: +4. 验证数据: ```bash mysql> SELECT * FROM my_table; @@ -81,14 +97,20 @@ mysql> SELECT * FROM my_table; | 2 | Bob | 20 | | 3 | Charlie | 30 | +----+---------+-------+ +3 rows in set (0.00 sec) ``` -5. 输入 `quit` 退出。 +5. 退出 MySQL 容器: -## 步骤 3:在 Databend Cloud 创建目标表 +```bash +mysql> quit +Bye +``` + +## 第 3 步:在 Databend Cloud 设置目标 -1. 使用 BendSQL 连接 Databend Cloud,参考 [使用 BendSQL 连接 Databend Cloud](../getting-started/connect-to-databendcloud-bendsql.md)。 -2. 创建目标表 `my_table`: +1. 使用 BendSQL 连接 Databend Cloud。若不熟悉 BendSQL,请参阅教程:[使用 BendSQL 连接 Databend Cloud](../connect/connect-to-databendcloud-bendsql.md)。 +2. 
复制粘贴以下 SQL,创建目标表 **my_table**: ```sql CREATE TABLE my_table ( @@ -98,13 +120,13 @@ CREATE TABLE my_table ( ); ``` -## 步骤 4:安装 db-archiver +## 第 4 步:安装 db-archiver -从 [Releases](https://github.com/databendcloud/db-archiver/releases/) 下载适合你架构的版本。 +根据系统架构,从[发布页面](https://github.com/databendcloud/db-archiver/releases/)下载 db-archiver。 -## 步骤 5:配置并运行 db-archiver +## 第 5 步:配置并运行 db-archiver -1. 新建本地文件 **conf.json**: +1. 在本地创建 **conf.json** 文件,内容如下: ```json { @@ -130,18 +152,64 @@ CREATE TABLE my_table ( } ``` -2. 在 conf.json 所在目录运行: +2. 在 **conf.json** 所在目录运行以下命令启动迁移: ```bash db-archiver -f conf.json ``` -控制台将显示迁移进度: +迁移开始输出如下: ```bash start time: 2025-01-22 21:45:33 -... +sourcedatabase pattern ^mydb$ +not match db: information_schema +sourcedatabase pattern ^mydb$ +match db: mydb +sourcedatabase pattern ^mydb$ +not match db: mysql +sourcedatabase pattern ^mydb$ +not match db: performance_schema +sourcedatabase pattern ^mydb$ +not match db: sys +INFO[0000] Start worker mydb.my_table +INFO[0000] Worker mydb.my_table checking before start +INFO[0000] Starting worker mydb.my_table +INFO[0000] db.table is mydb.my_table, minSplitKey: 1, maxSplitKey : 6 +2025/01/22 21:45:33 thread-1: extract 2 rows (1.997771 rows/s) +2025/01/22 21:45:33 thread-1: extract 0 rows (1.999639 rows/s) +2025/01/22 21:45:33 thread-1: extract 2 rows (1.999887 rows/s) +2025/01/22 21:45:33 thread-1: extract 2 rows (1.999786 rows/s) +INFO[0000] get presigned url cost: 126 ms +INFO[0000] get presigned url cost: 140 ms +INFO[0000] get presigned url cost: 159 ms +INFO[0000] upload by presigned url cost: 194 ms +INFO[0000] upload by presigned url cost: 218 ms +INFO[0000] upload by presigned url cost: 230 ms +INFO[0000] thread-1: copy into cost: 364 ms ingest_databend=IngestData +2025/01/22 21:45:34 thread-1: ingest 2 rows (2.777579 rows/s), 68 bytes (94.437695 bytes/s) +2025/01/22 21:45:34 Globla speed: total ingested 2 rows (2.777143 rows/s), 29 bytes (40.268568 bytes/s) +INFO[0001] thread-1: copy into cost: 
407 ms ingest_databend=IngestData +2025/01/22 21:45:34 thread-1: ingest 2 rows (2.603310 rows/s), 72 bytes (88.512532 bytes/s) +2025/01/22 21:45:34 Globla speed: total ingested 4 rows (2.603103 rows/s), 62 bytes (37.744993 bytes/s) +INFO[0001] thread-1: copy into cost: 475 ms ingest_databend=IngestData +2025/01/22 21:45:34 thread-1: ingest 2 rows (2.401148 rows/s), 70 bytes (81.639015 bytes/s) +2025/01/22 21:45:34 Globla speed: total ingested 6 rows (2.400957 rows/s), 93 bytes (34.813873 bytes/s) INFO[0001] Worker dbarchiver finished and data correct, source data count is 6, target data count is 6 end time: 2025-01-22 21:45:34 total time: 1.269478875s ``` + +3. 返回 BendSQL 会话验证迁移: + +```sql +SELECT * FROM my_table; + +┌────────────────────────────────────────────┐ +│ id │ name │ value │ +├───────┼──────────────────┼─────────────────┤ +│ 3 │ Charlie │ 30 │ +│ 1 │ Alice │ 10 │ +│ 2 │ Bob │ 20 │ +└────────────────────────────────────────────┘ +``` \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-debezium.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-debezium.md index 80eab586b0..95d1290440 100644 --- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-debezium.md +++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-debezium.md @@ -1,16 +1,16 @@ --- -title: 使用 Debezium 迁移 MySQL(CDC) -sidebar_label: 'MySQL → Databend:Debezium(CDC)' +title: 使用 Debezium 迁移 MySQL +sidebar_label: 'Debezium' --- -> **能力**:CDC、全量 -> **✅ 推荐**:实时迁移并完整捕获变更 +> **功能**: CDC, 全量导入 +> **✅ 推荐** 用于实时迁移,具有完整变更数据捕获 -本教程将演示如何使用 Debezium 将 MySQL 数据同步到 Databend。请提前部署 Databend、MySQL 与 Debezium。 +在本教程中,您将使用 Debezium 将数据从 MySQL 加载到 Databend。在开始之前,请确保您已在环境中成功设置 Databend、MySQL 和 Debezium。 -## 步骤 1:在 MySQL 中准备数据 +## 步骤 1. 
准备 MySQL 中的数据
 
-创建数据库与表并插入示例数据:
+在 MySQL 中创建一个数据库和一个表,并将示例数据插入到表中。
 
 ```sql
 CREATE DATABASE mydb;
@@ -31,19 +31,19 @@ INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"),
 (default,"spare tire","24 inch spare tire");
 ```
 
-## 步骤 2:在 Databend 中创建数据库
+## 步骤 2. 在 Databend 中创建数据库
 
-只需创建对应数据库即可,无需建表:
+在 Databend 中创建相应的数据库。请注意,您无需为 MySQL 中的表创建对应的表。
 
 ```sql
 CREATE DATABASE debezium;
 ```
 
-## 步骤 3:创建 application.properties
+## 步骤 3. 创建 application.properties
 
-创建文件 _application.properties_ 并启动 debezium-server-databend。安装与启动方法见 [Installing debezium-server-databend](#installing-debezium-server-databend)。
+创建文件 _application.properties_,然后启动 debezium-server-databend。有关如何安装和启动该工具,请参见 [安装 debezium-server-databend](#installing-debezium-server-databend)。
 
-首次启动时会按配置的 Batch Size 对 MySQL 数据进行全量同步,成功后即可在 Databend 中看到这些数据。
+首次启动时,该工具会按照指定的批量大小(Batch Size)执行从 MySQL 到 Databend 的全量同步。同步成功后,即可在 Databend 中看到来自 MySQL 的数据。
 
 ```text title='application.properties'
 debezium.sink.type=databend
@@ -92,4 +92,4 @@ quarkus.log.level=INFO
 quarkus.log.category."org.eclipse.jetty".level=WARN
 ```
 
-完成配置后即可在 Databend 查询 `products` 表,验证 MySQL 数据是否同步。随后在 MySQL 中执行插入、更新或删除,也会实时体现在 Databend 中。
+一切就绪!查询 Databend 中的 `products` 表,您将看到 MySQL 中的数据已成功同步。您可以在 MySQL 中执行插入、更新或删除操作,相应的更改也会实时反映在 Databend 中。
\ No newline at end of file
diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md
index 6640580cfa..c0b120d446 100644
--- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md
+++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md
@@ -1,37 +1,42 @@
 ---
 title: 使用 Flink CDC 迁移 MySQL
-sidebar_label: 'MySQL → Databend:Flink CDC'
+sidebar_label: 'Flink CDC'
 ---
 
-> **能力**:CDC、全量、转换
+> **功能**: CDC, 全量导入, 转换
 
-本教程将演示如何借助 Apache Flink CDC 将 MySQL 数据迁移到 Databend Cloud。
+在本教程中,我们将引导您完成使用 Apache Flink CDC 从 MySQL 迁移到 Databend Cloud 的过程。
 
 ## 开始之前
 
-请确保:
+在开始之前,请确保您已准备好以下先决条件:
 
-- 本地已安装 
[Docker](https://www.docker.com/),用于启动 MySQL。 -- 本地安装 Java 8 或 11,用于运行 [Flink Databend Connector](https://github.com/databendcloud/flink-connector-databend)。 -- 本地安装 BendSQL,参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 +- 您的本地机器上已安装 [Docker](https://www.docker.com/),因为它将用于启动 MySQL。 +- 您的本地机器上已安装 Java 8 或 11,这是 [Flink Databend Connector](https://github.com/databendcloud/flink-connector-databend) 所必需的。 +- 您的本地机器上已安装 BendSQL。有关如何使用各种包管理器安装 BendSQL 的说明,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 ## 步骤 1:在 Docker 中启动 MySQL -1. 创建配置文件 **mysql.cnf**,并保存到稍后挂载到容器的目录(示例 `/Users/eric/Downloads/mysql.cnf`): +1. 创建一个名为 **mysql.cnf** 的配置文件,内容如下,并将此文件保存在将映射到 MySQL 容器的本地目录中,例如 `/Users/eric/Downloads/mysql.cnf`: ```cnf [mysqld] +# Basic settings server-id=1 log-bin=mysql-bin binlog_format=ROW binlog_row_image=FULL expire_logs_days=3 + +# Character set settings character_set_server=utf8mb4 collation-server=utf8mb4_unicode_ci + +# Authentication settings default-authentication-plugin=mysql_native_password ``` -2. 启动名为 **mysql-server** 的 MySQL 容器,创建 `mydb` 数据库,并把 root 密码设置为 `root`: +2. 在您的本地机器上启动一个 MySQL 容器。以下命令启动一个名为 **mysql-server** 的 MySQL 容器,创建一个名为 **mydb** 的数据库,并将 root 密码设置为 `root`: ```bash docker run \ @@ -45,28 +50,50 @@ docker run \ -d mysql:5.7 ``` -3. 通过 `docker ps` 检查容器: +3. 验证 MySQL 是否正在运行: + +```bash +docker ps +``` + +检查输出中是否有名为 **mysql-server** 的容器: ```bash -CONTAINER ID IMAGE ... NAMES -aac4c28be56e mysql:5.7 ... mysql-server +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +aac4c28be56e mysql:5.7 "docker-entrypoint.s…" 17 hours ago Up 17 hours 0.0.0.0:3306->3306/tcp, 33060/tcp mysql-server ``` -## 步骤 2:写入示例数据 +## 步骤 2:使用示例数据填充 MySQL -1. 登录容器并输入密码 `root`: +1. 登录到 MySQL 容器,并在出现提示时输入密码 `root`: ```bash docker exec -it mysql-server mysql -u root -p ``` -2. 切换到 `mydb`: +``` +Enter password: +Welcome to the MySQL monitor. Commands end with ; or \g. 
+Your MySQL connection id is 71 +Server version: 5.7.44-log MySQL Community Server (GPL) + +Copyright (c) 2000, 2023, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. +``` + +2. 切换到 **mydb** 数据库: ```bash mysql> USE mydb; +Database changed ``` -3. 创建 `products` 表并插入数据: +3. 复制并粘贴以下 SQL 以创建一个名为 **products** 的表并插入数据: ```sql CREATE TABLE products (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,name VARCHAR(255) NOT NULL,description VARCHAR(512)); @@ -85,17 +112,32 @@ INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"), (default,"spare tire","24 inch spare tire"); ``` -4. 查询确认: +4. 验证数据: ```bash mysql> select * from products; ++----+--------------------+---------------------------------------------------------+ +| id | name | description | ++----+--------------------+---------------------------------------------------------+ +| 10 | scooter | Small 2-wheel scooter | +| 11 | car battery | 12V car battery | +| 12 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | +| 13 | hammer | 12oz carpenter's hammer | +| 14 | hammer | 14oz carpenter's hammer | +| 15 | hammer | 16oz carpenter's hammer | +| 16 | rocks | box of assorted rocks | +| 17 | jacket | black wind breaker | +| 18 | cloud | test for databend | +| 19 | spare tire | 24 inch spare tire | ++----+--------------------+---------------------------------------------------------+ +10 rows in set (0.01 sec) ``` -## 步骤 3:在 Databend Cloud 创建目标表 +## 步骤 3:在 Databend Cloud 中设置目标 -1. 使用 BendSQL 连接 Databend Cloud,参见 [使用 BendSQL 连接 Databend Cloud](../getting-started/connect-to-databendcloud-bendsql.md)。 +1. 使用 BendSQL 连接到 Databend Cloud。如果您不熟悉 BendSQL,请参阅本教程:[使用 BendSQL 连接到 Databend Cloud](../connect/connect-to-databendcloud-bendsql.md)。 -2. 创建 `products` 表: +2. 
复制并粘贴以下 SQL 以创建一个名为 **products** 的目标表: ```sql CREATE TABLE products ( @@ -115,7 +157,7 @@ tar -xvzf flink-1.17.1-bin-scala_2.12.tgz cd flink-1.17.1 ``` -2. 将 Databend 与 MySQL Connector 下载到 `lib` 目录: +2. 将 Databend 和 MySQL 连接器下载到 **lib** 文件夹中: ```bash curl -Lo lib/flink-connector-databend.jar https://github.com/databendcloud/flink-connector-databend/releases/latest/download/flink-connector-databend.jar @@ -123,17 +165,23 @@ curl -Lo lib/flink-connector-databend.jar https://github.com/databendcloud/flink curl -Lo lib/flink-sql-connector-mysql-cdc-2.4.1.jar https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.4.1/flink-sql-connector-mysql-cdc-2.4.1.jar ``` -3. 编辑 `flink-1.17.1/conf/flink-conf.yaml`,将 `taskmanager.memory.process.size` 设置为 `4096m`。 +3. 打开 `flink-1.17.1/conf/` 下的 **flink-conf.yaml** 文件,将 `taskmanager.memory.process.size` 更新为 `4096m`,然后保存文件。 -4. 启动 Flink 集群: +```yaml +taskmanager.memory.process.size: 4096m +``` + +4. 启动一个 Flink 集群: ```shell ./bin/start-cluster.sh ``` -然后访问 [http://localhost:8081](http://localhost:8081) 打开 Flink Dashboard。 +现在,如果您在浏览器中访问 [http://localhost:8081](http://localhost:8081),则可以打开 Apache Flink 仪表板: + +![Alt text](/img/load/cdc-dashboard.png) -## 步骤 5:启动迁移 +## 步骤 5:开始迁移 1. 启动 Flink SQL Client: @@ -141,13 +189,61 @@ curl -Lo lib/flink-sql-connector-mysql-cdc-2.4.1.jar https://repo1.maven.org/mav ./bin/sql-client.sh ``` -2. 
设置 Checkpoint 间隔为 3 秒: +您将看到 Flink SQL Client 启动横幅,确认客户端已成功启动。 + +```bash +``` + + ▒▓██▓██▒ + ▓████▒▒█▓▒▓███▓▒ + ▓███▓░░ ▒▒▒▓██▒ ▒ + ░██▒ ▒▒▓▓█▓▓▒░ ▒████ + ██▒ ░▒▓███▒ ▒█▒█▒ + ░▓█ ███ ▓░▒██ + ▓█ ▒▒▒▒▒▓██▓░▒░▓▓█ + █░ █ ▒▒░ ███▓▓█ ▒█▒▒▒ + ████░ ▒▓█▓ ██▒▒▒ ▓███▒ + ░▒█▓▓██ ▓█▒ ▓█▒▓██▓ ░█░ + ▓░▒▓████▒ ██ ▒█ █▓░▒█▒░▒█▒ + ███▓░██▓ ▓█ █ █▓ ▒▓█▓▓█▒ + ░██▓ ░█░ █ █▒ ▒█████▓▒ ██▓░▒ + ███░ ░ █░ ▓ ░█ █████▒░░ ░█░▓ ▓░ + ██▓█ ▒▒▓▒ ▓███████▓░ ▒█▒ ▒▓ ▓██▓ + ▒██▓ ▓█ █▓█ ░▒█████▓▓▒░ ██▒▒ █ ▒ ▓█▒ + ▓█▓ ▓█ ██▓ ░▓▓▓▓▓▓▓▒ ▒██▓ ░█▒ + ▓█ █ ▓███▓▒░ ░▓▓▓███▓ ░▒░ ▓█ + ██▓ ██▒ ░▒▓▓███▓▓▓▓▓██████▓▒ ▓███ █ + ▓███▒ ███ ░▓▓▒░░ ░▓████▓░ ░▒▓▒ █▓ + █▓▒▒▓▓██ ░▒▒░░░▒▒▒▒▓██▓░ █▓ + ██ ▓░▒█ ▓▓▓▓▒░░ ▒█▓ ▒▓▓██▓ ▓▒ ▒▒▓ + ▓█▓ ▓▒█ █▓░ ░▒▓▓██▒ ░▓█▒ ▒▒▒░▒▒▓█████▒ + ██░ ▓█▒█▒ ▒▓▓▒ ▓█ █░ ░░░░ ░█▒ + ▓█ ▒█▓ ░ █░ ▒█ █▓ + █▓ ██ █░ ▓▓ ▒█▓▓▓▒█░ + █▓ ░▓██░ ▓▒ ▓█▓▒░░░▒▓█░ ▒█ + ██ ▓█▓░ ▒ ░▒█▒██▒ ▓▓ + ▓█▒ ▒█▓▒░ ▒▒ █▒█▓▒▒░░▒██ + ░██▒ ▒▓▓▒ ▓██▓▒█▒ ░▓▓▓▓▒█▓ + ░▓██▒ ▓░ ▒█▓█ ░░▒▒▒ + ▒▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░▓▓ ▓░▒█░ + + ______ _ _ _ _____ ____ _ _____ _ _ _ BETA + | ____| (_) | | / ____|/ __ \| | / ____| (_) | | + | |__ | |_ _ __ | | __ | (___ | | | | | | | | |_ ___ _ __ | |_ + | __| | | | '_ \| |/ / \___ \| | | | | | | | | |/ _ \ '_ \| __| + | | | | | | | | < ____) | |__| | |____ | |____| | | __/ | | | |_ + |_| |_|_|_| |_|_|\_\ |_____/ \___\_\______| \_____|_|_|\___|_| |_|\__| + + Welcome! Enter 'HELP;' to list all available commands. 'QUIT;' to exit. +``` + +2. 将检查点间隔设置为 3 秒。 ```bash Flink SQL> SET execution.checkpointing.interval = 3s; ``` -3. 在 Flink SQL Client 中创建 MySQL 与 Databend 表(替换占位符): +3. 在 Flink SQL Client 中使用 MySQL 和 Databend 连接器创建相应的表(将占位符替换为您的实际值): ```sql CREATE TABLE mysql_products (id INT,name STRING,description STRING,PRIMARY KEY (id) NOT ENFORCED) @@ -172,22 +268,69 @@ WITH ('connector' = 'databend', 'sink.max-retries' = '3'); ``` -4. 执行同步: +4. 
在 Flink SQL Client 中,将数据从 mysql_products 表同步到 databend_products 表: ```sql Flink SQL> INSERT INTO databend_products SELECT * FROM mysql_products; +> +[INFO] Submitting SQL update statement to the cluster... +[INFO] SQL update statement has been successfully submitted to the cluster: +Job ID: 5b505d752b7c211cbdcb5566175b9182 ``` -Flink Dashboard 会显示运行中的任务。 +现在您可以在 Apache Flink Dashboard 中看到一个正在运行的作业: -完成后,在 BendSQL 中查询 `products` 表即可看到同步的数据。继续在 MySQL 中插入记录,例如: +![Alt text](/img/load/cdc-job.png) + +一切就绪!如果您返回到 BendSQL 终端并查询 Databend Cloud 中的 **products** 表,您将看到 MySQL 中的数据已成功同步: + +```sql +SELECT * FROM products; + +┌──────────────────────────────────────────────────────────────────────────────────────┐ +│ id │ name │ description │ +│ Int32 │ String │ Nullable(String) │ +├───────┼────────────────────┼─────────────────────────────────────────────────────────┤ +│ 18 │ cloud │ test for databend │ +│ 19 │ spare tire │ 24 inch spare tire │ +│ 16 │ rocks │ box of assorted rocks │ +│ 17 │ jacket │ black wind breaker │ +│ 14 │ hammer │ 14oz carpenter's hammer │ +│ 15 │ hammer │ 16oz carpenter's hammer │ +│ 12 │ 12-pack drill bits │ 12-pack of drill bits with sizes ranging from #40 to #3 │ +│ 13 │ hammer │ 12oz carpenter's hammer │ +│ 10 │ scooter │ Small 2-wheel scooter │ +│ 11 │ car battery │ 12V car battery │ +└──────────────────────────────────────────────────────────────────────────────────────┘ +``` + +5. 
返回到 MySQL 终端并插入一个新产品: ```sql INSERT INTO products VALUES (default, "bicycle", "Lightweight road bicycle"); ``` -再查询 Databend,能看到新插入的记录: +接下来,在 BendSQL 终端中,再次查询 **products** 表以验证新产品是否已同步: ```sql SELECT * FROM products; ``` + +``` +┌──────────────────────────────────────────────────────────────────────────────────────┐ +│ id │ name │ description │ +│ Int32 │ String │ Nullable(String) │ +├───────┼────────────────────┼─────────────────────────────────────────────────────────┤ +│ 12 │ 12-pack drill bits │ 12-pack of drill bits with sizes ranging from #40 to #3 │ +│ 11 │ car battery │ 12V car battery │ +│ 14 │ hammer │ 14oz carpenter's hammer │ +│ 13 │ hammer │ 12oz carpenter's hammer │ +│ 10 │ scooter │ Small 2-wheel scooter │ +│ 20 │ bicycle │ Lightweight road bicycle │ +│ 19 │ spare tire │ 24 inch spare tire │ +│ 16 │ rocks │ box of assorted rocks │ +│ 15 │ hammer │ 16oz carpenter's hammer │ +│ 18 │ cloud │ test for databend │ +│ 17 │ jacket │ black wind breaker │ +└──────────────────────────────────────────────────────────────────────────────────────┘ +``` \ No newline at end of file diff --git a/docs/cn/tutorials/migrate/migrating-from-mysql-with-kafka-connect.md b/docs/cn/tutorials/migrate/migrating-from-mysql-with-kafka-connect.md index 33a3a9c046..03680363e1 100644 --- a/docs/cn/tutorials/migrate/migrating-from-mysql-with-kafka-connect.md +++ b/docs/cn/tutorials/migrate/migrating-from-mysql-with-kafka-connect.md @@ -1,39 +1,39 @@ --- -title: 使用 Kafka Connect 迁移 MySQL(CDC) -sidebar_label: 'MySQL → Databend:Kafka Connect(CDC)' +title: 使用 Kafka Connect 迁移 MySQL +sidebar_label: 'Kafka Connect' --- -> **能力**:CDC、增量、全量 +> **功能**: CDC, 增量导入, 全量导入 -本教程展示如何使用 Kafka Connect 构建从 MySQL 到 Databend 的实时数据管道。 +本教程展示了如何使用 Kafka Connect 构建从 MySQL 到 Databend 的实时数据管道。 -## 概览 +## 概述 -Kafka Connect 是在 Apache Kafka 与其他系统之间可靠大规模传输数据的工具,可标准化数据进出 Kafka。本方案通过 Kafka Connect 提供: +Kafka Connect 是一个在 Apache Kafka 和其他系统之间可靠且大规模地流式传输数据的工具。它通过标准化 Kafka 数据的传入和传出,简化了实时数据管道的构建。对于 MySQL 到 Databend 
的迁移,Kafka Connect 提供了一个无缝的解决方案,可以实现: -- 从 MySQL 到 Databend 的实时同步 -- 自动 schema 演进与建表 -- 既支持新增数据,也支持对既有行的更新 +- 从 MySQL 到 Databend 的实时数据同步 +- 自动模式演变和表创建 +- 支持新数据捕获和现有数据的更新 -迁移链路包含两个组件: +迁移管道由两个主要组件组成: -- **MySQL JDBC Source Connector**:从 MySQL 读取数据写入 Kafka Topic -- **Databend Sink Connector**:从 Kafka 读取数据写入 Databend +- **MySQL JDBC Source Connector**: 从 MySQL 读取数据并将其发布到 Kafka topics +- **Databend Sink Connector**: 从 Kafka topics 消费数据并将其写入 Databend ## 前提条件 -- 已有待迁移数据的 MySQL 数据库 -- 已安装 Apache Kafka(参见 [Kafka 快速入门](https://kafka.apache.org/quickstart)) -- 已部署 Databend 实例 -- 具备基础 SQL 与命令行知识 +- 包含要迁移数据的 MySQL 数据库 +- 已安装的 Apache Kafka ([Kafka 快速入门指南](https://kafka.apache.org/quickstart)) +- 正在运行的 Databend 实例 +- SQL 和命令行的基本知识 -## 步骤 1:配置 Kafka Connect +## 步骤 1:设置 Kafka Connect -本教程使用 Standalone 模式,便于测试。 +Kafka Connect 支持两种执行模式:Standalone 和 Distributed。在本教程中,我们将使用 Standalone 模式,这种模式更简单,适合测试。 -### Worker 配置 +### 配置 Kafka Connect -在 Kafka `config` 目录创建 `connect-standalone.properties`: +在 Kafka `config` 目录中创建一个基本的 worker 配置文件 `connect-standalone.properties`: ```properties bootstrap.servers=localhost:9092 @@ -47,46 +47,61 @@ offset.flush.interval.ms=10000 ## 步骤 2:配置 MySQL Source Connector -### 安装依赖 +### 安装所需组件 -1. 从 Confluent Hub 下载 [Kafka Connect JDBC](https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc) 插件并解压到 Kafka `libs` 目录。 -2. 下载 [MySQL JDBC Driver](https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.0.32/) 并放入同一目录。 +1. 从 Confluent Hub 下载 [Kafka Connect JDBC](https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc) 插件,并将其解压到 Kafka `libs` 目录 -### 创建配置 +2. 
下载 [MySQL JDBC Driver](https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.0.32/) 并将 JAR 文件复制到相同的 `libs` 目录 -在 Kafka `config` 目录创建 `mysql-source.properties`: +### 创建 MySQL Source 配置 + +在 Kafka `config` 目录中创建一个文件 `mysql-source.properties`,内容如下: ```properties name=mysql-source connector.class=io.confluent.connect.jdbc.JdbcSourceConnector tasks.max=1 + +# Connection settings connection.url=jdbc:mysql://localhost:3306/your_database?useSSL=false connection.user=your_username connection.password=your_password + +# Table selection and topic mapping table.whitelist=your_database.your_table topics=mysql_data + +# Sync mode configuration mode=incrementing incrementing.column.name=id + +# Polling frequency poll.interval.ms=5000 ``` -将其中 `your_database`、`your_username`、`your_password`、`your_table` 替换为真实值。 +将以下值替换为您的实际 MySQL 配置: +- `your_database`: 您的 MySQL 数据库名称 +- `your_username`: MySQL 用户名 +- `your_password`: MySQL 密码 +- `your_table`: 您要迁移的表 ### 同步模式 -MySQL Source Connector 支持三种模式: +MySQL Source Connector 支持三种同步模式: -1. **Incrementing**:适用于拥有自增 ID 的表。 +1. **Incrementing Mode**: 最适合具有自动递增 ID 列的表 ```properties mode=incrementing incrementing.column.name=id ``` -2. **Timestamp**:适合需要捕获插入与更新。 + +2. **Timestamp Mode**: 最适合捕获插入和更新 ```properties mode=timestamp timestamp.column.name=updated_at ``` -3. **Timestamp+Incrementing**:最稳妥的模式。 + +3. **Timestamp+Incrementing Mode**: 对于所有更改最可靠 ```properties mode=timestamp+incrementing incrementing.column.name=id @@ -95,37 +110,46 @@ MySQL Source Connector 支持三种模式: ## 步骤 3:配置 Databend Sink Connector -### 安装依赖 +### 安装所需组件 + +1. 下载 [Databend Kafka Connector](https://github.com/databendcloud/databend-kafka-connect/releases) 并将其放置在 Kafka `libs` 目录中 -1. 下载 [Databend Kafka Connector](https://github.com/databendcloud/databend-kafka-connect/releases) 至 Kafka `libs`。 -2. 下载 [Databend JDBC Driver](https://central.sonatype.com/artifact/com.databend/databend-jdbc/) 至同一目录。 +2. 
下载 [Databend JDBC Driver](https://central.sonatype.com/artifact/com.databend/databend-jdbc/) 并将其复制到 Kafka `libs` 目录 -### 创建配置 +### 创建 Databend Sink 配置 -在 `config` 目录创建 `databend-sink.properties`: +在 Kafka `config` 目录中创建一个文件 `databend-sink.properties`: ```properties name=databend-sink connector.class=com.databend.kafka.connect.DatabendSinkConnector + +# Connection settings connection.url=jdbc:databend://localhost:8000 connection.user=databend connection.password=databend connection.database=default + +# Topic to table mapping topics=mysql_data table.name.format=${topic} + +# Table management auto.create=true auto.evolve=true + +# Write behavior insert.mode=upsert pk.mode=record_value pk.fields=id batch.size=1000 ``` -根据环境调整连接信息。 +根据您的环境需要调整 Databend 连接设置。 -## 步骤 4:启动迁移链路 +## 步骤 4:启动迁移管道 -执行: +使用两个连接器配置启动 Kafka Connect: ```shell bin/connect-standalone.sh config/connect-standalone.properties \ @@ -135,31 +159,33 @@ bin/connect-standalone.sh config/connect-standalone.properties \ ## 步骤 5:验证迁移 -### 检查同步进度 +### 检查数据同步 -1. **监控 Kafka Connect 日志**: +1. **监控 Kafka Connect 日志** ```shell tail -f /path/to/kafka/logs/connect.log ``` -2. **在 Databend 中验证数据**: +2. **验证 Databend 中的数据** + + 连接到您的 Databend 实例并运行: ```sql SELECT * FROM mysql_data LIMIT 10; ``` -### 测试 Schema 演进 +### 测试模式演变 -若在 MySQL 表中新增列,Schema 会自动同步: +如果您向 MySQL 表中添加新列,则模式更改将自动传播到 Databend: -1. 在 MySQL 中执行: +1. **在 MySQL 中添加列** ```sql ALTER TABLE your_table ADD COLUMN new_field VARCHAR(100); ``` -2. 在 Databend 中验证: +2. **验证 Databend 中的模式更新** ```sql DESC mysql_data; @@ -167,31 +193,38 @@ bin/connect-standalone.sh config/connect-standalone.properties \ ### 测试更新操作 -确保 Source Connector 使用 timestamp 或 timestamp+incrementing 模式后: +要测试更新,请确保您正在使用 timestamp 或 timestamp+incrementing 模式: + +1. **更新您的 MySQL 连接器配置** + + 如果您的表具有时间戳列,请编辑 `mysql-source.properties` 以使用 timestamp+incrementing 模式。 -1. 修改 `mysql-source.properties` 以启用相应模式。 -2. 在 MySQL 中更新数据: +2. 
**更新 MySQL 中的数据** ```sql UPDATE your_table SET some_column='new value' WHERE id=1; ``` -3. 在 Databend 中确认: +3. **验证 Databend 中的更新** ```sql SELECT * FROM mysql_data WHERE id=1; ``` -## Databend Kafka Connect 的关键特性 +## Databend Kafka Connect 的主要功能 + +1. **自动表和列创建**: 通过 `auto.create` 和 `auto.evolve` 设置,表和列会根据 Kafka topic 数据自动创建 + +2. **模式支持**: 支持 Avro、JSON Schema 和 Protobuf 输入数据格式(需要 Schema Registry) + +3. **多种写入模式**: 支持 `insert` 和 `upsert` 写入模式 + +4. **多任务支持**: 可以运行多个任务以提高性能 -1. **自动建表与列创建**:`auto.create`、`auto.evolve` 自动匹配 Kafka Topic schema。 -2. **Schema 支持**:兼容 Avro、JSON Schema、Protobuf(需 Schema Registry)。 -3. **多种写入模式**:同时支持 `insert` 与 `upsert`。 -4. **多任务支持**:可通过多任务提升吞吐。 -5. **高可用**:在分布式模式下支持动态扩缩容与容错。 +5. **高可用性**: 在分布式模式下,工作负载会自动平衡,具有动态伸缩和容错能力 -## 常见问题排查 +## 故障排除 -- **Connector 无法启动**:检查 Kafka Connect 日志。 -- **Databend 中无数据**:使用 Kafka 控制台消费数据,确认 Topic 有消息。 -- **Schema 异常**:确保 `auto.create` 与 `auto.evolve` 均为 `true`。 +- **连接器未启动**: 检查 Kafka Connect 日志以查找错误 +- **Databend 中没有数据**: 使用 Kafka 控制台消费者验证 topic 是否存在并包含数据 +- **模式问题**: 确保 `auto.create` 和 `auto.evolve` 设置为 `true` diff --git a/docs/cn/tutorials/migrate/migrating-from-snowflake.md b/docs/cn/tutorials/migrate/migrating-from-snowflake.md index 224d465c9a..67ea0cd8b7 100644 --- a/docs/cn/tutorials/migrate/migrating-from-snowflake.md +++ b/docs/cn/tutorials/migrate/migrating-from-snowflake.md @@ -1,28 +1,30 @@ --- -title: 从 Snowflake 迁移到 Databend -sidebar_label: Snowflake → Databend +title: 迁移 Snowflake +sidebar_label: 'Snowflake' --- -> **能力**:全量 +> **功能**: 全量导入 -本教程介绍如何将 Snowflake 数据迁移到 Databend:先把数据导出到 Amazon S3,再加载到 Databend。整体分为三步: +本教程将指导您完成从 Snowflake 迁移数据到 Databend 的过程。迁移过程包括将数据从 Snowflake 导出到 Amazon S3 存储桶,然后将其加载到 Databend 中。该过程分为三个主要步骤: ![alt text](@site/static/img/load/snowflake-databend.png) +在本教程中,我们将指导您完成将数据从 Snowflake 以 Parquet 格式导出到 Amazon S3 存储桶,然后将其加载到 Databend Cloud 的过程。 + ## 开始之前 -请准备以下资源: +在开始之前,请确保您已具备以下先决条件: -- **Amazon S3 Bucket**:用于存放导出的数据,并具备上传权限。示例使用 
`s3://databend-doc/snowflake/`。[了解如何创建 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。 -- **AWS 凭证**:具备 Bucket 访问权限的 Access Key 与 Secret Key。[管理凭证](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。 -- **管理 IAM 角色与策略的权限**:需要在 Snowflake 与 S3 之间配置可信访问。[了解 IAM 角色与策略](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)。 +- **Amazon S3 存储桶**: 一个用于存储导出数据的 S3 存储桶,以及上传文件所需的权限。[了解如何创建 S3 存储桶](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)。在本教程中,我们使用 `s3://databend-doc/snowflake/` 作为暂存导出数据的位置。 +- **AWS 凭证**: 具有访问 S3 存储桶足够权限的 AWS Access Key ID 和 Secret Access Key。[管理您的 AWS 凭证](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)。 +- **管理 IAM 角色和策略的权限**: 确保您具有创建和管理 IAM 角色和策略的必要权限,这是配置 Snowflake 和 Amazon S3 之间访问所必需的。[了解 IAM 角色和策略](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)。 -## 步骤 1:为 Amazon S3 配置 Snowflake Storage Integration +## 步骤 1: 为 Amazon S3 配置 Snowflake Storage Integration -本步骤会通过 IAM Role 让 Snowflake 能访问 S3。 +在此步骤中,我们将配置 Snowflake 使用 IAM 角色访问 Amazon S3。首先,我们将创建一个 IAM 角色,然后使用该角色建立 Snowflake Storage Integration 以实现安全的数据访问。 -1. 登录 AWS Console,在 **IAM** > **Policies** 创建策略,内容如下(Bucket 名称与路径请按需修改): +1. 登录 AWS 管理控制台,然后在 **IAM** > **Policies** 中使用以下 JSON 代码创建策略: ```json { @@ -58,9 +60,21 @@ sidebar_label: Snowflake → Databend } ``` -2. 
在 **IAM** > **Roles** 创建名为 `databend-doc-role` 的角色,选择 **AWS account** → **This account**,并附加刚创建的策略。创建完成后保存角色 ARN,如 `arn:aws:iam::123456789012:role/databend-doc-role`。稍后还需更新该角色的信任策略。 +此策略适用于名为 `databend-doc` 的 S3 存储桶,特别是该存储桶内的 `snowflake` 文件夹。 + +- `s3:PutObject`, `s3:GetObject`, `s3:GetObjectVersion`, `s3:DeleteObject`, `s3:DeleteObjectVersion`: 允许对 snowflake 文件夹内的对象进行操作 (例如,`s3://databend-doc/snowflake/`)。您可以在此文件夹中上传、读取和删除对象。 +- `s3:ListBucket`, `s3:GetBucketLocation`: 允许列出 `databend-doc` 存储桶的内容并检索其位置。`Condition` 元素确保列表操作仅限于 `snowflake` 文件夹内的对象。 + +2. 在 **IAM** > **Roles** 中创建名为 `databend-doc-role` 的角色并附加我们创建的策略。 + - 在创建角色的第一步中,为 **Trusted entity type** 选择 **AWS account**,为 **An AWS account** 选择 **This account (xxxxx)**。 + + ![alt text](../../../../static/img/documents/tutorials/trusted-entity.png) + + - 角色创建后,复制并将角色 ARN 保存在安全位置,例如 `arn:aws:iam::123456789012:role/databend-doc-role`。 + - 我们稍后将更新角色的 **Trust Relationships**,在获得 Snowflake 账户的 IAM 用户 ARN 之后。 + -3. 在 Snowflake 中创建名为 `my_s3_integration` 的 Storage Integration: +3. 在 Snowflake 中打开 SQL 工作区,使用角色 ARN 创建名为 `my_s3_integration` 的 storage integration。 ```sql CREATE OR REPLACE STORAGE INTEGRATION my_s3_integration @@ -71,13 +85,13 @@ CREATE OR REPLACE STORAGE INTEGRATION my_s3_integration ENABLED = TRUE; ``` -4. 查看 Integration 详情,记录 `STORAGE_AWS_IAM_USER_ARN`(示例:`arn:aws:iam::123456789012:user/example`),稍后需要将其写入角色的信任关系: +4. 显示 storage integration 详细信息并获取结果中 `STORAGE_AWS_IAM_USER_ARN` 属性的值,例如 `arn:aws:iam::123456789012:user/example`。我们将在下一步中使用此值来更新角色 `databend-doc-role` 的 **Trust Relationships**。 ```sql DESCRIBE INTEGRATION my_s3_integration; ``` -5. 回到 AWS Console,打开角色 `databend-doc-role`,在 **Trust relationships** 中编辑策略,粘贴以下内容,并将 `arn:aws:iam::123456789012:user/example` 替换为上一步得到的值: +5. 
返回 AWS 管理控制台,打开角色 `databend-doc-role`,导航到 **Trust relationships** > **Edit trust policy**。将以下代码复制到编辑器中: ```json { @@ -94,9 +108,12 @@ DESCRIBE INTEGRATION my_s3_integration; } ``` -## 步骤 2:准备并导出数据到 Amazon S3 + ARN `arn:aws:iam::123456789012:user/example` 是我们在上一步中获得的 Snowflake 账户的 IAM 用户 ARN。 -1. 在 Snowflake 中使用上一步创建的 Integration 定义 External Stage: + +## 步骤 2: 准备并导出数据到 Amazon S3 + +1. 在 Snowflake 中使用 Snowflake storage integration `my_s3_integration` 创建外部 stage: ```sql CREATE OR REPLACE STAGE my_external_stage @@ -105,7 +122,9 @@ CREATE OR REPLACE STAGE my_external_stage FILE_FORMAT = (TYPE = 'PARQUET'); ``` -2. 创建示例数据: +`URL = 's3://databend-doc/snowflake/'` 指定了数据将要暂存的 S3 存储桶和文件夹。路径 `s3://databend-doc/snowflake/` 对应 S3 存储桶 `databend-doc` 以及该存储桶内的 `snowflake` 文件夹。 + +2. 准备一些要导出的数据。 ```sql CREATE DATABASE doc; @@ -123,7 +142,7 @@ INSERT INTO my_table (id, name, age) VALUES (3, 'Charlie', 35); ``` -3. 将数据导出到 Stage: +3. 使用 COPY INTO 将表数据导出到外部 Stage: ```sql COPY INTO @my_external_stage/my_table_data_ @@ -131,11 +150,13 @@ COPY INTO @my_external_stage/my_table_data_ FILE_FORMAT = (TYPE = 'PARQUET') HEADER=true; ``` -到 S3 查看 `databend-doc/snowflake` 即可看到生成的 Parquet 文件。 +如果您现在打开存储桶 `databend-doc`,应该会在 `snowflake` 文件夹中看到一个 Parquet 文件: -## 步骤 3:加载数据到 Databend Cloud +![alt text](../../../../static/img/documents/tutorials/bucket-folder.png) -1. 在 Databend Cloud 内创建目标表: +## 步骤 3:将数据加载到 Databend Cloud + +1. 在 Databend Cloud 中创建目标表: ```sql CREATE DATABASE doc; @@ -148,7 +169,7 @@ CREATE TABLE my_target_table ( ); ``` -2. 使用 [COPY INTO](/sql/sql-commands/dml/dml-copy-into-table) 从 Bucket 加载数据: +2. 使用 [COPY INTO](/sql/sql-commands/dml/dml-copy-into-table) 加载存储桶中的导出数据: ```sql COPY INTO my_target_table @@ -163,7 +184,7 @@ FILE_FORMAT = ( ); ``` -3. 验证数据: +3. 
验证加载的数据: ```sql SELECT * FROM my_target_table; @@ -175,4 +196,4 @@ SELECT * FROM my_target_table; │ 2 │ Bob │ 25 │ │ 3 │ Charlie │ 35 │ └──────────────────────────────────────────────────────┘ -``` +``` \ No newline at end of file diff --git a/docs/cn/tutorials/operate-and-recover/_category_.json b/docs/cn/tutorials/operate-and-recover/_category_.json deleted file mode 100644 index c6ebd0cff0..0000000000 --- a/docs/cn/tutorials/operate-and-recover/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "运维与恢复", - "position": 5 -} diff --git a/docs/cn/tutorials/operate-and-recover/bendsave.md b/docs/cn/tutorials/operate-and-recover/bendsave.md deleted file mode 100644 index 35e26c0b68..0000000000 --- a/docs/cn/tutorials/operate-and-recover/bendsave.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: 使用 BendSave 备份与恢复 ---- - -本教程将演示如何使用 BendSave 备份与恢复数据。我们会以本地 MinIO 作为 Databend 的 S3 兼容存储以及备份目标。 - -## 开始之前 - -请准备: - -- 一台 Linux 机器(x86_64 或 aarch64):本教程在 Linux 上部署 Databend,可使用本地、虚拟机或云服务器(如 AWS EC2)。 - - [Docker](https://www.docker.com/):用于部署本地 MinIO。 - - [AWS CLI](https://aws.amazon.com/cli/):用于管理 MinIO 中的 Bucket。 - - 如果在 AWS EC2 上操作,请确保安全组放开 `8000` 端口,以便 BendSQL 连接 Databend。 -- 本地安装 BendSQL,参见 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。 -- Databend 发布包:从 [Databend GitHub Releases](https://github.com/databendlabs/databend/releases) 下载。该包的 `bin` 目录包含本教程所需的 `databend-bendsave` 二进制: - -```bash -databend-v1.2.725-nightly-x86_64-unknown-linux-gnu/ -├── bin -│ ├── bendsql -│ ├── databend-bendsave # 本教程所用的 BendSave -│ ├── databend-meta -│ ├── databend-metactl -│ └── databend-query -├── configs -│ ├── databend-meta.toml -│ └── databend-query.toml -└── ... -``` - -## 步骤 1:在 Docker 中启动 MinIO - -1. 
在 Linux 机器上启动 MinIO 容器,映射 9000(API)与 9001(Web Console)端口: - -```bash -docker run -d --name minio \ - -e "MINIO_ACCESS_KEY=minioadmin" \ - -e "MINIO_SECRET_KEY=minioadmin" \ - -p 9000:9000 \ - -p 9001:9001 \ - minio/minio server /data \ - --address :9000 \ - --console-address :9001 -``` - -2. 配置凭证并通过 AWS CLI 创建两个 Bucket:一个用于备份(`backupbucket`),一个作为 Databend 的存储(`databend`)。 - -```bash -export AWS_ACCESS_KEY_ID=minioadmin -export AWS_SECRET_ACCESS_KEY=minioadmin - -aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://backupbucket -aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://databend -``` - -## 步骤 2:部署 Databend - -1. 下载并解压最新 Databend: - -```bash -wget https://github.com/databendlabs/databend/releases/download/v1.2.25-nightly/databend-dbg-v1.2.725-nightly-x86_64-unknown-linux-gnu.tar.gz - -tar -xzvf databend-dbg-v1.2.725-nightly-x86_64-unknown-linux-gnu.tar.gz -``` - -2. 编辑 **configs/databend-query.toml**: - -```bash -vi configs/databend-query.toml -``` - -关键配置如下: - -```toml -... -[[query.users]] -name = "root" -auth_type = "no_password" -... -[storage] -type = "s3" -... -[storage.s3] -bucket = "databend" -endpoint_url = "http://127.0.0.1:9000" -access_key_id = "minioadmin" -secret_access_key = "minioadmin" -enable_virtual_host_style = false -``` - -3. 启动 Meta 与 Query 服务: - -```bash -./databend-meta -c ../configs/databend-meta.toml > meta.log 2>&1 & -``` - -```bash -./databend-query -c ../configs/databend-query.toml > query.log 2>&1 & -``` - -通过健康检查确认服务已启动: - -```bash -curl -I http://127.0.0.1:28002/v1/health -curl -I http://127.0.0.1:8080/v1/health -``` - -4. 使用 BendSQL 连接 Databend,激活企业 License、创建表并插入示例数据: - -```bash -bendsql -h -``` - -```sql -SET GLOBAL enterprise_license=''; -``` - -```sql -CREATE TABLE books ( - id BIGINT UNSIGNED, - title VARCHAR, - genre VARCHAR DEFAULT 'General' -); - -INSERT INTO books(id, title) VALUES(1, 'Invisible Stars'); -``` - -5. 
在 Linux 主机上检查 Databend Bucket,确认已有数据: - -```bash -aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://databend/ --recursive -``` - -## 步骤 3:使用 BendSave 备份 - -1. 运行 BendSave,将 Databend 数据备份至 `backupbucket`: - -```bash -export AWS_ACCESS_KEY_ID=minioadmin -export AWS_SECRET_ACCESS_KEY=minioadmin - -./databend-bendsave backup \ - --from ../configs/databend-query.toml \ - --to 's3://backupbucket?endpoint=http://127.0.0.1:9000/®ion=us-east-1' -``` - -2. 列出 `backupbucket`,确认备份文件: - -```bash -aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://backupbucket/ --recursive -``` - -## 步骤 4:使用 BendSave 恢复 - -1. 清空 `databend` Bucket: - -```bash -aws --endpoint-url http://127.0.0.1:9000 s3 rm s3://databend/ --recursive -``` - -2. 再次在 BendSQL 中查询 `books`,会因为文件缺失而失败。 - -3. 执行恢复命令: - -```bash -./databend-bendsave restore \ - --from "s3://backupbucket?endpoint=http://127.0.0.1:9000/®ion=us-east-1" \ - --to-query ../configs/databend-query.toml \ - --to-meta ../configs/databend-meta.toml \ - --confirm -``` - -4. 列出 `databend` Bucket,确认文件已恢复: - -```bash -aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://databend/ --recursive -``` - -5. 
在 BendSQL 中再次查询 `books`,即可看到记录: - -```sql -SELECT * FROM books; - -┌────────────────────────────────────────────────────────┐ -│ id │ title │ genre │ -├──────────────────┼──────────────────┼──────────────────┤ -│ 1 │ Invisible Stars │ General │ -└────────────────────────────────────────────────────────┘ -``` diff --git a/docs/cn/tutorials/programming/_category_.json b/docs/cn/tutorials/programming/_category_.json new file mode 100644 index 0000000000..e4cf0716dd --- /dev/null +++ b/docs/cn/tutorials/programming/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "编程" +} \ No newline at end of file diff --git a/docs/cn/tutorials/develop/python/_category_.json b/docs/cn/tutorials/programming/python/_category_.json similarity index 100% rename from docs/cn/tutorials/develop/python/_category_.json rename to docs/cn/tutorials/programming/python/_category_.json diff --git a/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-driver.md b/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-driver.md new file mode 100644 index 0000000000..0cf0cbcd7d --- /dev/null +++ b/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-driver.md @@ -0,0 +1,56 @@ +--- +title: 集成 (databend-driver) +--- + +本教程介绍如何使用 `databend-driver` 连接 Databend Cloud,并使用 Python 进行数据操作。 + +## 开始之前 + +在开始之前,请确保您已成功创建计算集群并获得了连接信息。 有关如何执行此操作,请参见 [连接到计算集群](/guides/cloud/using-databend-cloud/warehouses#connecting)。 + +## 步骤 1:使用 pip 安装依赖项 + +```shell +pip install databend-driver +``` + +## 步骤 2:使用 databend-driver 连接 + +1. 
复制以下代码并粘贴到文件 `main.py` 中:
+
+```python
+from databend_driver import BlockingDatabendClient
+
+# 使用您的凭据连接到 Databend Cloud(替换 PASSWORD、HOST、DATABASE 和 WAREHOUSE_NAME)
+client = BlockingDatabendClient(f"databend://cloudapp:{PASSWORD}@{HOST}:443/{DATABASE}?warehouse={WAREHOUSE_NAME}")
+
+# 从客户端获取游标以执行查询
+cursor = client.cursor()
+
+# 如果表存在则删除表
+cursor.execute('DROP TABLE IF EXISTS data')
+
+# 如果表不存在则创建表
+cursor.execute('CREATE TABLE IF NOT EXISTS data (x Int32, y String)')
+
+# 将数据插入表
+cursor.execute("INSERT INTO data (x, y) VALUES (1, 'yy'), (2, 'xx')")
+
+# 从表中选择所有数据
+cursor.execute('SELECT * FROM data')
+
+# 从结果中获取所有行
+rows = cursor.fetchall()
+
+# 打印结果
+for row in rows:
+    print(row.values())
+```
+
+2. 运行 `python main.py`:
+
+```bash
+python main.py
+(1, 'yy')
+(2, 'xx')
+```
\ No newline at end of file
diff --git a/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md b/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md
new file mode 100644
index 0000000000..06e67f12c0
--- /dev/null
+++ b/docs/cn/tutorials/programming/python/integrating-with-databend-cloud-using-databend-sqlalchemy.md
@@ -0,0 +1,42 @@
+---
+title: 集成 (SQLAlchemy)
+---
+
+本教程介绍如何使用 `databend-sqlalchemy` 连接 Databend Cloud,并使用 Python 进行数据操作。
+
+## 在开始之前
+
+开始前,请确保已成功创建计算集群(Warehouse)并获取连接信息。具体操作请参阅[连接计算集群(Warehouse)](/guides/cloud/using-databend-cloud/warehouses#connecting)。
+
+## 第一步:使用 pip 安装依赖项
+
+```shell
+pip install databend-sqlalchemy
+```
+
+## 第二步:使用 databend_sqlalchemy 连接
+
+1. 
复制以下代码至文件 `main.py`:
+
+```python
+from sqlalchemy import create_engine, text
+from sqlalchemy.engine.base import Connection, Engine
+
+# 使用您的凭据连接到 Databend Cloud(替换 username、password、host_port_name 和 database_name)
+engine = create_engine(
+    f"databend://{username}:{password}@{host_port_name}/{database_name}?sslmode=disable"
+)
+cursor = engine.connect()
+cursor.execute(text('DROP TABLE IF EXISTS data'))
+cursor.execute(text('CREATE TABLE IF NOT EXISTS data( Col1 TINYINT, Col2 VARCHAR )'))
+cursor.execute(text("INSERT INTO data VALUES (1,'zz')"))
+res = cursor.execute(text("SELECT * FROM data"))
+print(res.fetchall())
+```
+
+2. 运行 `python main.py`:
+
+```bash
+python main.py
+[(1, 'zz')]
+```
\ No newline at end of file
diff --git a/docs/cn/tutorials/develop/python/integrating-with-self-hosted-databend.md b/docs/cn/tutorials/programming/python/integrating-with-self-hosted-databend.md
similarity index 58%
rename from docs/cn/tutorials/develop/python/integrating-with-self-hosted-databend.md
rename to docs/cn/tutorials/programming/python/integrating-with-self-hosted-databend.md
index ad6bdafb8f..cf3c2af9ad 100644
--- a/docs/cn/tutorials/develop/python/integrating-with-self-hosted-databend.md
+++ b/docs/cn/tutorials/programming/python/integrating-with-self-hosted-databend.md
@@ -1,16 +1,18 @@
---
-title: "Python:连接自建 Databend"
+title: 集成私有化部署的 Databend
---

-本教程介绍如何通过 Python 连接本地部署的 Databend,并分别使用 `databend-driver`、`databend-sqlalchemy` Connector 以及 Engine 三种方式完成建库、建表、写入、查询与清理等操作。
+本教程演示如何使用 Python 连接私有化部署的 Databend,涵盖三种连接方法:`databend-driver`、使用连接器的 `databend-sqlalchemy` 以及使用引擎的 `databend-sqlalchemy`。

## 开始之前

-请确认已成功安装本地 Databend,详见 [本地与 Docker 部署](/guides/deploy/deploy/non-production/deploying-local)。
+在开始之前,请确保已成功安装本地 Databend。有关详细说明,请参阅 [本地和 Docker 部署](/guides/deploy/deploy/non-production/deploying-local)。

-## 步骤 1:准备 SQL 账号
+## 步骤 1:准备 SQL 用户帐户

-要让程序连接 Databend 并执行 SQL,需要在代码中提供具备相应权限的 SQL 用户。请在 Databend 中创建账号并授予必要权限。本教程示例使用用户名 `user1`、密码
`abc123`,由于程序会写入数据,因此用户需要 ALL 权限。关于 SQL 用户与权限管理,参见 [User & Role](/sql/sql-commands/ddl/user/)。
+要将程序连接到 Databend 并执行 SQL 操作,必须在代码中提供具有适当权限的 SQL 用户帐户。如有需要,请在 Databend 中创建一个,并确保该 SQL 用户仅具备必要的权限,以保证安全。
+
+本教程以 SQL 用户 'user1'(密码为 'abc123')为例。由于程序会将数据写入 Databend,因此该用户需要 ALL 权限。有关如何管理 SQL 用户及其权限,请参阅 [用户 & 角色](/sql/sql-commands/ddl/user/)。
```sql
CREATE USER user1 IDENTIFIED BY 'abc123';
@@ -19,7 +21,7 @@ GRANT ALL on *.* TO user1;
## 步骤 2:编写 Python 程序
-接下来编写一段简单程序与 Databend 交互,完成建表、插数与查询等操作。
+在此步骤中,你将创建一个与 Databend 通信的简单 Python 程序。该程序将涉及创建表、插入数据和执行数据查询等任务。
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -33,28 +35,28 @@ import TabItem from '@theme/TabItem';
pip install databend-driver
```
-2. 将以下代码保存为 `main.py`:
+2. 将以下代码复制并粘贴到文件 `main.py` 中:
```python title='main.py'
from databend_driver import BlockingDatabendClient
-# 示例:使用 SQL 用户 user1/abc123 连接本地 Databend。
+# 连接到本地 Databend,以 SQL 用户 'user1' 和密码 'abc123' 为例。
client = BlockingDatabendClient('databend://user1:abc123@127.0.0.1:8000/?sslmode=disable')
-# 创建游标与 Databend 交互
+# 创建一个 cursor 以与 Databend 交互
cursor = client.cursor()
-# 创建数据库并切换
+# 创建数据库并使用它
cursor.execute("CREATE DATABASE IF NOT EXISTS bookstore")
cursor.execute("USE bookstore")
# 创建表
cursor.execute("CREATE TABLE IF NOT EXISTS booklist(title VARCHAR, author VARCHAR, date VARCHAR)")
-# 插入数据
+# 将数据插入到表中
cursor.execute("INSERT INTO booklist VALUES('Readings in Database Systems', 'Michael Stonebraker', '2004')")
-# 查询数据
+# 从表中查询数据
cursor.execute("SELECT * FROM booklist")
rows = cursor.fetchall()
@@ -62,14 +64,15 @@ rows = cursor.fetchall()
for row in rows:
    print(f"{row[0]} {row[1]} {row[2]}")
-# 清理资源
+# 删除表和数据库
cursor.execute('DROP TABLE booklist')
cursor.execute('DROP DATABASE bookstore')
+# 关闭 cursor
cursor.close()
```
-3. 执行 `python main.py`:
+3. 
运行 `python main.py`:
```bash
python main.py
```
@@ -80,7 +83,7 @@ Readings in Database Systems Michael Stonebraker 2004
-该方式使用 databend-sqlalchemy 提供的 connector 对象,再通过 cursor 执行 SQL。
+你将使用 databend-sqlalchemy 库创建一个 connector 实例,并使用 cursor 对象执行 SQL 查询。
1. 安装 databend-sqlalchemy。
@@ -88,12 +91,13 @@ Readings in Database Systems Michael Stonebraker 2004
pip install databend-sqlalchemy
```
-2. 将以下代码保存为 `main.py`:
+2. 将以下代码复制并粘贴到文件 `main.py` 中:
```python title='main.py'
from databend_sqlalchemy import connector
-# 示例:使用 SQL 用户 user1/abc123 连接本地 Databend。
+# 连接到本地 Databend,以 SQL 用户 'user1' 和密码 'abc123' 为例。
+# 请随意使用你自己的值,同时保持相同的格式。
conn = connector.connect(f"http://user1:abc123@127.0.0.1:8000").cursor()
conn.execute("CREATE DATABASE IF NOT EXISTS bookstore")
conn.execute("USE bookstore")
@@ -107,10 +111,11 @@ for (title, author, date) in results:
conn.execute('drop table booklist')
conn.execute('drop database bookstore')
+# 关闭连接。
conn.close()
```
-3. 执行 `python main.py`:
+3. 运行 `python main.py`:
```text
Readings in Database Systems Michael Stonebraker 2004
```
-该方式使用 databend-sqlalchemy 创建 Engine,通过 `connect()` 获取连接并执行 SQL。
+你将使用 databend-sqlalchemy 库创建一个引擎实例,并通过 connect 方法获取可以执行 SQL 查询的连接。
1. 安装 databend-sqlalchemy。
@@ -128,13 +133,14 @@ Readings in Database Systems Michael Stonebraker 2004
pip install databend-sqlalchemy
```
-2. 将以下代码保存为 `main.py`:
+2. 将以下代码复制并粘贴到文件 `main.py` 中:
```python title='main.py'
from sqlalchemy import create_engine, text
-# 示例:使用 SQL 用户 user1/abc123 连接本地 Databend。
-# secure=False 表示通过 HTTP(非 HTTPS)连接。
+# 连接到本地 Databend,以 SQL 用户 'user1' 和密码 'abc123' 为例。
+# 请随意使用你自己的值,同时保持相同的格式。
+# 设置 secure=False 表示客户端将使用 HTTP(而非 HTTPS)连接 Databend。
engine = create_engine("databend://user1:abc123@127.0.0.1:8000/default?secure=False")
connection1 = engine.connect()
@@ -152,16 +158,17 @@ results = result.fetchall()
for (title, author, date) in results:
    print("{} {} {}".format(title, author, date))
+# 关闭连接。
connection1.close()
connection2.close()
engine.dispose()
```
-3. 执行 `python main.py`:
+3. 运行 `python main.py`:
```text
Readings in Database Systems Michael Stonebraker 2004
```
-
+
\ No newline at end of file
diff --git a/docs/cn/tutorials/recovery/_category_.json b/docs/cn/tutorials/recovery/_category_.json
new file mode 100644
index 0000000000..8ed819ceb1
--- /dev/null
+++ b/docs/cn/tutorials/recovery/_category_.json
@@ -0,0 +1,3 @@
+{
+  "label": "数据恢复"
+}
diff --git a/docs/cn/tutorials/recovery/bendsave.md b/docs/cn/tutorials/recovery/bendsave.md
new file mode 100644
index 0000000000..e8908ab750
--- /dev/null
+++ b/docs/cn/tutorials/recovery/bendsave.md
@@ -0,0 +1,232 @@
+---
+title: 备份与恢复 (BendSave)
+---
+
+本教程介绍如何使用 BendSave 进行数据备份和恢复。我们将使用本地 MinIO 实例作为存储后端和备份目标。
+
+## 在开始之前
+
+在开始之前,请确保你已满足以下先决条件:
+
+- 一台 Linux 机器(x86_64 或 aarch64 架构):在本教程中,我们将在 Linux 机器上部署 Databend。你可以使用本地机器、虚拟机或云实例(如 AWS EC2)。
+  - [Docker](https://www.docker.com/): 用于部署本地 MinIO 实例。
+  - [AWS CLI](https://aws.amazon.com/cli/): 用于管理 MinIO 中的存储桶(Bucket)。
+  - 如果你使用的是 AWS EC2,请确保你的安全组允许端口 `8000` 的入站流量,因为这是 BendSQL 连接到 Databend 所必需的。
+
+- BendSQL 已安装在你的本地机器上。有关如何使用各种包管理器安装 BendSQL 的说明,请参阅 [安装 BendSQL](/guides/sql-clients/bendsql/#installing-bendsql)。
+
+- Databend 发布包:从 [Databend GitHub 发布页面](https://github.com/databendlabs/databend/releases) 下载发布包。该包的 `bin` 目录中包含 `databend-bendsave` 二进制文件,这是我们在本教程中用于备份和恢复操作的工具。
+```bash
+databend-v1.2.725-nightly-x86_64-unknown-linux-gnu/
+├── bin
+│   ├── bendsql
+│   ├── databend-bendsave # 本教程中使用的 BendSave 二进制文件
+│   ├── databend-meta
+│   ├── databend-metactl
+│   └── databend-query
+├── configs
+│   ├── databend-meta.toml
+│   └── databend-query.toml
+└── ...
+```
+
+## 第一步:在 Docker 中启动 MinIO
+
+1. 
在你的 Linux 机器上启动一个 MinIO 容器。以下命令将启动一个名为 **minio** 的 MinIO 容器,并暴露端口 `9000`(用于 API)和 `9001`(用于 Web 控制台):
+
+```bash
+docker run -d --name minio \
+  -e "MINIO_ACCESS_KEY=minioadmin" \
+  -e "MINIO_SECRET_KEY=minioadmin" \
+  -p 9000:9000 \
+  -p 9001:9001 \
+  minio/minio server /data \
+  --address :9000 \
+  --console-address :9001
+```
+
+2. 将你的 MinIO 凭据设置为环境变量,然后使用 AWS CLI 创建两个存储桶(Bucket):一个用于存储备份(**backupbucket**),另一个用于存储 Databend 数据(**databend**):
+
+```bash
+export AWS_ACCESS_KEY_ID=minioadmin
+export AWS_SECRET_ACCESS_KEY=minioadmin
+
+aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://backupbucket
+aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://databend
+```
+
+## 第二步:设置 Databend
+
+1. 下载最新的 Databend 发布包并解压以获取必要的二进制文件:
+
+```bash
+wget https://github.com/databendlabs/databend/releases/download/v1.2.725-nightly/databend-dbg-v1.2.725-nightly-x86_64-unknown-linux-gnu.tar.gz
+
+tar -xzvf databend-dbg-v1.2.725-nightly-x86_64-unknown-linux-gnu.tar.gz
+```
+
+2. 配置 **configs** 文件夹中的 **databend-query.toml** 配置文件。
+
+```bash
+vi configs/databend-query.toml
+```
+
+以下显示了本教程所需的关键配置:
+
+```toml
+...
+[[query.users]]
+name = "root"
+auth_type = "no_password"
+...
+# Storage config.
+[storage]
+# fs | s3 | azblob | gcs | oss | cos
+type = "s3"
+...
+# To use an Amazon S3-like storage service, uncomment this block and set your values.
+[storage.s3]
+bucket = "databend"
+endpoint_url = "http://127.0.0.1:9000"
+access_key_id = "minioadmin"
+secret_access_key = "minioadmin"
+enable_virtual_host_style = false
+```
+
+3. 使用以下命令启动 Meta 和 Query 服务:
+
+```bash
+./databend-meta -c ../configs/databend-meta.toml > meta.log 2>&1 &
+```
+
+```bash
+./databend-query -c ../configs/databend-query.toml > query.log 2>&1 &
+```
+
+启动服务后,通过检查它们的健康检查端点来验证它们是否正在运行。成功的响应应返回 HTTP 状态 200 OK。
+
+```bash
+curl -I http://127.0.0.1:28002/v1/health
+
+curl -I http://127.0.0.1:8080/v1/health
+```
+
+4. 
使用 BendSQL 从你的本地机器连接到 Databend 实例,然后应用你的 Databend 企业版(Enterprise)许可证,创建一个表并插入一些示例数据。
+
+```bash
+bendsql -h 
+```
+
+```sql
+SET GLOBAL enterprise_license='';
+```
+
+```sql
+CREATE TABLE books (
+    id BIGINT UNSIGNED,
+    title VARCHAR,
+    genre VARCHAR DEFAULT 'General'
+);
+
+INSERT INTO books(id, title) VALUES(1, 'Invisible Stars');
+```
+
+5. 回到你的 Linux 机器上,验证表数据是否已存储在你的 Databend 存储桶(Bucket)中:
+
+```bash
+aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://databend/ --recursive
+```
+
+```bash
+2025-04-07 15:27:06 748 1/169/_b/h0196160323247b1cab49be6060d42df8_v2.parquet
+2025-04-07 15:27:06 646 1/169/_sg/h0196160323247c5eb0a1a860a6442c70_v4.mpk
+2025-04-07 15:27:06 550 1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk
+2025-04-07 15:27:06 143 1/169/last_snapshot_location_hint_v2
+```
+
+## 第三步:使用 BendSave 备份
+
+1. 运行以下命令将你的 Databend 数据备份到 MinIO 中的 **backupbucket**:
+
+```bash
+export AWS_ACCESS_KEY_ID=minioadmin
+export AWS_SECRET_ACCESS_KEY=minioadmin
+
+./databend-bendsave backup \
+  --from ../configs/databend-query.toml \
+  --to 's3://backupbucket?endpoint=http://127.0.0.1:9000/&region=us-east-1'
+: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
+Backing up from ../configs/databend-query.toml to s3://backupbucket?endpoint=http://127.0.0.1:9000/&region=us-east-1
+```
+
+2. 备份完成后,你可以通过列出 **backupbucket** 的内容来验证文件是否已写入:
+
+```bash
+aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://backupbucket/ --recursive
+```
+
+```bash
+2025-04-07 15:44:29 748 1/169/_b/h0196160323247b1cab49be6060d42df8_v2.parquet
+2025-04-07 15:44:29 646 1/169/_sg/h0196160323247c5eb0a1a860a6442c70_v4.mpk
+2025-04-07 15:44:29 550 1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk
+2025-04-07 15:44:29 143 1/169/last_snapshot_location_hint_v2
+2025-04-07 15:44:29 344781 databend_meta.db
+```
+
+## 第四步:使用 BendSave 恢复
+
+1. 删除 **databend** 存储桶(Bucket)中的所有文件:
+
+```bash
+aws --endpoint-url http://127.0.0.1:9000 s3 rm s3://databend/ --recursive
+```
+
+2. 
删除后,你可以使用 BendSQL 验证在 Databend 中查询该表会失败:
+
+```sql
+SELECT * FROM books;
+```
+
+```bash
+error: APIError: QueryFailed: [3001]NotFound (persistent) at read, context: { uri: http://127.0.0.1:9000/databend/1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk, response: Parts { status: 404, version: HTTP/1.1, headers: {"accept-ranges": "bytes", "content-length": "423", "content-type": "application/xml", "server": "MinIO", "strict-transport-security": "max-age=31536000; includeSubDomains", "vary": "Origin", "vary": "Accept-Encoding", "x-amz-id-2": "dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8", "x-amz-request-id": "18342C51C209C7E9", "x-content-type-options": "nosniff", "x-ratelimit-limit": "144", "x-ratelimit-remaining": "144", "x-xss-protection": "1; mode=block", "date": "Mon, 07 Apr 2025 23:14:45 GMT"} }, service: s3, path: 1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk, range: 0- } => S3Error { code: "NoSuchKey", message: "The specified key does not exist.", resource: "/databend/1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk", request_id: "18342C51C209C7E9" }
+```
+
+3. 运行以下命令将你的 Databend 数据恢复到 MinIO 中的 **databend** 存储桶(Bucket):
+
+```bash
+./databend-bendsave restore \
+  --from "s3://backupbucket?endpoint=http://127.0.0.1:9000/&region=us-east-1" \
+  --to-query ../configs/databend-query.toml \
+  --to-meta ../configs/databend-meta.toml \
+  --confirm
+: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
+Restoring from s3://backupbucket?endpoint=http://127.0.0.1:9000/&region=us-east-1 to query ../configs/databend-query.toml and meta ../configs/databend-meta.toml with confirmation
+```
+
+4. 
恢复完成后,你可以通过列出 **databend** 存储桶(Bucket)的内容来验证文件是否已写回:
+
+```bash
+aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://databend/ --recursive
+```
+
+```bash
+2025-04-07 23:21:39 748 1/169/_b/h0196160323247b1cab49be6060d42df8_v2.parquet
+2025-04-07 23:21:39 646 1/169/_sg/h0196160323247c5eb0a1a860a6442c70_v4.mpk
+2025-04-07 23:21:39 550 1/169/_ss/h019610dcc72474adb32ef43698db2a09_v4.mpk
+2025-04-07 23:21:39 143 1/169/last_snapshot_location_hint_v2
+2025-04-07 23:21:39 344781 databend_meta.db
+```
+
+5. 再次使用 BendSQL 查询该表,你会看到查询现在成功了:
+
+```sql
+SELECT * FROM books;
+```
+
+```sql
+┌────────────────────────────────────────────────────────┐
+│ id │ title │ genre │
+├──────────────────┼──────────────────┼──────────────────┤
+│ 1 │ Invisible Stars │ General │
+└────────────────────────────────────────────────────────┘
+```
\ No newline at end of file
diff --git a/docs/en/release-notes/databend.md b/docs/en/release-notes/databend.md
index 014bed449f..3700c9c965 100644
--- a/docs/en/release-notes/databend.md
+++ b/docs/en/release-notes/databend.md
@@ -12,244 +12,7 @@ This page provides information about recent features, enhancements, and bug fixe
-
-
-## Nov 24, 2025 (v1.2.848-nightly)
-
-## What's Changed
-### Thoughtful Bug Fix 🔧
-* fix: unable to get field on rank limit when rule_eager_aggregation applied by **@KKould** in [#19007](https://github.com/databendlabs/databend/pull/19007)
-* fix: pivot extra columns on projection by **@KKould** in [#18994](https://github.com/databendlabs/databend/pull/18994)
-### Code Refactor 🎉
-* refactor: bump crates arrow* and parquet to version 56 by **@dantengsky** in [#18997](https://github.com/databendlabs/databend/pull/18997)
-### Others 📒
-* chore(ut): support for const columns as input to function unit tests by **@forsaken628** in [#19009](https://github.com/databendlabs/databend/pull/19009)
-* chore(query): enable to cache the previous python import directory for python udf by **@sundy-li** in 
[#19003](https://github.com/databendlabs/databend/pull/19003) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.848-nightly - - - - - -## Nov 21, 2025 (v1.2.847-nightly) - -## What's Changed -### Others 📒 -* chore: make query service start after meta by **@everpcpc** in [#19002](https://github.com/databendlabs/databend/pull/19002) -* chore(query): Refresh virtual column support limit and selection by **@b41sh** in [#19001](https://github.com/databendlabs/databend/pull/19001) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.847-nightly - - - - - -## Nov 21, 2025 (v1.2.846-nightly) - -## What's Changed -### Thoughtful Bug Fix 🔧 -* fix: Block::to_record_batch fail when a column is array of NULLs. by **@youngsofun** in [#18989](https://github.com/databendlabs/databend/pull/18989) -* fix: `desc password policy ` column types must match schema types. by **@youngsofun** in [#18990](https://github.com/databendlabs/databend/pull/18990) -### Code Refactor 🎉 -* refactor(query): pass timezone by reference to avoid Arc churn by **@TCeason** in [#18998](https://github.com/databendlabs/databend/pull/18998) -* refactor(query): remove potential performance hotspots caused by fetch_add by **@zhang2014** in [#18995](https://github.com/databendlabs/databend/pull/18995) -### Others 📒 -* chore(query): Accelerate vector index quantization score calculation with SIMD by **@b41sh** in [#18957](https://github.com/databendlabs/databend/pull/18957) -* chore(query): clamp timestamps to jiff range before timezone conversion by **@TCeason** in [#18996](https://github.com/databendlabs/databend/pull/18996) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.846-nightly - - - - - -## Nov 20, 2025 (v1.2.845-nightly) - -## What's Changed -### Exciting New Features ✨ -* feat: impl UDTF Server by **@KKould** in [#18947](https://github.com/databendlabs/databend/pull/18947) -* feat(query):masking policy 
support rbac by **@TCeason** in [#18982](https://github.com/databendlabs/databend/pull/18982) -* feat: improve runtime filter [Part 2] by **@SkyFan2002** in [#18955](https://github.com/databendlabs/databend/pull/18955) -### Build/Testing/CI Infra Changes 🔌 -* ci: upgrade k3s for meta chaos by **@everpcpc** in [#18983](https://github.com/databendlabs/databend/pull/18983) -### Others 📒 -* chore: bump opendal to 0.54.1 by **@dqhl76** in [#18970](https://github.com/databendlabs/databend/pull/18970) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.845-nightly - - - - - -## Nov 18, 2025 (v1.2.844-nightly) - -## What's Changed -### Others 📒 -* chore: adjust the storage method of timestamp_tz so that the timestamp value is retrieved directly. by **@KKould** in [#18974](https://github.com/databendlabs/databend/pull/18974) -* chore: add more logs to cover aggregate spill by **@dqhl76** in [#18980](https://github.com/databendlabs/databend/pull/18980) -* chore(query): Virtual column support external table by **@b41sh** in [#18981](https://github.com/databendlabs/databend/pull/18981) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.844-nightly - - - - - -## Nov 18, 2025 (v1.2.843-nightly) - -## What's Changed -### Thoughtful Bug Fix 🔧 -* fix(query): count_distinct needs to handle nullable correctly by **@forsaken628** in [#18973](https://github.com/databendlabs/databend/pull/18973) -### Build/Testing/CI Infra Changes 🔌 -* ci: fix dependency for test cloud control server by **@everpcpc** in [#18978](https://github.com/databendlabs/databend/pull/18978) -### Others 📒 -* chore(query): improve python udf script by **@sundy-li** in [#18960](https://github.com/databendlabs/databend/pull/18960) -* chore(query): delete replace masking/row access policy by **@TCeason** in [#18972](https://github.com/databendlabs/databend/pull/18972) -* chore(query): Optimize Optimizer Performance by Reducing Redundant Computations by 
**@b41sh** in [#18979](https://github.com/databendlabs/databend/pull/18979) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.843-nightly - - - - - -## Nov 17, 2025 (v1.2.842-nightly) - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.842-nightly - - - - - -## Nov 14, 2025 (v1.2.841-nightly) - -## What's Changed -### Exciting New Features ✨ -* feat: http handler return geometry_output_format with data. by **@youngsofun** in [#18963](https://github.com/databendlabs/databend/pull/18963) -* feat(query): add table statistics admin api by **@zhang2014** in [#18967](https://github.com/databendlabs/databend/pull/18967) -* feat: upgrade nom to version 8.0.0 and accelerate expr_element using the first token. by **@KKould** in [#18935](https://github.com/databendlabs/databend/pull/18935) -### Thoughtful Bug Fix 🔧 -* fix(query): or_filter get incorrectly result by **@zhyass** in [#18965](https://github.com/databendlabs/databend/pull/18965) -* fix(query): Fix copy into Variant field panic with infinite number by **@b41sh** in [#18962](https://github.com/databendlabs/databend/pull/18962) -### Code Refactor 🎉 -* refactor: stream spill triggering for partial aggregation by **@dqhl76** in [#18943](https://github.com/databendlabs/databend/pull/18943) -* chore: optimize ExprBloomFilter to use references instead of clones by **@dantengsky** in [#18157](https://github.com/databendlabs/databend/pull/18157) -### Others 📒 -* chore(query): adjust the default Bloom filter enable setting by **@zhang2014** in [#18966](https://github.com/databendlabs/databend/pull/18966) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.841-nightly - - - - - -## Nov 14, 2025 (v1.2.840-nightly) - -## What's Changed -### Exciting New Features ✨ -* feat: new fuse table option `enable_parquet_dictionary` by **@dantengsky** in [#17675](https://github.com/databendlabs/databend/pull/17675) -### Thoughtful Bug Fix 🔧 -* 
fix: timestamp_tz display by **@KKould** in [#18958](https://github.com/databendlabs/databend/pull/18958) -### Others 📒 -* chore: flaky test by **@zhyass** in [#18959](https://github.com/databendlabs/databend/pull/18959) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.840-nightly - - - - - -## Nov 13, 2025 (v1.2.839-nightly) - -## What's Changed -### Thoughtful Bug Fix 🔧 -* fix: return timezone when set in query level. by **@youngsofun** in [#18952](https://github.com/databendlabs/databend/pull/18952) -* fix(query): Skip sequence lookups when re-binding stored defaults by **@TCeason** in [#18946](https://github.com/databendlabs/databend/pull/18946) -* fix(query): build mysql tls config by **@everpcpc** in [#18953](https://github.com/databendlabs/databend/pull/18953) -* fix(query): defer MySQL session creation until the handshake completes by **@everpcpc** in [#18956](https://github.com/databendlabs/databend/pull/18956) -### Code Refactor 🎉 -* refactor(query): prevent masking/row access policy name conflicts by **@TCeason** in [#18937](https://github.com/databendlabs/databend/pull/18937) -* refactor(query): optimize visibility checker for large-scale deployments improved 10x by **@TCeason** in [#18954](https://github.com/databendlabs/databend/pull/18954) -### Others 📒 -* chore(query): improve resolve large array by **@sundy-li** in [#18949](https://github.com/databendlabs/databend/pull/18949) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.839-nightly - - - - - -## Nov 12, 2025 (v1.2.838-nightly) - -## What's Changed -### Exciting New Features ✨ -* feat(query): support policy_reference table function by **@TCeason** in [#18944](https://github.com/databendlabs/databend/pull/18944) -* feat: improve runtime filter [Part 1] by **@SkyFan2002** in [#18893](https://github.com/databendlabs/databend/pull/18893) -### Thoughtful Bug Fix 🔧 -* fix(query): fix query function parsing nested conditions by 
**@b41sh** in [#18940](https://github.com/databendlabs/databend/pull/18940) -* fix(query): handle complex types in procedure argument parsing by **@TCeason** in [#18929](https://github.com/databendlabs/databend/pull/18929) -* fix: error in multi statement transaction retry by **@SkyFan2002** in [#18934](https://github.com/databendlabs/databend/pull/18934) -* fix: flaky test progress not updated in real time in cluster mode by **@youngsofun** in [#18945](https://github.com/databendlabs/databend/pull/18945) -### Code Refactor 🎉 -* refactor(binder): move the rewrite of ASOF JOIN to the logical plan and remove scalar_expr from `DerivedColumn` by **@forsaken628** in [#18938](https://github.com/databendlabs/databend/pull/18938) -* refactor(query): optimized `UnaryState` design and simplified `string_agg` implementation by **@forsaken628** in [#18941](https://github.com/databendlabs/databend/pull/18941) -* refactor(query): rename exchange hash to node to node hash by **@zhang2014** in [#18948](https://github.com/databendlabs/databend/pull/18948) -### Others 📒 -* chore(query): ignore assert const in memo logical test by **@zhang2014** in [#18950](https://github.com/databendlabs/databend/pull/18950) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.838-nightly - - - - - -## Nov 10, 2025 (v1.2.837-nightly) - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.837-nightly - - - - - -## Nov 8, 2025 (v1.2.836-nightly) - -## What's Changed -### Exciting New Features ✨ -* feat(query): Support `bitmap_to_array` function by **@b41sh** in [#18927](https://github.com/databendlabs/databend/pull/18927) -* feat(query): prevent dropping in-use security policies by **@TCeason** in [#18918](https://github.com/databendlabs/databend/pull/18918) -* feat(mysql): add JDBC healthcheck regex to support SELECT 1 FROM DUAL by **@yufan022** in [#18933](https://github.com/databendlabs/databend/pull/18933) -* feat: return timezone in HTTP 
handler. by **@youngsofun** in [#18936](https://github.com/databendlabs/databend/pull/18936) -### Thoughtful Bug Fix 🔧 -* fix: FilterExecutor needs to handle projections when `enable_selector_executor` is turned off. by **@forsaken628** in [#18921](https://github.com/databendlabs/databend/pull/18921) -* fix(query): fix Inverted/Vector index panic with Native Storage Format by **@b41sh** in [#18932](https://github.com/databendlabs/databend/pull/18932) -* fix(query): optimize the cost estimation of some query plans by **@zhang2014** in [#18926](https://github.com/databendlabs/databend/pull/18926) -* fix: alter column with specify approx distinct by **@zhyass** in [#18928](https://github.com/databendlabs/databend/pull/18928) -### Code Refactor 🎉 -* refactor: refine experimental final aggregate spill by **@dqhl76** in [#18907](https://github.com/databendlabs/databend/pull/18907) -* refactor(query): AccessType downcasts now return Result so mismatches surface useful diagnostics by **@forsaken628** in [#18923](https://github.com/databendlabs/databend/pull/18923) -* refactor(query): merge pipeline core, sources and sinks crate by **@zhang2014** in [#18939](https://github.com/databendlabs/databend/pull/18939) -### Others 📒 -* chore: remove fixeme on TimestampTz by **@KKould** in [#18924](https://github.com/databendlabs/databend/pull/18924) -* chore: fixed time zone on shanghai to fix flasky 02_0079_function_interval.test by **@KKould** in [#18930](https://github.com/databendlabs/databend/pull/18930) -* chore: DataType::TimestampTz display: "TimestampTz" -> "Timestamp_Tz" by **@KKould** in [#18931](https://github.com/databendlabs/databend/pull/18931) - - -**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.836-nightly - - - - + ## Nov 4, 2025 (v1.2.835-nightly) @@ -272,7 +35,7 @@ This page provides information about recent features, enhancements, and bug fixe - + ## Nov 3, 2025 (v1.2.834-nightly) @@ -594,4 +357,283 @@ This page provides 
information about recent features, enhancements, and bug fixe + + +## Sep 24, 2025 (v1.2.818-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat(meta): add member-list subcommand to databend-metactl by **@drmingdrmer** in [#18760](https://github.com/databendlabs/databend/pull/18760) +* feat(meta-service): add snapshot V004 streaming protocol by **@drmingdrmer** in [#18763](https://github.com/databendlabs/databend/pull/18763) +### Thoughtful Bug Fix 🔧 +* fix: auto commit of ddl not work when calling procedure in transaction by **@SkyFan2002** in [#18753](https://github.com/databendlabs/databend/pull/18753) +* fix: vacuum tables that are dropped by `create or replace` statement by **@dantengsky** in [#18751](https://github.com/databendlabs/databend/pull/18751) +* fix(query): fix data lost caused by nullable in spill by **@zhang2014** in [#18766](https://github.com/databendlabs/databend/pull/18766) +### Code Refactor 🎉 +* refactor(query): improve the readability of aggregate function hash table by **@forsaken628** in [#18747](https://github.com/databendlabs/databend/pull/18747) +* refactor(query): Optimize Virtual Column Write Performance by **@b41sh** in [#18752](https://github.com/databendlabs/databend/pull/18752) +### Others 📒 +* chore: resolve post-merge compilation failure after KvApi refactoring by **@dantengsky** in [#18761](https://github.com/databendlabs/databend/pull/18761) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.818-nightly + + + + + +## Sep 22, 2025 (v1.2.817-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: databend-metabench: benchmark list by **@drmingdrmer** in [#18745](https://github.com/databendlabs/databend/pull/18745) +* feat: /v1/status include last_query_request_at. 
by **@youngsofun** in [#18750](https://github.com/databendlabs/databend/pull/18750) +### Thoughtful Bug Fix 🔧 +* fix: query dropped table in fuse_time_travel_size() report error by **@SkyFan2002** in [#18748](https://github.com/databendlabs/databend/pull/18748) +### Code Refactor 🎉 +* refactor(meta-service): separate raft-log-store and raft-state-machine store by **@drmingdrmer** in [#18746](https://github.com/databendlabs/databend/pull/18746) +* refactor: meta-service: simplify raft store and state machine by **@drmingdrmer** in [#18749](https://github.com/databendlabs/databend/pull/18749) +* refactor(query): stream style block writer for hash join spill by **@zhang2014** in [#18742](https://github.com/databendlabs/databend/pull/18742) +* refactor(native): preallocate zero offsets before compression by **@BohuTANG** in [#18756](https://github.com/databendlabs/databend/pull/18756) +* refactor: meta-service: compact immutable levels periodically by **@drmingdrmer** in [#18757](https://github.com/databendlabs/databend/pull/18757) +* refactor(query): add async buffer for spill data by **@zhang2014** in [#18758](https://github.com/databendlabs/databend/pull/18758) +### Build/Testing/CI Infra Changes 🔌 +* ci: add compat test for databend-go. 
by **@youngsofun** in [#18734](https://github.com/databendlabs/databend/pull/18734) +### Others 📒 +* chore: move auto implemented KvApi methods to Ext trait by **@drmingdrmer** in [#18759](https://github.com/databendlabs/databend/pull/18759) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.817-nightly + + + + + +## Sep 19, 2025 (v1.2.816-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat(rbac): procedure object support rbac by **@TCeason** in [#18730](https://github.com/databendlabs/databend/pull/18730) +### Thoughtful Bug Fix 🔧 +* fix(query): reduce redundant result-set-spill logs during query waits by **@BohuTANG** in [#18741](https://github.com/databendlabs/databend/pull/18741) +* fix: fuse_vacuum2 panic while vauuming empty table with data_retentio… by **@dantengsky** in [#18744](https://github.com/databendlabs/databend/pull/18744) +### Code Refactor 🎉 +* refactor: compactor internal structure by **@drmingdrmer** in [#18738](https://github.com/databendlabs/databend/pull/18738) +* refactor(query): refactor the join partition to reduce memory amplification by **@zhang2014** in [#18732](https://github.com/databendlabs/databend/pull/18732) +* refactor: Make the ownership key deletion and table/database replace in the same transaction by **@TCeason** in [#18739](https://github.com/databendlabs/databend/pull/18739) +### Others 📒 +* chore(meta-service): re-organize tests for raft-store by **@drmingdrmer** in [#18740](https://github.com/databendlabs/databend/pull/18740) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.816-nightly + + + + + +## Sep 18, 2025 (v1.2.815-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: add ANY_VALUE as alias for ANY aggregate function by **@BohuTANG** in [#18728](https://github.com/databendlabs/databend/pull/18728) +* feat: add Immutable::compact to merge two level by **@drmingdrmer** in 
[#18731](https://github.com/databendlabs/databend/pull/18731) +### Thoughtful Bug Fix 🔧 +* fix: last query id not only contain those cached. by **@youngsofun** in [#18727](https://github.com/databendlabs/databend/pull/18727) +### Code Refactor 🎉 +* refactor: raft-store: in-memory readonly level compaction by **@drmingdrmer** in [#18736](https://github.com/databendlabs/databend/pull/18736) +* refactor: new setting `max_vacuum_threads` by **@dantengsky** in [#18737](https://github.com/databendlabs/databend/pull/18737) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.815-nightly + + + + + +## Sep 17, 2025 (v1.2.814-nightly) + +## What's Changed +### Thoughtful Bug Fix 🔧 +* fix(query): ensure jwt roles to user if not exists by **@everpcpc** in [#18720](https://github.com/databendlabs/databend/pull/18720) +* fix(query): Set Parquet default encoding to `PLAIN` to ensure data compatibility by **@b41sh** in [#18724](https://github.com/databendlabs/databend/pull/18724) +### Others 📒 +* chore: replace Arc<Mutex<SysData>> with SysData by **@drmingdrmer** in [#18723](https://github.com/databendlabs/databend/pull/18723) +* chore: add error check on private task test script by **@KKould** in [#18698](https://github.com/databendlabs/databend/pull/18698) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.814-nightly + + + + + +## Sep 16, 2025 (v1.2.813-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat(query): support result set spilling by **@forsaken628** in [#18679](https://github.com/databendlabs/databend/pull/18679) +### Thoughtful Bug Fix 🔧 +* fix(meta-service): detach the SysData to avoid race condition by **@drmingdrmer** in [#18722](https://github.com/databendlabs/databend/pull/18722) +### Code Refactor 🎉 +* refactor(raft-store): update trait interfaces and restructure leveled map by **@drmingdrmer** in [#18719](https://github.com/databendlabs/databend/pull/18719) +### Documentation 📔 +* 
docs(raft-store): enhance documentation across all modules by **@drmingdrmer** in [#18721](https://github.com/databendlabs/databend/pull/18721) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.813-nightly + + + + + +## Sep 15, 2025 (v1.2.812-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: `infer_schema` expands csv and ndjson support by **@KKould** in [#18552](https://github.com/databendlabs/databend/pull/18552) +### Thoughtful Bug Fix 🔧 +* fix(query): column default expr should not cause seq.nextval modify by **@b41sh** in [#18694](https://github.com/databendlabs/databend/pull/18694) +* fix: `vacuum2` all should ignore SYSTEM dbs by **@dantengsky** in [#18712](https://github.com/databendlabs/databend/pull/18712) +* fix(meta-service): snapshot key count should be reset by **@drmingdrmer** in [#18718](https://github.com/databendlabs/databend/pull/18718) +### Code Refactor 🎉 +* refactor(meta-service): respond mget items in stream instead of in a vector by **@drmingdrmer** in [#18716](https://github.com/databendlabs/databend/pull/18716) +* refactor(meta-service0): rotbl: use `spawn_blocking()` instead of `blocking_in_place()` by **@drmingdrmer** in [#18717](https://github.com/databendlabs/databend/pull/18717) +### Build/Testing/CI Infra Changes 🔌 +* ci: migrate `09_http_handler` to pytest by **@forsaken628** in [#18714](https://github.com/databendlabs/databend/pull/18714) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.812-nightly + + + + + +## Sep 11, 2025 (v1.2.811-nightly) + +## What's Changed +### Thoughtful Bug Fix 🔧 +* fix: error occurred when retrying transaction on empty table by **@SkyFan2002** in [#18703](https://github.com/databendlabs/databend/pull/18703) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.811-nightly + + + + + +## Sep 10, 2025 (v1.2.810-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: impl Date & 
Timestamp on `RANGE BETWEEN` by **@KKould** in [#18696](https://github.com/databendlabs/databend/pull/18696) +* feat: add pybend Python binding with S3 connection and stage support by **@BohuTANG** in [#18704](https://github.com/databendlabs/databend/pull/18704) +* feat(query): add api to list stream by **@everpcpc** in [#18701](https://github.com/databendlabs/databend/pull/18701) +### Thoughtful Bug Fix 🔧 +* fix: collected profiles lost in cluster mode by **@dqhl76** in [#18680](https://github.com/databendlabs/databend/pull/18680) +* fix(python-binding): complete Python binding CI configuration by **@BohuTANG** in [#18686](https://github.com/databendlabs/databend/pull/18686) +* fix(python-binding): resolve virtual environment permission conflicts in CI by **@BohuTANG** in [#18708](https://github.com/databendlabs/databend/pull/18708) +* fix: error when using materialized CTE in multi-statement transactions by **@SkyFan2002** in [#18707](https://github.com/databendlabs/databend/pull/18707) +* fix(query): add config to the embed mode to clarify this mode by **@zhang2014** in [#18710](https://github.com/databendlabs/databend/pull/18710) +### Build/Testing/CI Infra Changes 🔌 +* ci: run behave test of bendsql for compact. by **@youngsofun** in [#18697](https://github.com/databendlabs/databend/pull/18697) +* ci: Temporarily disable warehouse testing of private tasks by **@KKould** in [#18709](https://github.com/databendlabs/databend/pull/18709) +### Others 📒 +* chore(python-binding): documentation and PyPI metadata by **@BohuTANG** in [#18711](https://github.com/databendlabs/databend/pull/18711) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.810-nightly + + + + + +## Sep 8, 2025 (v1.2.809-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: support reset of worksheet session. 
by **@youngsofun** in [#18688](https://github.com/databendlabs/databend/pull/18688) +### Thoughtful Bug Fix 🔧 +* fix(query): fix unable to cast Variant Nullable type to Int32 type in MERGE INTO by **@b41sh** in [#18687](https://github.com/databendlabs/databend/pull/18687) +* fix: meta-semaphore: re-connect when no event received by **@drmingdrmer** in [#18690](https://github.com/databendlabs/databend/pull/18690) +### Code Refactor 🎉 +* refactor(meta-semaphore): handle errors occurring during new-stream, lease-extend by **@drmingdrmer** in [#18695](https://github.com/databendlabs/databend/pull/18695) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.809-nightly + + + + + +## Sep 8, 2025 (v1.2.808-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat: support Check Constraint by **@KKould** in [#18661](https://github.com/databendlabs/databend/pull/18661) +* feat(parser): add intelligent SQL error suggestion system by **@BohuTANG** in [#18670](https://github.com/databendlabs/databend/pull/18670) +* feat: enhance resource scheduling logs with clear status and configuration details by **@BohuTANG** in [#18684](https://github.com/databendlabs/databend/pull/18684) +* feat(meta-semaphore): allow specifying timestamp as semaphore seq by **@drmingdrmer** in [#18685](https://github.com/databendlabs/databend/pull/18685) +### Thoughtful Bug Fix 🔧 +* fix: clean `db_id_table_name` during vacuuming dropped tables by **@dantengsky** in [#18665](https://github.com/databendlabs/databend/pull/18665) +* fix: forbid transform with where clause. 
by **@youngsofun** in [#18681](https://github.com/databendlabs/databend/pull/18681) +* fix(query): fix incorrect order of group by items with CTE or subquery by **@sundy-li** in [#18692](https://github.com/databendlabs/databend/pull/18692) +### Code Refactor 🎉 +* refactor(meta): extract utilities from monolithic util.rs by **@drmingdrmer** in [#18678](https://github.com/databendlabs/databend/pull/18678) +* refactor(query): split Spiller to provide more scalability by **@forsaken628** in [#18691](https://github.com/databendlabs/databend/pull/18691) +### Build/Testing/CI Infra Changes 🔌 +* ci: compat test for JDBC use test from main. by **@youngsofun** in [#18668](https://github.com/databendlabs/databend/pull/18668) +### Others 📒 +* chore: add test about create sequence to keep old version by **@TCeason** in [#18673](https://github.com/databendlabs/databend/pull/18673) +* chore: add some log for runtime filter by **@SkyFan2002** in [#18674](https://github.com/databendlabs/databend/pull/18674) +* chore: add profile for runtime filter by **@SkyFan2002** in [#18675](https://github.com/databendlabs/databend/pull/18675) +* chore: catch `to_date`/`to_timestamp` unwrap by **@KKould** in [#18677](https://github.com/databendlabs/databend/pull/18677) +* chore(query): add retry for semaphore queue by **@zhang2014** in [#18689](https://github.com/databendlabs/databend/pull/18689) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.808-nightly + + + + + +## Sep 3, 2025 (v1.2.807-nightly) + +## What's Changed +### Exciting New Features ✨ +* feat(query): Add SecureFilter for Row Access Policies and Stats Privacy by **@TCeason** in [#18623](https://github.com/databendlabs/databend/pull/18623) +* feat(query): support `start` and `increment` options for sequence creation by **@TCeason** in [#18659](https://github.com/databendlabs/databend/pull/18659) +### Thoughtful Bug Fix 🔧 +* fix(rbac): create or replace ownership_object should delete the old 
ownership key by **@TCeason** in [#18667](https://github.com/databendlabs/databend/pull/18667) +* fix(history-table): stop heartbeat when another node starts by **@dqhl76** in [#18664](https://github.com/databendlabs/databend/pull/18664) +### Code Refactor 🎉 +* refactor: extract garbage collection api to garbage_collection_api.rs by **@drmingdrmer** in [#18663](https://github.com/databendlabs/databend/pull/18663) +* refactor(meta): complete SchemaApi trait decomposition by **@drmingdrmer** in [#18669](https://github.com/databendlabs/databend/pull/18669) +### Others 📒 +* chore: enable distributed recluster by **@zhyass** in [#18644](https://github.com/databendlabs/databend/pull/18644) +* chore(ci): make ci success by **@TCeason** in [#18672](https://github.com/databendlabs/databend/pull/18672) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.807-nightly + + + + + +## Sep 2, 2025 (v1.2.806-nightly) + +## What's Changed +### Thoughtful Bug Fix 🔧 +* fix(query): try fix hang for cluster aggregate by **@zhang2014** in [#18655](https://github.com/databendlabs/databend/pull/18655) +### Code Refactor 🎉 +* refactor(schema-api): extract SecurityApi trait by **@drmingdrmer** in [#18658](https://github.com/databendlabs/databend/pull/18658) +* refactor(query): remove useless ee feature by **@zhang2014** in [#18660](https://github.com/databendlabs/databend/pull/18660) +### Build/Testing/CI Infra Changes 🔌 +* ci: fix download artifact for sqlsmith by **@everpcpc** in [#18662](https://github.com/databendlabs/databend/pull/18662) +* ci: ttc test with nginx and minio. 
by **@youngsofun** in [#18657](https://github.com/databendlabs/databend/pull/18657) + + +**Full Changelog**: https://github.com/databendlabs/databend/releases/tag/v1.2.806-nightly + + + diff --git a/package.json b/package.json index ba98a03137..3126c98ce7 100644 --- a/package.json +++ b/package.json @@ -39,10 +39,9 @@ "@mdx-js/react": "^3.0.0", "@types/turndown": "^5.0.5", "ahooks": "^3.8.0", - "antd": "^6.0.0", + "antd": "^5.24.8", "axios": "^1.13.2", "clsx": "^2.0.0", - "copy-to-clipboard": "^3.3.3", "copyforjs": "^1.0.6", "databend-logos": "^0.0.16", "docusaurus-plugin-devserver": "^1.0.6", diff --git a/site-config.ts b/site-config.ts index 9a60de71ab..e75114787c 100644 --- a/site-config.ts +++ b/site-config.ts @@ -34,4 +34,4 @@ export const ASKBEND_URL = "https://ask.databend.com"; export const tagline = "Databend - Your best alternative to Snowflake. Cost-effective and simple for massive-scale analytics." -export const announcementBarContent = `⭐️ If you like Databend, give it a star on GitHub and follow us on X(Twitter) ${XSvg}` \ No newline at end of file +export const announcementBarContent = `⭐️ If you like Databend, give it a star on GitHub and follow us on X(Twitter) ${XSvg}` diff --git a/src/css/menu.scss b/src/css/menu.scss index 149472e253..d7f62febae 100644 --- a/src/css/menu.scss +++ b/src/css/menu.scss @@ -3,21 +3,22 @@ nav.menu.thin-scrollbar { } .menu { font-weight: var(--ifm-font-weight-regular); - font-size: 14px; - .menu__link { + font-size:14px; + .menu__link{ padding-top: 0.375rem; padding-bottom: 0.375rem; border-radius: 0.25rem; line-height: 1.45rem; + &.menu__link--active { + font-weight: 600; + } } } -.menu__caret:before, -.menu__link--sublist-caret:after { +.menu__caret:before, .menu__link--sublist-caret:after{ display: none; } -.menu__caret:after, -.menu__link--sublist-caret:before { - content: ""; +.menu__caret:after, .menu__link--sublist-caret:before { + content: ''; position: absolute; right: 0.8rem; top: 33%; @@ -36,10 +37,11 @@ 
nav.menu.thin-scrollbar { } .menu__list-item--collapsed .menu__caret:after { transform: rotate(-135deg); + } -.menu__caret { +.menu__caret{ &[aria-expanded="true"] { - &::after { + &::after{ transform: rotate(-45deg); } } @@ -53,11 +55,11 @@ nav.menu.thin-scrollbar { } .menu__link--sublist-caret { &[aria-expanded="true"] { - &::before { + &::before{ transform: rotate(-45deg); } } } .menu__list .menu__list { margin-top: 0; -} +} \ No newline at end of file diff --git a/src/css/navbar.scss b/src/css/navbar.scss index 3f1418092e..40deaf814d 100644 --- a/src/css/navbar.scss +++ b/src/css/navbar.scss @@ -58,6 +58,11 @@ --ifm-navbar-search-input-background-color: var(--color-fill-0) !important; --ifm-navbar-search-input-color: var(--color-text-1); } +.navbar__items { + div[class*="colorModeToggle"] { + margin-left: 1.5rem; + } +} // navbar end .navbar__title { diff --git a/yarn.lock b/yarn.lock index 89ff047d07..904a6697de 100644 --- a/yarn.lock +++ b/yarn.lock @@ -203,26 +203,39 @@ dependencies: "@ctrl/tinycolor" "^3.6.1" -"@ant-design/colors@^8.0.0": - version "8.0.0" - resolved "https://registry.npmmirror.com/@ant-design/colors/-/colors-8.0.0.tgz#92b5aa1cd44896b62c7b67133b4d5a6a00266162" - integrity sha512-6YzkKCw30EI/E9kHOIXsQDHmMvTllT8STzjMb4K2qzit33RW2pqCJP0sk+hidBntXxE+Vz4n1+RvCTfBw6OErw== +"@ant-design/colors@^7.2.0": + version "7.2.0" + resolved "https://registry.yarnpkg.com/@ant-design/colors/-/colors-7.2.0.tgz#80d7325d20463f09c7839d28da630043dd5c263a" + integrity sha512-bjTObSnZ9C/O8MB/B4OUtd/q9COomuJAR2SYfhxLyHvCKn4EKwCN3e+fWGMo7H5InAyV0wL17jdE9ALrdOW/6A== dependencies: - "@ant-design/fast-color" "^3.0.0" + "@ant-design/fast-color" "^2.0.6" -"@ant-design/cssinjs-utils@^2.0.0": - version "2.0.1" - resolved "https://registry.npmmirror.com/@ant-design/cssinjs-utils/-/cssinjs-utils-2.0.1.tgz#19993b1f49889fe76c1169d8eff56ecce18ebd30" - integrity sha512-1KvINa1ih5jb34hxuHPnHXvV7+hNIM1ZQLNfsDyO9xFFucuOV1g4cYjAG3RnV8mwKkhLKhcraS8RJPXUpUBQsw== 
+"@ant-design/cssinjs-utils@^1.1.3": + version "1.1.3" + resolved "https://registry.yarnpkg.com/@ant-design/cssinjs-utils/-/cssinjs-utils-1.1.3.tgz#5dd79126057920a6992d57b38dd84e2c0b707977" + integrity sha512-nOoQMLW1l+xR1Co8NFVYiP8pZp3VjIIzqV6D6ShYF2ljtdwWJn5WSsH+7kvCktXL/yhEtWURKOfH5Xz/gzlwsg== dependencies: - "@ant-design/cssinjs" "^2.0.0" + "@ant-design/cssinjs" "^1.21.0" "@babel/runtime" "^7.23.2" rc-util "^5.38.0" -"@ant-design/cssinjs@^2.0.0": - version "2.0.0" - resolved "https://registry.npmmirror.com/@ant-design/cssinjs/-/cssinjs-2.0.0.tgz#81d9b97642f6da36ae15d6696aa926baa089d230" - integrity sha512-T7B8nXJWSQA1M5Q9Wg2lrUUSaQSGwNpmI8DOZS/32WFP3/2Y3CbSn+tuGz8iZXFe9bv6OaCH2zNk5HiSRVulLg== +"@ant-design/cssinjs@^1.21.0": + version "1.21.0" + resolved "https://registry.yarnpkg.com/@ant-design/cssinjs/-/cssinjs-1.21.0.tgz#de7289bfd71c7a494a28b96569ad88f999619105" + integrity sha512-gIilraPl+9EoKdYxnupxjHB/Q6IHNRjEXszKbDxZdsgv4sAZ9pjkCq8yanDWNvyfjp4leir2OVAJm0vxwKK8YA== + dependencies: + "@babel/runtime" "^7.11.1" + "@emotion/hash" "^0.8.0" + "@emotion/unitless" "^0.7.5" + classnames "^2.3.1" + csstype "^3.1.3" + rc-util "^5.35.0" + stylis "^4.0.13" + +"@ant-design/cssinjs@^1.23.0": + version "1.23.0" + resolved "https://registry.yarnpkg.com/@ant-design/cssinjs/-/cssinjs-1.23.0.tgz#492efba9b15d64f42a4cb5d568cab0607d0c2b16" + integrity sha512-7GAg9bD/iC9ikWatU9ym+P9ugJhi/WbsTWzcKN6T4gU0aehsprtke1UAaaSxxkjjmkJb3llet/rbUSLPgwlY4w== dependencies: "@babel/runtime" "^7.11.1" "@emotion/hash" "^0.8.0" @@ -232,10 +245,12 @@ rc-util "^5.35.0" stylis "^4.3.4" -"@ant-design/fast-color@^3.0.0": - version "3.0.0" - resolved "https://registry.npmmirror.com/@ant-design/fast-color/-/fast-color-3.0.0.tgz#fb5178203de825f284809538f5142203d0ef3d80" - integrity sha512-eqvpP7xEDm2S7dUzl5srEQCBTXZMmY3ekf97zI+M2DHOYyKdJGH0qua0JACHTqbkRnD/KHFQP9J1uMJ/XWVzzA== +"@ant-design/fast-color@^2.0.6": + version "2.0.6" + resolved 
"https://registry.yarnpkg.com/@ant-design/fast-color/-/fast-color-2.0.6.tgz#ab4d4455c1542c9017d367c2fa8ca3e4215d0ba2" + integrity sha512-y2217gk4NqL35giHl72o6Zzqji9O7vHh9YmhUVkPtAOpoTCH4uWxo/pr4VE8t0+ChEPs0qo4eJRC5Q1eXWo3vA== + dependencies: + "@babel/runtime" "^7.24.7" "@ant-design/icons-svg@^4.4.0": version "4.4.2" @@ -253,15 +268,16 @@ classnames "^2.2.6" rc-util "^5.31.1" -"@ant-design/icons@^6.1.0": - version "6.1.0" - resolved "https://registry.npmmirror.com/@ant-design/icons/-/icons-6.1.0.tgz#97cc14a3c0528b8e2b37f41f232b019f2ca38c2c" - integrity sha512-KrWMu1fIg3w/1F2zfn+JlfNDU8dDqILfA5Tg85iqs1lf8ooyGlbkA+TkwfOKKgqpUmAiRY1PTFpuOU2DAIgSUg== +"@ant-design/icons@^5.6.1": + version "5.6.1" + resolved "https://registry.yarnpkg.com/@ant-design/icons/-/icons-5.6.1.tgz#7290fcdc3d96ff3fca793ed399053cd29ad5dbd3" + integrity sha512-0/xS39c91WjPAZOWsvi1//zjx6kAp4kxWwctR6kuU6p133w8RU0D2dSCvZC19uQyharg/sAvYxGYWl01BbZZfg== dependencies: - "@ant-design/colors" "^8.0.0" + "@ant-design/colors" "^7.0.0" "@ant-design/icons-svg" "^4.4.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" + "@babel/runtime" "^7.24.8" + classnames "^2.2.6" + rc-util "^5.31.1" "@ant-design/react-slick@~1.1.2": version "1.1.2" @@ -2101,14 +2117,14 @@ core-js-pure "^3.30.2" regenerator-runtime "^0.14.0" -"@babel/runtime@^7.1.2", "@babel/runtime@^7.10.1", "@babel/runtime@^7.10.3", "@babel/runtime@^7.10.4", "@babel/runtime@^7.11.1", "@babel/runtime@^7.11.2", "@babel/runtime@^7.12.13", "@babel/runtime@^7.12.5", "@babel/runtime@^7.18.0", "@babel/runtime@^7.18.3", "@babel/runtime@^7.20.0", "@babel/runtime@^7.20.7", "@babel/runtime@^7.21.0", "@babel/runtime@^7.23.2", "@babel/runtime@^7.24.4", "@babel/runtime@^7.24.7", "@babel/runtime@^7.8.4": +"@babel/runtime@^7.1.2", "@babel/runtime@^7.10.1", "@babel/runtime@^7.10.3", "@babel/runtime@^7.10.4", "@babel/runtime@^7.11.1", "@babel/runtime@^7.11.2", "@babel/runtime@^7.12.13", "@babel/runtime@^7.12.5", "@babel/runtime@^7.16.7", "@babel/runtime@^7.18.0", 
"@babel/runtime@^7.18.3", "@babel/runtime@^7.20.0", "@babel/runtime@^7.20.7", "@babel/runtime@^7.21.0", "@babel/runtime@^7.22.5", "@babel/runtime@^7.23.2", "@babel/runtime@^7.23.6", "@babel/runtime@^7.23.9", "@babel/runtime@^7.24.4", "@babel/runtime@^7.24.7", "@babel/runtime@^7.8.4": version "7.24.8" resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.24.8.tgz#5d958c3827b13cc6d05e038c07fb2e5e3420d82e" integrity sha512-5F7SDGs1T72ZczbRwbGO9lQi0NLjQxzl6i4lJxLxfW9U5UluCSyEJeniWvnhl3/euNiqQVbo8zruhsDfid0esA== dependencies: regenerator-runtime "^0.14.0" -"@babel/runtime@^7.25.9": +"@babel/runtime@^7.24.8", "@babel/runtime@^7.25.7", "@babel/runtime@^7.25.9", "@babel/runtime@^7.26.0": version "7.27.0" resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.27.0.tgz#fbee7cf97c709518ecc1f590984481d5460d4762" integrity sha512-VtPOkrdPHZsKc/clNqyi9WUA8TINkZ4cGk63UUE3u4pmB2k+ZMQRDuIOagv8UVd6j7k0T3+RRIb7beKTebNbcw== @@ -3465,137 +3481,23 @@ dependencies: "@babel/runtime" "^7.24.4" -"@rc-component/cascader@~1.7.0": - version "1.7.0" - resolved "https://registry.npmmirror.com/@rc-component/cascader/-/cascader-1.7.0.tgz#1f6c07d26d1cc784938fd628f0aede75e731241b" - integrity sha512-Cg8AlH+9N7vht7n+bKMkJCP5ERn9HJXMYLuaLC2wVq+Fapzr+3Ei7lNr7F4OjLkXdtMhkgiX4AZBEqja8+goxw== - dependencies: - "@rc-component/select" "~1.2.0" - "@rc-component/tree" "~1.0.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/checkbox@~1.0.0": - version "1.0.0" - resolved "https://registry.npmmirror.com/@rc-component/checkbox/-/checkbox-1.0.0.tgz#32f2e69e23547a81359bfa5b9c5560d549bb273e" - integrity sha512-OvOsRsbCQvCQGafq429AuI6HGKtNdBRorR7EK0VF3X2ASMHQ6XzfgKGi/kf/FZ3VjPkvvzME7tMXayAD4J6wYg== - dependencies: - "@rc-component/util" "^1.3.0" - classnames "^2.3.2" - -"@rc-component/collapse@~1.1.1": - version "1.1.1" - resolved "https://registry.npmmirror.com/@rc-component/collapse/-/collapse-1.1.1.tgz#857bbf38803b9fb12b97ae4c9a9f9cb12db2c7d8" - integrity 
sha512-m/E99iY8ItON58UIjfUXtV/p4TESy8DoUu4cIARyN+pAtNYqwk+9FpsjJtN/7sXlcdmFoTIcOh0z09eoS1dKhw== - dependencies: - "@babel/runtime" "^7.10.1" - "@rc-component/motion" "^1.1.4" - "@rc-component/util" "^1.3.0" - classnames "2.x" - -"@rc-component/color-picker@~3.0.2": - version "3.0.2" - resolved "https://registry.npmmirror.com/@rc-component/color-picker/-/color-picker-3.0.2.tgz#a2c16be738336f397d4456b6730b11af8cef676c" - integrity sha512-mCoBKA4j7BZpQaUqKDAHUf3xlMY8hYiy0v8WxIqrFOS3Oly376Qv6k+3QJC5OH21zv7bHw8IrI5T2HIrFCl8Bw== - dependencies: - "@ant-design/fast-color" "^3.0.0" - "@rc-component/util" "^1.3.0" - classnames "^2.2.6" - -"@rc-component/context@^2.0.1": +"@rc-component/color-picker@~2.0.1": version "2.0.1" - resolved "https://registry.npmmirror.com/@rc-component/context/-/context-2.0.1.tgz#88c7a565ae92c34a7f02f33c34b145e4039deed0" - integrity sha512-HyZbYm47s/YqtP6pKXNMjPEMaukyg7P0qVfgMLzr7YiFNMHbK2fKTAGzms9ykfGHSfyf75nBbgWw+hHkp+VImw== + resolved "https://registry.yarnpkg.com/@rc-component/color-picker/-/color-picker-2.0.1.tgz#6b9b96152466a9d4475cbe72b40b594bfda164be" + integrity sha512-WcZYwAThV/b2GISQ8F+7650r5ZZJ043E57aVBFkQ+kSY4C6wdofXgB0hBx+GPGpIU0Z81eETNoDUJMr7oy/P8Q== dependencies: - "@rc-component/util" "^1.3.0" - -"@rc-component/dialog@~1.5.0": - version "1.5.0" - resolved "https://registry.npmmirror.com/@rc-component/dialog/-/dialog-1.5.0.tgz#02e8f530592c9b03e5f79dae2393784d88bfcfbc" - integrity sha512-P93IM/JK57Xj/gMqfdwFcnJA8lAnrRy1svCtFKmzvbzG5Bfe6se/HMt5bnqyYGALZ4xRFCmG9cUrS5MthD8+wQ== - dependencies: - "@rc-component/motion" "^1.1.3" - "@rc-component/portal" "^2.0.0" - "@rc-component/util" "^1.0.1" - classnames "^2.2.6" - -"@rc-component/drawer@~1.2.0": - version "1.2.0" - resolved "https://registry.npmmirror.com/@rc-component/drawer/-/drawer-1.2.0.tgz#4e1b08beed21f02a8e31f47ea741ef14dcf5eff4" - integrity sha512-RZ8IoNUv/soNVMYIWdjelKXX/3LWhVrKUQAeoc966Y55cIGc+PQKni025xshsvTY/+ntq10wqlBw1WCi77MvYQ== - dependencies: - "@rc-component/motion" 
"^1.1.4" - "@rc-component/portal" "^2.0.0" - "@rc-component/util" "^1.2.1" - classnames "^2.2.6" - -"@rc-component/dropdown@~1.0.0": - version "1.0.0" - resolved "https://registry.npmmirror.com/@rc-component/dropdown/-/dropdown-1.0.0.tgz#2d75e2f2088485f062beb4aae0386a3a27fa7f2d" - integrity sha512-pIf/JyX46HWjScz6q9XlZwpdYBo4a30pPcuD0GbIaJgowJpxdR8Er0/Tt53x+p3JmAXnQvluV9YJ7Rns6ZibgQ== - dependencies: - "@rc-component/trigger" "^3.0.0" - "@rc-component/util" "^1.2.1" + "@ant-design/fast-color" "^2.0.6" + "@babel/runtime" "^7.23.6" classnames "^2.2.6" + rc-util "^5.38.1" -"@rc-component/form@~1.4.0": +"@rc-component/context@^1.4.0": version "1.4.0" - resolved "https://registry.npmmirror.com/@rc-component/form/-/form-1.4.0.tgz#bee504c182bbb768b5fb68809e82b69deef9aec0" - integrity sha512-C8MN/2wIaW9hSrCCtJmcgCkWTQNIspN7ARXLFA4F8PGr8Qxk39U5pS3kRK51/bUJNhb/fEtdFnaViLlISGKI2A== + resolved "https://registry.yarnpkg.com/@rc-component/context/-/context-1.4.0.tgz#dc6fb021d6773546af8f016ae4ce9aea088395e8" + integrity sha512-kFcNxg9oLRMoL3qki0OMxK+7g5mypjgaaJp/pkOis/6rVxma9nJBF/8kCIuTYHUQNr0ii7MxqE33wirPZLJQ2w== dependencies: - "@rc-component/async-validator" "^5.0.3" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/image@~1.5.1": - version "1.5.1" - resolved "https://registry.npmmirror.com/@rc-component/image/-/image-1.5.1.tgz#b1e9cf17574f8e9b54d217b28b0da739940b7edf" - integrity sha512-1fG+ph22p3+gCQKrRNqWsVs500EyAoqDNZV6cZaPcLXLljtqJ08IvlAXkFMy2pkg3uDt5KY9zVD/MRLJVyOF/A== - dependencies: - "@rc-component/motion" "^1.0.0" - "@rc-component/portal" "^2.0.0" - "@rc-component/util" "^1.3.0" - classnames "^2.2.6" - -"@rc-component/input-number@~1.6.2": - version "1.6.2" - resolved "https://registry.npmmirror.com/@rc-component/input-number/-/input-number-1.6.2.tgz#ae04e1ee69393fc047588c632e7ce6e19faf617f" - integrity sha512-Gjcq7meZlCOiWN1t1xCC+7/s85humHVokTBI7PJgTfoyw5OWF74y3e6P8PHX104g9+b54jsodFIzyaj6p8LI9w== - dependencies: - "@rc-component/mini-decimal" 
"^1.0.1" - "@rc-component/util" "^1.4.0" - clsx "^2.1.1" - -"@rc-component/input@~1.1.0": - version "1.1.2" - resolved "https://registry.npmmirror.com/@rc-component/input/-/input-1.1.2.tgz#5fdb55741c012a3f8847d7bd24e318ed1d02cc05" - integrity sha512-Q61IMR47piUBudgixJ30CciKIy9b1H95qe7GgEKOmSJVJXvFRWJllJfQry9tif+MX2cWFXWJf/RXz4kaCeq/Fg== - dependencies: - "@rc-component/util" "^1.4.0" - clsx "^2.1.1" - -"@rc-component/mentions@~1.5.5": - version "1.5.5" - resolved "https://registry.npmmirror.com/@rc-component/mentions/-/mentions-1.5.5.tgz#3fbe90d929951dde410fe7f43a697399883dcce4" - integrity sha512-m39JW6ZyR0+foE1ojgOx2+GH8kMaJS279A2cI0vV0gIEZMp+2hOpPhJgKR7vMOGdhvkiXwgfM49EaPw30NonNw== - dependencies: - "@rc-component/input" "~1.1.0" - "@rc-component/menu" "~1.1.0" - "@rc-component/textarea" "~1.1.0" - "@rc-component/trigger" "^3.0.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/menu@~1.1.0", "@rc-component/menu@~1.1.4": - version "1.1.4" - resolved "https://registry.npmmirror.com/@rc-component/menu/-/menu-1.1.4.tgz#75639a13b5e0e8afefe084fd70240922d45619af" - integrity sha512-JP4ZWrUUqvT0t8EGTMf+BcLPFjlzOF0Y5zQN/4hQlpNzqxUWbGPrGBVfHr4Li5rWoR3u9X3nOFzrH7+NqiS8Qw== - dependencies: - "@rc-component/motion" "^1.1.4" - "@rc-component/trigger" "^3.0.0" - "@rc-component/util" "^1.3.0" - classnames "2.x" - rc-overflow "^1.3.1" + "@babel/runtime" "^7.10.1" + rc-util "^5.27.0" "@rc-component/mini-decimal@^1.0.1": version "1.1.0" @@ -3604,234 +3506,67 @@ dependencies: "@babel/runtime" "^7.18.0" -"@rc-component/motion@^1.0.0", "@rc-component/motion@^1.1.3", "@rc-component/motion@^1.1.4", "@rc-component/motion@~1.1.4": - version "1.1.4" - resolved "https://registry.npmmirror.com/@rc-component/motion/-/motion-1.1.4.tgz#32f82a161697f819bb4f47c2da2923d7c6d21383" - integrity sha512-rz3+kqQ05xEgIAB9/UKQZKCg5CO/ivGNU78QWYKVfptmbjJKynZO4KXJ7pJD3oMxE9aW94LD/N3eppXWeysTjw== - dependencies: - "@rc-component/util" "^1.2.0" - classnames "^2.2.1" - 
-"@rc-component/mutate-observer@^2.0.0": - version "2.0.0" - resolved "https://registry.npmmirror.com/@rc-component/mutate-observer/-/mutate-observer-2.0.0.tgz#57caaf9361da06b218e0ca14d9b16e81aa3c1e94" - integrity sha512-hcHRFgtKAJfqFW+p4qgea4oLuwDxR2oyDY+3VFcZCRuf723Y0ZO2JFzqfDeL0CY+FO+Fs9G+CRg7WFOZjIymtA== +"@rc-component/mutate-observer@^1.1.0": + version "1.1.0" + resolved "https://registry.yarnpkg.com/@rc-component/mutate-observer/-/mutate-observer-1.1.0.tgz#ee53cc88b78aade3cd0653609215a44779386fd8" + integrity sha512-QjrOsDXQusNwGZPf4/qRQasg7UFEj06XiCJ8iuiq/Io7CrHrgVi6Uuetw60WAMG1799v+aM8kyc+1L/GBbHSlw== dependencies: - "@rc-component/util" "^1.2.0" + "@babel/runtime" "^7.18.0" classnames "^2.3.2" + rc-util "^5.24.4" -"@rc-component/notification@~1.2.0": - version "1.2.0" - resolved "https://registry.npmmirror.com/@rc-component/notification/-/notification-1.2.0.tgz#dd7c7d50f1d3217bfbc75bc46259e212096855c5" - integrity sha512-OX3J+zVU7rvoJCikjrfW7qOUp7zlDeFBK2eA3SFbGSkDqo63Sl4Ss8A04kFP+fxHSxMDIS9jYVEZtU1FNCFuBA== - dependencies: - "@rc-component/motion" "^1.1.4" - "@rc-component/util" "^1.2.1" - clsx "^2.1.1" - -"@rc-component/pagination@~1.2.0": - version "1.2.0" - resolved "https://registry.npmmirror.com/@rc-component/pagination/-/pagination-1.2.0.tgz#3a97abda8f1077f514e03a74b3b9c77f9e68499a" - integrity sha512-YcpUFE8dMLfSo6OARJlK6DbHHvrxz7pMGPGmC/caZSJJz6HRKHC1RPP001PRHCvG9Z/veD039uOQmazVuLJzlw== - dependencies: - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/picker@~1.6.0": - version "1.6.0" - resolved "https://registry.npmmirror.com/@rc-component/picker/-/picker-1.6.0.tgz#d394a41862c27d7cd887ef85114cf583b341d493" - integrity sha512-5gmNlnsK18Xu8W9xqluz8JzfRBHwPKfdUnkTwMmhGg7P8vjVUveYRHGQbyPZAE2Q11maE42x457l36FlXi4Hyw== - dependencies: - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/trigger" "^3.6.15" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - rc-overflow "^1.3.2" - -"@rc-component/portal@^2.0.0": - version "2.0.0" - 
resolved "https://registry.npmmirror.com/@rc-component/portal/-/portal-2.0.0.tgz#702a5dc8c1be9110bc2f155b99e55e369e702549" - integrity sha512-337ADhBfgH02S8OujUl33OT+8zVJ67eyuUq11j/dE71rXKYNihMsggW8R2VfI2aL3SciDp8gAFsmPVoPkxLUGw== +"@rc-component/portal@^1.0.0-8", "@rc-component/portal@^1.0.0-9", "@rc-component/portal@^1.0.2", "@rc-component/portal@^1.1.0", "@rc-component/portal@^1.1.1": + version "1.1.2" + resolved "https://registry.yarnpkg.com/@rc-component/portal/-/portal-1.1.2.tgz#55db1e51d784e034442e9700536faaa6ab63fc71" + integrity sha512-6f813C0IsasTZms08kfA8kPAGxbbkYToa8ALaiDIGGECU4i9hj8Plgbx0sNJDrey3EtHO30hmdaxtT0138xZcg== dependencies: - "@rc-component/util" "^1.2.1" + "@babel/runtime" "^7.18.0" classnames "^2.3.2" + rc-util "^5.24.4" -"@rc-component/progress@~1.0.1": - version "1.0.1" - resolved "https://registry.npmmirror.com/@rc-component/progress/-/progress-1.0.1.tgz#07d23aa4e44091d10935a2c6a246a29a3aaa86f9" - integrity sha512-CM4E8NJbHBb4XHurTrKWqWiU5UwSEZ96rmpyIYiU5xET8coaDaVcHPdjtfdzQbamgKrik6a+SL/z35hP3zRBnw== - dependencies: - "@rc-component/util" "^1.2.1" - classnames "^2.2.6" - -"@rc-component/qrcode@~1.1.0": - version "1.1.0" - resolved "https://registry.npmmirror.com/@rc-component/qrcode/-/qrcode-1.1.0.tgz#4e38f1d7c2c8aae7f62d60ab110a842c1395db3e" - integrity sha512-ABA80Yer0c6I2+moqNY0kF3Y1NxIT6wDP/EINIqbiRbfZKP1HtHpKMh8WuTXLgVGYsoWG2g9/n0PgM8KdnJb4Q== +"@rc-component/qrcode@~1.0.0": + version "1.0.0" + resolved "https://registry.yarnpkg.com/@rc-component/qrcode/-/qrcode-1.0.0.tgz#48a8de5eb11d0e65926f1377c4b1ef4c888997f5" + integrity sha512-L+rZ4HXP2sJ1gHMGHjsg9jlYBX/SLN2D6OxP9Zn3qgtpMWtO2vUfxVFwiogHpAIqs54FnALxraUy/BCO1yRIgg== dependencies: "@babel/runtime" "^7.24.7" classnames "^2.3.2" + rc-util "^5.38.0" -"@rc-component/rate@~1.0.0": - version "1.0.0" - resolved "https://registry.npmmirror.com/@rc-component/rate/-/rate-1.0.0.tgz#89fe758fcbd713ec47a0437981eb968cd6f61fdb" - integrity 
sha512-X6LPdN67Sjsya/MxnM7bTYJ3wmua9FYt1wgw9L08oM9FfmsiTYSIHJy8D8aMWyDl99LBVq3vTIu135ghCwWkEA== - dependencies: - "@rc-component/util" "^1.3.0" - classnames "^2.2.5" - -"@rc-component/resize-observer@^1.0.0": - version "1.0.0" - resolved "https://registry.npmmirror.com/@rc-component/resize-observer/-/resize-observer-1.0.0.tgz#93486fc12e95318eddd2d4e7a863b274e5a2a44f" - integrity sha512-inR8Ka87OOwtrDJzdVp2VuEVlc5nK20lHolvkwFUnXwV50p+nLhKny1NvNTCKvBmS/pi/rTn/1Hvsw10sRRnXA== - dependencies: - "@rc-component/util" "^1.2.0" - classnames "^2.2.1" - -"@rc-component/segmented@~1.2.2": - version "1.2.2" - resolved "https://registry.npmmirror.com/@rc-component/segmented/-/segmented-1.2.2.tgz#f95926340587f170551bfdade9c618f28c6bbd2e" - integrity sha512-VgGRpsYEZ0nOmC/uOFLM0DuoglYFBtcR9T4htCU/3tsmiG3zM9D1I6pU7gaqebPI2eecibxL1W1aGuARsgMWHQ== - dependencies: - "@babel/runtime" "^7.11.1" - "@rc-component/util" "^1.3.0" - classnames "^2.2.1" - rc-motion "^2.4.4" - -"@rc-component/select@~1.2.0", "@rc-component/select@~1.2.1": - version "1.2.1" - resolved "https://registry.npmmirror.com/@rc-component/select/-/select-1.2.1.tgz#1f7afe09d981168ccc6bc9445f1a802a44e74e52" - integrity sha512-Ljsv/oDFxAACQprXXvSdSbi0Ckkr/2cVHEMo3uWS5P5m/QF/E+PLMLahN+E2U30P3O0gI3YcqdVKIt5wLauViQ== - dependencies: - "@rc-component/motion" "^1.1.4" - "@rc-component/trigger" "^3.0.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - rc-overflow "^1.5.0" - rc-virtual-list "^3.5.2" - -"@rc-component/slider@~1.0.0": - version "1.0.0" - resolved "https://registry.npmmirror.com/@rc-component/slider/-/slider-1.0.0.tgz#fac767676fb182bf95b18ae6c4b22c8a0fa4be1b" - integrity sha512-ZC/ARv2o+VzyLMgEUWyOLV0JTRlJqbFSNegtERoAfPVxCPNt92s5baIp22OW487Wgtk1xCK9GZex8sD/zo91ig== - dependencies: - "@rc-component/util" "^1.3.0" - classnames "^2.2.5" - -"@rc-component/steps@~1.2.1": - version "1.2.1" - resolved "https://registry.npmmirror.com/@rc-component/steps/-/steps-1.2.1.tgz#9909f78f2bd8f18e58d8d87c7d74b5e941bf228f" - 
integrity sha512-cdatR6ux07Gxq8YYo+sn8LfiBGZ+C3Cn4KKf8HUFY567YIVJ1lmr3leBVsP4BoP7MjkkBllZJhKv5T87Ka7PhQ== - dependencies: - "@rc-component/util" "^1.2.1" - classnames "^2.2.3" - -"@rc-component/switch@~1.0.2": - version "1.0.2" - resolved "https://registry.npmmirror.com/@rc-component/switch/-/switch-1.0.2.tgz#9f44dd22b2b9221d463f693175a39dfe3764c780" - integrity sha512-m0vjpdmrSYw55dXwxWxCwwM798lr1Jt30R7rVUOzRfPrnxIa/dJtN/BckR4gpPaDosjoRu/UPal3pLxQUIB/Rw== - dependencies: - "@rc-component/util" "^1.3.0" - classnames "^2.2.1" - -"@rc-component/table@~1.8.1": - version "1.8.2" - resolved "https://registry.npmmirror.com/@rc-component/table/-/table-1.8.2.tgz#021755c329bae6988141f9be46646a7cdc784e9e" - integrity sha512-GUuuXIGx2M3KVEcqhze8cDs0cwkSby9VRnOrm6zbnryMFUr+WUL1eu7NA1j4Gi43Rd3/CIL8OmXhRdUz1L/Xug== - dependencies: - "@rc-component/context" "^2.0.1" - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/util" "^1.1.0" - clsx "^2.1.1" - rc-virtual-list "^3.14.2" - -"@rc-component/tabs@~1.6.0": - version "1.6.0" - resolved "https://registry.npmmirror.com/@rc-component/tabs/-/tabs-1.6.0.tgz#8beb3dc4bed77e6eed592a36df70ff39a6f07269" - integrity sha512-2OY02yhS7E0y0Yr5LBI3o5KdM7h4yJ5lBR6V4PEC1dx/sUZggEw7vAHGCArqCcpsZ6pzjOGJbGiVhz7dSMiehA== - dependencies: - "@rc-component/dropdown" "~1.0.0" - "@rc-component/menu" "~1.1.0" - "@rc-component/motion" "^1.1.3" - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/textarea@~1.1.0", "@rc-component/textarea@~1.1.2": - version "1.1.2" - resolved "https://registry.npmmirror.com/@rc-component/textarea/-/textarea-1.1.2.tgz#2daa5dcb997840040fb8892b0d601ef28d9d1f37" - integrity sha512-9rMUEODWZDMovfScIEHXWlVZuPljZ2pd1LKNjslJVitn4SldEzq5vO1CL3yy3Dnib6zZal2r2DPtjy84VVpF6A== - dependencies: - "@rc-component/input" "~1.1.0" - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/tooltip@~1.3.3": - version "1.3.3" - resolved 
"https://registry.npmmirror.com/@rc-component/tooltip/-/tooltip-1.3.3.tgz#3851fa1bb8376f13c6dd2222340984c77a2c5694" - integrity sha512-6wNDh60lh+RZFGJYm5vwNqB/S7YxkioZYF4Vj57tWIlKScxJWW5I2qXOc7gv99CXTDGclutVwcefZFbq9JANFQ== +"@rc-component/tour@~1.15.1": + version "1.15.1" + resolved "https://registry.yarnpkg.com/@rc-component/tour/-/tour-1.15.1.tgz#9b79808254185fc19e964172d99e25e8c6800ded" + integrity sha512-Tr2t7J1DKZUpfJuDZWHxyxWpfmj8EZrqSgyMZ+BCdvKZ6r1UDsfU46M/iWAAFBy961Ssfom2kv5f3UcjIL2CmQ== dependencies: - "@rc-component/trigger" "^3.6.15" - "@rc-component/util" "^1.3.0" - classnames "^2.3.1" + "@babel/runtime" "^7.18.0" + "@rc-component/portal" "^1.0.0-9" + "@rc-component/trigger" "^2.0.0" + classnames "^2.3.2" + rc-util "^5.24.4" -"@rc-component/tour@~2.2.0": +"@rc-component/trigger@^2.0.0", "@rc-component/trigger@^2.1.1": version "2.2.0" - resolved "https://registry.npmmirror.com/@rc-component/tour/-/tour-2.2.0.tgz#e053f2fb3033582142d6e331b67aa041bbd38fdc" - integrity sha512-9WF944DcDJaUZMB0gxbDgrVYkPyrxib2WketvUBHy2nqiAQDXZ0tSMTEDlis+rVYJmxF56JjT1SvkQsYcJTaOg== + resolved "https://registry.yarnpkg.com/@rc-component/trigger/-/trigger-2.2.0.tgz#503a48b0895a2cfddee0a5b7b11492c3df2a493d" + integrity sha512-QarBCji02YE9aRFhZgRZmOpXBj0IZutRippsVBv85sxvG4FGk/vRxwAlkn3MS9zK5mwbETd86mAVg2tKqTkdJA== dependencies: - "@rc-component/portal" "^2.0.0" - "@rc-component/trigger" "^3.0.0" - "@rc-component/util" "^1.3.0" + "@babel/runtime" "^7.23.2" + "@rc-component/portal" "^1.1.0" classnames "^2.3.2" + rc-motion "^2.0.0" + rc-resize-observer "^1.3.1" + rc-util "^5.38.0" -"@rc-component/tree-select@~1.3.0": - version "1.3.0" - resolved "https://registry.npmmirror.com/@rc-component/tree-select/-/tree-select-1.3.0.tgz#8186dfc993950b9a3809004a079200c49d33825b" - integrity sha512-ClBy4J5X5FQQcQwQPyZmrrhpCBSobccASQaBWDm0wYWjE7WZ10B4lG6b2tJAFw9jBjmFD+lfGS9QogNoccUCWA== - dependencies: - "@rc-component/select" "~1.2.0" - "@rc-component/tree" "~1.0.1" - "@rc-component/util" 
"^1.3.0" - clsx "^2.1.1" - -"@rc-component/tree@~1.0.0", "@rc-component/tree@~1.0.1": - version "1.0.1" - resolved "https://registry.npmmirror.com/@rc-component/tree/-/tree-1.0.1.tgz#396552e62a522537919c14694a9418fae99bb323" - integrity sha512-pUhcYUXv2Xqt093JcIAikj4QFGIhTsGntFgLEv8Vwmw1fyG0x6rIFqopUxYqr0oWulxVob9ECu0O86SC42fuOw== - dependencies: - "@rc-component/motion" "^1.0.0" - "@rc-component/util" "^1.2.1" - classnames "2.x" - rc-virtual-list "^3.5.1" - -"@rc-component/trigger@^3.0.0", "@rc-component/trigger@^3.6.15": - version "3.6.15" - resolved "https://registry.npmmirror.com/@rc-component/trigger/-/trigger-3.6.15.tgz#fe4192944ad2be846c9cfa7ff88125cd0d6b6b29" - integrity sha512-agmLUpfYbgWhVBrXyQGiupc+YoQ9NaUyt1cf+LcyRi3waq1PDj6Q+D/bA3UlvcTr53Xg9592u3zmZ3yodRvBbA== +"@rc-component/trigger@^2.2.6": + version "2.2.6" + resolved "https://registry.yarnpkg.com/@rc-component/trigger/-/trigger-2.2.6.tgz#bfe6602313b3fadd659687746511f813299d5ea4" + integrity sha512-/9zuTnWwhQ3S3WT1T8BubuFTT46kvnXgaERR9f4BTKyn61/wpf/BvbImzYBubzJibU707FxwbKszLlHjcLiv1Q== dependencies: - "@rc-component/motion" "^1.1.4" - "@rc-component/portal" "^2.0.0" - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/util" "^1.2.1" + "@babel/runtime" "^7.23.2" + "@rc-component/portal" "^1.1.0" classnames "^2.3.2" - -"@rc-component/upload@~1.1.0": - version "1.1.0" - resolved "https://registry.npmmirror.com/@rc-component/upload/-/upload-1.1.0.tgz#cb634587ffdf8a8a4a26a279fac06989fb47f593" - integrity sha512-LIBV90mAnUE6VK5N4QvForoxZc4XqEYZimcp7fk+lkE4XwHHyJWxpIXQQwMU8hJM+YwBbsoZkGksL1sISWHQxw== - dependencies: - "@rc-component/util" "^1.3.0" - clsx "^2.1.1" - -"@rc-component/util@^1.0.1", "@rc-component/util@^1.1.0", "@rc-component/util@^1.2.0", "@rc-component/util@^1.2.1", "@rc-component/util@^1.3.0", "@rc-component/util@^1.4.0": - version "1.4.0" - resolved "https://registry.npmmirror.com/@rc-component/util/-/util-1.4.0.tgz#7509c47b2f17e370be65c05e0e8c1aa743d674db" - integrity 
sha512-LQlShcJKu0p3JUTAenKrWtqVW0+c4PJKedOqEaef9gTVL70O3cG4xZJ7VXfm0blGzORKFEkd3oQGalaUBNZ3Lg== - dependencies: - is-mobile "^5.0.0" - react-is "^18.2.0" + rc-motion "^2.0.0" + rc-resize-observer "^1.3.1" + rc-util "^5.44.0" "@rspack/binding-darwin-arm64@1.6.0": version "1.6.0" @@ -5290,56 +5025,58 @@ ansi-styles@^6.1.0: resolved "https://registry.yarnpkg.com/ansi-styles/-/ansi-styles-6.2.1.tgz#0e62320cf99c21afff3b3012192546aacbfb05c5" integrity sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug== -antd@^6.0.0: - version "6.0.0" - resolved "https://registry.npmmirror.com/antd/-/antd-6.0.0.tgz#d194fb05a4c7f56767380ba1d50d9e55be0af6ce" - integrity sha512-OoalcsmgsLFI8UWLkfDJftABP2KmNDiU9REaTApb0s7cd3vZfIok7OnHKuNGQ3tCNY1NKPDvoRtWKXlpaq7zWQ== - dependencies: - "@ant-design/colors" "^8.0.0" - "@ant-design/cssinjs" "^2.0.0" - "@ant-design/cssinjs-utils" "^2.0.0" - "@ant-design/fast-color" "^3.0.0" - "@ant-design/icons" "^6.1.0" +antd@^5.24.8: + version "5.24.8" + resolved "https://registry.yarnpkg.com/antd/-/antd-5.24.8.tgz#908ceb91d69f9bfd57211bf60b62ee100fd527ce" + integrity sha512-vJcW81WSRq+ymBKTiA3NE+FddmiqTAKxdWVRZU+HnLLrRrIz896svcUxXFPa7M4mH9HqyeJ5JPOHsne4sQAC1A== + dependencies: + "@ant-design/colors" "^7.2.0" + "@ant-design/cssinjs" "^1.23.0" + "@ant-design/cssinjs-utils" "^1.1.3" + "@ant-design/fast-color" "^2.0.6" + "@ant-design/icons" "^5.6.1" "@ant-design/react-slick" "~1.1.2" - "@rc-component/cascader" "~1.7.0" - "@rc-component/checkbox" "~1.0.0" - "@rc-component/collapse" "~1.1.1" - "@rc-component/color-picker" "~3.0.2" - "@rc-component/dialog" "~1.5.0" - "@rc-component/drawer" "~1.2.0" - "@rc-component/dropdown" "~1.0.0" - "@rc-component/form" "~1.4.0" - "@rc-component/image" "~1.5.1" - "@rc-component/input" "~1.1.0" - "@rc-component/input-number" "~1.6.2" - "@rc-component/mentions" "~1.5.5" - "@rc-component/menu" "~1.1.4" - "@rc-component/motion" "~1.1.4" - "@rc-component/mutate-observer" "^2.0.0" - 
"@rc-component/notification" "~1.2.0" - "@rc-component/pagination" "~1.2.0" - "@rc-component/picker" "~1.6.0" - "@rc-component/progress" "~1.0.1" - "@rc-component/qrcode" "~1.1.0" - "@rc-component/rate" "~1.0.0" - "@rc-component/resize-observer" "^1.0.0" - "@rc-component/segmented" "~1.2.2" - "@rc-component/select" "~1.2.1" - "@rc-component/slider" "~1.0.0" - "@rc-component/steps" "~1.2.1" - "@rc-component/switch" "~1.0.2" - "@rc-component/table" "~1.8.1" - "@rc-component/tabs" "~1.6.0" - "@rc-component/textarea" "~1.1.2" - "@rc-component/tooltip" "~1.3.3" - "@rc-component/tour" "~2.2.0" - "@rc-component/tree" "~1.0.1" - "@rc-component/tree-select" "~1.3.0" - "@rc-component/trigger" "^3.6.15" - "@rc-component/upload" "~1.1.0" - "@rc-component/util" "^1.4.0" - clsx "^2.1.1" + "@babel/runtime" "^7.26.0" + "@rc-component/color-picker" "~2.0.1" + "@rc-component/mutate-observer" "^1.1.0" + "@rc-component/qrcode" "~1.0.0" + "@rc-component/tour" "~1.15.1" + "@rc-component/trigger" "^2.2.6" + classnames "^2.5.1" + copy-to-clipboard "^3.3.3" dayjs "^1.11.11" + rc-cascader "~3.33.1" + rc-checkbox "~3.5.0" + rc-collapse "~3.9.0" + rc-dialog "~9.6.0" + rc-drawer "~7.2.0" + rc-dropdown "~4.2.1" + rc-field-form "~2.7.0" + rc-image "~7.11.1" + rc-input "~1.8.0" + rc-input-number "~9.5.0" + rc-mentions "~2.20.0" + rc-menu "~9.16.1" + rc-motion "^2.9.5" + rc-notification "~5.6.4" + rc-pagination "~5.1.0" + rc-picker "~4.11.3" + rc-progress "~4.0.0" + rc-rate "~2.13.1" + rc-resize-observer "^1.4.3" + rc-segmented "~2.7.0" + rc-select "~14.16.6" + rc-slider "~11.1.8" + rc-steps "~6.0.1" + rc-switch "~4.1.0" + rc-table "~7.50.4" + rc-tabs "~15.6.0" + rc-textarea "~1.10.0" + rc-tooltip "~6.4.0" + rc-tree "~5.13.1" + rc-tree-select "~5.27.0" + rc-upload "~4.8.1" + rc-util "^5.44.4" scroll-into-view-if-needed "^3.1.0" throttle-debounce "^5.0.2" @@ -5838,7 +5575,7 @@ ci-info@^3.2.0: resolved 
"https://registry.yarnpkg.com/ci-info/-/ci-info-3.9.0.tgz#4279a62028a7b1f262f3473fc9605f5e218c59b4" integrity sha512-NIxF55hv4nSqQswkAeiOi1r83xy8JldOFDTWiug55KBu9Jnblncd2U6ViHmYgHf01TPZS77NJBhBMKdWj9HQMQ== -classnames@2.x, classnames@^2.2.1, classnames@^2.2.3, classnames@^2.2.5, classnames@^2.2.6, classnames@^2.3.1, classnames@^2.3.2: +classnames@2.x, classnames@^2.2.1, classnames@^2.2.3, classnames@^2.2.5, classnames@^2.2.6, classnames@^2.3.1, classnames@^2.3.2, classnames@^2.5.1: version "2.5.1" resolved "https://registry.yarnpkg.com/classnames/-/classnames-2.5.1.tgz#ba774c614be0f016da105c858e7159eae8e7687b" integrity sha512-saHYOzhIQs6wy2sVxTM6bUDsQO4F50V9RQ22qBpEdCW+I+/Wmke2HOl6lS6dTpdxVhb88/I6+Hs+438c3lfUow== @@ -5887,7 +5624,7 @@ clone-deep@^4.0.1: kind-of "^6.0.2" shallow-clone "^3.0.0" -clsx@^2.0.0, clsx@^2.1.1: +clsx@^2.0.0: version "2.1.1" resolved "https://registry.yarnpkg.com/clsx/-/clsx-2.1.1.tgz#eed397c9fd8bd882bfb18deab7102049a2f32999" integrity sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA== @@ -6081,7 +5818,7 @@ cookie@0.7.1: copy-to-clipboard@^3.3.3: version "3.3.3" - resolved "https://registry.npmmirror.com/copy-to-clipboard/-/copy-to-clipboard-3.3.3.tgz#55ac43a1db8ae639a4bd99511c148cdd1b83a1b0" + resolved "https://registry.yarnpkg.com/copy-to-clipboard/-/copy-to-clipboard-3.3.3.tgz#55ac43a1db8ae639a4bd99511c148cdd1b83a1b0" integrity sha512-2KV8NhB5JqC3ky0r9PMCAZKbUHSwtEo4CwCs0KXgruG43gX5PMqDEBbVU4OUzw2MuAWUfsuFmWvEKG5QRfSnJA== dependencies: toggle-selection "^1.0.6" @@ -8348,11 +8085,6 @@ is-installed-globally@^0.4.0: global-dirs "^3.0.0" is-path-inside "^3.0.2" -is-mobile@^5.0.0: - version "5.0.0" - resolved "https://registry.npmmirror.com/is-mobile/-/is-mobile-5.0.0.tgz#1e08a0ef2c38a67bff84a52af68d67bcef445333" - integrity sha512-Tz/yndySvLAEXh+Uk8liFCxOwVH6YutuR74utvOcu7I9Di+DwM0mtdPVZNaVvvBUM2OXxne/NhOs1zAO7riusQ== - is-network-error@^1.0.0: version "1.3.0" resolved 
"https://registry.npmmirror.com/is-network-error/-/is-network-error-1.3.0.tgz#2ce62cbca444abd506f8a900f39d20b898d37512" @@ -10903,7 +10635,145 @@ raw-body@2.5.2: iconv-lite "0.4.24" unpipe "1.0.0" -rc-motion@^2.4.4: +rc-cascader@~3.33.1: + version "3.33.1" + resolved "https://registry.yarnpkg.com/rc-cascader/-/rc-cascader-3.33.1.tgz#19e01462ef5ef51b723c1f562c7b9cde4691e7ee" + integrity sha512-Kyl4EJ7ZfCBuidmZVieegcbFw0RcU5bHHSbtEdmuLYd0fYHCAiYKZ6zon7fWAVyC6rWWOOib0XKdTSf7ElC9rg== + dependencies: + "@babel/runtime" "^7.25.7" + classnames "^2.3.1" + rc-select "~14.16.2" + rc-tree "~5.13.0" + rc-util "^5.43.0" + +rc-checkbox@~3.5.0: + version "3.5.0" + resolved "https://registry.yarnpkg.com/rc-checkbox/-/rc-checkbox-3.5.0.tgz#3ae2441e3a321774d390f76539e706864fcf5ff0" + integrity sha512-aOAQc3E98HteIIsSqm6Xk2FPKIER6+5vyEFMZfo73TqM+VVAIqOkHoPjgKLqSNtVLWScoaM7vY2ZrGEheI79yg== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "^2.3.2" + rc-util "^5.25.2" + +rc-collapse@~3.9.0: + version "3.9.0" + resolved "https://registry.yarnpkg.com/rc-collapse/-/rc-collapse-3.9.0.tgz#972404ce7724e1c9d1d2476543e1175404a36806" + integrity sha512-swDdz4QZ4dFTo4RAUMLL50qP0EY62N2kvmk2We5xYdRwcRn8WcYtuetCJpwpaCbUfUt5+huLpVxhvmnK+PHrkA== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "2.x" + rc-motion "^2.3.4" + rc-util "^5.27.0" + +rc-dialog@~9.6.0: + version "9.6.0" + resolved "https://registry.yarnpkg.com/rc-dialog/-/rc-dialog-9.6.0.tgz#dc7a255c6ad1cb56021c3a61c7de86ee88c7c371" + integrity sha512-ApoVi9Z8PaCQg6FsUzS8yvBEQy0ZL2PkuvAgrmohPkN3okps5WZ5WQWPc1RNuiOKaAYv8B97ACdsFU5LizzCqg== + dependencies: + "@babel/runtime" "^7.10.1" + "@rc-component/portal" "^1.0.0-8" + classnames "^2.2.6" + rc-motion "^2.3.0" + rc-util "^5.21.0" + +rc-drawer@~7.2.0: + version "7.2.0" + resolved "https://registry.yarnpkg.com/rc-drawer/-/rc-drawer-7.2.0.tgz#8d7de2f1fd52f3ac5a25f54afbb8ac14c62e5663" + integrity 
sha512-9lOQ7kBekEJRdEpScHvtmEtXnAsy+NGDXiRWc2ZVC7QXAazNVbeT4EraQKYwCME8BJLa8Bxqxvs5swwyOepRwg== + dependencies: + "@babel/runtime" "^7.23.9" + "@rc-component/portal" "^1.1.1" + classnames "^2.2.6" + rc-motion "^2.6.1" + rc-util "^5.38.1" + +rc-dropdown@~4.2.0: + version "4.2.0" + resolved "https://registry.yarnpkg.com/rc-dropdown/-/rc-dropdown-4.2.0.tgz#c6052fcfe9c701487b141e411cdc277dc7c6f061" + integrity sha512-odM8Ove+gSh0zU27DUj5cG1gNKg7mLWBYzB5E4nNLrLwBmYEgYP43vHKDGOVZcJSVElQBI0+jTQgjnq0NfLjng== + dependencies: + "@babel/runtime" "^7.18.3" + "@rc-component/trigger" "^2.0.0" + classnames "^2.2.6" + rc-util "^5.17.0" + +rc-dropdown@~4.2.1: + version "4.2.1" + resolved "https://registry.yarnpkg.com/rc-dropdown/-/rc-dropdown-4.2.1.tgz#44729eb2a4272e0353d31ac060da21e606accb1c" + integrity sha512-YDAlXsPv3I1n42dv1JpdM7wJ+gSUBfeyPK59ZpBD9jQhK9jVuxpjj3NmWQHOBceA1zEPVX84T2wbdb2SD0UjmA== + dependencies: + "@babel/runtime" "^7.18.3" + "@rc-component/trigger" "^2.0.0" + classnames "^2.2.6" + rc-util "^5.44.1" + +rc-field-form@~2.7.0: + version "2.7.0" + resolved "https://registry.yarnpkg.com/rc-field-form/-/rc-field-form-2.7.0.tgz#22413e793f35bfc1f35b0ec462774d7277f5a399" + integrity sha512-hgKsCay2taxzVnBPZl+1n4ZondsV78G++XVsMIJCAoioMjlMQR9YwAp7JZDIECzIu2Z66R+f4SFIRrO2DjDNAA== + dependencies: + "@babel/runtime" "^7.18.0" + "@rc-component/async-validator" "^5.0.3" + rc-util "^5.32.2" + +rc-image@~7.11.1: + version "7.11.1" + resolved "https://registry.yarnpkg.com/rc-image/-/rc-image-7.11.1.tgz#3ab290708dc053d3681de94186522e4e594f6772" + integrity sha512-XuoWx4KUXg7hNy5mRTy1i8c8p3K8boWg6UajbHpDXS5AlRVucNfTi5YxTtPBTBzegxAZpvuLfh3emXFt6ybUdA== + dependencies: + "@babel/runtime" "^7.11.2" + "@rc-component/portal" "^1.0.2" + classnames "^2.2.6" + rc-dialog "~9.6.0" + rc-motion "^2.6.2" + rc-util "^5.34.1" + +rc-input-number@~9.5.0: + version "9.5.0" + resolved "https://registry.yarnpkg.com/rc-input-number/-/rc-input-number-9.5.0.tgz#b47963d0f2cbd85ab2f1badfdc089a904c073f38" + 
integrity sha512-bKaEvB5tHebUURAEXw35LDcnRZLq3x1k7GxfAqBMzmpHkDGzjAtnUL8y4y5N15rIFIg5IJgwr211jInl3cipag== + dependencies: + "@babel/runtime" "^7.10.1" + "@rc-component/mini-decimal" "^1.0.1" + classnames "^2.2.5" + rc-input "~1.8.0" + rc-util "^5.40.1" + +rc-input@~1.8.0: + version "1.8.0" + resolved "https://registry.yarnpkg.com/rc-input/-/rc-input-1.8.0.tgz#d2f4404befebf2fbdc28390d5494c302f74ae974" + integrity sha512-KXvaTbX+7ha8a/k+eg6SYRVERK0NddX8QX7a7AnRvUa/rEH0CNMlpcBzBkhI0wp2C8C4HlMoYl8TImSN+fuHKA== + dependencies: + "@babel/runtime" "^7.11.1" + classnames "^2.2.1" + rc-util "^5.18.1" + +rc-mentions@~2.20.0: + version "2.20.0" + resolved "https://registry.yarnpkg.com/rc-mentions/-/rc-mentions-2.20.0.tgz#3bbeac0352b02e0ce3e1244adb48701bb6903bf7" + integrity sha512-w8HCMZEh3f0nR8ZEd466ATqmXFCMGMN5UFCzEUL0bM/nGw/wOS2GgRzKBcm19K++jDyuWCOJOdgcKGXU3fXfbQ== + dependencies: + "@babel/runtime" "^7.22.5" + "@rc-component/trigger" "^2.0.0" + classnames "^2.2.6" + rc-input "~1.8.0" + rc-menu "~9.16.0" + rc-textarea "~1.10.0" + rc-util "^5.34.1" + +rc-menu@~9.16.0, rc-menu@~9.16.1: + version "9.16.1" + resolved "https://registry.yarnpkg.com/rc-menu/-/rc-menu-9.16.1.tgz#9df1168e41d87dc7164c582173e1a1d32011899f" + integrity sha512-ghHx6/6Dvp+fw8CJhDUHFHDJ84hJE3BXNCzSgLdmNiFErWSOaZNsihDAsKq9ByTALo/xkNIwtDFGIl6r+RPXBg== + dependencies: + "@babel/runtime" "^7.10.1" + "@rc-component/trigger" "^2.0.0" + classnames "2.x" + rc-motion "^2.4.3" + rc-overflow "^1.3.1" + rc-util "^5.27.0" + +rc-motion@^2.0.0, rc-motion@^2.0.1, rc-motion@^2.3.0, rc-motion@^2.3.4, rc-motion@^2.4.3, rc-motion@^2.4.4, rc-motion@^2.6.1, rc-motion@^2.6.2, rc-motion@^2.9.0: version "2.9.2" resolved "https://registry.yarnpkg.com/rc-motion/-/rc-motion-2.9.2.tgz#f7c6d480250df8a512d0cfdce07ff3da906958cf" integrity sha512-fUAhHKLDdkAXIDLH0GYwof3raS58dtNUmzLF2MeiR8o6n4thNpSDQhOqQzWE4WfFZDCi9VEN8n7tiB7czREcyw== @@ -10912,6 +10782,25 @@ rc-motion@^2.4.4: classnames "^2.2.1" rc-util "^5.43.0" +rc-motion@^2.9.5: + 
version "2.9.5" + resolved "https://registry.yarnpkg.com/rc-motion/-/rc-motion-2.9.5.tgz#12c6ead4fd355f94f00de9bb4f15df576d677e0c" + integrity sha512-w+XTUrfh7ArbYEd2582uDrEhmBHwK1ZENJiSJVb7uRxdE7qJSYjbO2eksRXmndqyKqKoYPc9ClpPh5242mV1vA== + dependencies: + "@babel/runtime" "^7.11.1" + classnames "^2.2.1" + rc-util "^5.44.0" + +rc-notification@~5.6.4: + version "5.6.4" + resolved "https://registry.yarnpkg.com/rc-notification/-/rc-notification-5.6.4.tgz#ea89c39c13cd517fdfd97fe63f03376fabb78544" + integrity sha512-KcS4O6B4qzM3KH7lkwOB7ooLPZ4b6J+VMmQgT51VZCeEcmghdeR4IrMcFq0LG+RPdnbe/ArT086tGM8Snimgiw== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "2.x" + rc-motion "^2.9.0" + rc-util "^5.20.1" + rc-overflow@^1.3.1, rc-overflow@^1.3.2: version "1.3.2" resolved "https://registry.yarnpkg.com/rc-overflow/-/rc-overflow-1.3.2.tgz#72ee49e85a1308d8d4e3bd53285dc1f3e0bcce2c" @@ -10922,17 +10811,46 @@ rc-overflow@^1.3.1, rc-overflow@^1.3.2: rc-resize-observer "^1.0.0" rc-util "^5.37.0" -rc-overflow@^1.5.0: - version "1.5.0" - resolved "https://registry.npmmirror.com/rc-overflow/-/rc-overflow-1.5.0.tgz#02e58a15199e392adfcc87e0d6e9e7c8e57f2771" - integrity sha512-Lm/v9h0LymeUYJf0x39OveU52InkdRXqnn2aYXfWmo8WdOonIKB2kfau+GF0fWq6jPgtdO9yMqveGcK6aIhJmg== +rc-pagination@~5.1.0: + version "5.1.0" + resolved "https://registry.yarnpkg.com/rc-pagination/-/rc-pagination-5.1.0.tgz#a6e63a2c5db29e62f991282eb18a2d3ee725ba8b" + integrity sha512-8416Yip/+eclTFdHXLKTxZvn70duYVGTvUUWbckCCZoIl3jagqke3GLsFrMs0bsQBikiYpZLD9206Ej4SOdOXQ== dependencies: - "@babel/runtime" "^7.11.1" + "@babel/runtime" "^7.10.1" + classnames "^2.3.2" + rc-util "^5.38.0" + +rc-picker@~4.11.3: + version "4.11.3" + resolved "https://registry.yarnpkg.com/rc-picker/-/rc-picker-4.11.3.tgz#7e7e3ad83aa461c284b8391c697492d1c34d2cb8" + integrity sha512-MJ5teb7FlNE0NFHTncxXQ62Y5lytq6sh5nUw0iH8OkHL/TjARSEvSHpr940pWgjGANpjCwyMdvsEV55l5tYNSg== + dependencies: + "@babel/runtime" "^7.24.7" + "@rc-component/trigger" "^2.0.0" 
classnames "^2.2.1" - rc-resize-observer "^1.0.0" - rc-util "^5.37.0" + rc-overflow "^1.3.2" + rc-resize-observer "^1.4.0" + rc-util "^5.43.0" + +rc-progress@~4.0.0: + version "4.0.0" + resolved "https://registry.yarnpkg.com/rc-progress/-/rc-progress-4.0.0.tgz#5382147d9add33d3a5fbd264001373df6440e126" + integrity sha512-oofVMMafOCokIUIBnZLNcOZFsABaUw8PPrf1/y0ZBvKZNpOiu5h4AO9vv11Sw0p4Hb3D0yGWuEattcQGtNJ/aw== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "^2.2.6" + rc-util "^5.16.1" + +rc-rate@~2.13.1: + version "2.13.1" + resolved "https://registry.yarnpkg.com/rc-rate/-/rc-rate-2.13.1.tgz#29af7a3d4768362e9d4388f955a8b6389526b7fd" + integrity sha512-QUhQ9ivQ8Gy7mtMZPAjLbxBt5y9GRp65VcUyGUMF3N3fhiftivPHdpuDIaWIMOTEprAjZPC08bls1dQB+I1F2Q== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "^2.2.5" + rc-util "^5.0.1" -rc-resize-observer@^1.0.0: +rc-resize-observer@^1.0.0, rc-resize-observer@^1.1.0, rc-resize-observer@^1.3.1, rc-resize-observer@^1.4.0: version "1.4.0" resolved "https://registry.yarnpkg.com/rc-resize-observer/-/rc-resize-observer-1.4.0.tgz#7bba61e6b3c604834980647cce6451914750d0cc" integrity sha512-PnMVyRid9JLxFavTjeDXEXo65HCRqbmLBw9xX9gfC4BZiSzbLXKzW3jPz+J0P71pLbD5tBMTT+mkstV5gD0c9Q== @@ -10942,7 +10860,144 @@ rc-resize-observer@^1.0.0: rc-util "^5.38.0" resize-observer-polyfill "^1.5.1" -rc-util@^5.31.1, rc-util@^5.35.0, rc-util@^5.36.0, rc-util@^5.37.0, rc-util@^5.38.0, rc-util@^5.43.0: +rc-resize-observer@^1.4.3: + version "1.4.3" + resolved "https://registry.yarnpkg.com/rc-resize-observer/-/rc-resize-observer-1.4.3.tgz#4fd41fa561ba51362b5155a07c35d7c89a1ea569" + integrity sha512-YZLjUbyIWox8E9i9C3Tm7ia+W7euPItNWSPX5sCcQTYbnwDb5uNpnLHQCG1f22oZWUhLw4Mv2tFmeWe68CDQRQ== + dependencies: + "@babel/runtime" "^7.20.7" + classnames "^2.2.1" + rc-util "^5.44.1" + resize-observer-polyfill "^1.5.1" + +rc-segmented@~2.7.0: + version "2.7.0" + resolved 
"https://registry.yarnpkg.com/rc-segmented/-/rc-segmented-2.7.0.tgz#f56c2044abf8f03958b3a9a9d32987f10dcc4fc4" + integrity sha512-liijAjXz+KnTRVnxxXG2sYDGd6iLL7VpGGdR8gwoxAXy2KglviKCxLWZdjKYJzYzGSUwKDSTdYk8brj54Bn5BA== + dependencies: + "@babel/runtime" "^7.11.1" + classnames "^2.2.1" + rc-motion "^2.4.4" + rc-util "^5.17.0" + +rc-select@~14.16.2, rc-select@~14.16.6: + version "14.16.6" + resolved "https://registry.yarnpkg.com/rc-select/-/rc-select-14.16.6.tgz#1c57a9aa97248b3fd9a830d9bf5df6e9d2ad2c69" + integrity sha512-YPMtRPqfZWOm2XGTbx5/YVr1HT0vn//8QS77At0Gjb3Lv+Lbut0IORJPKLWu1hQ3u4GsA0SrDzs7nI8JG7Zmyg== + dependencies: + "@babel/runtime" "^7.10.1" + "@rc-component/trigger" "^2.1.1" + classnames "2.x" + rc-motion "^2.0.1" + rc-overflow "^1.3.1" + rc-util "^5.16.1" + rc-virtual-list "^3.5.2" + +rc-slider@~11.1.8: + version "11.1.8" + resolved "https://registry.yarnpkg.com/rc-slider/-/rc-slider-11.1.8.tgz#cf3b30dacac8f98d44f7685f733f6f7da146fc06" + integrity sha512-2gg/72YFSpKP+Ja5AjC5DPL1YnV8DEITDQrcc1eASrUYjl0esptaBVJBh5nLTXCCp15eD8EuGjwezVGSHhs9tQ== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "^2.2.5" + rc-util "^5.36.0" + +rc-steps@~6.0.1: + version "6.0.1" + resolved "https://registry.yarnpkg.com/rc-steps/-/rc-steps-6.0.1.tgz#c2136cd0087733f6d509209a84a5c80dc29a274d" + integrity sha512-lKHL+Sny0SeHkQKKDJlAjV5oZ8DwCdS2hFhAkIjuQt1/pB81M0cA0ErVFdHq9+jmPmFw1vJB2F5NBzFXLJxV+g== + dependencies: + "@babel/runtime" "^7.16.7" + classnames "^2.2.3" + rc-util "^5.16.1" + +rc-switch@~4.1.0: + version "4.1.0" + resolved "https://registry.yarnpkg.com/rc-switch/-/rc-switch-4.1.0.tgz#f37d81b4e0c5afd1274fd85367b17306bf25e7d7" + integrity sha512-TI8ufP2Az9oEbvyCeVE4+90PDSljGyuwix3fV58p7HV2o4wBnVToEyomJRVyTaZeqNPAp+vqeo4Wnj5u0ZZQBg== + dependencies: + "@babel/runtime" "^7.21.0" + classnames "^2.2.1" + rc-util "^5.30.0" + +rc-table@~7.50.4: + version "7.50.4" + resolved 
"https://registry.yarnpkg.com/rc-table/-/rc-table-7.50.4.tgz#687b5bf76d1a94168f75481cbc83be9442010432" + integrity sha512-Y+YuncnQqoS5e7yHvfvlv8BmCvwDYDX/2VixTBEhkMDk9itS9aBINp4nhzXFKiBP/frG4w0pS9d9Rgisl0T1Bw== + dependencies: + "@babel/runtime" "^7.10.1" + "@rc-component/context" "^1.4.0" + classnames "^2.2.5" + rc-resize-observer "^1.1.0" + rc-util "^5.44.3" + rc-virtual-list "^3.14.2" + +rc-tabs@~15.6.0: + version "15.6.0" + resolved "https://registry.yarnpkg.com/rc-tabs/-/rc-tabs-15.6.0.tgz#1a5b16d76be9733bc488cc8c326428acf7481c5a" + integrity sha512-SQ99Yjc9ewrJCUwoWPKq0FeGL2znWsqPhfcZgsHz1R7bkA2rMNe7CPgOiJkwppdJ98wkLhzs9vPrv21QOE1RyQ== + dependencies: + "@babel/runtime" "^7.11.2" + classnames "2.x" + rc-dropdown "~4.2.0" + rc-menu "~9.16.0" + rc-motion "^2.6.2" + rc-resize-observer "^1.0.0" + rc-util "^5.34.1" + +rc-textarea@~1.10.0: + version "1.10.0" + resolved "https://registry.yarnpkg.com/rc-textarea/-/rc-textarea-1.10.0.tgz#f8f962ef83be0b8e35db97cf03dbfb86ddd9c46c" + integrity sha512-ai9IkanNuyBS4x6sOL8qu/Ld40e6cEs6pgk93R+XLYg0mDSjNBGey6/ZpDs5+gNLD7urQ14po3V6Ck2dJLt9SA== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "^2.2.1" + rc-input "~1.8.0" + rc-resize-observer "^1.0.0" + rc-util "^5.27.0" + +rc-tooltip@~6.4.0: + version "6.4.0" + resolved "https://registry.yarnpkg.com/rc-tooltip/-/rc-tooltip-6.4.0.tgz#e832ed0392872025e59928cfc1ad9045656467fd" + integrity sha512-kqyivim5cp8I5RkHmpsp1Nn/Wk+1oeloMv9c7LXNgDxUpGm+RbXJGL+OPvDlcRnx9DBeOe4wyOIl4OKUERyH1g== + dependencies: + "@babel/runtime" "^7.11.2" + "@rc-component/trigger" "^2.0.0" + classnames "^2.3.1" + rc-util "^5.44.3" + +rc-tree-select@~5.27.0: + version "5.27.0" + resolved "https://registry.yarnpkg.com/rc-tree-select/-/rc-tree-select-5.27.0.tgz#3daa62972ae80846dac96bf4776d1a9dc9c7c4c6" + integrity sha512-2qTBTzwIT7LRI1o7zLyrCzmo5tQanmyGbSaGTIf7sYimCklAToVVfpMC6OAldSKolcnjorBYPNSKQqJmN3TCww== + dependencies: + "@babel/runtime" "^7.25.7" + classnames "2.x" + rc-select "~14.16.2" + rc-tree 
"~5.13.0" + rc-util "^5.43.0" + +rc-tree@~5.13.0, rc-tree@~5.13.1: + version "5.13.1" + resolved "https://registry.yarnpkg.com/rc-tree/-/rc-tree-5.13.1.tgz#f36a33a94a1282f4b09685216c01487089748910" + integrity sha512-FNhIefhftobCdUJshO7M8uZTA9F4OPGVXqGfZkkD/5soDeOhwO06T/aKTrg0WD8gRg/pyfq+ql3aMymLHCTC4A== + dependencies: + "@babel/runtime" "^7.10.1" + classnames "2.x" + rc-motion "^2.0.1" + rc-util "^5.16.1" + rc-virtual-list "^3.5.1" + +rc-upload@~4.8.1: + version "4.8.1" + resolved "https://registry.yarnpkg.com/rc-upload/-/rc-upload-4.8.1.tgz#ac55f2bc101b95b52a6e47f3c18f0f55b54e16d2" + integrity sha512-toEAhwl4hjLAI1u8/CgKWt30BR06ulPa4iGQSMvSXoHzO88gPCslxqV/mnn4gJU7PDoltGIC9Eh+wkeudqgHyw== + dependencies: + "@babel/runtime" "^7.18.3" + classnames "^2.2.5" + rc-util "^5.2.0" + +rc-util@^5.0.1, rc-util@^5.16.1, rc-util@^5.17.0, rc-util@^5.18.1, rc-util@^5.2.0, rc-util@^5.20.1, rc-util@^5.21.0, rc-util@^5.24.4, rc-util@^5.25.2, rc-util@^5.27.0, rc-util@^5.30.0, rc-util@^5.31.1, rc-util@^5.32.2, rc-util@^5.34.1, rc-util@^5.35.0, rc-util@^5.36.0, rc-util@^5.37.0, rc-util@^5.38.0, rc-util@^5.38.1, rc-util@^5.40.1, rc-util@^5.43.0: version "5.43.0" resolved "https://registry.yarnpkg.com/rc-util/-/rc-util-5.43.0.tgz#bba91fbef2c3e30ea2c236893746f3e9b05ecc4c" integrity sha512-AzC7KKOXFqAdIBqdGWepL9Xn7cm3vnAmjlHqUnoQaTMZYhM4VlXGLkkHHxj/BZ7Td0+SOPKB4RGPboBVKT9htw== @@ -10950,6 +11005,14 @@ rc-util@^5.31.1, rc-util@^5.35.0, rc-util@^5.36.0, rc-util@^5.37.0, rc-util@^5.3 "@babel/runtime" "^7.18.3" react-is "^18.2.0" +rc-util@^5.44.0, rc-util@^5.44.1, rc-util@^5.44.3, rc-util@^5.44.4: + version "5.44.4" + resolved "https://registry.yarnpkg.com/rc-util/-/rc-util-5.44.4.tgz#89ee9037683cca01cd60f1a6bbda761457dd6ba5" + integrity sha512-resueRJzmHG9Q6rI/DfK6Kdv9/Lfls05vzMs1Sk3M2P+3cJa+MakaZyWY8IPfehVuhPJFKrIY1IK4GqbiaiY5w== + dependencies: + "@babel/runtime" "^7.18.3" + react-is "^18.2.0" + rc-virtual-list@^3.14.2, rc-virtual-list@^3.5.1, rc-virtual-list@^3.5.2: version "3.14.5" 
resolved "https://registry.yarnpkg.com/rc-virtual-list/-/rc-virtual-list-3.14.5.tgz#593cd13fe05eabf4ad098329704a30c77701869e" @@ -12007,6 +12070,11 @@ stylehacks@^6.1.1: browserslist "^4.23.0" postcss-selector-parser "^6.0.16" +stylis@^4.0.13: + version "4.3.2" + resolved "https://registry.yarnpkg.com/stylis/-/stylis-4.3.2.tgz#8f76b70777dd53eb669c6f58c997bf0a9972e444" + integrity sha512-bhtUjWd/z6ltJiQwg0dUfxEJ+W+jdqQd8TbWLWyeIJHlnsqmGLRFFd8e5mA0AZi/zx90smXRlN66YMTcaSFifg== + stylis@^4.3.4, stylis@^4.3.6: version "4.3.6" resolved "https://registry.yarnpkg.com/stylis/-/stylis-4.3.6.tgz#7c7b97191cb4f195f03ecab7d52f7902ed378320" @@ -12177,7 +12245,7 @@ to-regex-range@^5.0.1: toggle-selection@^1.0.6: version "1.0.6" - resolved "https://registry.npmmirror.com/toggle-selection/-/toggle-selection-1.0.6.tgz#6e45b1263f2017fa0acc7d89d78b15b8bf77da32" + resolved "https://registry.yarnpkg.com/toggle-selection/-/toggle-selection-1.0.6.tgz#6e45b1263f2017fa0acc7d89d78b15b8bf77da32" integrity sha512-BiZS+C1OS8g/q2RRbJmy59xpyghNBqrr6k5L/uKBGRsTfxmu3ffiRnd8mlGPUVayg8pvfi5urfnu8TU7DVOkLQ== toidentifier@1.0.1: