Skip to content

Conversation

@jstirnaman
Copy link
Contributor

@jstirnaman jstirnaman commented Nov 19, 2025

Changes

Content discovery

UI

Markdown formatted content

  • Implements recommended file name patterns index.section.md and index.md
  • Section pages are the concatenation of the section page's direct children.
    • For tabs/code-tabs layouts, generates headings from the tab names for each section.
  • Frontmatter url values are full URLs. Hostname is determined by the environment that you pass to the build script--e.g. yarn build:md -e staging uses the test2 hostname.
  • Content URLs remain as they are--site URLs are paths relative to the site root. LLMs understand how to resolve these when provided a full URL or host.
  • Rust implementation generates Markdown pages 10x faster than the earlier turndown.js implementation.

Build and deploy

  • yarn build:md sets the hostname to use and generates the Markdown from /public HTML
  • yarn deploy:staging runs build:md -e staging and s3deploy
  • Updates deploy/edge.js lambda function to return URLs as-is, instead of appending a trailing slash, if they end in a valid file extension.

Tests

  • Adds e2e tests for format selector and markdown content validation

Improvements

  • Adds product-mappings.ts shared utility for DRY access to data/products.yml

Production deployment steps

  1. Update the prod lambda function using deploy/edge.js in this PR, publish a new version, update the behavior with the new version
  2. Merge this PR (CircleCI runs yarn build:md)
  3. Verify Copy page and Copy section return 200 for the markdown pages
  4. Verify frontmatter URLs contain prod hostname

@jstirnaman jstirnaman force-pushed the jts-feat-llm-text branch 4 times, most recently from a925d9b to 6c11757 Compare November 20, 2025 21:34
sanderson and others added 2 commits November 24, 2025 09:31
This enables LLM-friendly documentation for entire sections,
allowing users to copy complete documentation sections with a single click.

Lambda@Edge now generates .md files on-demand with:
- Evaluated Hugo shortcodes
- Proper YAML frontmatter with product metadata
- Clean markdown without UI elements
- Section aggregation (parent + children in single file)

The llms.txt files are now generated automatically during build from
content structure and product metadata in data/products.yml, eliminating
the need for hardcoded files and ensuring maintainability.

**Testing**:
- Automated markdown generation in test setup via cy.exec()
- Implement dynamic content validation that extracts HTML content and
  verifies it appears in markdown version

**Documentation**:
Documents LLM-friendly markdown generation

**Details**:
Add gzip decompression for S3 HTML files in Lambda markdown generator

HTML files stored in S3 are gzip-compressed but the Lambda was attempting
to parse compressed data as UTF-8, causing JSDOM to fail to find article
elements. This resulted in 404 errors for .md and .section.md requests.

- Add zlib gunzip decompression in s3-utils.js fetchHtmlFromS3()
- Detect gzip via ContentEncoding header or magic bytes (0x1f 0x8b)
- Add configurable DEBUG constant for verbose logging
- Add debug logging for buffer sizes and decompression in both files

The decompression adds ~1-5ms per request but is necessary to parse
HTML correctly. CloudFront caching minimizes Lambda invocations.

Await async markdown conversion functions

The convertToMarkdown and convertSectionToMarkdown functions are async
but weren't being awaited, causing the Lambda to return a Promise object
instead of a string. This resulted in CloudFront validation errors:
"The body is not a string, is not an object, or exceeds the maximum size"

**Troubleshooting**:

- Set DEBUG for troubleshooting in lambda
Implements static Markdown generation during Hugo build.

**Key Features:**
- Two-phase generation: HTML→MD (memory-bounded), MD→sections (fast)
- Automatic redirect detection via file size check (skips Hugo aliases)
- Product detection using compiled TypeScript product-mappings module
- Token estimation for LLM context planning (4 chars/token heuristic)
- YAML serialization with description sanitization

**Performance:**
- ~105 seconds for 5,000 pages + 500 sections
- ~300MB peak memory (safe for 2GB CircleCI environment)
- 23 files/sec conversion rate with controlled concurrency

**Configuration Parameters:**
- MIN_HTML_SIZE_BYTES (default: 1024) - Skip files below threshold
- CHARS_PER_TOKEN (default: 4) - Token estimation ratio
- Concurrency: 10 workers (CI), 20 workers (local)

**Output:**
- Single pages: public/*/index.md (with frontmatter + content)
- Section bundles: public/*/index.section.md (aggregated child pages)

**Files Changed:**
- scripts/build-llm-markdown.js (new) - Main build script
- scripts/lib/markdown-converter.cjs (renamed from .js) - Core conversion
- scripts/html-to-markdown.js - Updated import path
- package.json - Updated exports for .cjs module

Related: Replaces Lambda@Edge on-demand generation (5s response time)
with build-time static generation for production deployment.

feat(deploy): Add staging deployment workflow and update CI

Integrates LLM markdown generation into deployment workflows with
a complete staging deployment solution.

**CircleCI Updates:**
- Switch from legacy html-to-markdown.js to optimized build:md
- 2x performance improvement (105s vs 200s+ for 5000 pages)
- Better memory management (300MB vs variable)
- Enables section bundle generation (index.section.md files)

**Staging Deployment:**
- New scripts/deploy-staging.sh for local staging deploys
- Complete workflow: Hugo build → markdown gen → S3 upload
- Environment variable driven configuration
- Optional step skipping for faster iteration
- CloudFront cache invalidation support

**NPM Scripts:**
- Added deploy:staging command for convenience
- Wraps deploy-staging.sh script

**Documentation:**
- Updated DOCS-DEPLOYING.md with comprehensive guide
- Merged staging/production workflows with Lambda@Edge docs
- Build-time generation now primary, Lambda@Edge fallback
- Troubleshooting section with common issues
- Environment variable reference
- Performance metrics and optimization tips

**Benefits:**
- Manual staging validation before production
- Consistent markdown generation across environments
- Faster CI builds with optimized script
- Better error handling and progress reporting
- Section aggregation for improved LLM context

**Usage:**
```bash
export STAGING_BUCKET="test2.docs.influxdata.com"
export AWS_REGION="us-east-1"
export STAGING_CF_DISTRIBUTION_ID="E1XXXXXXXXXX"

yarn deploy:staging
```

Related: Completes build-time markdown generation implementation

refactor: Remove Lambda@Edge implementation

Build-time markdown generation has replaced Lambda@Edge on-demand
generation as the primary method. Removed Lambda code and updated
documentation to focus on build-time generation and testing.

Removed:
- deploy/llm-markdown/ directory (Lambda@Edge code)
- Lambda@Edge section from DOCS-DEPLOYING.md

Added:
- Testing and Validation section in DOCS-DEPLOYING.md
- Focus on build-time generation workflow
Implements core markdown-converter.cjs functions in Rust for performance comparison.

Performance results:
- Rust: ~257 files/sec (10× faster)
- JavaScript: ~25 files/sec average

Recommendation: Keep JavaScript for now, implement incremental builds first.
Rust migration provides 10× speedup but requires 3-4 weeks integration effort.

Files:
- Cargo.toml: Rust dependencies (html2md, scraper, serde_yaml, clap)
- src/main.rs: Core conversion logic + CLI benchmark tool
- benchmark-comparison.js: Side-by-side performance testing
- README.md: Comprehensive findings and recommendations
- Ensure dropdown stays within viewport bounds (min 8px padding)
- Reposition dropdown on window resize and scroll events
- Clean up event listeners when dropdown closes
Add remark-parse, remark-frontmatter, remark-gfm, and unified for
enhanced markdown processing capabilities.
…tensions

Without the return statement, the Lambda@Edge function would continue
executing after the callback, eventually hitting the trailing-slash
redirect logic. This caused .md files to redirect to URLs with trailing
slashes, which returned 404 from S3.
- Add URL_PATTERN_MAP and PRODUCT_NAME_MAP constants directly in the
  CommonJS module (ESM product-mappings.js cannot be require()'d)
- Update generateFrontmatter() to accept baseUrl parameter and construct
  full URLs for the frontmatter url field
- Update generateSectionFrontmatter() similarly for section pages
- Update all call sites to pass baseUrl parameter

This fixes empty product fields and relative URLs in generated markdown
frontmatter when served via Lambda@Edge.
Add -e, --env flag to html-to-markdown.js to control the base URL
in generated markdown frontmatter. This matches Hugo's -e flag behavior
and allows generating markdown with staging or production URLs.

Also update build-llm-markdown.js with similar environment support.
- Add Rust-based HTML-to-Markdown converter with NAPI-RS bindings
- Update Cypress markdown validation tests
- Update deploy-staging.sh with force upload flag
  - Defaults STAGING_URL to https://test2.docs.influxdata.com
  if not set
  - Exports it so yarn build:md -e staging can use it
  - Displays it in the summary
@jstirnaman
Copy link
Contributor Author

jstirnaman commented Nov 30, 2025

Copy section output for https://test2.docs.influxdata.com/influxdb3/core/write-data/

---
title: Write data to InfluxDB 3 Core
description: Collect and write time series data to InfluxDB 3 Core.
url: https://test2.docs.influxdata.com/influxdb3/core/write-data/
product: InfluxDB 3 Core
type: section
pages: 7
estimated_tokens: 10326
child_pages:
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/use-telegraf/
    title: Use Telegraf to write data
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/troubleshoot/
    title: Troubleshoot issues writing data
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/influxdb3-cli/
    title: Use the influxdb3 CLI to write data
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/http-api/
    title: Use the InfluxDB HTTP API to write data
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/client-libraries/
    title: Use InfluxDB client libraries to write data
  - url: https://test2.docs.influxdata.com/influxdb3/core/write-data/best-practices/
    title: Best practices for writing data
---

Use tools like the `influxdb3`CLI, Telegraf, and InfluxDB client libraries
to write time series data to InfluxDB 3 Core.[line protocol](#line-protocol)is the text-based format used to write data to InfluxDB.

> [!Tip]
> Tools are available to convert other formats (for example—[CSV](/influxdb3/core/write-data/use-telegraf/csv/)) to line protocol.

* [Choose the write endpoint for your workload](#choose-the-write-endpoint-for-your-workload)

  * [Timestamp precision across write APIs](#timestamp-precision-across-write-apis)

* [Line protocol](#line-protocol)

  * [Line protocol elements](#line-protocol-elements)

* [Write data to InfluxDB](#write-data-to-influxdb)

  * [Use InfluxDB client libraries to write data](#use-influxdb-client-libraries-to-write-data)
  * [Use the InfluxDB HTTP API to write data](#use-the-influxdb-http-api-to-write-data)
  * [Use Telegraf to write data](#use-telegraf-to-write-data)
  * [Use the influxdb3 CLI to write data](#use-the-influxdb3-cli-to-write-data)
  * [Best practices for writing data](#best-practices-for-writing-data)
  * [Troubleshoot issues writing data](#troubleshoot-issues-writing-data)

> [!Tip]
> #### Choose the write endpoint for your workload ####
> When creating new write workloads, use the[InfluxDB HTTP API `/api/v3/write_lp` endpoint](/influxdb3/core/write-data/http-api/v3-write-lp/)and [client libraries](/influxdb3/core/write-data/client-libraries/).
> When bringing existing *v1* write workloads, use the InfluxDB 3 Core
> HTTP API [`/write` endpoint](/influxdb3/core/api/v3/#operation/PostV1Write).
> When bringing existing *v2* write workloads, use the InfluxDB 3 Core
> HTTP API [`/api/v2/write` endpoint](/influxdb3/core/api/v3/#operation/PostV2Write).
> **For Telegraf**, use the InfluxDB v1.x [`outputs.influxdb`](/telegraf/v1/output-plugins/influxdb/) or v2.x [`outputs.influxdb_v2`](/telegraf/v1/output-plugins/influxdb_v2/) output plugins.
> See how to [use Telegraf to write data](/influxdb3/core/write-data/use-telegraf/).

When creating new write workloads, use the[InfluxDB HTTP API `/api/v3/write_lp` endpoint](/influxdb3/core/write-data/http-api/v3-write-lp/)and [client libraries](/influxdb3/core/write-data/client-libraries/).

When bringing existing *v1* write workloads, use the InfluxDB 3 Core
HTTP API [`/write` endpoint](/influxdb3/core/api/v3/#operation/PostV1Write).

When bringing existing *v2* write workloads, use the InfluxDB 3 Core
HTTP API [`/api/v2/write` endpoint](/influxdb3/core/api/v3/#operation/PostV2Write).

**For Telegraf**, use the InfluxDB v1.x [`outputs.influxdb`](/telegraf/v1/output-plugins/influxdb/) or v2.x [`outputs.influxdb_v2`](/telegraf/v1/output-plugins/influxdb_v2/) output plugins.
See how to [use Telegraf to write data](/influxdb3/core/write-data/use-telegraf/).

Timestamp precision across write APIs
----------

InfluxDB 3 Core provides multiple write endpoints for compatibility with different InfluxDB versions.
The following table compares timestamp precision support across v1, v2, and v3 write APIs:

|    Precision     |v1 (`/write`)|v2 (`/api/v2/write`)|v3 (`/api/v3/write_lp`)|
|------------------|-------------|--------------------|-----------------------|
|**Auto detection**|    ❌ No     |        ❌ No        |  ✅ `auto` (default)   |
|   **Seconds**    |    ✅ `s`    |       ✅ `s`        |      ✅ `second`       |
| **Milliseconds** |   ✅ `ms`    |       ✅ `ms`       |    ✅ `millisecond`    |
| **Microseconds** |✅ `u` or `µ` |       ✅ `us`       |    ✅ `microsecond`    |
| **Nanoseconds**  |   ✅ `ns`    |       ✅ `ns`       |    ✅ `nanosecond`     |
|   **Minutes**    |    ✅ `m`    |        ❌ No        |         ❌ No          |
|    **Hours**     |    ✅ `h`    |        ❌ No        |         ❌ No          |
|   **Default**    | Nanosecond  |     Nanosecond     |  **Auto** (guessed)   |

* All write endpoints accept timestamps in line protocol format.
* InfluxDB 3 Core multiplies timestamps by the appropriate precision value to convert them to nanoseconds for internal storage.
* All timestamps are stored internally as nanoseconds regardless of the precision specified when writing.

Line protocol
----------

All data written to InfluxDB is written using[line protocol](/influxdb3/core/reference/line-protocol/), a text-based format
that lets you provide the necessary information to write a data point to InfluxDB.

### Line protocol elements ###

In InfluxDB, a point contains a table name, one or more fields, a timestamp,
and optional tags that provide metadata about the observation.

Each line of line protocol contains the following elements:

\* Required

* \* **table**: A string that identifies the
  table to store the data in.
* **tag set**: Comma-delimited list of key value pairs, each representing a tag.
  Tag keys and values are unquoted strings. *Spaces, commas, and equal characters
  must be escaped.*
* \* **field set**: Comma-delimited list of key value pairs, each
  representing a field.
  Field keys are unquoted strings. *Spaces and commas must be escaped.*Field values can be [strings](/influxdb3/core/reference/line-protocol/#string)(quoted),[floats](/influxdb3/core/reference/line-protocol/#float),[integers](/influxdb3/core/reference/line-protocol/#integer),[unsigned integers](/influxdb3/core/reference/line-protocol/#uinteger),
  or [booleans](/influxdb3/core/reference/line-protocol/#boolean).
* **timestamp**: [Unix timestamp](/influxdb3/core/reference/line-protocol/#unix-timestamp)associated with the data. InfluxDB supports up to nanosecond precision.*If the precision of the timestamp is not in nanoseconds, you must specify the
  precision when writing the data to InfluxDB.*

#### Line protocol element parsing ####

* **table**: Everything before the *first unescaped comma before the first
  whitespace*.
* **tag set**: Key-value pairs between the *first unescaped comma* and the *first
  unescaped whitespace*.
* **field set**: Key-value pairs between the *first and second unescaped whitespaces*.
* **timestamp**: Integer value after the *second unescaped whitespace*.
* Lines are separated by the newline character (`\n`).
  Line protocol is whitespace sensitive.

myTable,tag1=val1,tag2=val2 field1="v1",field2=1i 0000000000000000000

*For schema design recommendations, see[InfluxDB schema design](/influxdb3/core/write-data/best-practices/schema-design/).*

Write data to InfluxDB
----------

### [Use InfluxDB client libraries to write data](/influxdb3/core/write-data/client-libraries/) ###

Use InfluxDB API clients to write points as line protocol data to InfluxDB 3 Core.

### [Use the InfluxDB HTTP API to write data](/influxdb3/core/write-data/http-api/) ###

Use the `/api/v3/write_lp`, `/api/v2/write`, or `/write` HTTP API endpoints to write data to InfluxDB 3 Core.

### [Use Telegraf to write data](/influxdb3/core/write-data/use-telegraf/) ###

Use Telegraf to collect and write data to InfluxDB 3 Core.

### [Use the influxdb3 CLI to write data](/influxdb3/core/write-data/influxdb3-cli/) ###

Use the [`influxdb3` CLI](/influxdb3/core/reference/cli/influxdb3/) to write line protocol data to InfluxDB 3 Core.

### [Best practices for writing data](/influxdb3/core/write-data/best-practices/) ###

Learn about the recommendations and best practices for writing data to InfluxDB 3 Core.

### [Troubleshoot issues writing data](/influxdb3/core/write-data/troubleshoot/) ###

Troubleshoot issues writing data. Find response codes for failed writes. Discover how writes fail, from exceeding rate or payload limits, to syntax errors and schema conflicts.

[write](/influxdb3/core/tags/write/)[line protocol](/influxdb3/core/tags/line-protocol/)


---

## Use Telegraf to write data

[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a data
collection agent for collecting and reporting metrics.
Its vast library of input plugins and “plug-and-play” architecture lets you
quickly and easily collect metrics from many different sources.

For a list of available plugins, see [Telegraf plugins](/telegraf/v1/plugins/).

#### Requirements ####

* **Telegraf 1.9.2 or greater**.*For information about installing Telegraf, see the[Telegraf Installation instructions](/telegraf/v1/install/).*

Basic Telegraf usage
----------

Telegraf is a plugin-based agent with plugins that are enabled and configured in
your Telegraf configuration file (`telegraf.conf`).
Each Telegraf configuration must **have at least one input plugin and one output plugin**.

Telegraf input plugins retrieve metrics from different sources.
Telegraf output plugins write those metrics to a destination.

Use the [`outputs.influxdb_v2`](/telegraf/v1/plugins/#output-influxdb_v2) plugin
to connect to the InfluxDB v2 write API included in InfluxDB 3 Core and
write metrics collected by Telegraf to InfluxDB 3 Core.

```toml
# ...

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8181"]
  token = "AUTH_TOKEN"
  organization = ""
  bucket = "DATABASE_NAME"

# ...

Replace the following:

  • DATABASE_NAME:
    the name of the database to write data to
  • AUTH_TOKEN:
    your InfluxDB 3 Core token.Store this in a secret store or environment variable to avoid exposing the raw token string.

See how to Configure Telegraf to write to InfluxDB 3 Core.

Use Telegraf with InfluxDB

Configure Telegraf to write to InfluxDB 3 Core

Update existing or create new Telegraf configurations to use the influxdb_v2 output plugin to write to InfluxDB 3 Core. Start Telegraf using the custom configuration.

Use Telegraf to dual write to InfluxDB

Configure Telegraf to write data to multiple InfluxDB instances or clusters simultaneously.

Use Telegraf to write CSV data

Use the Telegraf file input plugin to read and parse CSV data into line protocol and write it to InfluxDB 3 Core.

InfluxDB University

Data Collection with Telegraf

Learn how to use Telegraf to make data time series data collection easy in this free InfluxDB University course.

Take the course


Troubleshoot issues writing data

Learn how to avoid unexpected results and recover from errors when writing to
InfluxDB 3 Core.

Handle write responses

InfluxDB 3 Core does the following when you send a write request:

  1. Validates the request.

  2. If successful, attempts to ingest data from the request body; otherwise,
    responds with an error status.

  3. Ingests or rejects data in the batch and returns one of the following HTTP
    status codes:

    • 204 No Content: All data in the batch is ingested.
    • 400 Bad Request: Some or all of the data has been rejected.
      Data that has not been rejected is ingested and queryable.

The response body contains error details aboutrejected points, up to 100 points.

Writes are synchronous–the response status indicates the final status of the
write and all ingested data is queryable.

To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.

Review HTTP status codes

InfluxDB 3 Core uses conventional HTTP status codes to indicate the success
or failure of a request. The message property of the response body may contain
additional details about the error.
Write requests return the following status codes:

HTTP response code Message Description
204 "Success" If InfluxDB ingested the data
400 "Bad request" error details about rejected points, up to 100 points: line contains the first rejected line, message describes rejections If some or all request data isn’t allowed (for example, if it is malformed or falls outside of the bucket’s retention period)–the response body indicates whether a partial write has occurred or if all data has been rejected
401 "Unauthorized" If the Authorization header is missing or malformed or if the token doesn’t have permission to write to the database. See write API examples using credentials.
404 "Not found" requested resource type (for example, “organization” or “database”), and resource name If a requested resource (for example, organization or database) wasn’t found
500 "Internal server error" Default status for an error
503 “Service unavailable” If the server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.

If your data did not write to the database, see how to troubleshoot rejected points.

Troubleshoot failures

If you notice data is missing in your database, do the following:

Troubleshoot rejected points

InfluxDB rejects points that don’t match the schema of existing data.

Check for field data typedifferences between the rejected data point and points within the same
database–for example, did you attempt to write string data to an int field?

Troubleshoot write performance issues

If you experience slow write performance or timeouts during high-volume ingestion,
consider the following:

Memory configuration

InfluxDB 3 Core uses memory for both query processing and internal data operations,
including converting data to Parquet format during persistence.
For write-heavy workloads, insufficient memory allocation can cause performance issues.

Symptoms of memory-related write issues:

  • Slow write performance during data persistence (typically every 10 minutes)
  • Increased response times during high-volume ingestion
  • Memory-related errors in server logs

Solutions:

  • Increase the exec-mem-pool-bytesconfiguration to allocate more memory for data operations.
    For write-heavy workloads, consider setting this to 30-40% of available memory.
  • Monitor memory usage during peak write periods to identify bottlenecks.
  • Adjust the gen1-durationto control how frequently data is persisted to Parquet format.

Example configuration for write-heavy workloads

influxdb3 serve \
  --exec-mem-pool-bytes PERCENTAGE \
  --gen1-duration 15m \
  # ... other options

Replace PERCENTAGE with the percentage
of available memory to allocate (for example, 35% for write-heavy workloads).

Related

writeline protocolerrors


Use the influxdb3 CLI to write data

Use the influxdb3 CLIto write line protocol data to InfluxDB 3 Core.

Note

Use the API for batching and higher-volume writes

The influxdb3 CLI lets you quickly get started writing data to InfluxDB 3 Core.
For batching and higher-volume write workloads, use theInfluxDB HTTP API,API client librariesor Telegraf.

The influxdb3 CLI lets you quickly get started writing data to InfluxDB 3 Core.
For batching and higher-volume write workloads, use theInfluxDB HTTP API,API client librariesor Telegraf.

Construct line protocol

With a basic understanding of line protocol,
you can construct data in line protocol format and write it to InfluxDB 3 Core.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:

  • table: home
    • tags

      • room: Living Room or Kitchen
    • fields

      • temp: temperature in °C (float)
      • hum: percent humidity (float)
      • co: carbon monoxide in parts per million (integer)
    • timestamp: Unix timestamp in second precision

The following line protocol represents the schema described above:

home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000

For this tutorial, you can either pass this line protocol directly to theinfluxdb3 write command as a string, via stdin, or you can save it to and
read it from a file.

Write the line protocol to InfluxDB

Use the influxdb3 write commandto write the home sensor sample data to InfluxDB 3 Core.
Provide the following:

  • The database name using the--database option

  • Your InfluxDB 3 Core token using the -t, --token option

  • Line protocol.
    Provide the line protocol in one of the following ways:

    • a string
    • a path to a file that contains the line protocol using the --file option
    • from stdin

Note

By default, InfluxDB 3 Core uses the timestamp magnitude to auto-detect the precision.
To specify the precision of timestamps in your data, use the --precision {ns|us|ms|s} option.

string

influxdb3 write \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000'
  1. In your terminal, enter the following command to create the sample data file:

    echo 'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
    home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
    home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
    home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
    home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
    home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
    home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
    home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
    home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
    home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
    home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
    home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000' > ./home.lp
    
  2. Enter the following CLI command to write the data from the sample file:

    influxdb3 write \
      --database DATABASE_NAME \
      --token AUTH_TOKEN \
      --file ./home.lp
    
  3. In your terminal, enter the following command to create the sample data file:

    echo 'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
    home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
    home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
    home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
    home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
    home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
    home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
    home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
    home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
    home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
    home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
    home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000' > ./home.lp
    
  4. Enter the following CLI command to write the data from the sample file:

    cat ./home.lp | influxdb3 write \
      --database DATABASE_NAME \
      --token AUTH_TOKEN
    

Replace the following:

  • DATABASE_NAME:
    the name of the database to write to
  • AUTH_TOKEN:
    your InfluxDB 3 Core token

Related


Use the InfluxDB HTTP API to write data

Use the InfluxDB HTTP API to write data to InfluxDB 3 Core.
Different APIs are available depending on your integration method.

Tip

Choose the write endpoint for your workload

When creating new write workloads, use theInfluxDB HTTP API /api/v3/write_lp endpointand client libraries.
When bringing existing v1 write workloads, use the InfluxDB 3 Core
HTTP API /write endpoint.
When bringing existing v2 write workloads, use the InfluxDB 3 Core
HTTP API /api/v2/write endpoint.
For Telegraf, use the InfluxDB v1.x outputs.influxdb or v2.x outputs.influxdb_v2 output plugins.
See how to use Telegraf to write data.

When creating new write workloads, use theInfluxDB HTTP API /api/v3/write_lp endpointand client libraries.

When bringing existing v1 write workloads, use the InfluxDB 3 Core
HTTP API /write endpoint.

When bringing existing v2 write workloads, use the InfluxDB 3 Core
HTTP API /api/v2/write endpoint.

For Telegraf, use the InfluxDB v1.x outputs.influxdb or v2.x outputs.influxdb_v2 output plugins.
See how to use Telegraf to write data.

Use the v3 write_lp API to write data

Use the /api/v3/write_lp HTTP API endpoint to write data to InfluxDB 3 Core.

Use compatibility APIs and client libraries to write data

Use HTTP API endpoints compatible with InfluxDB v2 and v1 clients to write points as line protocol data to InfluxDB 3 Core.

Related


Use InfluxDB client libraries to write data

Use InfluxDB 3 client libraries that integrate with your code to construct data
as time series points, and then write them as line protocol to an
InfluxDB 3 Core database.

Set up your project

Set up your InfluxDB 3 Core project and credentials
to write data using the InfluxDB 3 client library for your programming language
of choice.

  1. Install InfluxDB 3 Core
  2. Set up InfluxDB 3 Core
  3. Create a project directory and store your
    InfluxDB 3 Core credentials as environment variables or in a project
    configuration file, such as a .env (“dotenv”) file.

After setting up InfluxDB 3 Core and your project, you should have the following:

  • InfluxDB 3 Core credentials:

  • A directory for your project.

  • Credentials stored as environment variables or in a project configuration
    file–for example, a .env (“dotenv”) file.

Initialize a project directory

Create a project directory and initialize it for your programming language.

Go

  1. Install Go 1.13 or later.

  2. Create a directory for your Go module and change to the directory–for
    example:

    mkdir iot-starter-go && cd $_
    
  3. Initialize a Go module–for example:

    go mod init iot-starter
    
  4. Install Node.js.

  5. Create a directory for your JavaScript project and change to the
    directory–for example:

    mkdir -p iot-starter-js && cd $_
    
  6. Initialize a project–for example, using npm:

    npm init
    
  7. Install Python

  8. Inside of your project directory, create a directory for your Python module
    and change to the module directory–for example:

    mkdir -p iot-starter-py && cd $_
    
  9. Optional, but recommended: Usevenv orconda to activate a virtual
    environment for installing and executing code–for example, enter the
    following command using venv to create and activate a virtual environment
    for the project:

    python3 -m venv envs/iot-starter && source ./envs/iot-starter/bin/activate
    

Install the client library

Install the InfluxDB 3 client library for your programming language of choice.

C#

Add the InfluxDB 3 C# client library to your project using thedotnet CLI or
by adding the package to your project file–for example:

dotnet add package InfluxDB3.Client

Add theInfluxDB 3 Go client libraryto your project using thego get command–for example:

go mod init path/to/project/dir && cd $_
go get github.com/InfluxCommunity/influxdb3-go/v2/influxdb3

Add the InfluxDB 3 Java client library to your project dependencies using
the MavenGradle build tools.

For example, to add the library to a Maven project, add the following dependency
to your pom.xml file:

<dependency>
  <groupId>com.influxdb</groupId>
  <artifactId>influxdb3-java</artifactId>
  <version>1.1.0</version>
</dependency>

To add the library to a Gradle project, add the following dependency to your build.gradle file:

dependencies {
  implementation 'com.influxdb:influxdb3-java:1.1.0'
}

For a Node.js project, use @influxdata/influxdb3-client, which provides main (CommonJS),
module (ESM), and browser (UMD) exports.
Add the InfluxDB 3 JavaScript client library using your preferred package manager–for example, using npm:

npm install --save @influxdata/influxdb3-client

Install the InfluxDB 3 Python client library usingpip.
To use Pandas features, such as to_pandas(), provided by the Python
client library, you must also install thepandas package.

pip install influxdb3-python pandas

Construct line protocol

With a basic understanding of line protocol,
you can construct line protocol data and write it to InfluxDB 3 Core.

Use client library write methods to provide data as raw line protocol
or as Point objects that the client library converts to line protocol.
If your program creates the data you write to InfluxDB, the Pointinterface to take advantage of type safety in your program.

Client libraries provide one or more Point constructor methods. Some libraries
support language-native data structures, such as Go’s struct, for creating
points.

Examples in this guide show how to construct Point objects that follow the example home schema,
and then write the points as line protocol data to an InfluxDB 3 Core database.

Example home schema

Consider a use case where you collect data from sensors in your home. Each
sensor collects temperature, humidity, and carbon monoxide readings.

To collect this data, use the following schema:

  • table: home
    • tags

      • room: Living Room or Kitchen
    • fields

      • temp: temperature in °C (float)
      • hum: percent humidity (float)
      • co: carbon monoxide in parts per million (integer)
    • timestamp: Unix timestamp in second precision

Go

  1. Create a file for your module–for example: main.go.

  2. In main.go, enter the following sample code:

    package main
    
    import (
     "context"
     "os"
     "fmt"
     "time"
     "github.com/InfluxCommunity/influxdb3-go/v2/influxdb3"
     "github.com/influxdata/line-protocol/v2/lineprotocol"
    )
    
    func Write() error {
      url := os.Getenv("INFLUX_HOST")
      token := os.Getenv("INFLUX_TOKEN")
      database := os.Getenv("INFLUX_DATABASE")
    
      // To instantiate a client, call New() with InfluxDB credentials.
      client, err := influxdb3.New(influxdb3.ClientConfig{
       Host: url,
       Token: token,
       Database: database,
      })
    
      /** Use a deferred function to ensure the client is closed when the
        * function returns.
       **/
      defer func (client *influxdb3.Client)  {
       err = client.Close()
       if err != nil {
         panic(err)
       }
      }(client)
    
      /** Use the NewPoint method to construct a point.
        * NewPoint(measurement, tags map, fields map, time)
       **/
      point := influxdb3.NewPoint("home",
         map[string]string{
           "room": "Living Room",
         },
         map[string]any{
           "temp": 24.5,
           "hum":  40.5,
           "co":   15i},
         time.Now(),
       )
    
      /** Use the NewPointWithMeasurement method to construct a point with
        * method chaining.
       **/
      point2 := influxdb3.NewPointWithMeasurement("home").
       SetTag("room", "Living Room").
       SetField("temp", 23.5).
       SetField("hum", 38.0).
       SetField("co",  16i).
       SetTimestamp(time.Now())
    
      fmt.Println("Writing points")
      points := []*influxdb3.Point{point, point2}
    
      /** Write points to InfluxDB.
        * You can specify WriteOptions, such as Gzip threshold,
        * default tags, and timestamp precision. Default precision is lineprotocol.Nanosecond
       **/
      err = client.WritePoints(context.Background(), points,
        influxdb3.WithPrecision(lineprotocol.Second))
      return nil
    }
    
    func main() {
      Write()
    }
    
  3. To run the module and write the data to your InfluxDB 3 Core database,
    enter the following command in your terminal:

    go run main.go
    
  4. Create a file for your module–for example: write-points.js.

  5. In write-points.js, enter the following sample code:

    // write-points.js
    import { InfluxDBClient, Point } from '@influxdata/influxdb3-client';
    
    /**
     * Set InfluxDB credentials.
     */
    const host = process.env.INFLUX_HOST ?? '';
    const database = process.env.INFLUX_DATABASE;
    const token = process.env.INFLUX_TOKEN;
    
    /**
     * Write line protocol to InfluxDB using the JavaScript client library.
     */
    export async function writePoints() {
      /**
       * Instantiate an InfluxDBClient.
       * Provide the host URL and the database token.
       */
      const client = new InfluxDBClient({ host, token });
    
      /** Use the fluent interface with chained methods to construct Points. */
      const point = Point.measurement('home')
        .setTag('room', 'Living Room')
        .setFloatField('temp', 22.2)
        .setFloatField('hum', 35.5)
        .setIntegerField('co', 7)
        .setTimestamp(new Date().getTime() / 1000);
    
      const point2 = Point.measurement('home')
        .setTag('room', 'Kitchen')
        .setFloatField('temp', 21.0)
        .setFloatField('hum', 35.9)
        .setIntegerField('co', 0)
        .setTimestamp(new Date().getTime() / 1000);
    
      /** Write points to InfluxDB.
       * The write method accepts an array of points, the target database, and
       * an optional configuration object.
       * You can specify WriteOptions, such as Gzip threshold, default tags,
       * and timestamp precision. Default precision is lineprotocol.Nanosecond
       **/
    
      try {
        await client.write([point, point2], database, '', { precision: 's' });
        console.log('Data has been written successfully!');
      } catch (error) {
        console.error(`Error writing data to InfluxDB: ${error.body}`);
      }
    
      client.close();
    }
    
    writePoints();
    
  6. To run the module and write the data to your {{< product-name >}} database,
    enter the following command in your terminal:

    node writePoints.js
    
  7. Create a file for your module–for example: write-points.py.

  8. In write-points.py, enter the following sample code to write data in
    batching mode:

    import os
    from influxdb_client_3 import (
      InfluxDBClient3, InfluxDBError, Point, WritePrecision,
      WriteOptions, write_client_options)
    
    host = os.getenv('INFLUX_HOST')
    token = os.getenv('INFLUX_TOKEN')
    database = os.getenv('INFLUX_DATABASE')
    
    # Create an array of points with tags and fields.
    points = [Point("home")
                .tag("room", "Kitchen")
                .field("temp", 25.3)
                .field('hum', 20.2)
                .field('co', 9)]
    
    # With batching mode, define callbacks to execute after a successful or
    # failed write request.
    # Callback methods receive the configuration and data sent in the request.
    def success(self, data: str):
        print(f"Successfully wrote batch: data: {data}")
    
    def error(self, data: str, exception: InfluxDBError):
        print(f"Failed writing batch: config: {self}, data: {data} due: {exception}")
    
    def retry(self, data: str, exception: InfluxDBError):
        print(f"Failed retry writing batch: config: {self}, data: {data} retry: {exception}")
    
    # Configure options for batch writing.
    write_options = WriteOptions(batch_size=500,
                                        flush_interval=10_000,
                                        jitter_interval=2_000,
                                        retry_interval=5_000,
                                        max_retries=5,
                                        max_retry_delay=30_000,
                                        exponential_base=2)
    
    # Create an options dict that sets callbacks and WriteOptions.
    wco = write_client_options(success_callback=success,
                              error_callback=error,
                              retry_callback=retry,
                              write_options=write_options)
    
    # Instantiate a synchronous instance of the client with your
    # InfluxDB credentials and write options, such as Gzip threshold, default tags,
    # and timestamp precision. Default precision is nanosecond ('ns').
    with InfluxDBClient3(host=host,
                            token=token,
                            database=database,
                            write_client_options=wco) as client:
    
          client.write(points, write_precision='s')
    
  9. To run the module and write the data to your InfluxDB 3 Core database,
    enter the following command in your terminal:

    python write-points.py
    

The sample code does the following:

  1. Instantiates a client configured with the InfluxDB URL and API token.
  2. Constructs hometable Point objects.
  3. Sends data as line protocol format to InfluxDB and waits for the response.
  4. If the write succeeds, logs the success message to stdout; otherwise, logs
    the failure message and error details.
  5. Closes the client to release resources.

Related


Best practices for writing data

The following articles walk through recommendations and best practices for
writing data to InfluxDB 3 Core.

Optimize writes to InfluxDB 3 Core

Tips and examples to optimize performance and system overhead when writing data to InfluxDB 3 Core.

InfluxDB schema design recommendations

Design your schema for simpler and more performant queries.

@jstirnaman
Copy link
Contributor Author

Copy page output for https://test2.docs.influxdata.com/influxdb3/core/get-started/setup/

---
title: Set up InfluxDB 3 Core
description: Install, configure, and set up authorization for InfluxDB 3 Core.
url: https://test2.docs.influxdata.com/influxdb3/core/get-started/setup/
product: InfluxDB 3 Core
product_version: core
date: 2025-11-29T17:28:40Z
lastmod: 2025-11-29T17:28:40Z
estimated_tokens: 3253
---

* [Prerequisites](#prerequisites)

* [Quick-Start Mode (Development)](#quick-start-mode-development)

* [Start InfluxDB](#start-influxdb)

  * [Object store examples](#object-store-examples)

* [Set up authorization](#set-up-authorization)

  * [Create an operator token](#create-an-operator-token)
  * [Set your token for authorization](#set-your-token-for-authorization)

Prerequisites
----------

To get started, you’ll need:

* **InfluxDB 3 Core**: [Install and verify the latest version](/influxdb3/core/install/) on your system.
* If you want to persist data, have access to one of the following:
  * A directory on your local disk where you can persist data (used by examples in this guide)
  * S3-compatible object store and credentials

Quick-Start Mode (Development)
----------

For development, testing, and home use, you can start InfluxDB 3 Core without
any arguments. The system automatically generates required configuration values
based on your system’s hostname:

```bash
influxdb3

When you run influxdb3 without arguments, the following values are auto-generated:

  • node-id: {hostname}-node (or primary-node if hostname is unavailable)

  • object-store: file

  • data-dir: ~/.influxdb

The system displays warning messages showing the auto-generated identifiers:

Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id

Important

When to use quick-start mode

Quick-start mode is designed for development, testing, and home lab environments
where simplicity is prioritized over explicit configuration.
For production deployments, use explicit configuration values with theinfluxdb3 serve commandas shown in the Start InfluxDB section below.

Quick-start mode is designed for development, testing, and home lab environments
where simplicity is prioritized over explicit configuration.

For production deployments, use explicit configuration values with theinfluxdb3 serve commandas shown in the Start InfluxDB section below.

Configuration precedence: Environment variables override auto-generated defaults.
For example, if you set INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node, the system
uses my-node instead of generating {hostname}-node.

Start InfluxDB

Use the influxdb3 serve commandto start InfluxDB 3 Core.
Provide the following:

  • --node-id: A string identifier that distinguishes individual server instances.
    This forms the final part of the storage path: <CONFIGURED_PATH>/<NODE_ID>.

  • --object-store: Specifies the type of object store to use.
    InfluxDB supports the following:

    • file: local file system
    • memory: in memory (no object persistence)
    • memory-throttled: like memory but with latency and throughput that
      somewhat resembles a cloud-based object store
    • s3: AWS S3 and S3-compatible services like Ceph or Minio
    • google: Google Cloud Storage
    • azure: Azure Blob Storage
  • Other object store parameters depending on the selected object-store type.
    For example, if you use s3, you must provide the bucket name and credentials.

Note

Diskless architecture

InfluxDB 3 supports a diskless architecture that can operate with object
storage alone, eliminating the need for locally attached disks.
InfluxDB 3 Core can also work with only local disk storage when needed.

InfluxDB 3 supports a diskless architecture that can operate with object
storage alone, eliminating the need for locally attached disks.
InfluxDB 3 Core can also work with only local disk storage when needed.

For this getting started guide, use the file object store to persist data to
your local disk.

# File system object store
# Provide the file system directory
influxdb3 serve \
  --node-id host01 \
  --object-store file \
  --data-dir ~/.influxdb3

Object store examples

File system object store

Store data in a specified directory on the local filesystem.
This is the default object store type.

Replace the following with your values:

# File system object store
# Provide the file system directory
influxdb3 serve \
  --node-id host01 \
  --object-store file \
  --data-dir ~/.influxdb3

Docker with a mounted file system object store

To run the Docker image and persist
data to the local file system, mount a volume for the object store–for example,
provide the following options with your docker run command:

  • --volume /path/on/host:/path/in/container: Mounts a directory from your file system to the container
  • --object-store file --data-dir /path/in/container: Uses the volume for object storage
# File system object store with Docker
# Create a mount
# Provide the mount path
docker run -it \
 --volume /path/on/host:/path/in/container \
 influxdb:3-core influxdb3 serve \
 --node-id my_host \
 --object-store file \
 --data-dir /path/in/container

Note

The InfluxDB 3 Core Docker image exposes port 8181, the influxdb3server default for HTTP connections.
To map the exposed port to a different port when running a container, see the
Docker guide for Publishing and exposing ports.

Docker compose with a mounted file system object store

Open compose.yaml for editing and add a services entry for
InfluxDB 3 Core–for example:

# compose.yaml
services:
  influxdb3-core:
    image: influxdb:3-core
    ports:
      - 8181:8181
    command:
      - influxdb3
      - serve
      - --node-id=node0
      - --object-store=file
      - --data-dir=/var/lib/influxdb3/data
      - --plugin-dir=/var/lib/influxdb3/plugins
    volumes:
      - type: bind
        # Path to store data on your host system
        source: ~/.influxdb3/data
        # Path to store data in the container
        target: /var/lib/influxdb3/data
      - type: bind
        # Path to store plugins on your host system
        source: ~/.influxdb3/plugins
        # Path to store plugins in the container
        target: /var/lib/influxdb3/plugins

Use the Docker Compose CLI to start the server–for example:

docker compose pull && docker compose up influxdb3-core

The command pulls the latest InfluxDB 3 Core Docker image and startsinfluxdb3 in a container with host port 8181 mapped to container port8181, the server default for HTTP connections.

Tip

Custom port mapping

To customize your influxdb3 server hostname and port, specify the--http-bind option or the INFLUXDB3_HTTP_BIND_ADDR environment variable.
For more information about mapping your container port to a specific host port, see the
Docker guide for Publishing and exposing ports.

To customize your influxdb3 server hostname and port, specify the--http-bind option or the INFLUXDB3_HTTP_BIND_ADDR environment variable.

For more information about mapping your container port to a specific host port, see the
Docker guide for Publishing and exposing ports.

S3 object storage

Store data in an S3-compatible object store.
This is useful for production deployments that require high availability and durability.
Provide your bucket name and credentials to access the S3 object store.

# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
influxdb3 serve \
  --node-id host01 \
  --object-store s3 \
  --bucket OBJECT_STORE_BUCKET \
  --aws-access-key AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY
# Minio or other open source object store
# (using the AWS S3 API with additional parameters)
# Specify the object store type and associated options
influxdb3 serve \
  --node-id host01 \
  --object-store s3 \
  --bucket OBJECT_STORE_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY \
  --aws-endpoint ENDPOINT \
  --aws-allow-http

Memory-based object store

Store data in RAM without persisting it on shutdown.
It’s useful for rapid testing and development.

# Memory object store
# Stores data in RAM; doesn't persist data
influxdb3 serve \
  --node-id host01 \
  --object-store memory

For more information about server options, use the CLI help or view theInfluxDB 3 CLI reference:

influxdb3 serve --help

Tip

Use the InfluxDB 3 Explorer query interface

You can complete the remaining steps in this guide using InfluxDB 3 Explorer,
the web-based query and administrative interface for InfluxDB 3.
Explorer provides visual management of databases and tokens and an
easy way to write and query your time series data.
For more information, see the InfluxDB 3 Explorer documentation.

You can complete the remaining steps in this guide using InfluxDB 3 Explorer,
the web-based query and administrative interface for InfluxDB 3.
Explorer provides visual management of databases and tokens and an
easy way to write and query your time series data.

For more information, see the InfluxDB 3 Explorer documentation.

Set up authorization

InfluxDB 3 Core uses token-based authorization to authorize actions in the
database. Authorization is enabled by default when you start the server.
With authorization enabled, you must provide a token with influxdb3 CLI
commands and HTTP API requests.

InfluxDB 3 Core supports admin tokens, which grant access to all CLI actions and API endpoints.

For more information about tokens and authorization, see Manage tokens.

Create an operator token

After you start the server, create your first admin token.
The first admin token you create is the operator token for the server.

Use the influxdb3 create token commandwith the --admin option to create your operator token:

CLI

influxdb3 create token --admin
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin

Replace CONTAINER_NAME with the name of your running Docker container.

The command returns a token string for authenticating CLI commands and API requests.

Important

Store your token securely

InfluxDB displays the token string only when you create it.
Store your token securely—you cannot retrieve it from the database later.

InfluxDB displays the token string only when you create it.
Store your token securely—you cannot retrieve it from the database later.

Set your token for authorization

Use your operator token to authenticate server actions in InfluxDB 3 Core,
such as
performing administrative tasks
and writing and querying data.

Use one of the following methods to provide your token and authenticate influxdb3 CLI commands.

In your command, replace YOUR_AUTH_TOKEN with your token string (for example, the operator token from the previous step).

Environment variable (recommended)

Set the INFLUXDB3_AUTH_TOKEN environment variable to have the CLI use your
token automatically:

export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN

Include the --token option with CLI commands:

influxdb3 show databases --token YOUR_AUTH_TOKEN

For HTTP API requests, include your token in the Authorization header–for example:

curl "http://localhost:8181/api/v3/configure/database" \
  --header "Authorization: Bearer YOUR_AUTH_TOKEN"

Learn more about tokens and permissions

  • Manage admin tokens - Understand and
    manage operator and named admin tokens

  • Authentication -
    Understand authentication, authorizations, and permissions in InfluxDB 3 Core

Get startedWrite data

Related

@jstirnaman
Copy link
Contributor Author

Deployed to test2

@jstirnaman jstirnaman marked this pull request as ready for review November 30, 2025 14:58
Copilot finished reviewing on behalf of jstirnaman November 30, 2025 14:59
Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR implements LLM-friendly Markdown generation for the InfluxData documentation site. It introduces a dual-implementation approach with a Rust converter (10x faster) and JavaScript fallback, along with UI components for copying/downloading documentation in various formats.

Key Changes:

  • Rust-based HTML-to-Markdown converter with Node.js bindings via napi-rs
  • JavaScript fallback using Turndown and JSDOM for broader compatibility
  • Format selector UI component for accessing documentation in different formats
  • Build scripts for generating Markdown at build time
  • Hugo templates for llms.txt generation following llmstxt.org specification

Reviewed changes

Copilot reviewed 39 out of 42 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
package.json Added dependencies (turndown, jsdom, remark family, p-limit) and new build scripts
yarn.lock Lockfile updates for new dependencies including Rust napi-rs tooling
scripts/rust-markdown-converter/* Rust implementation with Cargo config, build script, and library code
scripts/lib/markdown-converter.cjs JavaScript fallback implementation with Turndown/JSDOM
scripts/html-to-markdown.js CLI tool for HTML→Markdown conversion
scripts/build-llm-markdown.js Optimized build script with two-phase conversion
scripts/deploy-staging.sh Staging deployment script
layouts/partials/article/format-selector.html UI component for format selection
layouts/index.llmstxt.txt Root llms.txt template
layouts/_default/landing-influxdb.llmstxt.txt Landing page llms.txt template

You can also share your feedback on Copilot code review for a chance to win a $100 gift card. Take the survey.

return;
}

if (productMappings && productMappings.initializeProductData) {
Copy link

Copilot AI Nov 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This guard always evaluates to false.

Copilot uses AI. Check for mistakes.
Comment on lines +13 to +15
validateFrontmatter,
validateTable,
containsText,
Copy link

Copilot AI Nov 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Unused imports containsText, validateFrontmatter.

Suggested change
validateFrontmatter,
validateTable,
containsText,
validateTable,

Copilot uses AI. Check for mistakes.
*
* @default 4 - Rough heuristic (4 characters ≈ 1 token)
*/
const CHARS_PER_TOKEN = 4;
Copy link

Copilot AI Nov 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Unused variable CHARS_PER_TOKEN.

Copilot uses AI. Check for mistakes.

const TurndownService = require('turndown');
const { JSDOM } = require('jsdom');
const path = require('path');
Copy link

Copilot AI Nov 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Unused variable path.

Copilot uses AI. Check for mistakes.
const TurndownService = require('turndown');
const { JSDOM } = require('jsdom');
const path = require('path');
const fs = require('fs');
Copy link

Copilot AI Nov 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Unused variable fs.

Copilot uses AI. Check for mistakes.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants