update batcher configs #1642

Draft: wants to merge 1 commit into base `main`
214 changes: 205 additions & 9 deletions pages/operators/chain-operators/configuration/batcher.mdx
@@ -19,9 +19,192 @@ import { Callout, Tabs } from 'nextra/components'

# Batcher configuration

This page lists all configuration options for the op-batcher. The op-batcher posts
L2 sequencer data to the L1, to make it available for verifiers. The following
options are from the `--help` in [v1.10.0](https://github.com/ethereum-optimism/optimism/releases/tag/op-batcher%2Fv1.10.0).
This page provides the **definitive guide** to OP Stack batcher configuration and serves as the single source of truth for chain operators.
The op-batcher posts L2 sequencer data to the L1, to make it available for verifiers.

## Overview

The batcher serves as a critical component that:

* Stores a contiguous set of L2 blocks and turns them into channels
* Compresses and batches L2 transaction data
* Submits this data to L1 either as calldata or blobs
* Ensures data availability for L2 block verification and derivation

Understanding transaction finality is crucial for batcher configuration: transactions on OP Stack chains become finalized when their data is included in a finalized Ethereum block, typically around 20–30 minutes after submission. This is different from the 7-day withdrawal period, which affects only withdrawals through the Standard Bridge.

**Finality levels:**

* **Unsafe blocks**: The Sequencer can reorganize these blocks (typically within \~5–10 minutes)
* **Safe blocks**: The Sequencer would need to trigger a reorg on Ethereum itself, which is complex and unlikely
* **Finalized blocks**: Once blocks are included in a finalized Ethereum block (typically after \~15–30 minutes), the Sequencer cannot reorganize them without compromising Ethereum's finality guarantees
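
You can watch these heads advance on a running chain by querying the rollup node directly. The sketch below assumes an op-node rollup RPC listening on `localhost:9545` (matching the connection settings later on this page) and that `jq` is installed:

```bash
# Sketch: inspect the unsafe, safe, and finalized L2 heads via the op-node
# rollup RPC (endpoint and port are assumptions; adjust to your deployment).
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
  http://localhost:9545 \
  | jq '{unsafe: .result.unsafe_l2.number, safe: .result.safe_l2.number, finalized: .result.finalized_l2.number}'
```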

## Recommended configuration

<Callout type="info">
The following configuration provides optimal settings for most OP Stack chains while maintaining cost efficiency.
</Callout>

### Essential production configuration

```bash
# Basic Configuration - Wait for sync and check recent transactions
OP_BATCHER_WAIT_NODE_SYNC=true
OP_BATCHER_CHECK_RECENT_TXS_DEPTH=5
OP_BATCHER_POLL_INTERVAL=5s

# Batch Settings - Use Span Batches (Delta upgrade feature)
OP_BATCHER_BATCH_TYPE=1 # Span batches for efficiency

# Compression - Higher compression for lower data costs
OP_BATCHER_COMPRESSION_ALGO=brotli-10

# Data availability - Auto-switch between calldata and blobs
OP_BATCHER_DATA_AVAILABILITY_TYPE=auto

# Channel duration - Target 5 hours (1500 L1 blocks) for cost optimization
OP_BATCHER_MAX_CHANNEL_DURATION=1500

# Blob configuration - Multi-blob transactions for higher throughput
OP_BATCHER_TARGET_NUM_FRAMES=5 # 5 blobs per transaction

# Safety margins - 1 hour safety margin to prevent sequencing window issues
OP_BATCHER_SUB_SAFETY_MARGIN=300
OP_BATCHER_NUM_CONFIRMATIONS=4

# Transaction Management - Optimized fee parameters
OP_BATCHER_NETWORK_TIMEOUT=10s
OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei
OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei
OP_BATCHER_TXMGR_FEE_LIMIT_MULTIPLIER=16 # Allow up to 4 doublings
OP_BATCHER_MAX_PENDING_TX=10
OP_BATCHER_RESUBMISSION_TIMEOUT=180s # 3 minutes

# Sequencer following - Enable active sequencer detection
OP_BATCHER_ACTIVE_SEQUENCER_CHECK_DURATION=5s

# Throttling configuration (requires op-geth v1.101411.1+)
OP_BATCHER_THROTTLE_THRESHOLD=1000000 # 1MB backlog triggers throttling
OP_BATCHER_THROTTLE_TX_SIZE=300 # Individual tx size limit
OP_BATCHER_THROTTLE_BLOCK_SIZE=21000 # Block size limit when throttling
OP_BATCHER_THROTTLE_ALWAYS_BLOCK_SIZE=130000 # Always-active block limit

# Network connections
OP_BATCHER_L1_ETH_RPC="<L1_RPC_URL>"
OP_BATCHER_L2_ETH_RPC="http://localhost:8551"
OP_BATCHER_ROLLUP_RPC="http://localhost:9545"

# Authentication (choose one method)
OP_BATCHER_PRIVATE_KEY="your_private_key_here"

# Monitoring
OP_BATCHER_METRICS_ENABLED=true
OP_BATCHER_METRICS_PORT=7300
OP_BATCHER_LOG_LEVEL="INFO"
OP_BATCHER_LOG_FORMAT="json"
```
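
One way to apply these settings, sketched under the assumption that you keep them in a `batcher.env` file (a hypothetical name) and have `op-batcher` on your `PATH`, is to export the file and start the binary:

```bash
# Sketch: load the env file above and start op-batcher with it
# (batcher.env is an assumed filename; adjust to your deployment).
set -a
source ./batcher.env
set +a
op-batcher
```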

## Key configuration insights

### Channel duration (`OP_BATCHER_MAX_CHANNEL_DURATION`)

**Recommended value:** `1500` (5 hours)

<Callout type="warning">
The default value inside `op-batcher`, if not specified, is still `0`, which means channel duration tracking is disabled.
For very low throughput chains, this means filling channels until close to the sequencing window and posting the channel to L1 `SUB_SAFETY_MARGIN` L1 blocks before the sequencing window expires.
</Callout>

To minimize costs, we recommend setting your `OP_BATCHER_MAX_CHANNEL_DURATION` to target 5 hours, with a value of 1500 L1 blocks. When non-zero, this parameter sets the maximum time (in L1 blocks, which are 12 seconds each) between batch submissions to L1.
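
As a quick sanity check of the 1500-block figure, assuming the standard 12-second L1 block time:

```bash
# 1500 L1 blocks * 12 s per block = 18000 s = 5 hours
MAX_CHANNEL_DURATION=1500
L1_BLOCK_TIME=12
echo "$(( MAX_CHANNEL_DURATION * L1_BLOCK_TIME / 3600 )) hours"   # prints: 5 hours
```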

**Important considerations:**

* While setting an `OP_BATCHER_MAX_CHANNEL_DURATION` of 1500 results in the cheapest fees, it also means that your [safe head](https://github.com/ethereum-optimism/specs/blob/main/specs/glossary.md#safe-l2-head) can stall for up to 5 hours
* This will negatively impact apps on your chain that rely on the safe head for operation. While many apps can likely operate by following the unsafe head, centralized exchanges and third-party bridges often wait until transactions are marked safe before processing deposits and withdrawals
* Never exceed your L2's sequencing window (commonly 12 hours)
* If your chain fills up full blobs of data before the `OP_BATCHER_MAX_CHANNEL_DURATION` elapses, a batch will be submitted anyway

### Data availability type (`OP_BATCHER_DATA_AVAILABILITY_TYPE`)

**Recommended value:** `auto`

<Callout type="info">
Setting this flag to auto will allow the batcher to automatically switch between calldata and blobs based on the current L1 gas price.
</Callout>

Setting this flag to auto will allow the batcher to automatically switch between calldata and blobs based on the current L1 gas price.

Comment on lines +131 to +136

Contributor
🛠️ Refactor suggestion

Remove redundant explanation for auto mode.

The description of OP_BATCHER_DATA_AVAILABILITY_TYPE=auto is repeated in both the callout (lines 131–133) and the paragraph immediately after (lines 135–136). Please remove the duplicate:

- Setting this flag to auto will allow the batcher to automatically switch between calldata and blobs based on the current L1 gas price.

**Options:**

* `calldata` - Generally simpler but can be more expensive on mainnet Ethereum, depending on gas prices
* `blobs` - Typically lower cost when your chain has enough transaction volume to fill large chunks of data
* `auto` - Automatically switches based on L1 gas conditions (recommended for most chains)
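
The same setting can also be passed as a CLI flag instead of an env var (the flag form is documented in the batcher policy section below):

```bash
# Equivalent flag form of OP_BATCHER_DATA_AVAILABILITY_TYPE
op-batcher --data-availability-type=auto   # or: blobs | calldata
```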

### Multi-blob configuration

<Callout type="warning">
When there's blob congestion, running with high blob counts can backfire, because you will have a harder time getting blobs included and then fees will bump, which always means a doubling of the priority fees.
</Callout>

For medium to high-throughput chains, configure multi-blob transactions:

```bash
OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per transaction
OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # Higher fees for blob inclusion
OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0
OP_BATCHER_RESUBMISSION_TIMEOUT=240s # Wait 4 min before fee bumps
```

Multi-blob transactions are particularly useful for medium to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time.
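
As a rough back-of-the-envelope for why multi-blob transactions suit higher-throughput chains, assuming roughly 128 KB per blob and ignoring compression and encoding overhead:

```bash
# ~128 KB per blob * 6 blobs per transaction ≈ 768 KB of batch data per batcher tx
BLOB_SIZE_KB=128
TARGET_NUM_FRAMES=6
echo "$(( BLOB_SIZE_KB * TARGET_NUM_FRAMES )) KB per blob transaction"   # prints: 768 KB
```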

### Span batches (`OP_BATCHER_BATCH_TYPE`)

**Recommended value:** `1`

Span batches, introduced in the Delta network upgrade, reduce the overhead of OP Stack chains. This is especially beneficial for sparse and low-throughput chains. The overhead is reduced by representing a span of consecutive L2 blocks in a more efficient manner, while preserving the same consistency checks as regular batch data.
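
A minimal sketch of the two valid values (span batches require the Delta upgrade to be active on your chain):

```bash
# 0 = singular batches (pre-Delta behavior), 1 = span batches (Delta and later)
OP_BATCHER_BATCH_TYPE=1
```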

### Batcher sequencer throttling

This feature is a batcher-driven sequencer-throttling control loop. It prevents sudden spikes in L1 DA usage from consuming too much available gas and causing a backlog in batcher transactions.

<Callout type="info">
Note that this feature requires the batcher to correctly follow the sequencer at all times; otherwise it would set throttling parameters on a non-sequencer EL client. This means active sequencer follow mode has to be configured correctly by listing all the possible sequencers in the L2 rollup and EL endpoint flags.
</Callout>
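
A minimal sketch of active sequencer follow mode for a two-sequencer setup; the hostnames and comma-separated endpoint lists are assumptions, so substitute your own candidate sequencers:

```bash
# Sketch: list every candidate sequencer so the batcher can follow the active one
# (hostnames are placeholders).
OP_BATCHER_ROLLUP_RPC="http://sequencer-0:9545,http://sequencer-1:9545"
OP_BATCHER_L2_ETH_RPC="http://sequencer-0:8551,http://sequencer-1:8551"
OP_BATCHER_ACTIVE_SEQUENCER_CHECK_DURATION=5s
```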

**Feature requirements:**

* Requires op-geth version `v1.101411.1` or later
* Enable the `miner` API namespace: `GETH_HTTP_API: web3,debug,eth,txpool,net,miner`
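
If you configure op-geth with CLI flags rather than the `GETH_HTTP_API` env var, the equivalent is sketched below (keep the rest of your existing flags):

```bash
# Sketch: expose the miner namespace over HTTP so the batcher's throttling
# calls can reach the sequencer's EL client.
op-geth \
  --http \
  --http.api web3,debug,eth,txpool,net,miner
  # ...plus the rest of your existing op-geth flags
```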

**Throttling configuration:**

```bash
OP_BATCHER_THROTTLE_THRESHOLD=1000000 # 1MB backlog triggers throttling
OP_BATCHER_THROTTLE_TX_SIZE=300 # Individual tx size limit
OP_BATCHER_THROTTLE_BLOCK_SIZE=21000 # Block size limit when throttling
OP_BATCHER_THROTTLE_ALWAYS_BLOCK_SIZE=130000 # Always-active block limit
```

## Chain-specific recommendations

### For low-throughput chains

* Consider longer `MAX_CHANNEL_DURATION` values (up to 1500 blocks)
* Use span batches (`BATCH_TYPE=1`) for reduced overhead
* Start with `DATA_AVAILABILITY_TYPE=auto` to optimize costs

### For high-throughput chains

* Use multi-blob configuration with `TARGET_NUM_FRAMES=6`
* Shorter `MAX_CHANNEL_DURATION` may be appropriate
* Monitor blob inclusion rates and adjust fee parameters accordingly

### Sequencing window considerations

Your chain should never exceed your L2's sequencing window (commonly 12 hours). The sequencing window is the maximum time allowed for posting batch data to L1 before the chain experiences issues.

**Safety margin:** `OP_BATCHER_SUB_SAFETY_MARGIN` is the batcher transaction submission safety margin (in L1 blocks) that is subtracted from a channel's timeout and the sequencing window to guarantee safe inclusion of a channel on L1. The default value is 10.
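
A hedged sanity check tying these numbers together; the 3600-block sequencing window is an assumption based on the common 12-hour default:

```bash
# Sketch: channel duration + safety margin should stay well inside the
# sequencing window (all values in 12-second L1 blocks).
SEQ_WINDOW_BLOCKS=3600        # 12 hours (assumed default)
MAX_CHANNEL_DURATION=1500     # 5 hours
SUB_SAFETY_MARGIN=300         # 1 hour
if (( MAX_CHANNEL_DURATION + SUB_SAFETY_MARGIN >= SEQ_WINDOW_BLOCKS )); then
  echo "warning: channel duration plus safety margin exceeds the sequencing window"
fi
```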

## Batcher policy

@@ -32,12 +215,12 @@ The batcher policy defines high-level constraints and responsibilities regarding
| Data Availability Type | Specifies whether the batcher uses **blobs**, **calldata**, or **auto** to post transaction data to L1. | Batch submitter address | Ethereum (Blobs or Calldata) | - Alternative data availability (Alt-DA) is not yet supported in the standard configuration.<br />- The sequencer can switch at will between blob transactions and calldata, with no restrictions, because both are fully secured by L1. |
| Batch Submission Frequency | Determines how frequently the batcher submits aggregated transaction data to L1 (via the batcher transaction). | Batch submitter address | Must target **1,800 L1 blocks** (6 hours on Ethereum, assuming 12s L1 block time) or lower | - Batches must be posted before the sequencing window closes (commonly 12 hours by default).<br />- Leave a buffer for L1 network congestion and data size to ensure that each batch is fully committed in a timely manner. |

* **Data Availability Types**:
* **Data availability types**:
* **Calldata** is generally simpler but can be more expensive on mainnet Ethereum, depending on gas prices.
* **Blobs** are typically lower cost when your chain has enough transaction volume to fill large chunks of data.
* The `op-batcher` can toggle between these approaches by setting the `--data-availability-type=<blobs|calldata|auto>` flag or with the `OP_BATCHER_DATA_AVAILABILITY_TYPE` env variable. Setting this flag to `auto` will allow the batcher to automatically switch between `calldata` and `blobs` based on the current L1 gas price.

* **Batch Submission Frequency** (`OP_BATCHER_MAX_CHANNEL_DURATION` and related flags):
* **Batch submission frequency** (`OP_BATCHER_MAX_CHANNEL_DURATION` and related flags):
* Standard OP Chains frequently target a maximum channel duration between 1–6 hours.
* Your chain should never exceed your L2's sequencing window (commonly 12 hours).
* If targeting a longer submission window (e.g., 5 or 6 hours), be aware that the [safe head](https://github.com/ethereum-optimism/specs/blob/main/specs/glossary.md#safe-l2-head) can stall up to that duration.
@@ -71,7 +254,7 @@ To minimize costs, we recommend setting your `OP_BATCHER_MAX_CHANNEL_DURATION`
When there's blob congestion, running with high blob counts can backfire, because you will have a harder time getting blobs included and then fees will bump, which always means a doubling of the priority fees.
</Callout>

The `op-batcher` has the capabilities to send multiple blobs per single blob transaction. This is accomplished by the use of multi-frame channels, see the [specs](https://specs.optimism.io/protocol/derivation.html#frame-format?utm_source=op-docs&utm_medium=docs) for more technical details on channels and frames.
The `op-batcher` has the capabilities to send multiple blobs per single blob transaction. This is accomplished by the use of multi-frame channels, see the [specs](https://specs.optimism.io/protocol/derivation.html#frame-format?utm_source=op-docs\&utm_medium=docs) for more technical details on channels and frames.

A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is:

@@ -166,7 +349,7 @@ however you can see some of the most important variables configured below:
OP_BATCHER_ACTIVE_SEQUENCER_CHECK_DURATION: 5s
```

Lower throughput chains, which aren't filling up channels before the `MAX_CHANNEL_DURATION` is hit,
Lower throughput chains, which aren't filling up channels before the `MAX_CHANNEL_DURATION` is hit,
may save gas by increasing the `MAX_CHANNEL_DURATION`. See the [recommendations section](#set-your--op_batcher_max_channel_duration).

## All configuration variables
@@ -538,7 +721,6 @@ flag must also be set.
<Tabs.Tab>`OP_BATCHER_HD_PATH=`</Tabs.Tab>
</Tabs>


#### signer.address

Address the signer is signing transactions for.
@@ -806,7 +988,6 @@ Enable the metrics server. The default value is `false`.
<Tabs.Tab>`--version=false`</Tabs.Tab>
</Tabs>


### Miscellaneous

#### active-sequencer-check-duration
@@ -888,3 +1069,18 @@ Print the version. The default value is false.
<Tabs.Tab>`--version=<value>`</Tabs.Tab>
<Tabs.Tab>`--version=false`</Tabs.Tab>
</Tabs>

## Conclusion

This configuration reference provides a complete foundation for operating an OP Stack batcher efficiently and securely.
The recommended settings balance cost optimization with operational reliability while ensuring proper transaction finality progression.

The batcher is critical infrastructure that ensures your L2 chain inherits Ethereum's security properties through reliable data availability.
Proper configuration based on these guidelines will ensure optimal operation while meeting your chain's specific requirements.

### Next steps

* Read the [transaction finality](/stack/transactions/transaction-finality) docs
* Check out [rollup protocol](/stack/rollup/overview) specs
* Read the [derivation process](https://specs.optimism.io/protocol/derivation.html) guide
* Read the [batch submission wire format](https://specs.optimism.io/protocol/derivation.html#batch-submission) spec