
Conversation

@mbutrovich (Contributor) commented Oct 6, 2025

This PR introduces a new approach for integrating Apache Iceberg with Comet using iceberg-rust, enabling fully-native Iceberg table scans without requiring changes to upstream Iceberg Java code.

Rationale for this change

I was inspired by @RussellSpitzer's recent talk and wanted to revisit the abstraction layer at which Comet integrates with Iceberg.

Our current iceberg_compat approach requires code changes in Iceberg Java to integrate with Parquet reader instantiation, creating a tight coupling between Comet and Iceberg. This PR instead works at the FileScanTask layer after Iceberg's planning phase is complete. This enables fully-native Iceberg scans (similar to our native_datafusion scans) without any changes in upstream Iceberg Java code.

All catalog access and planning continues to happen through Spark's Iceberg integration (unchanged), but file reading is delegated to iceberg-rust, which provides better parallelism and integrates naturally with Comet's native execution engine.

What changes are included in this PR?

This implementation follows a similar pattern to CometNativeScanExec for regular Parquet files, but extracts and serializes Iceberg's FileScanTask objects:

Scala/JVM Side:

  • New CometIcebergNativeScanExec operator that replaces Spark's Iceberg BatchScanExec
  • Uses reflection to extract FileScanTask objects from Iceberg's planning output
  • Serializes tasks and catalog properties to protobuf for native execution

Native/Rust Side:

  • New IcebergScanExec operator that consumes serialized FileScanTask objects
  • Uses iceberg-rust's FileIO and ArrowReader to read data files (see the sketch after this list)
  • Leverages catalog properties to configure FileIO (credentials, regions, etc.)
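
To make the data flow concrete, here is a minimal, hypothetical sketch of the native side (not Comet's actual IcebergScanExec). It assumes the serialized plan has already been decoded into iceberg-rust `FileScanTask` values, that catalog properties arrive as plain key/value pairs, and that the `iceberg`, `futures`, and `arrow-array` crates are available; names like `read_file_scan_tasks` are illustrative.

```rust
// Minimal sketch: configure FileIO from forwarded catalog properties, then let
// iceberg-rust's ArrowReader consume a stream of FileScanTasks and produce
// Arrow record batches. This is illustrative, not Comet's operator code.
use std::collections::HashMap;

use futures::stream::{self, StreamExt, TryStreamExt};
use iceberg::arrow::ArrowReaderBuilder;
use iceberg::io::FileIOBuilder;
use iceberg::scan::{FileScanTask, FileScanTaskStream};
use iceberg::Result;

async fn read_file_scan_tasks(
    catalog_props: HashMap<String, String>, // e.g. s3.region, s3.access-key-id
    tasks: Vec<FileScanTask>,               // tasks decoded from the serialized plan
) -> Result<Vec<arrow_array::RecordBatch>> {
    // All planning already happened in Spark's Iceberg integration; the native
    // side only needs object-store access configured from forwarded properties.
    // The "s3" scheme is hard-coded here for illustration; real code would pick
    // the scheme from the data file paths.
    let file_io = FileIOBuilder::new("s3").with_props(catalog_props).build()?;

    // ArrowReaderBuilder::new is public upstream (see the iceberg-rust change
    // referenced later in this thread), so a reader can be built directly.
    let reader = ArrowReaderBuilder::new(file_io).build();

    // The reader consumes a FileScanTaskStream and yields Arrow batches.
    let task_stream: FileScanTaskStream = stream::iter(tasks.into_iter().map(Ok)).boxed();
    reader.read(task_stream)?.try_collect().await
}
```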

How are these changes tested?

  • New CometIcebergNativeSuite covering basic scenarios as well as a number of challenging cases drawn from the Iceberg Java test suite
  • New CometFuzzIcebergSuite that we can adapt to Iceberg-specific logic
  • New IcebergReadFromS3Suite to test passing basic S3 credentials
  • Tested locally with Iceberg 1.5, 1.7, and 1.10; CI tests 1.5.2, 1.8.1, and 1.10.0

Benefits over iceberg_compat

  1. No upstream changes needed - No references to Comet needed in Iceberg Java anymore
  2. Better parallelism - File reading happens at the same granularity as native_datafusion, not constrained by Iceberg Java's reader design
  3. Simplified runtime - No separate DataFusion runtime; scans run in the same context as other operators
  4. Better testing for iceberg-rust - I’ve already upstreamed several fixes for iceberg-rust’s ArrowReader
  5. Multi-version support - Reflection approach is version agnostic

Current Limitations & Open Questions

Related Work

Slides from the 10/9/25 Iceberg-Rust community call: iceberg-rust.pdf

@codecov-commenter commented Oct 6, 2025

Codecov Report

❌ Patch coverage is 67.74892% with 298 lines in your changes missing coverage. Please review.
✅ Project coverage is 59.05%. Comparing base (f09f8af) to head (7911e4b).
⚠️ Report is 715 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| .../comet/serde/operator/CometIcebergNativeScan.scala | 70.43% | 82 Missing and 33 partials ⚠️ |
| ...n/scala/org/apache/comet/rules/CometScanRule.scala | 44.51% | 65 Missing and 21 partials ⚠️ |
| ...a/org/apache/comet/iceberg/IcebergReflection.scala | 71.98% | 62 Missing and 10 partials ⚠️ |
| ...e/spark/sql/comet/CometIcebergNativeScanExec.scala | 81.05% | 2 Missing and 16 partials ⚠️ |
| ...n/scala/org/apache/comet/rules/CometExecRule.scala | 62.50% | 2 Missing and 1 partial ⚠️ |
| ...rg/apache/spark/sql/comet/CometBatchScanExec.scala | 70.00% | 0 Missing and 3 partials ⚠️ |
| ...la/org/apache/comet/objectstore/NativeConfig.scala | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
```
@@             Coverage Diff              @@
##               main    #2528      +/-   ##
============================================
+ Coverage     56.12%   59.05%   +2.92%     
- Complexity      976     1462     +486     
============================================
  Files           119      165      +46     
  Lines         11743    15060    +3317     
  Branches       2251     2504     +253     
============================================
+ Hits           6591     8893    +2302     
- Misses         4012     4899     +887     
- Partials       1140     1268     +128     
```

@comphead (Contributor) commented Oct 6, 2025

It is promising!

@mbutrovich mbutrovich changed the title feat: Iceberg scan based serializing FileScanTasks to iceberg-rust feat: [iceberg] Scan based serializing FileScanTasks to iceberg-rust Oct 6, 2025
@mbutrovich mbutrovich changed the title feat: [iceberg] Scan based serializing FileScanTasks to iceberg-rust feat: Iceberg scan based serializing FileScanTasks to iceberg-rust Oct 6, 2025
…eberg version back to 1.8.1 after hitting known segfaults with old versions.
liurenjie1024 pushed a commit to apache/iceberg-rust that referenced this pull request Oct 16, 2025
## Which issue does this PR close?


- Part of #1749.

## What changes are included in this PR?

- Change `ArrowReaderBuilder::new` to be `pub` instead of `pub(crate)`.

## Are these changes tested?

- No new tests for this. Currently being used in DataFusion Comet:
apache/datafusion-comet#2528
@andygrove (Member) left a comment:
I don't see any reason not to go ahead and merge this as an experimental feature.

@kazuyukitanimura (Contributor) left a comment:
still looking

@mbutrovich mbutrovich merged commit 937cacd into apache:main Nov 20, 2025
259 of 261 checks passed
@kazuyukitanimura (Contributor) left a comment:
I know this is merged, but one more comment

Comment on lines +166 to +167
```yaml
iceberg-version: [{short: '1.8', full: '1.8.1'}, {short: '1.9', full: '1.9.1'}, {short: '1.10', full: '1.10.0'}]
spark-version: [{short: '3.4', full: '3.4.3'}, {short: '3.5', full: '3.5.7'}]
```
A contributor commented:
The profile now says to use Iceberg 1.5 with Spark 3.4, but we do not have 1.5 here. Not sure if it causes problems...

@mbutrovich (Author) replied:
Here's what we currently test with this PR:

| Iceberg \ Spark | 3.4 | 3.5 | 4.0 |
|---|---|---|---|
| 1.5.2 | CometIcebergNativeSuite, CometFuzzIcebergSuite, IcebergReadFromS3Suite (not run in CI due to MinIO container) | | |
| 1.8.1 | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests, CometIcebergNativeSuite, CometFuzzIcebergSuite, IcebergReadFromS3Suite (not run in CI due to MinIO container) | |
| 1.9.1 | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests | |
| 1.10 | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests | Iceberg Spark Tests, Iceberg Spark Extensions Tests, Iceberg Spark Runtime Tests | CometIcebergNativeSuite, CometFuzzIcebergSuite, IcebergReadFromS3Suite (not run in CI due to MinIO container) |

I leaned on newer versions for the Iceberg tests because, as best as I could tell, newer versions are a superset of the older versions. For the Comet-native tests we are running 1.5.2.

We should discuss what we want to run long term, because right now tagging a PR with [iceberg] makes CI take hours and spins up so many parallel Iceberg suites that we start hitting network timeouts (likely due to throttling).

coderfender pushed a commit to coderfender/datafusion-comet that referenced this pull request Dec 13, 2025
coderfender pushed a commit to coderfender/datafusion-comet that referenced this pull request Dec 13, 2025
nagraham pushed a commit to nagraham/iceberg-rust that referenced this pull request Jan 6, 2026
…chTransformer (apache#1821)

Partially address apache#1749.

This PR adds partition spec handling to `FileScanTask` and
`RecordBatchTransformer` to correctly implement the Iceberg spec's
"Column Projection" rules for fields "not present" in data files.

Prior to this PR, `iceberg-rust`'s `FileScanTask` had no mechanism to
pass partition information to `RecordBatchTransformer`, causing two
issues:

1. **Incorrect handling of bucket partitioning**: Couldn't distinguish
identity transforms (which should use partition metadata constants) from
non-identity transforms like bucket/truncate/year/month (whose source columns must be
read from the data file). For example, `bucket(4, id)` stores
`id_bucket = 2` (bucket number) in partition metadata, but actual `id`
values (100, 200, 300) are only in the data file. iceberg-rust was
incorrectly treating bucket-partitioned source columns as constants,
breaking runtime filtering and returning incorrect query results.

2. **Field ID conflicts in add_files scenarios**: When importing Hive
tables via `add_files`, partition columns could have field IDs
conflicting with Parquet data columns. Example: Parquet has
field_id=1→"name", but Iceberg expects field_id=1→"id" (partition). Per
spec, the
correct field is "not present" and requires name mapping fallback.

Per the Iceberg spec
(https://iceberg.apache.org/spec/#column-projection), when a field ID is
"not present" in a data file, it must be resolved using these rules (sketched in code below):

1. Return the value from partition metadata if an **Identity Transform**
exists
2. Use `schema.name-mapping.default` metadata to map field id to columns
without field id
3. Return the default value if it has a defined `initial-default`
4. Return null in all other cases

**Why this matters:**
- **Identity transforms** (e.g., `identity(dept)`) store actual column
values in partition metadata that can be used as constants without
reading the data file
- **Non-identity transforms** (e.g., `bucket(4, id)`, `day(timestamp)`)
store transformed values in partition metadata (e.g., bucket number 2,
not the actual `id` values 100, 200, 300) and must read source columns
from the data file
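
The precedence of these four rules can be stated compactly. A deliberately simplified, illustrative sketch (generic over the value type; not the actual iceberg-rust implementation):

```rust
// Illustrative only: resolution order for a field id that is "not present"
// in a data file, per the Iceberg spec's Column Projection rules.
fn resolve_missing_field<V>(
    identity_constant: Option<V>, // rule 1: identity-transform partition constant
    name_mapped_value: Option<V>, // rule 2: resolved via schema.name-mapping.default
    initial_default: Option<V>,   // rule 3: the field's defined initial-default
) -> Option<V> {
    identity_constant
        .or(name_mapped_value)
        .or(initial_default) // rule 4: otherwise null (None)
}
```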

Changes in this PR:

1. **Added partition fields to `FileScanTask`** (`scan/task.rs`):
- `partition: Option<Struct>` - Partition data from manifest entry
- `partition_spec: Option<Arc<PartitionSpec>>` - For transform-aware
constant detection
- `name_mapping: Option<Arc<NameMapping>>` - Name mapping from table
metadata

2. **Implemented `constants_map()` function**
(`arrow/record_batch_transformer.rs`; a rough sketch follows after this list):
- Replicates Java's `PartitionUtil.constantsMap()` behavior
- Only includes fields where transform is `Transform::Identity`
- Used to determine which fields use partition metadata constants vs.
reading from data files

3. **Enhanced `RecordBatchTransformer`**
(`arrow/record_batch_transformer.rs`):
- Added `build_with_partition_data()` method to accept partition spec,
partition data, and name mapping
- Implements all 4 spec rules for column resolution with
identity-transform awareness
- Detects field ID conflicts by verifying both field ID AND name match
- Falls back to name mapping when field IDs are missing/conflicting
(spec rule #2)

4. **Updated `ArrowReader`** (`arrow/reader.rs`):
- Uses `build_with_partition_data()` when partition information is
available
- Falls back to `build()` when not available

5. **Updated manifest entry processing** (`scan/context.rs`):
- Populates partition fields in `FileScanTask` from manifest entry data
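
As noted above, a rough sketch of the `constants_map()` idea, with simplified, illustrative types (the real function lives in `arrow/record_batch_transformer.rs` and works on iceberg-rust's internal representations): only identity-transformed partition fields contribute constants, keyed by the source column's field id.

```rust
use std::collections::HashMap;

use iceberg::spec::{Literal, PartitionSpec, Transform};

// Simplified sketch: pair each partition field with its value from the manifest
// entry's partition tuple, keep identity transforms only, and key the result by
// the source column's field id so projection can substitute the constant.
fn identity_constants(
    spec: &PartitionSpec,
    partition_values: &[Option<Literal>], // flattened partition tuple, in spec order
) -> HashMap<i32, Option<Literal>> {
    spec.fields()
        .iter()
        .zip(partition_values.iter().cloned())
        .filter(|(field, _)| field.transform == Transform::Identity)
        .map(|(field, value)| (field.source_id, value))
        .collect()
}
```

Bucket/truncate/year/month fields fall through the filter, so their source columns are read from the data file, as the spec requires.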

New tests:

1. **`bucket_partitioning_reads_source_column_from_file`** - Verifies
that bucket-partitioned source columns are read from data files (not
treated as constants from partition metadata)

2. **`identity_partition_uses_constant_from_metadata`** - Verifies that
identity-transformed fields correctly use partition metadata constants

3. **`test_bucket_partitioning_with_renamed_source_column`** - Verifies
field-ID-based mapping works despite column rename

4. **`add_files_partition_columns_without_field_ids`** - Verifies name
mapping resolution for Hive table imports without field IDs (spec rule #2)

5. **`add_files_with_true_field_id_conflict`** - Verifies correct field
ID conflict detection with name mapping fallback (spec rule #2)

6. **`test_all_four_spec_rules`** - Integration test verifying all 4
spec rules work together

These 6 new unit tests cover all 4 Iceberg spec rules. This work also
resolved approximately 50 Iceberg Java tests when running with DataFusion
Comet's experimental apache/datafusion-comet#2528 PR.

---------

Co-authored-by: Renjie Liu <[email protected]>
chenzl25 pushed a commit to risingwavelabs/iceberg-rust that referenced this pull request Jan 7, 2026
…chTransformer (apache#1821) (#107)
