From c316bfdf820b8389baecadb419c25be2bcc76754 Mon Sep 17 00:00:00 2001 From: Martin Date: Tue, 14 Oct 2025 00:01:47 +0200 Subject: [PATCH 1/6] Add gentle introduction to Arrow and RecordBatches This adds a new user guide page addressing issue #11336 to provide a gentle introduction to Apache Arrow and RecordBatches for DataFusion users. The guide includes: - Explanation of Arrow as a columnar specification - Visual comparison of row vs columnar storage (with ASCII diagrams) - Rationale for RecordBatch-based streaming (memory + vectorization) - Practical examples: reading files, building batches, querying with MemTable - Clear guidance on when Arrow knowledge is needed (extension points) - Links back to DataFrame API and library user guide - Link to DataFusion Invariants for contributors who want to go deeper This helps users understand the foundation without getting overwhelmed, addressing feedback from PR #11290 that DataFrame examples 'throw people into the deep end of Arrow.' --- docs/source/index.rst | 1 + .../library-user-guide/query-optimizer.md | 2 - docs/source/user-guide/arrow-introduction.md | 252 ++++++++++++++++++ .../user-guide/concepts-readings-events.md | 1 - docs/source/user-guide/dataframe.md | 6 + .../source/user-guide/sql/scalar_functions.md | 3 - 6 files changed, 259 insertions(+), 6 deletions(-) create mode 100644 docs/source/user-guide/arrow-introduction.md diff --git a/docs/source/index.rst b/docs/source/index.rst index 574c285b0e65..be93ceb1d49d 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -113,6 +113,7 @@ To get started, see user-guide/crate-configuration user-guide/cli/index user-guide/dataframe + user-guide/arrow-introduction user-guide/expressions user-guide/sql/index user-guide/configs diff --git a/docs/source/library-user-guide/query-optimizer.md b/docs/source/library-user-guide/query-optimizer.md index 877ff8c754ad..20589e9843c4 100644 --- a/docs/source/library-user-guide/query-optimizer.md +++ b/docs/source/library-user-guide/query-optimizer.md @@ -440,13 +440,11 @@ When analyzing expressions, DataFusion runs boundary analysis using interval ari Consider a simple predicate like age > 18 AND age <= 25. The analysis flows as follows: 1. Context Initialization - - Begin with known column statistics - Set up initial boundaries based on column constraints - Initialize the shared analysis context 2. Expression Tree Walk - - Analyze each node in the expression tree - Propagate boundary information upward - Allow child nodes to influence parent boundaries diff --git a/docs/source/user-guide/arrow-introduction.md b/docs/source/user-guide/arrow-introduction.md new file mode 100644 index 000000000000..dd1f8d62fafc --- /dev/null +++ b/docs/source/user-guide/arrow-introduction.md @@ -0,0 +1,252 @@ + + +# A Gentle Introduction to Arrow & RecordBatches (for DataFusion users) + +```{contents} +:local: +:depth: 2 +``` + +This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues. + +## Why Columnar? The Arrow Advantage + +Apache Arrow is an open **specification** that defines how analytical data should be organized in memory. Think of it as a blueprint that different systems agree to follow, not a database or programming language. 
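To make "a specification, not a library" concrete: the Rust implementation of that specification is the `arrow` family of crates (the same `arrow_array` crate used in the examples later in this guide). Below is a minimal, illustrative sketch of a single Arrow column as a typed array backed by contiguous memory:

```rust
use arrow_array::{Array, Int32Array};

fn main() {
    // One Arrow column: three 32-bit integers in a single contiguous buffer.
    let ages = Int32Array::from(vec![30, 25, 35]);

    // `len` and `value` are cheap lookups into that buffer.
    assert_eq!(ages.len(), 3);
    assert_eq!(ages.value(1), 25);
}
```

Any other Arrow implementation (C++, Python, Go, Java, ...) lays this column out the same way, which is what makes zero-copy interchange between systems possible.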
+ +### Row-oriented vs Columnar Layout + +Traditional databases often store data row-by-row: + +``` +Row 1: [id: 1, name: "Alice", age: 30] +Row 2: [id: 2, name: "Bob", age: 25] +Row 3: [id: 3, name: "Carol", age: 35] +``` + +Arrow organizes the same data by column: + +``` +Column "id": [1, 2, 3] +Column "name": ["Alice", "Bob", "Carol"] +Column "age": [30, 25, 35] +``` + +Visual comparison: + +``` +Traditional Row Storage: Arrow Columnar Storage: +┌──────────────────┐ ┌─────────┬─────────┬──────────┐ +│ id │ name │ age │ │ id │ name │ age │ +├────┼──────┼──────┤ ├─────────┼─────────┼──────────┤ +│ 1 │ A │ 30 │ │ [1,2,3] │ [A,B,C] │[30,25,35]│ +│ 2 │ B │ 25 │ └─────────┴─────────┴──────────┘ +│ 3 │ C │ 35 │ ↑ ↑ ↑ +└──────────────────┘ Int32Array StringArray Int32Array +(read entire rows) (process entire columns at once) +``` + +### Why This Matters + +- **Vectorized Execution**: Process entire columns at once using SIMD instructions +- **Better Compression**: Similar values stored together compress more efficiently +- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data +- **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead + +DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can exchange data without expensive serialization/deserialization steps. + +## What is a RecordBatch? (And Why Batch?) + +A **RecordBatch** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema. + +### Why Not Process Entire Tables? + +- **Memory Constraints**: A billion-row table might not fit in RAM +- **Pipeline Processing**: Start producing results before reading all data +- **Parallel Execution**: Different threads can process different batches + +### Why Not Process Single Rows? + +- **Lost Vectorization**: Can't use SIMD instructions on single values +- **Poor Cache Utilization**: Jumping between rows defeats CPU cache optimization +- **High Overhead**: Managing individual rows has significant bookkeeping costs + +### RecordBatches: The Sweet Spot + +RecordBatches typically contain thousands of rows—enough to benefit from vectorization but small enough to fit in memory. DataFusion streams these batches through operators, achieving both efficiency and scalability. + +**Key Properties**: + +- Arrays are immutable (create new batches to modify data) +- NULL values tracked via efficient validity bitmaps +- Variable-length data (strings, lists) use offset arrays for efficient access + +## From files to Arrow + +When you call `read_csv`, `read_parquet`, `read_json` or `read_avro`, DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches. + +The example below shows how to read data from different file formats. Each `read_*` method returns a `DataFrame` that represents a query plan. When you call `.collect()`, DataFusion executes the plan and returns results as a `Vec`—the actual columnar data in Arrow format. 
+ +```rust +use datafusion::prelude::*; + +#[tokio::main] +async fn main() -> datafusion::error::Result<()> { + let ctx = SessionContext::new(); + + // Pick ONE of these per run (each returns a new DataFrame): + let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?; + // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?; + // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature + // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature + + let batches = df + .select(vec![col("id")])? + .filter(col("id").gt(lit(10)))? + .collect() + .await?; // Vec + + Ok(()) +} +``` + +## Streaming Through the Engine + +DataFusion processes queries as pull-based pipelines where operators request batches from their inputs. This streaming approach enables early result production, bounds memory usage (spilling to disk only when necessary), and naturally supports parallel execution across multiple CPU cores. + +## Minimal: build a RecordBatch in Rust + +Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like `Int32Array` for numbers), defining a `Schema` that describes your columns, and assembling them into a `RecordBatch`. Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap. + +```rust +use std::sync::Arc; +use arrow_array::{ArrayRef, Int32Array, StringArray, RecordBatch}; +use arrow_schema::{DataType, Field, Schema}; + +fn make_batch() -> arrow_schema::Result { + let ids = Int32Array::from(vec![1, 2, 3]); + let names = StringArray::from(vec![Some("alice"), None, Some("carol")]); + + let schema = Arc::new(Schema::new(vec![ + Field::new("id", DataType::Int32, false), + Field::new("name", DataType::Utf8, true), + ])); + + let cols: Vec = vec![Arc::new(ids), Arc::new(names)]; + RecordBatch::try_new(schema, cols) +} +``` + +## Query an in-memory batch with DataFusion + +Once you have a `RecordBatch`, you can query it with DataFusion using a `MemTable`. This is useful for testing, processing data from external systems, or combining in-memory data with other sources. The example below creates a batch, wraps it in a `MemTable`, registers it as a named table, and queries it using SQL—demonstrating how Arrow serves as the bridge between your data and DataFusion's query engine. 
+ +```rust +use std::sync::Arc; +use arrow_array::{Int32Array, StringArray, RecordBatch}; +use arrow_schema::{DataType, Field, Schema}; +use datafusion::datasource::MemTable; +use datafusion::prelude::*; + +#[tokio::main] +async fn main() -> datafusion::error::Result<()> { + let ctx = SessionContext::new(); + + // build a batch + let schema = Arc::new(Schema::new(vec![ + Field::new("id", DataType::Int32, false), + Field::new("name", DataType::Utf8, true), + ])); + let batch = RecordBatch::try_new( + schema.clone(), + vec![ + Arc::new(Int32Array::from(vec![1, 2, 3])) as _, + Arc::new(StringArray::from(vec![Some("foo"), Some("bar"), None])) as _, + ], + )?; + + // expose it as a table + let table = MemTable::try_new(schema, vec![vec![batch]])?; + ctx.register_table("people", Arc::new(table))?; + + // query it + let df = ctx.sql("SELECT id, upper(name) AS name FROM people WHERE id >= 2").await?; + df.show().await?; + Ok(()) +} +``` + +## Common Pitfalls + +When working with Arrow and RecordBatches, watch out for these common issues: + +- **Schema consistency**: All batches in a stream must share the exact same `Schema` (names, types, nullability, metadata) +- **Immutability**: Arrays are immutable; to modify data, build new arrays or new RecordBatches +- **Buffer management**: Variable-length types (UTF-8, binary, lists) use offsets + values; avoid manual buffer slicing unless you understand Arrow's internal invariants +- **Type mismatches**: Mixed input types across files may require explicit casts before joins/unions +- **Batch size assumptions**: Don't assume a particular batch size; always iterate until the stream ends + +## When Arrow knowledge is needed (Extension Points) + +For many use cases, you don't need to know about Arrow. DataFusion handles the conversion from formats like CSV and Parquet for you. However, Arrow becomes important when you use DataFusion's **extension points** to add your own custom functionality. + +These APIs are where you can plug your own code into the engine, and they often operate directly on Arrow `RecordBatch` streams. + +- **`TableProvider` (Custom Data Sources)**: This is the most common extension point. You can teach DataFusion how to read from any source—a custom file format, a network API, a different database—by implementing the `TableProvider` trait. Your implementation will be responsible for creating `RecordBatch`es to stream data into the engine. + +- **User-Defined Functions (UDFs)**: If you need to perform a custom transformation on your data that isn't built into DataFusion, you can write a UDF. Your function will receive data as Arrow arrays (inside a `RecordBatch`) and must produce an Arrow array as its output. + +- **Custom Optimizer Rules and Operators**: For advanced use cases, you can even add your own rules to the query optimizer or implement entirely new physical operators (like a special type of join). These also operate on the Arrow-based query plans. + +In short, knowing Arrow is key to unlocking the full power of DataFusion's modular and extensible architecture. + +## Next Steps: Working with DataFrames + +Now that you understand Arrow's RecordBatch format, you're ready to work with DataFusion's high-level APIs. The [DataFrame API](dataframe.md) provides a familiar, ergonomic interface for building queries without needing to think about Arrow internals most of the time. + +The DataFrame API handles all the Arrow details under the hood - reading files into RecordBatches, applying transformations, and producing results. 
You only need to drop down to the Arrow level when implementing custom data sources, UDFs, or other extension points. + +**Recommended reading order:** + +1. [DataFrame API](dataframe.md) - High-level query building interface +2. [Library User Guide: DataFrame API](../library-user-guide/using-the-dataframe-api.md) - Detailed examples and patterns +3. [Custom Table Providers](../library-user-guide/custom-table-providers.md) - When you need Arrow knowledge + +## Further reading + +- [Arrow introduction](https://arrow.apache.org/docs/format/Intro.html) +- [Arrow columnar format (overview)](https://arrow.apache.org/docs/format/Columnar.html) +- [Arrow IPC format (files and streams)](https://arrow.apache.org/docs/format/IPC.html) +- [arrow_array::RecordBatch (docs.rs)](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) +- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) + +- DataFusion + Arrow integration (docs.rs): + - [datafusion::common::arrow](https://docs.rs/datafusion/latest/datafusion/common/arrow/index.html) + - [datafusion::common::arrow::array](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/index.html) + - [datafusion::common::arrow::compute](https://docs.rs/datafusion/latest/datafusion/common/arrow/compute/index.html) + - [SessionContext::read_csv](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv) + - [read_parquet](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet) + - [read_json](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json) + - [DataFrame::collect](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect) + - [SendableRecordBatchStream](https://docs.rs/datafusion/latest/datafusion/physical_plan/type.SendableRecordBatchStream.html) + - [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) + - [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) +- Deep dive (memory layout internals): [ArrayData on docs.rs](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/struct.ArrayData.html) +- Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering) +- For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system diff --git a/docs/source/user-guide/concepts-readings-events.md b/docs/source/user-guide/concepts-readings-events.md index ad444ef91c47..3b5a244f04ca 100644 --- a/docs/source/user-guide/concepts-readings-events.md +++ b/docs/source/user-guide/concepts-readings-events.md @@ -70,7 +70,6 @@ This is a list of DataFusion related blog posts, articles, and other resources. 
- **2024-10-16** [Blog: Candle Image Segmentation](https://www.letsql.com/posts/candle-image-segmentation/) - **2024-09-23 → 2024-12-02** [Talks: Carnegie Mellon University: Database Building Blocks Seminar Series - Fall 2024](https://db.cs.cmu.edu/seminar2024/) - - **2024-11-12** [Video: Building InfluxDB 3.0 with the FDAP Stack: Apache Flight, DataFusion, Arrow and Parquet (Paul Dix)](https://www.youtube.com/watch?v=AGS4GNGDK_4) - **2024-11-04** [Video: Synnada: Towards “Unified” Compute Engines: Opportunities and Challenges (Mehmet Ozan Kabak)](https://www.youtube.com/watch?v=z38WY9uZtt4) diff --git a/docs/source/user-guide/dataframe.md b/docs/source/user-guide/dataframe.md index 82f1eeb2823d..887d55d0638c 100644 --- a/docs/source/user-guide/dataframe.md +++ b/docs/source/user-guide/dataframe.md @@ -19,6 +19,8 @@ # DataFrame API +## DataFrame overview + A DataFrame represents a logical set of rows with the same named columns, similar to a [Pandas DataFrame] or [Spark DataFrame]. @@ -109,6 +111,10 @@ async fn main() -> Result<()> { } ``` +--- + +# REFERENCES + [pandas dataframe]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html [spark dataframe]: https://spark.apache.org/docs/latest/sql-programming-guide.html [`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html diff --git a/docs/source/user-guide/sql/scalar_functions.md b/docs/source/user-guide/sql/scalar_functions.md index 4a1069d4fd60..7aa89706046e 100644 --- a/docs/source/user-guide/sql/scalar_functions.md +++ b/docs/source/user-guide/sql/scalar_functions.md @@ -2444,7 +2444,6 @@ date_bin(interval, expression, origin-timestamp) - **interval**: Bin interval. - **expression**: Time expression to operate on. Can be a constant, column, or function. - **origin-timestamp**: Optional. Starting point used to determine bin boundaries. If not specified defaults 1970-01-01T00:00:00Z (the UNIX epoch in UTC). The following intervals are supported: - - nanoseconds - microseconds - milliseconds @@ -2498,7 +2497,6 @@ date_part(part, expression) #### Arguments - **part**: Part of the date to return. The following date parts are supported: - - year - quarter (emits value in inclusive range [1, 4] based on which quartile of the year the date is in) - month @@ -2538,7 +2536,6 @@ date_trunc(precision, expression) #### Arguments - **precision**: Time precision to truncate to. The following precisions are supported: - - year / YEAR - quarter / QUARTER - month / MONTH From 280a0af3075738f1b4a7bb6ca0372250da697898 Mon Sep 17 00:00:00 2001 From: Martin Date: Tue, 14 Oct 2025 12:28:18 +0200 Subject: [PATCH 2/6] docs: Enhance Arrow introduction guide with clearer explanations and navigation - Add explanation of Arc and ArrayRef for Rust newcomers - Add visual diagram showing RecordBatch streaming through pipeline - Make common pitfalls more concrete with specific examples - Emphasize Arrow's unified type system as DataFusion's foundation - Add comprehensive API documentation links throughout document - Link to extension points guides (TableProvider, UDFs, custom operators) These improvements make the Arrow introduction more accessible for newcomers while providing clear navigation paths to advanced topics for users extending DataFusion. 
Addresses #11336 --- docs/source/user-guide/arrow-introduction.md | 89 ++++++++++++++++---- 1 file changed, 73 insertions(+), 16 deletions(-) diff --git a/docs/source/user-guide/arrow-introduction.md b/docs/source/user-guide/arrow-introduction.md index dd1f8d62fafc..19e2522e7c09 100644 --- a/docs/source/user-guide/arrow-introduction.md +++ b/docs/source/user-guide/arrow-introduction.md @@ -26,6 +26,8 @@ This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues. +**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators. + ## Why Columnar? The Arrow Advantage Apache Arrow is an open **specification** that defines how analytical data should be organized in memory. Think of it as a blueprint that different systems agree to follow, not a database or programming language. @@ -62,6 +64,9 @@ Traditional Row Storage: Arrow Columnar Storage: (read entire rows) (process entire columns at once) ``` +[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html +[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html + ### Why This Matters - **Vectorized Execution**: Process entire columns at once using SIMD instructions @@ -73,7 +78,9 @@ DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can excha ## What is a RecordBatch? (And Why Batch?) -A **RecordBatch** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema. +A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema. + +[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html ### Why Not Process Entire Tables? @@ -99,9 +106,16 @@ RecordBatches typically contain thousands of rows—enough to benefit from vecto ## From files to Arrow -When you call `read_csv`, `read_parquet`, `read_json` or `read_avro`, DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches. +When you call [`read_csv`], [`read_parquet`], [`read_json`] or [`read_avro`], DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches. + +The example below shows how to read data from different file formats. Each `read_*` method returns a [`DataFrame`] that represents a query plan. When you call [`.collect()`], DataFusion executes the plan and returns results as a `Vec`—the actual columnar data in Arrow format. -The example below shows how to read data from different file formats. Each `read_*` method returns a `DataFrame` that represents a query plan. When you call `.collect()`, DataFusion executes the plan and returns results as a `Vec`—the actual columnar data in Arrow format. 
+[`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv +[`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet +[`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json +[`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro +[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html +[`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect ```rust use datafusion::prelude::*; @@ -126,13 +140,41 @@ async fn main() -> datafusion::error::Result<()> { } ``` +[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html +[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html +[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html + ## Streaming Through the Engine DataFusion processes queries as pull-based pipelines where operators request batches from their inputs. This streaming approach enables early result production, bounds memory usage (spilling to disk only when necessary), and naturally supports parallel execution across multiple CPU cores. +``` +A user's query: SELECT name FROM 'data.parquet' WHERE id > 10 + +The DataFusion Pipeline: +┌─────────────┐ ┌──────────────┐ ┌────────────────┐ ┌──────────────────┐ ┌──────────┐ +│ Parquet │───▶│ Scan │───▶│ Filter │───▶│ Projection │───▶│ Results │ +│ File │ │ Operator │ │ Operator │ │ Operator │ │ │ +└─────────────┘ └──────────────┘ └────────────────┘ └──────────────────┘ └──────────┘ + (reads data) (id > 10) (keeps "name" col) + RecordBatch ───▶ RecordBatch ────▶ RecordBatch ────▶ RecordBatch +``` + +In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flow between the different stages of query execution. Each operator processes batches incrementally, enabling the system to produce results before reading the entire input. + ## Minimal: build a RecordBatch in Rust -Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like `Int32Array` for numbers), defining a `Schema` that describes your columns, and assembling them into a `RecordBatch`. Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap. +Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`]. + +You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/std/sync/struct.Arc.html)) is used frequently—this is how Arrow enables efficient, zero-copy data sharing. Instead of copying data, different parts of the query engine can safely share read-only references to the same underlying memory. [`ArrayRef`] is simply a type alias for `Arc`, representing a reference to any Arrow array type. + +Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap. 
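As a small, self-contained sketch (separate from the example below), the validity bitmap is what the `null_count` and `is_null` accessors consult; no placeholder value is stored for a missing entry:

```rust
use arrow_array::{Array, StringArray};

fn main() {
    // The middle slot is NULL: it is marked invalid in the validity bitmap
    // rather than stored as an empty or sentinel string.
    let names = StringArray::from(vec![Some("alice"), None, Some("carol")]);

    assert_eq!(names.null_count(), 1);
    assert!(names.is_null(1));
    assert_eq!(names.value(0), "alice");
}
```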
+ +[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html +[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html +[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html +[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html +[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html ```rust use std::sync::Arc; @@ -155,7 +197,9 @@ fn make_batch() -> arrow_schema::Result { ## Query an in-memory batch with DataFusion -Once you have a `RecordBatch`, you can query it with DataFusion using a `MemTable`. This is useful for testing, processing data from external systems, or combining in-memory data with other sources. The example below creates a batch, wraps it in a `MemTable`, registers it as a named table, and queries it using SQL—demonstrating how Arrow serves as the bridge between your data and DataFusion's query engine. +Once you have a [`RecordBatch`], you can query it with DataFusion using a [`MemTable`]. This is useful for testing, processing data from external systems, or combining in-memory data with other sources. The example below creates a batch, wraps it in a [`MemTable`], registers it as a named table, and queries it using SQL—demonstrating how Arrow serves as the bridge between your data and DataFusion's query engine. + +[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html ```rust use std::sync::Arc; @@ -192,30 +236,43 @@ async fn main() -> datafusion::error::Result<()> { } ``` +[`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table +[`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql +[`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show + ## Common Pitfalls When working with Arrow and RecordBatches, watch out for these common issues: -- **Schema consistency**: All batches in a stream must share the exact same `Schema` (names, types, nullability, metadata) -- **Immutability**: Arrays are immutable; to modify data, build new arrays or new RecordBatches -- **Buffer management**: Variable-length types (UTF-8, binary, lists) use offsets + values; avoid manual buffer slicing unless you understand Arrow's internal invariants -- **Type mismatches**: Mixed input types across files may require explicit casts before joins/unions -- **Batch size assumptions**: Don't assume a particular batch size; always iterate until the stream ends +- **Schema consistency**: All batches in a stream must share the exact same [`Schema`]. For example, you can't have one batch where a column is [`Int32`] and the next where it's [`Int64`], even if the values would fit +- **Immutability**: Arrays are immutable—to "modify" data, you must build new arrays or new RecordBatches. For instance, to change a value in an array, you'd create a new array with the updated value +- **Buffer management**: Variable-length types (UTF-8, binary, lists) use offsets + values arrays internally. Avoid manual buffer slicing unless you understand Arrow's internal invariants—use Arrow's built-in compute functions instead +- **Type mismatches**: Mixed input types across files may require explicit casts. 
For example, a string column `"123"` from a CSV file won't automatically join with an integer column `123` from a Parquet file—you'll need to cast one to match the other +- **Batch size assumptions**: Don't assume a particular batch size; always iterate until the stream ends. One file might produce 8192-row batches while another produces 1024-row batches + +[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 +[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 ## When Arrow knowledge is needed (Extension Points) -For many use cases, you don't need to know about Arrow. DataFusion handles the conversion from formats like CSV and Parquet for you. However, Arrow becomes important when you use DataFusion's **extension points** to add your own custom functionality. +For many use cases, you don't need to know about Arrow. DataFusion handles the conversion from formats like CSV and Parquet for you. However, Arrow becomes important when you use DataFusion's **[extension points]** to add your own custom functionality. -These APIs are where you can plug your own code into the engine, and they often operate directly on Arrow `RecordBatch` streams. +These APIs are where you can plug your own code into the engine, and they often operate directly on Arrow [`RecordBatch`] streams. -- **`TableProvider` (Custom Data Sources)**: This is the most common extension point. You can teach DataFusion how to read from any source—a custom file format, a network API, a different database—by implementing the `TableProvider` trait. Your implementation will be responsible for creating `RecordBatch`es to stream data into the engine. +- **[`TableProvider`] (Custom Data Sources)**: This is the most common extension point. You can teach DataFusion how to read from any source—a custom file format, a network API, a different database—by implementing the [`TableProvider`] trait. Your implementation will be responsible for creating [`RecordBatch`]es to stream data into the engine. See the [Custom Table Providers guide] for detailed examples. -- **User-Defined Functions (UDFs)**: If you need to perform a custom transformation on your data that isn't built into DataFusion, you can write a UDF. Your function will receive data as Arrow arrays (inside a `RecordBatch`) and must produce an Arrow array as its output. +- **[User-Defined Functions (UDFs)]**: If you need to perform a custom transformation on your data that isn't built into DataFusion, you can write a UDF. Your function will receive data as Arrow arrays (inside a [`RecordBatch`]) and must produce an Arrow array as its output. -- **Custom Optimizer Rules and Operators**: For advanced use cases, you can even add your own rules to the query optimizer or implement entirely new physical operators (like a special type of join). These also operate on the Arrow-based query plans. +- **[Custom Optimizer Rules and Operators]**: For advanced use cases, you can even add your own rules to the query optimizer or implement entirely new physical operators (like a special type of join). These also operate on the Arrow-based query plans. In short, knowing Arrow is key to unlocking the full power of DataFusion's modular and extensible architecture. 
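To make the UDF point concrete, here is a hedged sketch (the function name is illustrative) of the array-in / array-out transformation a scalar UDF body performs; the registration API itself is covered in the [User-Defined Functions (UDFs)] guide:

```rust
use std::sync::Arc;
use arrow_array::{Array, ArrayRef, Int32Array};

/// Illustrative only: the shape of a scalar UDF body is
/// "take a column in, hand back a new column of the same length".
fn double_column(input: &ArrayRef) -> ArrayRef {
    // Extension-point code downcasts the dynamically typed `ArrayRef`
    // to the concrete Arrow array type it expects.
    let ints = input
        .as_any()
        .downcast_ref::<Int32Array>()
        .expect("expected an Int32 column");

    // Arrays are immutable, so the result is a brand-new array.
    let doubled: Int32Array = ints.iter().map(|v| v.map(|x| x * 2)).collect();
    Arc::new(doubled)
}
```

The output has the same number of rows as the input, which is what lets the engine slot it back into a `RecordBatch` alongside the other columns.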
+[extension points]: ../library-user-guide/extensions.md +[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html +[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md +[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md +[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md + ## Next Steps: Working with DataFrames Now that you understand Arrow's RecordBatch format, you're ready to work with DataFusion's high-level APIs. The [DataFrame API](dataframe.md) provides a familiar, ergonomic interface for building queries without needing to think about Arrow internals most of the time. @@ -234,7 +291,7 @@ The DataFrame API handles all the Arrow details under the hood - reading files i - [Arrow columnar format (overview)](https://arrow.apache.org/docs/format/Columnar.html) - [Arrow IPC format (files and streams)](https://arrow.apache.org/docs/format/IPC.html) - [arrow_array::RecordBatch (docs.rs)](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) -- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) +- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine (Paper)](https://dl.acm.org/doi/10.1145/3626246.3653368) - DataFusion + Arrow integration (docs.rs): - [datafusion::common::arrow](https://docs.rs/datafusion/latest/datafusion/common/arrow/index.html) From 7f83c2b3ca17cd472cb0f55eff62aacbd7ce1c8b Mon Sep 17 00:00:00 2001 From: Martin Date: Tue, 14 Oct 2025 14:25:36 +0200 Subject: [PATCH 3/6] update: extracting the references and placing them at the bottom --- docs/source/user-guide/arrow-introduction.md | 66 +++++++++----------- 1 file changed, 29 insertions(+), 37 deletions(-) diff --git a/docs/source/user-guide/arrow-introduction.md b/docs/source/user-guide/arrow-introduction.md index 19e2522e7c09..c5654888f904 100644 --- a/docs/source/user-guide/arrow-introduction.md +++ b/docs/source/user-guide/arrow-introduction.md @@ -64,9 +64,6 @@ Traditional Row Storage: Arrow Columnar Storage: (read entire rows) (process entire columns at once) ``` -[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html -[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html - ### Why This Matters - **Vectorized Execution**: Process entire columns at once using SIMD instructions @@ -80,8 +77,6 @@ DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can excha A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema. -[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html - ### Why Not Process Entire Tables? - **Memory Constraints**: A billion-row table might not fit in RAM @@ -110,13 +105,6 @@ When you call [`read_csv`], [`read_parquet`], [`read_json`] or [`read_avro`], Da The example below shows how to read data from different file formats. Each `read_*` method returns a [`DataFrame`] that represents a query plan. When you call [`.collect()`], DataFusion executes the plan and returns results as a `Vec`—the actual columnar data in Arrow format. 
-[`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv -[`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet -[`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json -[`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro -[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html -[`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect - ```rust use datafusion::prelude::*; @@ -140,10 +128,6 @@ async fn main() -> datafusion::error::Result<()> { } ``` -[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html -[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html -[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html - ## Streaming Through the Engine DataFusion processes queries as pull-based pipelines where operators request batches from their inputs. This streaming approach enables early result production, bounds memory usage (spilling to disk only when necessary), and naturally supports parallel execution across multiple CPU cores. @@ -170,12 +154,6 @@ You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/ Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap. -[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html -[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html -[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html -[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html -[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html - ```rust use std::sync::Arc; use arrow_array::{ArrayRef, Int32Array, StringArray, RecordBatch}; @@ -199,8 +177,6 @@ fn make_batch() -> arrow_schema::Result { Once you have a [`RecordBatch`], you can query it with DataFusion using a [`MemTable`]. This is useful for testing, processing data from external systems, or combining in-memory data with other sources. The example below creates a batch, wraps it in a [`MemTable`], registers it as a named table, and queries it using SQL—demonstrating how Arrow serves as the bridge between your data and DataFusion's query engine. -[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html - ```rust use std::sync::Arc; use arrow_array::{Int32Array, StringArray, RecordBatch}; @@ -236,10 +212,6 @@ async fn main() -> datafusion::error::Result<()> { } ``` -[`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table -[`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql -[`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show - ## Common Pitfalls When working with Arrow and RecordBatches, watch out for these common issues: @@ -250,9 +222,6 @@ When working with Arrow and RecordBatches, watch out for these common issues: - **Type mismatches**: Mixed input types across files may require explicit casts. 
For example, a string column `"123"` from a CSV file won't automatically join with an integer column `123` from a Parquet file—you'll need to cast one to match the other - **Batch size assumptions**: Don't assume a particular batch size; always iterate until the stream ends. One file might produce 8192-row batches while another produces 1024-row batches -[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 -[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 - ## When Arrow knowledge is needed (Extension Points) For many use cases, you don't need to know about Arrow. DataFusion handles the conversion from formats like CSV and Parquet for you. However, Arrow becomes important when you use DataFusion's **[extension points]** to add your own custom functionality. @@ -267,12 +236,6 @@ These APIs are where you can plug your own code into the engine, and they often In short, knowing Arrow is key to unlocking the full power of DataFusion's modular and extensible architecture. -[extension points]: ../library-user-guide/extensions.md -[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html -[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md -[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md -[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md - ## Next Steps: Working with DataFrames Now that you understand Arrow's RecordBatch format, you're ready to work with DataFusion's high-level APIs. The [DataFrame API](dataframe.md) provides a familiar, ergonomic interface for building queries without needing to think about Arrow internals most of the time. 
@@ -307,3 +270,32 @@ The DataFrame API handles all the Arrow details under the hood - reading files i - Deep dive (memory layout internals): [ArrayData on docs.rs](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/struct.ArrayData.html) - Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering) - For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system + +[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html +[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html +[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html +[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html +[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html +[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html +[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html +[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 +[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 +[ extension points]: ../library-user-guide/extensions.md +[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html +[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md +[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md +[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md +[`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table +[`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql +[`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show +[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html +[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html +[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html +[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html +[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html +[`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv +[`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet +[`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json +[`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro +[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html +[`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect From 8ba8a39f1b119bc57a99edbdc41bfdce9b196637 Mon Sep 17 00:00:00 2001 From: Martin Date: Tue, 14 Oct 2025 15:30:34 +0200 Subject: [PATCH 4/6] chore: Apply 
prettier formatting to arrow-introduction.md Run prettier to fix markdown link reference formatting (lowercase convention) --- docs/source/user-guide/arrow-introduction.md | 38 ++++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/docs/source/user-guide/arrow-introduction.md b/docs/source/user-guide/arrow-introduction.md index c5654888f904..b7a528f13ee4 100644 --- a/docs/source/user-guide/arrow-introduction.md +++ b/docs/source/user-guide/arrow-introduction.md @@ -271,31 +271,31 @@ The DataFrame API handles all the Arrow details under the hood - reading files i - Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering) - For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system -[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html -[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html -[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html -[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html -[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html -[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html -[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html -[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 -[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 +[`arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html +[`arrayref`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html +[`field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html +[`schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html +[`datatype`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html +[`int32array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html +[`stringarray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html +[`int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 +[`int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 [ extension points]: ../library-user-guide/extensions.md -[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html -[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md -[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md -[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md +[`tableprovider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html +[custom table providers guide]: ../library-user-guide/custom-table-providers.md +[user-defined functions (udfs)]: ../library-user-guide/functions/adding-udfs.md +[custom optimizer rules and operators]: ../library-user-guide/extending-operators.md [`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table [`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql [`.show()`]: 
https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show -[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html -[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html -[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html -[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html -[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html +[`memtable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html +[`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html +[`csvreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html +[`parquetreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html +[`recordbatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html [`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv [`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet [`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json [`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro -[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html +[`dataframe`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html [`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect From 8245f5ca3ba1e93e7dad2014298b3b3c960f6e4a Mon Sep 17 00:00:00 2001 From: Martin Date: Tue, 14 Oct 2025 15:40:07 +0200 Subject: [PATCH 5/6] chore: Fix prettier formatting in additional documentation files Apply prettier formatting to fix pre-existing formatting issues in: - query-optimizer.md - concepts-readings-events.md - scalar_functions.md --- docs/source/library-user-guide/query-optimizer.md | 2 ++ docs/source/user-guide/concepts-readings-events.md | 1 + docs/source/user-guide/sql/scalar_functions.md | 2 ++ 3 files changed, 5 insertions(+) diff --git a/docs/source/library-user-guide/query-optimizer.md b/docs/source/library-user-guide/query-optimizer.md index 20589e9843c4..877ff8c754ad 100644 --- a/docs/source/library-user-guide/query-optimizer.md +++ b/docs/source/library-user-guide/query-optimizer.md @@ -440,11 +440,13 @@ When analyzing expressions, DataFusion runs boundary analysis using interval ari Consider a simple predicate like age > 18 AND age <= 25. The analysis flows as follows: 1. Context Initialization + - Begin with known column statistics - Set up initial boundaries based on column constraints - Initialize the shared analysis context 2. 
Expression Tree Walk + - Analyze each node in the expression tree - Propagate boundary information upward - Allow child nodes to influence parent boundaries diff --git a/docs/source/user-guide/concepts-readings-events.md b/docs/source/user-guide/concepts-readings-events.md index 3b5a244f04ca..ad444ef91c47 100644 --- a/docs/source/user-guide/concepts-readings-events.md +++ b/docs/source/user-guide/concepts-readings-events.md @@ -70,6 +70,7 @@ This is a list of DataFusion related blog posts, articles, and other resources. - **2024-10-16** [Blog: Candle Image Segmentation](https://www.letsql.com/posts/candle-image-segmentation/) - **2024-09-23 → 2024-12-02** [Talks: Carnegie Mellon University: Database Building Blocks Seminar Series - Fall 2024](https://db.cs.cmu.edu/seminar2024/) + - **2024-11-12** [Video: Building InfluxDB 3.0 with the FDAP Stack: Apache Flight, DataFusion, Arrow and Parquet (Paul Dix)](https://www.youtube.com/watch?v=AGS4GNGDK_4) - **2024-11-04** [Video: Synnada: Towards “Unified” Compute Engines: Opportunities and Challenges (Mehmet Ozan Kabak)](https://www.youtube.com/watch?v=z38WY9uZtt4) diff --git a/docs/source/user-guide/sql/scalar_functions.md b/docs/source/user-guide/sql/scalar_functions.md index 7aa89706046e..703220fc0121 100644 --- a/docs/source/user-guide/sql/scalar_functions.md +++ b/docs/source/user-guide/sql/scalar_functions.md @@ -2497,6 +2497,7 @@ date_part(part, expression) #### Arguments - **part**: Part of the date to return. The following date parts are supported: + - year - quarter (emits value in inclusive range [1, 4] based on which quartile of the year the date is in) - month @@ -2536,6 +2537,7 @@ date_trunc(precision, expression) #### Arguments - **precision**: Time precision to truncate to. The following precisions are supported: + - year / YEAR - quarter / QUARTER - month / MONTH From b22f1c9d2b985d0e976e213218ceaac42a931d5c Mon Sep 17 00:00:00 2001 From: Martin Date: Fri, 17 Oct 2025 13:43:46 +0200 Subject: [PATCH 6/6] docs: address review feedback for Arrow introduction guide Technical improvements: - Clarify Arrow adoption: native systems (DataFusion, Polars) vs interchange converters (DuckDB, Spark, pandas) - Add note that read_json expects newline-delimited JSON - Fix reference link capitalization to match in-text usage Documentation cleanup: - Remove unnecessary REFERENCES header from dataframe.md - Previously discarded prettier changes to autogenerated scalar_functions.md Addresses feedback from @Jefffrey and @comphead in PR #18051 --- docs/source/user-guide/arrow-introduction.md | 119 ++++++++----------- docs/source/user-guide/dataframe.md | 4 - 2 files changed, 51 insertions(+), 72 deletions(-) diff --git a/docs/source/user-guide/arrow-introduction.md b/docs/source/user-guide/arrow-introduction.md index b7a528f13ee4..0f3664949472 100644 --- a/docs/source/user-guide/arrow-introduction.md +++ b/docs/source/user-guide/arrow-introduction.md @@ -24,7 +24,7 @@ :depth: 2 ``` -This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues. +This guide helps DataFusion users understand [Arrow] and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues. 
**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators. @@ -66,38 +66,27 @@ Traditional Row Storage: Arrow Columnar Storage: ### Why This Matters +- **Unified Type System**: All data sources (CSV, Parquet, JSON) convert to the same Arrow types, enabling seamless cross-format queries - **Vectorized Execution**: Process entire columns at once using SIMD instructions -- **Better Compression**: Similar values stored together compress more efficiently -- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data +- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data into CPU cache - **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead -DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can exchange data without expensive serialization/deserialization steps. +Arrow has become the universal standard for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead. -## What is a RecordBatch? (And Why Batch?) - -A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema. - -### Why Not Process Entire Tables? +Within this columnar design, Arrow's standard unit for packaging data is the **RecordBatch**—the key to making columnar format practical for real-world query engines. -- **Memory Constraints**: A billion-row table might not fit in RAM -- **Pipeline Processing**: Start producing results before reading all data -- **Parallel Execution**: Different threads can process different batches - -### Why Not Process Single Rows? +## What is a RecordBatch? (And Why Batch?) -- **Lost Vectorization**: Can't use SIMD instructions on single values -- **Poor Cache Utilization**: Jumping between rows defeats CPU cache optimization -- **High Overhead**: Managing individual rows has significant bookkeeping costs +A **[`RecordBatch`]** cleverly combines the benefits of columnar storage with the practical need to process data in chunks. It represents a horizontal slice of a table, but critically, each column _within_ that slice remains a contiguous array. -### RecordBatches: The Sweet Spot +Think of it as having two perspectives: -RecordBatches typically contain thousands of rows—enough to benefit from vectorization but small enough to fit in memory. DataFusion streams these batches through operators, achieving both efficiency and scalability. 
+- **Columnar inside**: Each column (`id`, `name`, `age`) is a contiguous array optimized for vectorized operations +- **Row-oriented outside**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming -**Key Properties**: +RecordBatches are **immutable snapshots**—once created, they cannot be modified. Any transformation produces a _new_ RecordBatch, enabling safe parallel processing without locks or coordination overhead. -- Arrays are immutable (create new batches to modify data) -- NULL values tracked via efficient validity bitmaps -- Variable-length data (strings, lists) use offset arrays for efficient access +This design allows DataFusion to process streams of row-based chunks while gaining maximum performance from the columnar layout. Let's see how this works in practice. ## From files to Arrow @@ -115,7 +104,7 @@ async fn main() -> datafusion::error::Result<()> { // Pick ONE of these per run (each returns a new DataFrame): let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?; // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?; - // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature + // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature; expects newline-delimited JSON // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature let batches = df @@ -150,9 +139,7 @@ In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flo Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`]. -You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/std/sync/struct.Arc.html)) is used frequently—this is how Arrow enables efficient, zero-copy data sharing. Instead of copying data, different parts of the query engine can safely share read-only references to the same underlying memory. [`ArrayRef`] is simply a type alias for `Arc`, representing a reference to any Arrow array type. - -Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap. +Note: You'll see [`Arc`] used frequently in the code—DataFusion's async architecture requires wrapping Arrow arrays in `Arc` (atomically reference-counted pointers) to safely share data across tasks. [`ArrayRef`] is simply a type alias for `Arc`. 
```rust use std::sync::Arc; @@ -250,52 +237,48 @@ The DataFrame API handles all the Arrow details under the hood - reading files i ## Further reading -- [Arrow introduction](https://arrow.apache.org/docs/format/Intro.html) -- [Arrow columnar format (overview)](https://arrow.apache.org/docs/format/Columnar.html) -- [Arrow IPC format (files and streams)](https://arrow.apache.org/docs/format/IPC.html) -- [arrow_array::RecordBatch (docs.rs)](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) -- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine (Paper)](https://dl.acm.org/doi/10.1145/3626246.3653368) - -- DataFusion + Arrow integration (docs.rs): - - [datafusion::common::arrow](https://docs.rs/datafusion/latest/datafusion/common/arrow/index.html) - - [datafusion::common::arrow::array](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/index.html) - - [datafusion::common::arrow::compute](https://docs.rs/datafusion/latest/datafusion/common/arrow/compute/index.html) - - [SessionContext::read_csv](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv) - - [read_parquet](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet) - - [read_json](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json) - - [DataFrame::collect](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect) - - [SendableRecordBatchStream](https://docs.rs/datafusion/latest/datafusion/physical_plan/type.SendableRecordBatchStream.html) - - [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) - - [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) -- Deep dive (memory layout internals): [ArrayData on docs.rs](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/struct.ArrayData.html) -- Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering) -- For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system - -[`arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html -[`arrayref`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html -[`field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html -[`schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html -[`datatype`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html -[`int32array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html -[`stringarray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html -[`int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 -[`int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 -[ extension points]: ../library-user-guide/extensions.md -[`tableprovider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html -[custom table providers guide]: ../library-user-guide/custom-table-providers.md -[user-defined functions (udfs)]: ../library-user-guide/functions/adding-udfs.md -[custom optimizer rules and operators]: 
../library-user-guide/extending-operators.md +**Arrow Documentation:** + +- [Arrow Format Introduction](https://arrow.apache.org/docs/format/Intro.html) - Official Arrow specification +- [Arrow Columnar Format](https://arrow.apache.org/docs/format/Columnar.html) - In-depth look at the memory layout + +**DataFusion API References:** + +- [RecordBatch](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) - Core Arrow data structure +- [DataFrame](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html) - High-level query interface +- [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) - Custom data source trait +- [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) - In-memory table implementation + +**Academic Paper:** + +- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) - Published at SIGMOD 2024 + +[arrow]: https://arrow.apache.org/docs/index.html +[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html +[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html +[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html +[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html +[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html +[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html +[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html +[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32 +[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64 +[extension points]: ../library-user-guide/extensions.md +[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html +[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md +[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md +[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md [`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table [`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql [`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show -[`memtable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html -[`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html -[`csvreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html -[`parquetreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html -[`recordbatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html +[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html +[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html +[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html +[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html 
+[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html [`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv [`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet [`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json [`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro -[`dataframe`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html +[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html [`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect diff --git a/docs/source/user-guide/dataframe.md b/docs/source/user-guide/dataframe.md index 887d55d0638c..85724a72399a 100644 --- a/docs/source/user-guide/dataframe.md +++ b/docs/source/user-guide/dataframe.md @@ -111,10 +111,6 @@ async fn main() -> Result<()> { } ``` ---- - -# REFERENCES - [pandas dataframe]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html [spark dataframe]: https://spark.apache.org/docs/latest/sql-programming-guide.html [`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html