Update README docs for language transforms #800
Merged

Commits
- `6c122b1` update pdf2parquet README (dolfim-ibm)
- `6769452` add data_files_to_use (dolfim-ibm)
- `7b592bc` doc_chunk README (dolfim-ibm)
- `61d72fd` text_encoder README (dolfim-ibm)
- `c4c9e5e` Merge branch 'dev' into dol-update-readme-docs
- `abec823` Added notebook for pdf2parquet
- `956e5a0` Update pdf2parquet.ipynb (shahrokhDaijavad)
- `f776f1d` Update pdf2parquet.ipynb (shahrokhDaijavad)
- `13dabd0` Added doc chunk minimal notebook (touma-I)
- `a98d402` Merge branch 'dol-update-readme-docs' of github.com:IBM/data-prep-kit… (touma-I)
- `0ddcca1` minimal sample notebook for how transform can be invoked (touma-I)
- `11ed291` restoring the make venv (shahrokhDaijavad)
- `a7e8d31` unification of notebooks (shahrokhDaijavad)
- `3c6466a` added constraint for pydantic to prevent llama-index-core from picki… (touma-I)
Files changed

File: doc_chunk README
@@ -1,5 +1,16 @@

# Chunk documents Transform

Please see the set of
[transform project conventions](../../../README.md#transform-project-conventions)
for details on general project conventions, transform configuration,
testing and IDE set up.

## Contributors

- Michele Dolfi ([email protected])

## Description

This transform chunks documents. It supports multiple _chunker modules_ (see the `chunking_type` parameter).

When using documents converted to JSON, the transform leverages the [Docling Core](https://github.com/DS4SD/docling-core) `HierarchicalChunker`

@@ -9,20 +20,26 @@ which provides the required JSON structure.

When using documents converted to Markdown, the transform leverages the [Llama Index](https://docs.llamaindex.ai/en/stable/module_guides/loading/node_parsers/modules/#markdownnodeparser) `MarkdownNodeParser`, which relies on its internal Markdown splitting logic.
## Output format

### Input

| input column name | data type | description |
|-|-|-|
| the one specified in _content_column_name_ configuration | string | the content used in this transform |

### Output format

The output parquet file will contain all the original columns, but the content column will be replaced with the individual chunks.
#### Tracing the origin of the chunks

The transform allows tracing the origin of each chunk via the `source_doc_id` column, which is set to the value of the `document_id` column (if present) in the input table.
The actual names of the columns can be customized with the parameters described below.

## Configuration

The transform can be tuned with the following parameters.
@@ -40,6 +57,12 @@ The transform can be tuned with the following parameters.

| `output_pageno_column_name` | `page_number` | Column name to store the page number of the chunk in the output table. |
| `output_bbox_column_name` | `bbox` | Column name to store the bbox of the chunk in the output table. |

## Usage

### Launched Command Line Options

When invoking the CLI, the parameters must be set as `--doc_chunk_<name>`, e.g. `--doc_chunk_column_name_key=myoutput`.

@@ -63,8 +86,32 @@ ls output

to see the results of the transform.
### Code example

TBD (link to the notebook will be provided)

See the sample script [src/doc_chunk_local_python.py](src/doc_chunk_local_python.py).
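Until the notebook is available, the sketch below shows how the transform could be invoked from Python with the pure-Python launcher, following the same pattern as the sample script. This is a minimal sketch, not the official example: the module and class names (`doc_chunk_transform_python`, `DocChunkPythonTransformConfiguration`) and the folder names are assumptions to adapt to your setup.

```py
# Minimal sketch: assumed module/class names and placeholder folder names.
import sys

from data_processing.runtime.pure_python import PythonTransformLauncher
from data_processing.utils import ParamsUtils
from doc_chunk_transform_python import DocChunkPythonTransformConfiguration  # assumed name

local_conf = {
    "input_folder": "input",    # parquet files produced e.g. by pdf2parquet
    "output_folder": "output",  # where the chunked parquet files are written
}
params = {
    "data_local_config": ParamsUtils.convert_to_ast(local_conf),
    # transform-specific options use the same names as the CLI, e.g.:
    # "doc_chunk_output_pageno_column_name": "page_number",
}
sys.argv = ParamsUtils.dict_to_req(d=params)
launcher = PythonTransformLauncher(runtime_config=DocChunkPythonTransformConfiguration())
launcher.launch()
```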
### Transforming data using the transform image

To use the transform image to transform your data, please refer to the
[running images quickstart](../../../../doc/quick-start/run-transform-image.md),
substituting the name of this transform image and runtime as appropriate.

## Testing

Testing follows [the testing strategy of data-processing-lib](../../../../data-processing-lib/doc/transform-testing.md).

Currently we have:
- [Unit test](test/test_doc_chunk_python.py)

## Further Resources

- For the [Docling Core](https://github.com/DS4SD/docling-core) `HierarchicalChunker`
  - <https://ds4sd.github.io/docling/>
- For the Markdown chunker in LlamaIndex
  - [Markdown chunking](https://docs.llamaindex.ai/en/stable/module_guides/loading/node_parsers/modules/#markdownnodeparser)
- For the Token Text Splitter in LlamaIndex
  - [Token Text Splitter](https://docs.llamaindex.ai/en/stable/api_reference/node_parsers/token_text_splitter/)
File: pdf2parquet README
@@ -1,4 +1,15 @@

# Ingest PDF to Parquet Transform

Please see the set of
[transform project conventions](../../../README.md#transform-project-conventions)
for details on general project conventions, transform configuration,
testing and IDE set up.

## Contributors

- Michele Dolfi ([email protected])

## Description

This transform iterates through document files or zip archives of files and generates parquet files
containing the converted documents in Markdown or JSON format.
@@ -7,6 +18,9 @@ The PDF conversion is using the [Docling package](https://github.com/DS4SD/docli

The Docling configuration in DPK is tuned for best results when running large batch ingestions.
For more details on the multiple configuration options, please refer to the official [Docling documentation](https://ds4sd.github.io/docling/).

### Input files

This transform supports the following input formats:

- PDF documents

@@ -17,37 +31,39 @@ This transform supports the following input formats:

- Markdown documents
- AsciiDoc documents

The input documents can be provided in a folder structure, or as a zip archive.
Please see the configuration section for specifying the input files.
## Output format

### Output format

The output table will contain the following columns:

| output column name | data type | description |
|-|-|-|
| source_filename | string | the basename of the source archive or file |
| filename | string | the basename of the PDF file |
| contents | string | the content of the PDF |
| document_id | string | the document id, a random uuid4 |
| document_hash | string | the document hash of the input content |
| ext | string | the detected file extension |
| hash | string | the hash of the `contents` column |
| size | string | the size of `contents` |
| date_acquired | date | the date when the transform was executing |
| num_pages | number | number of pages in the PDF |
| num_tables | number | number of tables in the PDF |
| num_doc_elements | number | number of document elements in the PDF |
| pdf_convert_time | float | time taken to convert the document in seconds |

## Configuration

The transform can be initialized with the following parameters.

| Parameter | Default | Description |
|------------|----------|--------------|
| `data_files_to_use` | - | The file extensions to be considered when running the transform. Example value: `['.pdf','.docx','.pptx','.zip']`. For all the supported input formats, see the section above. |
| `batch_size` | -1 | Number of documents to be saved in the same result table. A value of -1 will generate one result file for each input file. |
| `artifacts_path` | <unset> | Path where the Docling models artifacts are located; if unset, they will be downloaded and fetched from the [HF_HUB_CACHE](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache) folder. |
| `contents_type` | `text/markdown` | The output type for the `contents` column. Valid types are `text/markdown`, `text/plain` and `application/json`. |
@@ -58,9 +74,68 @@ The transform can be initialized with the following parameters.

| `pdf_backend` | `dlparse_v2` | The PDF backend to use. Valid values are `dlparse_v2`, `dlparse_v1`, `pypdfium2`. |
| `double_precision` | `8` | If set, all floating point values (e.g. bounding boxes) are rounded to this precision. For tests it is advised to use 0. |

Example:

```py
# note: `ast` here is the Python standard-library module and must be imported
{
    "data_files_to_use": ast.literal_eval("['.pdf','.docx','.pptx','.zip']"),
    "contents_type": "application/json",
    "do_ocr": True,
}
```
## Usage

### Launched Command Line Options

When invoking the CLI, the parameters must be set as `--pdf2parquet_<name>`, e.g. `--pdf2parquet_do_ocr=true`.

### Running the samples

To run the samples, use the following `make` targets:

* `run-cli-sample` - runs src/pdf2parquet_transform_python.py using command line args
* `run-local-sample` - runs src/pdf2parquet_local.py
* `run-local-python-sample` - runs src/pdf2parquet_local_python.py

These targets will activate the virtual environment and set up any configuration needed.
Use the `-n` option of `make` to see the details of what is done to run the sample.

For example,
```shell
make run-local-python-sample
...
```
Then
```shell
ls output
```
to see the results of the transform.
### Code example

TBD (link to the notebook will be provided)

See the sample script [src/pdf2parquet_local_python.py](src/pdf2parquet_local_python.py).
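Until the notebook is available, the snippet below is a minimal sketch of launching the transform from Python with the pure-Python launcher, mirroring the sample script; the `input` and `output` folder names are placeholders.

```py
# Minimal sketch: "input" and "output" are placeholder folder names.
import ast
import sys

from data_processing.runtime.pure_python import PythonTransformLauncher
from data_processing.utils import ParamsUtils
from pdf2parquet_transform_python import Pdf2ParquetPythonTransformConfiguration

local_conf = {
    "input_folder": "input",    # folder with the PDF/DOCX/PPTX/zip input files
    "output_folder": "output",  # folder where the parquet files are written
}
params = {
    "data_local_config": ParamsUtils.convert_to_ast(local_conf),
    "data_files_to_use": ast.literal_eval("['.pdf','.docx','.pptx','.zip']"),
}
sys.argv = ParamsUtils.dict_to_req(d=params)
launcher = PythonTransformLauncher(runtime_config=Pdf2ParquetPythonTransformConfiguration())
launcher.launch()
```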
### Transforming data using the transform image

To use the transform image to transform your data, please refer to the
[running images quickstart](../../../../doc/quick-start/run-transform-image.md),
substituting the name of this transform image and runtime as appropriate.

## Testing

Testing follows [the testing strategy of data-processing-lib](../../../../data-processing-lib/doc/transform-testing.md).

Currently we have:
- [Unit test](transforms/language/pdf2parquet/python/test/test_pdf2parquet_python.py)
- [Integration test](transforms/language/pdf2parquet/python/test/test_pdf2parquet.py)

## Credits

The PDF document conversion is developed by the AI for Knowledge group at IBM Research Zurich.
File: text_encoder README
@@ -1,14 +1,36 @@

# Text Encoder Transform

Please see the set of
[transform project conventions](../../../README.md#transform-project-conventions)
for details on general project conventions, transform configuration,
testing and IDE set up.

## Contributors

- Michele Dolfi ([email protected])

## Description

This transform uses [sentence encoder models](https://en.wikipedia.org/wiki/Sentence_embedding) to create embedding vectors of the text in each row of the input .parquet table.

The embedding vectors generated by the transform are useful for tasks like sentence similarity and feature extraction, which are also at the core of retrieval-augmented generation (RAG) applications.
### Input

| input column name | data type | description |
|-|-|-|
| the one specified in _content_column_name_ configuration | string | the content used in this transform |

### Output columns

| output column name | data type | description |
|-|-|-|
| the one specified in _output_embeddings_column_name_ configuration | `array[float]` | the embedding vectors of the content |

## Configuration

The transform can be tuned with the following parameters.
@@ -18,7 +40,11 @@ The transform can be tuned with the following parameters.

| `model_name` | `BAAI/bge-small-en-v1.5` | The HF model to use for encoding the text. |
| `content_column_name` | `contents` | Name of the column containing the text to be encoded. |
| `output_embeddings_column_name` | `embeddings` | Column name to store the embeddings in the output table. |
| `output_path_column_name` | `doc_path` | Column name to store the document path of the chunk in the output table. |

## Usage

### Launched Command Line Options

When invoking the CLI, the parameters must be set as `--text_encoder_<name>`, e.g. `--text_encoder_column_name_key=myoutput`.

@@ -43,8 +69,20 @@ ls output

to see the results of the transform.
### Code example

TBD (link to the notebook will be provided)
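Until the notebook is available, here is a minimal sketch of launching the transform from Python with the pure-Python launcher; the module and class names (`text_encoder_transform_python`, `TextEncoderPythonTransformConfiguration`) and the folder names are assumptions that follow the naming pattern of the other transforms.

```py
# Minimal sketch: assumed module/class names and placeholder folder names.
import sys

from data_processing.runtime.pure_python import PythonTransformLauncher
from data_processing.utils import ParamsUtils
from text_encoder_transform_python import TextEncoderPythonTransformConfiguration  # assumed name

local_conf = {
    "input_folder": "input",    # parquet files with a `contents` column
    "output_folder": "output",  # where parquet files with the added `embeddings` column are written
}
params = {
    "data_local_config": ParamsUtils.convert_to_ast(local_conf),
    "text_encoder_model_name": "BAAI/bge-small-en-v1.5",  # default model, see the table above
}
sys.argv = ParamsUtils.dict_to_req(d=params)
launcher = PythonTransformLauncher(runtime_config=TextEncoderPythonTransformConfiguration())
launcher.launch()
```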
### Transforming data using the transform image

To use the transform image to transform your data, please refer to the
[running images quickstart](../../../../doc/quick-start/run-transform-image.md),
substituting the name of this transform image and runtime as appropriate.

## Testing

Testing follows [the testing strategy of data-processing-lib](../../../../data-processing-lib/doc/transform-testing.md).

Currently we have:
- [Unit test](test/test_text_encoder_python.py)
touma-I commented:
@dolfim-ibm @shahrokhDaijavad What do you guys think of adding a section like the one below to show how a user can invoke the transform once they have done a pip install (as an alternative to cloning the repo):

```py
import ast
import sys

from data_processing.runtime.pure_python import PythonTransformLauncher
from data_processing.utils import ParamsUtils
from pdf2parquet_transform_python import Pdf2ParquetPythonTransformConfiguration

local_conf = {
    "input_folder": "input",
    "output_folder": "output",
}
params = {
    "data_local_config": ParamsUtils.convert_to_ast(local_conf),
    "data_files_to_use": ast.literal_eval("['.pdf','.docx','.pptx','.zip']"),
}
sys.argv = ParamsUtils.dict_to_req(d=params)
launcher = PythonTransformLauncher(runtime_config=Pdf2ParquetPythonTransformConfiguration())
launcher.launch()
```
shahrokhDaijavad commented:
Nice job, @dolfim-ibm! Great work with the README files for all three transforms. They follow the template.
What @touma-I is suggesting would be to add these lines of code in the section that says "Code example" and has the link to the upcoming notebook example. These lines, together with the pip install, will be used in the notebook, but they could also be used in a Python example that is not a notebook. I am ok either way: 1) wait for the notebook, or 2) add the lines now.
@dolfim-ibm Please don't pick option 1 just because it will make it easier on you! Maroun's question is how useful it is to have these lines.
dolfim-ibm commented:
@touma-I @shahrokhDaijavad I was actually adding the code block already, but then I realized it was exactly the content of the example script, 1-to-1. Instead of having to maintain multiple versions of it (with the high risk of them becoming outdated), I think that linking to the example is still ok.
Honestly, I think the best approach is to plan for a documentation engine which can embed working code examples, and to ensure in CI that those code examples are actually executed.