Feat: Add support for parquet files #443
base: main
Conversation
Codecov Report
Attention: the patch is not fully covered by tests.
Additional details and impacted files

@@           Coverage Diff            @@
##             main     #443     +/-  ##
========================================
- Coverage      78%      78%      -0%
========================================
  Files          36       37       +1
  Lines        5217     5372     +155
========================================
+ Hits         4088     4185      +97
- Misses       1129     1187      +58
Hey @deependujha, nice progress ;)
This is dope. It would be great if we could automatically index an S3 folder and generate an index file. For a single Parquet file, the row count can be read like this:
import pyarrow.parquet as pq
import fsspec

file_path = "s3://your-bucket/path/to/your-file.parquet"

# Open the Parquet file with fsspec
with fsspec.open(file_path, mode="rb") as f:
    # Fetch the number of rows from the Parquet metadata (no need to load the full file)
    num_rows = pq.ParquetFile(f).metadata.num_rows

print(f"Number of rows: {num_rows}")
GitGuardian id | GitGuardian status | Secret | Commit | Filename
---|---|---|---|---
5685611 | Triggered | Generic High Entropy Secret | 76efafb | tests/streaming/test_resolver.py
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn the best practices here.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Adding support for directly consuming HF datasets is an exciting direction! For HF datasets, my current idea is to iterate through all the Parquet files in the HF repository and create an index.json file that is stored in a cache (since modifying the original dataset is not feasible). When using the streaming dataset/dataloader, we would then pass this separate index.json file from the cache. At this point, I'm uncertain about the exact approach for handling HF datasets comprehensively. This PR is ready for review and lays the groundwork for future enhancements; we can discuss HF dataset integration in a subsequent PR.
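A minimal sketch of that caching idea, assuming huggingface_hub's HfApi and hf_hub_download; the repo id, cache location, and index layout are illustrative, not the format this PR defines:

import json
import os

import pyarrow.parquet as pq
from huggingface_hub import HfApi, hf_hub_download

repo_id = "user/dataset"  # assumed HF dataset repo containing .parquet files
cache_dir = os.path.expanduser("~/.cache/litdata/hf_index")  # hypothetical cache location
os.makedirs(cache_dir, exist_ok=True)

# List every Parquet file in the dataset repository
api = HfApi()
parquet_files = [f for f in api.list_repo_files(repo_id, repo_type="dataset") if f.endswith(".parquet")]

# Build the index: one entry per Parquet file with its row count from the metadata
index = {"files": []}
for name in parquet_files:
    local_path = hf_hub_download(repo_id, name, repo_type="dataset")
    index["files"].append({"name": name, "num_rows": pq.ParquetFile(local_path).metadata.num_rows})

# Store index.json in the cache instead of modifying the original dataset
with open(os.path.join(cache_dir, "index.json"), "w") as f:
    json.dump(index, f)

The streaming dataset/dataloader would then be pointed at this cached index.json rather than at a file inside the HF repo.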
This is quite awesome!
Can you add the benchmarks in the description?
Before submitting
What does this PR do?
Fixes #191
Benchmark on Data prep machine
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃