
feat: Added Data management framework #125

Open · wants to merge 1 commit into master
23 changes: 23 additions & 0 deletions DATA_MANAGMENT_FRAMEWORK/README.md
@@ -0,0 +1,23 @@
## Dataset management for deep learning applications

Introducing Data 2.0, powered by Hub: the fastest way to store, access, and manage datasets with version control for PyTorch/TensorFlow. Works locally or on any cloud. Scalable data pipelines.

Check out Activeloop's [website](https://activeloop.ai) to learn more.

### What is Hub for?

Software 2.0 needs Data 2.0, and Hub delivers it. Most of the time, data scientists and ML researchers work on data management and preprocessing instead of training models. Hub fixes this. We store your (even petabyte-scale) datasets as a single numpy-like array on the cloud, so you can seamlessly access and work with them from any machine. Hub makes any data type (images, text files, audio, or video) stored in the cloud usable as fast as if it were stored on premises. With the same dataset view, your team can always be in sync.

Hub is being used by Waymo, Red Cross, World Resources Institute, Omdena, and others.

### Features

1. Store and retrieve large datasets with version control
2. Collaborate as in Google Docs: multiple data scientists working on the same data in sync with no interruptions
3. Access from multiple machines simultaneously
4. Deploy anywhere - locally, on Google Cloud, S3, or Azure, as well as on Activeloop (by default - and for free!)
5. Integrate with your ML tools like NumPy, Dask, Ray, PyTorch, or TensorFlow
6. Create arrays as big as you want. You can store images as large as 100k by 100k!
7. Keep the shape of each sample dynamic, so you can store small and big arrays as one array
8. Visualize any slice of the data in a matter of seconds without redundant manipulations
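The notebook in this PR labels each image by its filename: the digit just before the `.jpg` extension is the one-based class, which the loader converts to a zero-based label. A minimal sketch of that convention (plain Python, no Hub dependency; the helper name is ours):

```python
# The notebook's loader derives a class label from the filename, e.g.
# "images/image_3.jpg" -> the character before ".jpg" is "3" -> label 2 (zero-based).
def label_from_filename(fname):
    # fname[-5] is the digit immediately before the ".jpg" extension
    return int(fname[-5]) - 1

fnames = ["images/image_1.jpg", "images/image_2.jpg", "images/image_3.jpg"]
print([label_from_filename(f) for f in fnames])  # [0, 1, 2]
```

Note that this only works while filenames follow the `image_<digit>.jpg` pattern; a different naming scheme would need a different extraction rule.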
171 changes: 171 additions & 0 deletions DATA_MANAGMENT_FRAMEWORK/uploading_images.ipynb
@@ -0,0 +1,171 @@
{
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": 3
},
"orig_nbformat": 2
},
"nbformat": 4,
"nbformat_minor": 2,
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Uploading Images\n",
"# In this notebook, we will see how to upload and store images on Hub.\n",
"\n",
"# First, we install hub in the runtime environment\n",
"!pip install hub\n",
"# Note: Restart the Colab runtime after installing, since a few packages have been updated; otherwise you may get an error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download the images\n",
"!wget -q https://github.com/albumentations-team/albumentations_examples/archive/master.zip -O /tmp/albumentations_examples.zip\n",
"!unzip -o -qq /tmp/albumentations_examples.zip -d /tmp/albumentations_examples\n",
"!cp -r /tmp/albumentations_examples/albumentations_examples-master/notebooks/images .\n",
"!echo \"Images are successfully downloaded\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the libraries we need\n",
"from hub.schema import ClassLabel, Image\n",
"from hub import transform, schema\n",
"\n",
"from skimage.io import imread\n",
"from skimage import img_as_ubyte\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from tqdm import tqdm\n",
"\n",
"from glob import glob\n",
"from time import time\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a list of image filepaths\n",
"fnames = [\"images/image_1.jpg\", \"images/image_2.jpg\", \"images/image_3.jpg\"]\n",
"len(fnames)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# each image filepath corresponds to a unique image\n",
"img = imread(fnames[2])\n",
"plt.imshow(img)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Defining a Schema\n",
"# A schema is a Python dict that contains metadata about our dataset.\n",
"\n",
"# In this example, we tell Hub that our images have the shape (512, 512, 3) and dtype uint8. Furthermore, each image belongs to one of three classes.\n",
"my_schema = {\n",
" \"image\": Image(shape=(512, 512, 3), dtype=\"uint8\"),\n",
" \"label\": ClassLabel(num_classes=3),\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Defining Transforms\n",
"# First, we define a method load_transform and decorate it with @transform. This is the function that will be applied to each instance/sample of our dataset.\n",
"\n",
"# In our example, for each element in the list fnames, we want to read the image into memory (with imread) and label it (by pulling its class from its filename). If we wanted to, we could include arbitrary operations too, such as resizing or reshaping each image.\n",
"\n",
"# Then, we return a dict with the same key-value pairs as the ones defined in my_schema.\n",
"\n",
"@transform(schema=my_schema)\n",
"def load_transform(sample):\n",
" image = imread(sample)\n",
" label = int(sample[-5]) - 1\n",
" \n",
" return {\n",
" \"image\" : image,\n",
" \"label\" : label\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = load_transform(fnames) # returns a transform object\n",
"type(ds)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Finally, Execution!\n",
"# Hub executes lazily, so nothing happens until we invoke store.\n",
"start = time()\n",
"\n",
"tag = \"./my_datasets/tutorial_image\"\n",
"ds2 = ds.store(tag)\n",
"type(ds2)\n",
"\n",
"end = time()\n",
"print(\"Elapsed time:\", end - start)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.imshow(ds2['image', 0].compute())"
]
}
]
}
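Taken together, the notebook's core contract is that `load_transform` must return a dict whose keys and value shapes match `my_schema`. A pure-Python sketch of that contract (no hub install needed; `imread` is stubbed with a dummy array, so this only illustrates the key/shape agreement, not real I/O):

```python
import numpy as np

# Keys declared by my_schema in the notebook
SCHEMA_KEYS = {"image", "label"}

def load_transform(sample):
    # Stand-in for skimage.io.imread: a dummy 512x512 RGB image,
    # matching the Image(shape=(512, 512, 3), dtype="uint8") schema entry
    image = np.zeros((512, 512, 3), dtype="uint8")
    # Zero-based label from the digit before ".jpg" in the filename
    label = int(sample[-5]) - 1
    return {"image": image, "label": label}

out = load_transform("images/image_2.jpg")
assert set(out) == SCHEMA_KEYS
assert out["image"].shape == (512, 512, 3)
print(out["label"])  # 1
```

In the real notebook, the `@transform(schema=my_schema)` decorator enforces this agreement and batches the per-sample dicts into the stored dataset.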