Improve pandas dataframe inspection #319

Merged: 2 commits into jupyterlab-contrib:main from improve_df_inspect, Sep 5, 2024

Conversation

@martinRenou (Member) commented Sep 4, 2024:

Improve inspection of dataframes with many columns.

Prior to this PR, running the inspection after executing the following code would eventually make my laptop go out of memory and crash. With this PR, the inspection code takes 414 μs ± 6.29 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) and the kernel stays alive.

One pain point was df.memory_usage, which builds a whole new Series with one entry per column. We need to find a smarter/faster way to compute the memory usage; for now, this PR disables the feature.

import pandas as pd
import numpy as np

# Set the seed for reproducibility
np.random.seed(42)

# A wide frame: few rows, very many columns
num_rows = 10
num_cols = 50_000_000

# Create a DataFrame with that shape
large_df = pd.DataFrame(
    np.random.rand(num_rows, num_cols),
    columns=[f'col_{i+1}' for i in range(num_cols)]
)
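(For scale, the float64 data alone is num_rows × num_cols × 8 bytes = 10 × 50,000,000 × 8 ≈ 4 GB, even before .memory_usage() allocates anything on top of it.)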


github-actions bot commented Sep 4, 2024

Binder 👈 Launch a Binder on branch martinRenou/jupyterlab-variableInspector/improve_df_inspect

@martinRenou (Member, Author) commented:

cc @achhina @trungleduc

@martinRenou (Member, Author) commented:

Still goes in the direction of #307.

@martinRenou added the enhancement label on Sep 4, 2024
@krassowski (Collaborator) left a comment:

I think this is related to a bug in pandas which is now closed as fixed by a PR merged into the 3.0 branch.

Instead of removing the usage of .memory_usage(), could we try using pd.options.mode.copy_on_write = True in a context manager?

Can you give it a try and see if it improves the situation?

The review thread is attached to these changed lines:

-return x.memory_usage().sum()
+# DO NOT CALL df.memory_usage() as this can be very costly
+# to the point of crashing the kernel
+return "?"
@krassowski (Collaborator) commented:

Suggested change:

-return "?"
+with pd.option_context("mode.copy_on_write", True):
+    return x.memory_usage().sum()
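(pd.option_context sets the option only inside the with block and restores the previous value on exit, so the copy-on-write behavior would not leak into user code.)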

@martinRenou (Member, Author) commented:

Nope, still the same issue with this.

Running a profiler while inspecting a 10 rows × 500,000 columns dataframe, I see this takes more than 6 seconds. When inspecting a 10 rows × 50,000,000 columns dataframe, the laptop goes out of memory and crashes.

Without doing any memory_usage computation, inspection takes less than 10 ms.
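For reference, a minimal sketch of one way to reproduce that comparison at the smaller scale (the timeit-based measurement here is illustrative; the thread itself used a profiler):

import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 500_000))

# Costly path: memory_usage() builds a Series with one entry per column.
print(timeit.timeit(lambda: df.memory_usage().sum(), number=1))

# Cheap path: shape metadata only, no per-column work.
print(timeit.timeit(lambda: df.shape, number=1))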

@martinRenou (Member, Author) commented Sep 4, 2024:

Thanks for the suggestion though!

I wonder if we could add a small condition on the shape: if the number of columns is in the tens of thousands or more, we don't compute the memory usage?

@krassowski (Collaborator) commented:

Ok, what about using a lazy approximation?

Suggested change:

-return "?"
+return x.head(1).memory_usage().sum() * len(x)

@martinRenou (Member, Author) commented Sep 4, 2024:

That is still quite slow when there are many columns (500,000 here; 6.8 seconds to compute).

[Screenshot from 2024-09-04 14-42-27: profiler output]

Note that this means adding 6.8 seconds of delay between each cell execution once the variable is defined.

@krassowski (Collaborator) commented:

Hmm, I guess we could use some kind of cache. It should be feasible to write a function giving a rough estimate with something like x.dtypes.map(size_of).sum() * len(x), where size_of would take the dtype and compute its size, or return one from a cache.

@martinRenou (Member, Author) commented:

Mapping over all the columns with x.dtypes.map will still be quite slow for a dataframe with lots of columns. Also, invalidating the cache may be hard?

@krassowski (Collaborator) commented:

It should be rather fast. You could do something like:

sum([
    size_of(dtype) * count
    for dtype, count in x.dtypes.value_counts().items()
]) * len(x)

The harder part is implementing size_of.
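For illustration, a minimal sketch of what that size_of could look like (the lru_cache-based caching and the pointer-size fallback for object/extension dtypes are assumptions, not something settled in this thread):

from functools import lru_cache

import numpy as np

@lru_cache(maxsize=None)
def size_of(dtype):
    # Per-element size in bytes for a column dtype, cached across calls.
    try:
        return np.dtype(dtype).itemsize
    except TypeError:
        # Object/extension dtypes: fall back to a pointer-sized guess.
        return 8

Combined with the value_counts loop above, this touches each distinct dtype only once, though the fallback makes it a rough estimate for extension dtypes.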

@martinRenou (Member, Author) commented:

Would you be fine if, in this PR, I just add a watchdog that does:

# A large number of rows does not seem to impact performance, only columns do
if len(x.columns) < 10_000:
    return x.memory_usage().sum()
else:
    return "?"

And we can open a follow-up issue for a faster calculation of the memory usage, pointing to this discussion?

This would at least fix the crashing issue we're seeing on our side.

@krassowski (Collaborator) commented:

Sure, sounds fine!

@krassowski (Collaborator) left a comment:

Thanks!

@martinRenou (Member, Author) commented:

Thanks a lot for your reviews!

@martinRenou merged commit cf8b0dc into jupyterlab-contrib:main on Sep 5, 2024. 6 checks passed.
@martinRenou deleted the improve_df_inspect branch on September 5, 2024 at 09:30.