
Benchmark improvements #598

Closed
wants to merge 3 commits

Conversation


@LeoGrin LeoGrin commented Jun 13, 2023

  • Random search
  • Simple tests for the monitor function
  • joblib support (I think this won't work for memory monitoring, but should work for time monitoring. Would love feedback on this).

@LilianBoulard this might be useful for the gapencoder benchmark
@jovan-stojanovic

@LilianBoulard LilianBoulard added the benchmarks Something related to the benchmarks label Jun 19, 2023

@LilianBoulard LilianBoulard left a comment


Thanks for the PR :)

On the parallelism of runs, I originally set them to run sequentially so the time and memory monitoring would be accurate.
If there is a reliable way to measure those while running the instances in parallel, I'm all for it! Though I don't think the current implementation is. I changed the inner structure quite a bit in #593, it might be easier to integrate parallelism with these.
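For context on why timing can survive parallelism while the current memory monitoring cannot: wall-clock time can be measured inside each worker, so per-call durations stay meaningful even when calls overlap, whereas `tracemalloc` only traces allocations in the process (and interpreter) that started it. A minimal sketch of the timing side, not the PR's code — `timed_call` and `work` are invented names, and the stdlib `ThreadPoolExecutor` stands in for joblib's `Parallel`:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(func, *args, **kwargs):
    """Run func and measure its wall-clock time inside the worker,
    so the per-call timing stays valid under parallel execution."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    return result, time.perf_counter() - start

def work(n):
    # Stand-in for one benchmark instance.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(timed_call, work, n) for n in (10_000, 20_000)]
    results = [f.result() for f in futures]

values = [r for r, _ in results]
durations = [t for _, t in results]
```

The same wrapping works with `joblib.Parallel(...)(delayed(timed_call)(work, n) ...)`. Memory is the hard part: with a process backend, `tracemalloc` would have to be started inside each worker, and with threads, concurrent runs in one process would pollute each other's measurements.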

I like the random search idea, though we should also have a way to set the seed, for reproducibility.
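A seeded random search over the full parameter grid could look like the following sketch; `random_parametrization` and its signature are hypothetical, invented for illustration rather than taken from the PR:

```python
import random
from itertools import product

def random_parametrization(grid, n_random_search, seed=0):
    """Sample a reproducible subset of the full parameter grid.

    `grid` maps parameter names to lists of candidate values.
    """
    names = list(grid)
    full = [dict(zip(names, combo)) for combo in product(*grid.values())]
    # A dedicated Random instance seeded explicitly makes the
    # search reproducible without touching global random state.
    rng = random.Random(seed)
    return rng.sample(full, min(n_random_search, len(full)))

configs = random_parametrization(
    {"n_components": [10, 30, 50], "batch_size": [128, 512]},
    n_random_search=3,
    seed=42,
)
```

Calling it twice with the same `seed` returns the same configurations, which is the reproducibility property the review asks for.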

benchmarks/tests/test_monitor.py (comment thread resolved)
else:
parametrization = list(product(parametrize))
if not (n_random_search is None):

Can be simplified:

Suggested change:
- if not (n_random_search is None):
+ if n_random_search is not None:

@@ -199,7 +219,7 @@ def exec_func(*args, **kwargs) -> Dict[str, List[float]]:
"there is therefore nothing to monitor for. ",
stacklevel=2,
)
- return df
+ return None

Same thing:

Suggested change:
- return None
+ return

@GaelVaroquaux

Can you:

  • merge with master
  • add a changelog entry


@GaelVaroquaux GaelVaroquaux left a comment


A few comments. Happy to discuss IRL if useful

@@ -0,0 +1,95 @@
import pytest

Please add a short module-level docstring describing what this does.

@@ -0,0 +1,95 @@
import pytest
from benchmarks.utils import monitor, repr_func # replace with the actual import path

Out of curiosity: is this supposed to work without installation? I'm a bit surprised that it does.


@LilianBoulard LilianBoulard commented Jul 24, 2023


Yes, it does work, since there is an __init__.py file in benchmarks/ :)
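For illustration, the mechanism described here can be reproduced in isolation: a directory containing an `__init__.py`, with its parent on `sys.path` (as the repo root is when pytest runs from it), is importable as a package. The `benchmarks_demo` package below is a throwaway stand-in for `benchmarks/`, built in a temp directory:

```python
import sys
import tempfile
from pathlib import Path

# Build a minimal package: a directory becomes importable as a
# package once it contains an __init__.py file.
root = Path(tempfile.mkdtemp())
pkg = root / "benchmarks_demo"  # hypothetical stand-in for benchmarks/
pkg.mkdir()
(pkg / "__init__.py").write_text("ANSWER = 42\n")

# Putting the parent directory on sys.path mirrors running pytest
# from the repository root, where the cwd is added to the path.
sys.path.insert(0, str(root))
import benchmarks_demo
```

This is why `from benchmarks.utils import monitor, ...` works in the tests without installing anything.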

@@ -1,11 +1,13 @@
import tracemalloc
import random

Can we have a small module-level docstring that describes what this file does?

@LeoGrin LeoGrin marked this pull request as draft July 18, 2023 09:26
@LeoGrin LeoGrin closed this Oct 28, 2024
Labels
benchmarks (Something related to the benchmarks), no changelog needed

4 participants