Conversation
Summary of Changes

Hello @Hmnt39, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request introduces a new utility class, `TaskThrottle`.
Code Review
This pull request introduces a TaskThrottle class, providing a robust and flexible way to limit concurrent task execution using semaphores. The implementation is clean, offering both decorator and context manager interfaces, which enhances usability. The accompanying test suite is comprehensive, covering initialization, concurrency limits, serial execution, and high concurrency scenarios, ensuring the reliability of the new utility. The code adheres to best practices for threading and resource management, such as using functools.wraps for decorators and try...finally for context managers to guarantee semaphore release. Overall, this is a well-designed and well-tested addition to the extractorutils library.
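For context, here is a minimal sketch of the class under review, pieced together from the diff excerpts quoted later in this thread. The constructor and the `max_concurrent` parameter name are assumptions, since the excerpts only show `lease()` and `limit()`; the actual PR code may differ.

```python
from contextlib import contextmanager
from functools import wraps
from threading import Semaphore
from typing import Callable, Generator, ParamSpec, TypeVar

P = ParamSpec("P")
T = TypeVar("T")


class TaskThrottle:
    """Limit how many tasks may run concurrently (sketch, not the exact PR code)."""

    def __init__(self, max_concurrent: int) -> None:  # constructor shape is assumed
        self._semaphore = Semaphore(max_concurrent)

    @contextmanager
    def lease(self) -> Generator[None, None, None]:
        """Context manager that acquires/releases a throttle slot."""
        self._semaphore.acquire()
        try:
            yield
        finally:
            self._semaphore.release()

    def limit(self, func: Callable[P, T]) -> Callable[P, T]:
        """Decorator to throttle a task function."""

        @wraps(func)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
            with self.lease():
                return func(*args, **kwargs)

        return wrapper
```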
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #505      +/-   ##
==========================================
+ Coverage   81.13%   81.18%   +0.05%
==========================================
  Files          43       44       +1
  Lines        4208     4236      +28
==========================================
+ Hits         3414     3439      +25
- Misses        794      797       +3
```
nithinb left a comment:
The changes look good. Change the version as mentioned in the comment: a patch bump would imply you are fixing something, but in reality you are adding a new feature.
…s into DOG-6551-throttling-support
haakonvt left a comment:
Risk review: just a few questions on whether the implemented functionality is really needed.
```python
@contextmanager
def lease(self) -> Generator[None, None, None]:
    """
    Context manager that acquires/releases a throttle slot.
    """
    self._semaphore.acquire()
    try:
        yield
    finally:
        self._semaphore.release()
```
By using the @contextmanager decorator, you can get the effect of a context manager without writing a full class. You have, however, written a full class, so I see no point in not implementing the `__enter__` and `__exit__` dunder methods:

```python
def __enter__(self):
    self._semaphore.acquire()

def __exit__(self, exc_type, exc_val, exc_tb):
    self._semaphore.release()
```

However, taking a step back, this is exactly the interface the semaphore already provides you, leading me to question why you need this in the first place?

```python
throttle = TaskThrottle(5)
with throttle.lease():
    ...
```

```python
throttle = Semaphore(5)
with throttle:
    ...
```
```python
def limit(self, func: Callable[P, T]) -> Callable[P, T]:
    """
    Decorator to throttle a task function.
    """

    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
        with self.lease():
            return func(*args, **kwargs)

    return wrapper
```
In light of the previous comment, I would suggest you only implement this limit functionality:
```python
def limit_concurrency(max_concurrent):
    semaphore = Semaphore(max_concurrent)

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with semaphore:
                return func(*args, **kwargs)
        return wrapper

    return decorator
```

Which would allow both single-use:
```python
@limit_concurrency(3)
def foo(data):
    ...
```

...and shared-pool limit:
```python
throttle = limit_concurrency(5)

@throttle
def foo(user_id):
    ...

@throttle
def bar(entry):
    ...
```
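A quick way to sanity-check the suggested decorator is to push far more tasks than the limit through a thread pool and record the peak number of simultaneously active calls. The sketch below assumes `limit_concurrency` is exactly as written in the comment above; the instrumentation (`active`, `peak`, `lock`) is added purely for the demonstration.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from functools import wraps
from threading import Semaphore


def limit_concurrency(max_concurrent):
    # Same decorator as suggested above, repeated here so the snippet is self-contained.
    semaphore = Semaphore(max_concurrent)

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with semaphore:
                return func(*args, **kwargs)
        return wrapper

    return decorator


throttle = limit_concurrency(5)
active = 0
peak = 0
lock = threading.Lock()


@throttle
def foo(user_id):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # simulate work while holding a throttle slot
    with lock:
        active -= 1


with ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(foo, range(50)))

print(peak)  # stays at or below 5, even with 20 worker threads
```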
```markdown
### Added

* In the `unstable` package: Add TaskThrottle helper class for limiting concurrent task execution with decorator and context manager support
* Add support for environment variable interpolation in keyvault config
```
Where was this added?

> * Add support for environment variable interpolation in keyvault config
Makes sense, this is not required; I can directly use a semaphore as a context manager rather than creating a wrapper around it. Closing this PR.
This PR adds a `TaskThrottle` class to limit concurrent task execution using semaphores. It provides both decorator and context manager (`with throttle.lease()`) interfaces for throttling.
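For reference, a hedged sketch of how the two interfaces described here would have been used, based on the method names visible in the diff (`limit` and `lease`). This is illustrative only, since the PR was closed without merging; `fetch` is a hypothetical task function.

```python
throttle = TaskThrottle(5)

# Decorator interface: at most five concurrent calls to fetch() at a time
@throttle.limit
def fetch(item):  # hypothetical task function, not part of the PR
    ...

# Context manager interface: throttle an arbitrary block of work
with throttle.lease():
    ...
```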