Add support for optional max concurrency #643
Conversation
… of `evaluation.evaluate()`
Note: currently have …
Thanks! I tried to run it but encountered a problem.
I love this PR - thank you so much @joy13975 ❤️ - really, thank you for sharing your fix! I just have one comment though: I was thinking about the …, so as a user, would you prefer it to be in …? That was the only question I had, but otherwise I'm merging this in!
@jjmachan @wangyiran33 Thanks for the report, and I'll make it compatible with 3.10 as well in the next PR update.
Makes perfect sense @joy13975 - there seems to be a small type error; if you fix that, I'll merge this in. I can fix it too if you want, but sadly it will take some time for me to get to it. The PR looks really good, so it would be awesome to merge it as soon as we can.
Or not - because if we don't have support for 3.10, that is a deal breaker. Thanks for reporting that @wangyiran33; would love to hear your thoughts @joy13975.
@jjmachan yeah, I'll fix all the above points once I get time. Should be up this weekend.
Sounds good - thanks a lot again!
Also renamed `max_concurrency` to `max_workers` to be consistent with convention.
@jjmachan Just pushed an update to address all the above. @wangyiran33 Can you please help test on Python 3.10+?
Of course! I have tested it and it works on py3.10 - really appreciate that. Love this PR! A big thanks to you, @joy13975!
Awesome - finished my round of testing and it is a massive improvement ❤️. Thanks again @joy13975 for helping us all and contributing this fix 🙂 - going out in today's release!
**Added optional Semaphore-based concurrency control for explodinggradients#642**

As for the default value for `max_concurrency`, I don't know the ratio of API users vs. local LLM users, so the proposed default is an opinionated value of `16`:

* I *think* more people use the OpenAI API for now vs. local LLMs, thus the default is not `-1` (no limit).
* `16` seems to be reasonably fast and doesn't seem to hit throughput limits in my experience.

**Tests**

Embedding for 1k documents finished in <2 min, and subsequent Testset generation for `test_size=1000` proceeded without getting stuck:

<img width="693" alt="image" src="https://github.com/explodinggradients/ragas/assets/6729737/d83fecc8-a815-43ee-a3b0-3395d7a9d244">

Another 30 s passes:

<img width="725" alt="image" src="https://github.com/explodinggradients/ragas/assets/6729737/d4ab08ba-5a79-45f6-84b1-e563f107d682">

---------

Co-authored-by: Jithin James <[email protected]>