Update dependency litellm to v1.59.0 #1972

Merged

renovate bot merged 1 commit into main from renovate/litellm-1.x on Jan 19, 2025

Conversation

renovate bot (Contributor) commented Jan 18, 2025

This PR contains the following updates:

| Package | Change |
| --- | --- |
| litellm | 1.56.9 -> 1.59.0 |
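
To reproduce the same bump in a local environment (assuming litellm is installed from PyPI rather than through this repository's lockfile), the manual equivalent would be something like:

```shell
# Upgrade litellm to the version this PR targets; adjust if your
# project manages dependencies through a lockfile instead.
pip install "litellm==1.59.0"
```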

Release Notes

BerriAI/litellm (litellm)

v1.59.0

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.58.4...v1.59.0

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.0
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 285.129348931583 | 6.106818187164813 | 0.0 | 1827 | 0 | 224.69302100000732 | 2869.612018000055 |
| Aggregated | Passed ✅ | 250.0 | 285.129348931583 | 6.106818187164813 | 0.0 | 1827 | 0 | 224.69302100000732 | 2869.612018000055 |
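
Once the container is up, a quick smoke test can hit the same /chat/completions route the load test exercises. This is a minimal sketch: the model name ("gpt-3.5-turbo") and master key ("sk-1234") are illustrative placeholders, not values taken from this PR.

```shell
# Smoke-test a locally running LiteLLM proxy. The model name and
# Authorization key below are placeholders; substitute whatever your
# proxy is configured with (omit the header if no master key is set).
curl -s http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```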

v1.58.4

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.58.2...v1.58.4

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.4
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 237.21547618310757 | 6.133261155980474 | 0.0 | 1835 | 0 | 175.96439100003636 | 4047.4063279999655 |
| Aggregated | Passed ✅ | 200.0 | 237.21547618310757 | 6.133261155980474 | 0.0 | 1835 | 0 | 175.96439100003636 | 4047.4063279999655 |

v1.58.2

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.58.1...v1.58.2

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.2
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 289.8090936126223 | 6.143711740946042 | 0.0 | 1838 | 0 | 228.12097899998207 | 2196.5017750000015 |
| Aggregated | Passed ✅ | 250.0 | 289.8090936126223 | 6.143711740946042 | 0.0 | 1838 | 0 | 228.12097899998207 | 2196.5017750000015 |

v1.58.1

Compare Source

🚨 Alpha: 1.58.0 includes various performance improvements; we recommend waiting for a stable release before upgrading in production.

What's Changed

Full Changelog: BerriAI/litellm@v1.58.0...v1.58.1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.1
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 294.2978673554448 | 6.045420383532543 | 0.0 | 1809 | 0 | 223.72276400000146 | 3539.4181890000027 |
| Aggregated | Passed ✅ | 250.0 | 294.2978673554448 | 6.045420383532543 | 0.0 | 1809 | 0 | 223.72276400000146 | 3539.4181890000027 |

v1.58.0

Compare Source

v1.58.0 - Alpha Release

🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed

Full Changelog: BerriAI/litellm@v1.57.11...v1.58.0

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.0
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 273.2166563012582 | 6.118315985413586 | 0.0033451700302972037 | 1829 | 1 | 75.1692759999969 | 3821.228761000043 |
| Aggregated | Passed ✅ | 240.0 | 273.2166563012582 | 6.118315985413586 | 0.0033451700302972037 | 1829 | 1 | 75.1692759999969 | 3821.228761000043 |

v1.57.11

Compare Source

v1.57.11 - Alpha Release

🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed

Full Changelog: BerriAI/litellm@v1.57.10...v1.57.11

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.11
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |
| Aggregated | Passed ✅ | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |

v1.57.10

Compare Source

v1.57.10 - Alpha Release

🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

Full Changelog: BerriAI/litellm@v1.57.8...v1.57.10

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.10
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 264.0629029362514 | 6.184926091214754 | 0.0 | 1851 | 0 | 213.62108399998192 | 1622.618584999998 |
| Aggregated | Passed ✅ | 240.0 | 264.0629029362514 | 6.184926091214754 | 0.0 | 1851 | 0 | 213.62108399998192 | 1622.618584999998 |

v1.57.8

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.57.7...v1.57.8

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.8
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 225.29799695056985 | 6.153370698253471 | 0.0 | 1841 | 0 | 177.73327700001573 | 2088.13791099999 |
| Aggregated | Passed ✅ | 210.0 | 225.29799695056985 | 6.153370698253471 | 0.0 | 1841 | 0 | 177.73327700001573 | 2088.13791099999 |

v1.57.7

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.57.5...v1.57.7

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.7
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 218.4749677188173 | 6.216185012755876 | 0.0 | 1860 | 0 | 177.92223199990076 | 3911.6109139999935 |
| Aggregated | Passed ✅ | 200.0 | 218.4749677188173 | 6.216185012755876 | 0.0 | 1860 | 0 | 177.92223199990076 | 3911.6109139999935 |

v1.57.5

Compare Source

🚨🚨 Known issue - do not upgrade - this release has a Windows compatibility issue

Relevant issue: https://github.com/BerriAI/litellm/issues/7677

What's Changed

Full Changelog: BerriAI/litellm@v1.57.4...v1.57.5

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.5
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 282.70225500655766 | 6.115771768544881 | 0.0 | 1830 | 0 | 206.44150200001832 | 3375.4479410000044 |
| Aggregated | Passed ✅ | 230.0 | 282.70225500655766 | 6.115771768544881 | 0.0 | 1830 | 0 | 206.44150200001832 | 3375.4479410000044 |

v1.57.4

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.57.3...v1.57.4

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.4
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 218.7550845980808 | 6.268875045928877 | 0.0 | 1876 | 0 | 170.9488330000113 | 1424.4913769999812 |
| Aggregated | Passed ✅ | 200.0 | 218.7550845980808 | 6.268875045928877 | 0.0 | 1876 | 0 | 170.9488330000113 | 1424.4913769999812 |

v1.57.3

Compare Source

Full Changelog: BerriAI/litellm@v1.57.2...v1.57.3

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.3
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 273.577669278204 | 6.101109800829093 | 0.0 | 1826 | 0 | 209.38834100002168 | 2450.7287210000186 |
| Aggregated | Passed ✅ | 240.0 | 273.577669278204 | 6.101109800829093 | 0.0 | 1826 | 0 | 209.38834100002168 | 2450.7287210000186 |

v1.57.2

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.57.1...v1.57.2

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.2
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 190.0 | 212.2353391522645 | 6.34173008698281 | 0.0 | 1898 | 0 | 174.4866640000282 | 3470.5951910000013 |
| Aggregated | Passed ✅ | 190.0 | 212.2353391522645 | 6.34173008698281 | 0.0 | 1898 | 0 | 174.4866640000282 | 3470.5951910000013 |

v1.57.1

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.57.0...v1.57.1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.1
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 286.96666935492755 | 6.035628429692609 | 0.0 | 1806 | 0 | 226.66728699999794 | 3887.529271000062 |
| Aggregated | Passed ✅ | 250.0 | 286.96666935492755 | 6.035628429692609 | 0.0 | 1806 | 0 | 226.66728699999794 | 3887.529271000062 |

v1.57.0

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.56.10...v1.57.0

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.0
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 212.84027329611826 | 6.1961289027318704 | 0.0 | 1854 | 0 | 174.45147399996586 | 1346.3216149999653 |
| Aggregated | Passed ✅ | 200.0 | 212.84027329611826 | 6.1961289027318704 | 0.0 | 1854 | 0 | 174.45147399996586 | 1346.3216149999653 |

v1.56.10

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.56.9...v1.56.10

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.10
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 268.3301603401397 | 6.21711064668469 | 0.0 | 1861 | 0 | 212.36320399998476 | 3556.7401620000396 |
| Aggregated | Passed ✅ | 230.0 | 268.3301603401397 | 6.21711064668469 | 0.0 | 1861 | 0 | 212.36320399998476 | 3556.7401620000396 |

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone US/Eastern, Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
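
A renovate.json matching the schedule and automerge behavior described above might look like the sketch below. The option names (schedule, timezone, automerge) are standard Renovate settings, but this repository's actual configuration is not shown in the PR, so treat it only as an illustration:

```shell
# Sketch of a Renovate config reproducing the behavior above;
# the repository's real renovate.json may differ.
cat > renovate.json <<'EOF'
{
  "extends": ["config:recommended"],
  "timezone": "US/Eastern",
  "schedule": ["every weekend"],
  "automerge": true
}
EOF
```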


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/litellm-1.x branch 2 times, most recently from 7455cd0 to a2495a7, on January 18, 2025 at 22:01
@renovate renovate bot changed the title from "Update dependency litellm to v1.58.4" to "Update dependency litellm to v1.59.0" on Jan 18, 2025
@renovate renovate bot force-pushed the renovate/litellm-1.x branch from a2495a7 to 605978d on January 18, 2025 at 22:03
@renovate renovate bot merged commit afaa36b into main Jan 19, 2025
11 checks passed
@renovate renovate bot deleted the renovate/litellm-1.x branch January 19, 2025 01:38
@odlbot odlbot mentioned this pull request Jan 21, 2025