Update dependency litellm to v1.59.0 #1972
Merged
This PR contains the following updates:

| Package | Change |
| --- | --- |
| [litellm](https://github.com/BerriAI/litellm) | `1.56.9` -> `1.59.0` |
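Once the update branch is checked out, a quick way to confirm an environment actually picked up the new version (purely illustrative; this repo may pin litellm elsewhere):

```python
from importlib.metadata import version

# Should print "1.59.0" after this update is installed.
print(version("litellm"))
```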
Release Notes
BerriAI/litellm (litellm)
v1.59.0
Compare Source
What's Changed
- `supported_` as base + Anthropic nested pydantic object support by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7844
- `/key/delete` - allow team admin to delete team keys by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7846

Full Changelog: BerriAI/litellm@v1.58.4...v1.59.0
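If you use the proxy's key management, the team-key deletion change above can be tried with a plain HTTP call to the proxy's `/key/delete` endpoint. A minimal sketch; the proxy URL and key values are placeholders, not taken from this changelog:

```python
import requests

PROXY_URL = "http://localhost:4000"           # placeholder proxy address
TEAM_ADMIN_KEY = "sk-team-admin-placeholder"  # a team admin's virtual key
KEY_TO_DELETE = "sk-team-member-placeholder"  # a key belonging to that team

# With v1.59.0, a team admin (not only the proxy admin) should be able to
# delete keys that belong to their own team.
resp = requests.post(
    f"{PROXY_URL}/key/delete",
    headers={"Authorization": f"Bearer {TEAM_ADMIN_KEY}"},
    json={"keys": [KEY_TO_DELETE]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```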
v1.58.4
Compare Source
What's Changed
- `gemini/` frequency_penalty + presence_penalty support by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7776
- datadog llm observability logging integration by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7824
- `dd-trace-run` to litellm ci/cd pipeline + fix bug caused by `dd-trace` patching OpenAI sdk by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7820

New Contributors

Full Changelog: BerriAI/litellm@v1.58.2...v1.58.4
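A rough sketch of the `gemini/` penalty-parameter support noted above; the model name and environment setup are illustrative assumptions, not part of the release notes:

```python
import litellm

# Assumes GEMINI_API_KEY is set in the environment; the model name is illustrative.
response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "Give me three taglines for a coffee shop."}],
    # With v1.58.4, these OpenAI-style penalty params are supported for gemini/ models.
    frequency_penalty=0.5,
    presence_penalty=0.3,
)
print(response.choices[0].message.content)
```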
Docker Run LiteLLM Proxy
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
v1.58.2
Compare Source
What's Changed
- `BaseAWSLLM` - cache IAM role credentials when used by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7775

Full Changelog: BerriAI/litellm@v1.58.1...v1.58.2
v1.58.1
Compare Source
🚨 Alpha - 1.58.0 has various perf improvements; we recommend waiting for a stable release before bumping in production
What's Changed
- `litellm_llm_api_time_to_first_token_metric` not populating for bedrock models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7740
- `health_check_model` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7752

Full Changelog: BerriAI/litellm@v1.58.0...v1.58.1
v1.58.0
Compare Source
v1.58.0 - Alpha Release
🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues
What's Changed
Full Changelog: BerriAI/litellm@v1.57.11...v1.58.0
v1.57.11
Compare Source
v1.57.11 - Alpha Release
🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues
What's Changed
- `verbose_logger.debug` and `_cached_get_model_info_helper` in `_response_cost_calculator` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7720
- `_model_contains_known_llm_provider` in `response_cost_calculator` to check if the model contains a known litellm provider by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7721

Full Changelog: BerriAI/litellm@v1.57.10...v1.57.11
v1.57.10
Compare Source
v1.57.10 - Alpha Release
🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues
What's Changed

- `_get_model_info_helper` for cost tracking by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7703
- `orjson` for reading request body by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7706
- (`aiohttp_openai/`) - fix get_custom_llm_provider by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7705
- `/images/variations` + Topaz API support by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7700
- `_cached_get_model_group_info` to use when trying to get deployment tpm/rpm limits by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7719

Full Changelog: BerriAI/litellm@v1.57.8...v1.57.10
v1.57.8
Compare Source
What's Changed
- `response_cost_calculator` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7674
- `pre_call_check` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7673
- `num_workers` is specified by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7681
- `asyncio.create_task` if user opts into alerting by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7683

Full Changelog: BerriAI/litellm@v1.57.7...v1.57.8
v1.57.7
Compare Source
What's Changed
Full Changelog: BerriAI/litellm@v1.57.5...v1.57.7
v1.57.5
Compare Source
🚨🚨 Known issue - do not upgrade - Windows compatibility issue on this release
Relevant issue: https://github.com/BerriAI/litellm/issues/7677
What's Changed
- `aiohttp_openai/` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7659
- `uvloop` for higher RPS (10%-20% higher RPS) by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7662

Full Changelog: BerriAI/litellm@v1.57.4...v1.57.5
v1.57.4
Compare Source
What's Changed
- `omni-moderation` cost model tracking by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7566
- `migrationJob.enabled` variable within job by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7639

Full Changelog: BerriAI/litellm@v1.57.3...v1.57.4
v1.57.3
Compare Source
Full Changelog: BerriAI/litellm@v1.57.2...v1.57.3
v1.57.2
Compare Source
What's Changed
- `aiohttp_openai/` fixes - allow using `aiohttp_openai/gpt-4o` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7598

Full Changelog: BerriAI/litellm@v1.57.1...v1.57.2
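A minimal sketch of the `aiohttp_openai/gpt-4o` routing mentioned above, assuming an async call path and `OPENAI_API_KEY` in the environment (both assumptions, not stated in the release notes):

```python
import asyncio

import litellm


async def main() -> None:
    # The aiohttp_openai/ prefix routes the request through litellm's
    # aiohttp-based OpenAI handler; assumes OPENAI_API_KEY is set.
    response = await litellm.acompletion(
        model="aiohttp_openai/gpt-4o",
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```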
v1.57.1
Compare Source
What's Changed
- `async_service_success_hook` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7591
- `.copy()` operation by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7564

Full Changelog: BerriAI/litellm@v1.57.0...v1.57.1
v1.57.0
Compare Source
What's Changed
- `init` custom loggers is non blocking by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7554
- `f` string in `add_litellm_data_to_request()` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7558
- `asyncio.create_task` for `service_logger_obj.async_service_success_hook` in pre_call by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7563
- `fireworks_ai/accounts/fireworks/models/deepseek-v3` by @Fredy in https://github.com/BerriAI/litellm/pull/7567

New Contributors

Full Changelog: BerriAI/litellm@v1.56.10...v1.57.0
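The deepseek-v3 entry above can be exercised like any other litellm provider/model string; a minimal sketch, assuming the usual `FIREWORKS_AI_API_KEY` environment variable is set:

```python
import litellm

# Assumes FIREWORKS_AI_API_KEY is set in the environment.
response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/deepseek-v3",
    messages=[{"role": "user", "content": "Summarize DeepSeek-V3 in one sentence."}],
)
print(response.choices[0].message.content)
```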
v1.56.10
Compare Source
What's Changed
- `/models` endpoints for available models based on key by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7538
- `cohere/command-r7b-12-2024` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7553

Full Changelog: BerriAI/litellm@v1.56.9...v1.56.10
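For the key-scoped `/models` change above, a minimal sketch of checking what a given virtual key can see (the proxy URL and key are placeholders):

```python
import requests

PROXY_URL = "http://localhost:4000"  # placeholder proxy address
VIRTUAL_KEY = "sk-placeholder"       # the virtual key whose models we want to list

# With v1.56.10, /models should return only the models this key may use.
resp = requests.get(
    f"{PROXY_URL}/models",
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```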
Configuration
📅 Schedule: Branch creation - "every weekend" in timezone US/Eastern, Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.