Native Function Calling #973
Conversation
* feat: gracefully handling parsing errors for structured outputs
* chore: stringify parsed response
* …on-object-over-string: reverting breaking change for JSON mode
* add 'MISTRAL' to LLMProvider enum
* add Mistral to SAML
* feat(ui): add Mistral integration support
* feat(db): add database schemas
* feat: add groq to SAML
* feat(ui): groq integration
* …response fix: add JSON serialization to SuperRagTool query response
* …'browser' in workflow prompts
Your Render PR Server URL is https://superagent-pr-973.onrender.com. Follow its progress at https://dashboard.render.com/web/srv-cojajtgl5elc73dd8t1g.
```python
self.messages = messages
kwargs["messages"] = self.messages
return await self._completion(**kwargs)
```
@homanp Right now, if the LLM returns tool_calls, we recursively execute the tool calls and pass the results back to the LLM until there are no more tool_calls. Should we add a threshold for the maximum recursion depth (e.g. 10)?
10 sounds like a lot?
> 10 sounds like a lot?
Yeah, I know it's a lot, but I intentionally chose that to avoid false negatives, i.e. stopping function calling when we actually need to continue.
As a reference point, LangChain has chosen 15.
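For illustration, a depth-limited version of that loop could look roughly like the sketch below. This is not the PR's actual code: it assumes an OpenAI-style async client, and `complete_with_tools`, `execute_tool`, and `MAX_TOOL_CALL_DEPTH` are hypothetical names.

```python
import json

MAX_TOOL_CALL_DEPTH = 10  # the threshold proposed in this thread


async def complete_with_tools(client, messages, tools, depth=0):
    """Depth-limited version of the recursive tool-call loop discussed above."""
    response = await client.chat.completions.create(
        model="gpt-4", messages=messages, tools=tools
    )
    message = response.choices[0].message

    # Base cases: the model produced a final answer, or we hit the cap.
    if not message.tool_calls or depth >= MAX_TOOL_CALL_DEPTH:
        return response

    messages.append(message)
    for tool_call in message.tool_calls:
        # execute_tool is a hypothetical dispatcher for the requested function.
        result = await execute_tool(
            tool_call.function.name, json.loads(tool_call.function.arguments)
        )
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            }
        )

    # Recurse with an incremented depth so the loop always terminates.
    return await complete_with_tools(client, messages, tools, depth=depth + 1)
```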
* …if tool calling is supported
@elisalimli apologies for commenting on a merged PR, but I'm wondering if it works with Ollama as well.
If you deploy it to a server, we can support that. As of now, we do not support locally running models.
Yes, either an Ollama server directly or through LiteLLM.
If you deploy it somewhere, yes, but not locally.
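For reference, calling a deployed Ollama server through LiteLLM looks roughly like this; the host URL and model name below are placeholders, not endpoints from this project.

```python
import litellm

# The "ollama/" prefix routes the call to an Ollama server; point api_base
# at wherever the server is deployed (placeholder host shown here).
response = litellm.completion(
    model="ollama/llama2",
    api_base="http://my-ollama-host:11434",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```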
This PR adds native function calling support for models from Anthropic, Mistral, Bedrock, and Groq, instead of leveraging OpenAI's GPT model for function calling.
We still use GPT for function calling with models that do not support it natively, such as Perplexity.
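As a rough illustration of that routing (not the PR's actual code), LiteLLM exposes a per-model capability check that could drive the choice between the native path and the GPT fallback; the `supports_native_tools` wrapper is a hypothetical name.

```python
import litellm


def supports_native_tools(model: str) -> bool:
    """Return True if the model can handle tool calls natively."""
    try:
        # LiteLLM's capability lookup; models without native support
        # would be routed to the GPT-based fallback instead.
        return litellm.supports_function_calling(model=model)
    except Exception:
        # Unknown or unmapped models: take the GPT fallback path to be safe.
        return False
```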
Depends on:
* #965
* BerriAI/litellm#3124