
Native Function Calling #973

Merged
elisalimli merged 35 commits into refactor/agent-classes from feat/native-function-calling on Apr 29, 2024

Conversation

elisalimli (Contributor) commented Apr 22, 2024

This PR adds native function calling support for models from Anthropic, Mistral, Bedrock, and Groq, instead of leveraging OpenAI's GPT model for function calling.

We still use GPT for function calling for models that do not support it natively, such as Perplexity.

Depends on

#965 BerriAI/litellm#3124
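
For illustration only, here is a minimal sketch of the routing described above, built on LiteLLM (which this PR depends on). The provider set, model names, and function names are assumptions, not the PR's actual implementation.

```python
# Hypothetical sketch: use native tool/function calling where the provider
# supports it, otherwise fall back to an OpenAI GPT model for the
# function-calling step (e.g. for Perplexity).
from litellm import acompletion

NATIVE_TOOL_PROVIDERS = {"openai", "anthropic", "mistral", "bedrock", "groq"}  # assumed set


async def completion_with_tools(provider: str, model: str, messages: list, tools: list):
    if provider.lower() in NATIVE_TOOL_PROVIDERS:
        # The provider accepts OpenAI-style tool schemas directly.
        return await acompletion(model=model, messages=messages, tools=tools)
    # Fallback: let a GPT model perform the tool-selection step instead.
    return await acompletion(model="gpt-3.5-turbo", messages=messages, tools=tools)
```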

homanp and others added 30 commits April 16, 2024 10:36

* feat: gracefully handling parsing errors for structured outputs
* chore: stringify parsed response
* …on-object-over-string (reverting breaking change for JSON mode)
* add 'MISTRAL' to LLMProvider enum
* add Mistral to SAML
* feat(ui): add Mistral integration support
* feat(db): add database schemas
* feat: add groq to SAML
* feat(ui): groq integration
* …response (fix: add JSON serialization to SuperRagTool query response)

@elisalimli elisalimli changed the base branch from main to refactor/agent-classes April 22, 2024 18:18
@elisalimli elisalimli requested a review from homanp April 22, 2024 18:22
@elisalimli elisalimli marked this pull request as draft April 22, 2024 18:23

self.messages = messages
kwargs["messages"] = self.messages
return await self._completion(**kwargs)
elisalimli (Contributor Author) commented:

@homanp Right now, if the LLM returns tool_calls, we recursively execute the tool calls and pass the results back to the LLM until there are no more tool_calls. Should we add a threshold for the maximum recursion depth (e.g. 10)?

Collaborator replied:

10 sounds like a lot?

elisalimli (Contributor Author) commented Apr 23, 2024

> 10 sounds like a lot?

Yeah, I know it's a lot, but I intentionally chose that to avoid false negatives: stopping function calling when we actually need to continue.

As a reference point, LangChain has chosen 15.
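
As a rough illustration of the loop being discussed (not the PR's actual code), the sketch below keeps calling the model while it returns tool_calls, appends the tool results to the message history, and stops at a maximum depth; execute_tool and the depth constant are assumed names.

```python
# Hypothetical sketch of bounded, iterative tool-call execution.
import json

from litellm import acompletion

MAX_TOOL_CALL_DEPTH = 10  # the threshold discussed above; LangChain reportedly uses 15


async def run_with_tools(model: str, messages: list, tools: list, execute_tool):
    for _ in range(MAX_TOOL_CALL_DEPTH):
        response = await acompletion(model=model, messages=messages, tools=tools)
        message = response.choices[0].message
        tool_calls = getattr(message, "tool_calls", None)
        if not tool_calls:
            # No more tool calls: the model produced its final answer.
            return response
        # Record the assistant turn, then one tool result per call, and loop
        # so the model can see the results on the next iteration.
        messages.append(message)
        for call in tool_calls:
            result = await execute_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
    raise RuntimeError("Exceeded maximum tool-call depth")
```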

@elisalimli elisalimli marked this pull request as ready for review April 24, 2024 08:25
@elisalimli elisalimli merged commit 50b96de into refactor/agent-classes Apr 29, 2024
8 checks passed
arsaboo commented May 1, 2024

@elisalimli apologies for commenting on a merged PR, but wondering if it works with Ollama as well.

@elisalimli elisalimli deleted the feat/native-function-calling branch May 2, 2024 03:23
elisalimli (Contributor Author) commented May 2, 2024

> @elisalimli apologies for commenting on a merged PR, but wondering if it works with Ollama as well.

If you deploy it to a server, then yes. As of now, we do not support locally running models.

arsaboo commented May 2, 2024

Yes, an Ollama server directly or through LiteLLM.

elisalimli (Contributor Author) commented May 2, 2024

> Yes, an Ollama server directly or through LiteLLM.

If you deploy it somewhere, yes, but not locally.
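
For context, here is a hedged sketch of the "deployed somewhere" case: LiteLLM can be pointed at a hosted Ollama endpoint via api_base. The host URL and model name below are made up, and whether tool calls work end to end still depends on the model and on LiteLLM's Ollama support.

```python
# Assumption: an Ollama server is reachable at a hosted URL rather than on localhost.
from litellm import completion

response = completion(
    model="ollama/llama3",                   # hypothetical model name
    api_base="https://ollama.example.com",   # hosted Ollama endpoint, not a local one
    messages=[{"role": "user", "content": "Hello from Superagent"}],
)
print(response.choices[0].message.content)
```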
