
make llm service and options configurable from environment variables #1258

Open · wants to merge 1 commit into base: main

Conversation

@frutik commented Mar 3, 2025

Description

Configuration of the LLM service is externalized into environment variables. Also adds a configurable timeout for interactions with the LLM service.
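As a rough sketch, the externalized configuration could look like the following. The variable names, defaults, and the `LLM_CONFIG` constant are illustrative assumptions, not necessarily what the PR uses:

```ruby
# Hypothetical sketch: read LLM settings from environment variables,
# falling back to sensible defaults when a variable is not set.
# Names (LLM_SERVICE_URL, LLM_MODEL, LLM_TIMEOUT_SECONDS) are illustrative.
LLM_CONFIG = {
  api_url: ENV.fetch('LLM_SERVICE_URL', 'https://api.openai.com/v1/chat/completions'),
  model:   ENV.fetch('LLM_MODEL', 'gpt-3.5-turbo'),
  timeout: Integer(ENV.fetch('LLM_TIMEOUT_SECONDS', '30')), # configurable timeout
}.freeze
```

Any OpenAI-compatible endpoint can then be selected at deploy time without touching code, which is the point of the change.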

Motivation and Context

In the current implementation, all details of the LLM, except the API key, are hardcoded. The proposed changes allow specifying as the LLM service any service compatible with the OpenAI completion API (the industry standard).

How Has This Been Tested?

Tested locally with llama2 on the Ollama runtime — https://github.com/frutik/quepid-ollama/tree/master
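For reference, a local setup like the one above could be wired up with an env fragment along these lines. The variable names are illustrative assumptions; Ollama does expose an OpenAI-compatible API on port 11434:

```shell
# Hypothetical env fragment pointing the app at a local Ollama instance
# (variable names are illustrative, not necessarily those used in the PR).
export LLM_SERVICE_URL=http://localhost:11434/v1/chat/completions
export LLM_MODEL=llama2
export LLM_API_KEY=ollama        # Ollama ignores the key, but a value may be required
export LLM_TIMEOUT_SECONDS=120   # local models can be slow to respond
```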

Screenshots or GIFs (if appropriate):

Types of changes

  • Improvement (non-breaking change which improves existing functionality)

Checklist:

  • [ ] My code follows the code style of this project.
  • [ ] My change requires a change to the documentation.
  • [ ] I have updated the documentation accordingly.
  • [ ] I have read the CONTRIBUTING document.
  • [ ] I have added tests to cover my changes.
  • [ ] All new and existing tests passed.

@epugh (Member) commented Mar 4, 2025

This is really cool... I'll have to mull on this a little, but it really points to how we can make the LLM more pluggable. I am also keeping an eye on projects like https://github.com/activeagents/activeagent and https://github.com/patterns-ai-core/langchainrb to see if they get enough adoption to make sense to depend on.

@epugh (Member) commented Mar 7, 2025

I looked through this a bit, and I'm super excited about it.

So, as we think about embarking on dropping ✨ emojis everywhere to indicate AI (or is it that diamond?), I am realizing we need some good abstractions over the various providers. In the Rails world I have seen ActiveAgent, Langchainrb, and others that purport to do this.

My gut feeling is that we should NOT be too ambitious, and maybe just roll our own support for OpenAI and Llama (in the same vein as this PR and #1256)...

To get this PR in I think we need:

@frutik (Author) commented Mar 9, 2025

the various providers. In the Rails world I have seen ActiveAgent, Langchainrb, and others that purport to do this.

Don't overcomplicate it with unnecessary dependencies. In the end, they all support some kind of HTTP API.
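To make that point concrete, here is a minimal sketch of calling any OpenAI-compatible chat-completion endpoint with plain `Net::HTTP` and a configurable read timeout, with no extra dependencies. The env variable names and the `build_chat_request` helper are illustrative assumptions, not code from the PR:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build a chat-completion request for any OpenAI-compatible HTTP endpoint.
# (Helper name and env variable names are hypothetical.)
def build_chat_request(uri, api_key, model, prompt)
  req = Net::HTTP::Post.new(uri)
  req['Content-Type']  = 'application/json'
  req['Authorization'] = "Bearer #{api_key}" if api_key
  req.body = JSON.generate(
    model:    model,
    messages: [{ role: 'user', content: prompt }]
  )
  req
end

uri     = URI(ENV.fetch('LLM_SERVICE_URL', 'https://api.openai.com/v1/chat/completions'))
request = build_chat_request(uri, ENV['LLM_API_KEY'], 'llama2', 'Hello!')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl      = uri.scheme == 'https'
http.read_timeout = Integer(ENV.fetch('LLM_TIMEOUT_SECONDS', '30')) # configurable timeout
# response = http.request(request)  # not executed here; requires a live endpoint
```

Whether the endpoint is OpenAI, Ollama, or anything else behind the same API shape, only the URL (and possibly the key) changes.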

@frutik (Author) commented Mar 9, 2025

More robust configuration

Well, the 12-factor app manifesto states that the best configuration method is via environment variables. 🙂

So, indeed, this way of configuring is not very flexible (it still allows only one LLM at a time), but it does at least provide a choice of which one to use, from among the services compatible with the OpenAI API.

@epugh (Member) commented Mar 10, 2025

Do you mind if I use your branch and work on some things, or do you want me to make a new branch?

@frutik (Author) commented Mar 11, 2025

@epugh Of course. I'm sorry that I can't dedicate enough time on my end.

@epugh (Member) commented Mar 12, 2025

Couldn't figure out how to push here, so I created #1275.
