
feat: Auto populate llm model in config settings #56

Conversation

kshitij79 (Contributor) commented Aug 20, 2024

Closes #45

Changes

  • Ensured a default LLM model is set when selecting any provider (see the sketch after this list).
  • Added sanity checks and fixed incorrect LLM URLs.
  • Added error handling and logging for easier debugging.
  • Adjusted the UI to dynamically update the LLM model based on the selected provider.
  • Prevented saving workspace settings when no workspace is open.
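
A minimal sketch of how the auto-population could work in a VS Code extension. The configuration section name, setting key, and default model IDs here are illustrative assumptions, not necessarily what this PR uses.

```typescript
import * as vscode from 'vscode';

// Hypothetical provider-to-default-model map; actual model IDs may differ.
const DEFAULT_MODELS: Record<string, string> = {
  openai: 'gpt-3.5-turbo',
  mistralai: 'mistral-small-latest',
};

export async function applyDefaultModel(provider: string): Promise<void> {
  // 'myExtension.llm' is an assumed configuration section name.
  const config = vscode.workspace.getConfiguration('myExtension.llm');
  const currentModel = config.get<string>('model');

  // Only auto-populate when the user has not chosen a model explicitly.
  if (!currentModel && DEFAULT_MODELS[provider]) {
    // Writing workspace settings fails when no folder is open,
    // so fall back to global (user) settings in that case.
    const target = vscode.workspace.workspaceFolders?.length
      ? vscode.ConfigurationTarget.Workspace
      : vscode.ConfigurationTarget.Global;
    try {
      await config.update('model', DEFAULT_MODELS[provider], target);
    } catch (err) {
      console.error(`Failed to set default LLM model for ${provider}:`, err);
    }
  }
}
```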

Flags

  • Testing is needed to verify that the default LLM model is auto-populated correctly and that no regressions occur with any provider.

Screenshots or Video

Related Issues

Author Checklist

  • Ensure you provide a DCO sign-off for your commits using the --signoff option of git commit.
  • Vital features and changes captured in unit and/or integration tests
  • Commit messages follow AP format
  • Extend the documentation, if necessary
  • Merging to master from fork:branchname

dselman closed this Nov 1, 2024
Successfully merging this pull request may close these issues.

LLM Model Field Behavior Issues for OpenAI and MistralAI Providers in Configuration Settings