feat: multi-provider LLM support via Prompture #463

jhd3197 wants to merge 2 commits into 666ghj:main from …
Conversation
Add optional Prompture integration for 12+ LLM providers (LM Studio, Ollama, Claude, Groq, Kimi/Moonshot, etc.) as a drop-in backend. Zero breaking changes: falls back to the existing OpenAI SDK client when Prompture is not installed.

- Rewrite `llm_client.py` with dual-backend architecture
- Update `.env.example` with provider/model format examples
- Add multi-provider table to README Quick Start section
- Add `prompture` as optional dependency in `requirements.txt`

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
This PR adds optional multi-provider LLM support by integrating Prompture as an alternative backend, while keeping the existing OpenAI SDK-based client as the default when Prompture isn’t installed.
Changes:
- Added a dual-backend `LLMClient` that uses Prompture when available and otherwise falls back to the OpenAI SDK.
- Expanded documentation/examples to show `provider/model` configuration for multiple LLM providers.
- Documented Prompture as an optional dependency in backend requirements.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `backend/app/utils/llm_client.py` | Adds Prompture import/initialization path and routes chat calls through Prompture when installed. |
| `README.md` | Documents optional Prompture install and provider/model configuration examples. |
| `backend/requirements.txt` | Notes Prompture as an optional commented dependency. |
| `.env.example` | Adds commented examples for Prompture provider/model configuration. |
```python
self.api_key = api_key or Config.LLM_API_KEY
self.base_url = base_url or Config.LLM_BASE_URL
self.model = model or Config.LLM_MODEL_NAME

if _HAS_PROMPTURE:
    self._init_prompture()
else:
    self._init_openai()
```
When Prompture is installed, the client always selects the Prompture backend regardless of LLM_MODEL_NAME. This means users with Prompture installed but still using a plain model name (e.g., qwen-plus for an OpenAI-compatible base URL) will be routed through Prompture instead of the existing OpenAI SDK path. Consider choosing the backend based on the model string format (e.g., only use Prompture when LLM_MODEL_NAME contains a provider prefix like provider/...), or add an explicit config flag to force the OpenAI backend.
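The backend-selection heuristic suggested above could be sketched as a small helper. This is only an illustration, not code from the PR; the function name and provider set are hypothetical, and the real set of providers Prompture supports may differ:

```python
# Hypothetical helper: only route through Prompture when the model string uses
# an explicit "provider/model" prefix; plain model names (e.g. "qwen-plus")
# keep the existing OpenAI SDK path even when Prompture is installed.
KNOWN_PROVIDERS = {"openai", "ollama", "lmstudio", "claude", "groq", "moonshot"}

def should_use_prompture(model_name: str, has_prompture: bool) -> bool:
    if not has_prompture:
        return False
    provider, sep, rest = model_name.partition("/")
    return bool(sep) and provider.lower() in KNOWN_PROVIDERS and bool(rest)
```

An explicit config flag (e.g. forcing the OpenAI backend) could then override this heuristic for edge cases.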
```python
self._env = ProviderEnvironment(**env_kwargs) if env_kwargs else None
self._driver_options: Dict[str, Any] = {}
if self.base_url:
    self._driver_options["base_url"] = self.base_url
```
_init_prompture() always forwards base_url into Prompture driver options. Because Config.LLM_BASE_URL has a non-empty default, self.base_url will almost always be set (even when the user didn’t intend to override endpoints). This can unintentionally force non-OpenAI providers (e.g., Claude/Groq/Google) to use an OpenAI-compatible base URL. Prefer only passing base_url to Prompture when it’s explicitly configured for OpenAI-compatible/local providers, or gate it based on the parsed provider.
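One way to gate `base_url` on the parsed provider, as suggested above, is sketched below. The function and the set of OpenAI-compatible providers are assumptions for illustration, not part of the PR:

```python
from typing import Optional

# Hypothetical gate: only forward base_url for OpenAI-compatible or local
# providers, so cloud providers (Claude, Groq, Google, ...) keep their
# native endpoints instead of inheriting the default OpenAI-style URL.
OPENAI_COMPATIBLE = {"openai", "lmstudio", "ollama", "vllm"}

def driver_options_for(provider: str, base_url: Optional[str]) -> dict:
    opts: dict = {}
    if base_url and provider.lower() in OPENAI_COMPATIBLE:
        opts["base_url"] = base_url
    return opts
```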
```python
if _HAS_PROMPTURE:
    response = self._chat_prompture(messages, temperature, max_tokens)
else:
    response = self._chat_openai(
        messages, temperature, max_tokens,
        response_format={"type": "json_object"},
    )
```
chat_json() enforces JSON mode only on the OpenAI backend (via response_format={"type": "json_object"}), but the Prompture path just calls _chat_prompture() without any equivalent JSON enforcement. This creates inconsistent behavior and will likely increase JSON parse failures when Prompture is installed. Add a Prompture-side option/parameter to request strict JSON output (or a schema) if supported, or fall back to the OpenAI backend for chat_json() when strict JSON mode isn’t available.
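The fallback option described above could look roughly like this. This is a sketch only; the helper name and the callback shape are hypothetical, and whether Prompture itself offers a strict-JSON option would need checking against its docs:

```python
import json

def parse_json_or_fallback(raw: str, openai_fallback):
    """Try to parse the Prompture response as JSON; if parsing fails,
    retry through the OpenAI backend, where strict JSON mode
    (response_format={"type": "json_object"}) is available.

    openai_fallback is a zero-argument callable returning raw text.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(openai_fallback())
```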
```python
# Inject system prompt
system_parts = [m["content"] for m in messages if m["role"] == "system"]
if system_parts:
    conv._messages.append({"role": "system", "content": "\n".join(system_parts)})

# Replay prior turns
non_system = [m for m in messages if m["role"] != "system"]
for msg in non_system[:-1]:
    conv._messages.append({"role": msg["role"], "content": msg["content"]})
```
_chat_prompture() mutates conv._messages directly. Because this is a private/internal attribute (leading underscore), it’s not part of a stable public API and may break on Prompture upgrades. Prefer using Prompture’s public methods for adding system/history messages (or constructing the conversation with initial messages) rather than appending to _messages directly.
Install [Prompture](https://github.com/jhd3197/prompture) to unlock 12+ LLM providers beyond OpenAI-compatible APIs:

```bash
pip install prompture
```
README lists uv as the required Python package manager in Prerequisites, but the new optional Prompture instructions use pip install prompture. To keep setup instructions consistent, consider using uv pip install prompture (or mention both) so users don’t end up installing into a different environment than the one created by npm run setup:backend.
```diff
-pip install prompture
+uv pip install prompture
```
…hand-rolled regexes

chat() and chat_json() now delegate think-tag stripping and JSON cleanup to Prompture's built-in utilities (strip_think_tags, clean_json_text). Manual regexes are kept only in the OpenAI fallback path. Adds an LM Studio integration test script.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
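For reference, the kind of hand-rolled cleanup kept on the OpenAI fallback path might look like the sketch below. The regex and function name are assumptions for illustration; the PR's actual fallback regexes may differ:

```python
import re

# Strip <think>...</think> blocks emitted by reasoning models before
# returning the response text; DOTALL lets the block span multiple lines.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think_tags_fallback(text: str) -> str:
    return THINK_TAG_RE.sub("", text).strip()
```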
Summary

Install with `pip install prompture` and switch to the `"provider/model"` format in `.env`.

Changes

- `backend/app/utils/llm_client.py`
- `.env.example`
- `README.md`
- `backend/requirements.txt`: `prompture` as commented optional dependency

Why

MiroFish currently only supports OpenAI-compatible APIs. Users who want to use local models (Ollama, LM Studio), Anthropic Claude, Groq, or other providers have to manually modify `llm_client.py`. This PR makes that a one-line config change.

Test plan

- `lmstudio/local-model`
- `moonshot/moonshot-v1-8k` (Kimi)