feat: multi-provider LLM support via Prompture #463

Open
jhd3197 wants to merge 2 commits into 666ghj:main from jhd3197:feat/prompture-multi-provider

Conversation

jhd3197 commented Apr 4, 2026

Summary

  • Adds optional Prompture integration as a drop-in LLM backend
  • 12+ providers out of the box: LM Studio, Ollama, Claude, Groq, Kimi/Moonshot, Google, OpenRouter, and more
  • Zero breaking changes — when Prompture is not installed, the existing OpenAI SDK client works exactly as before
  • Users just pip install prompture and switch to "provider/model" format in .env
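As a sketch, the one-line switch might look like this in `.env` (the `LLM_MODEL_NAME` variable and the `qwen-plus` example appear elsewhere in this PR; the exact commented-out lines in `.env.example` may differ):

```bash
# Before: plain model name, served through the existing OpenAI SDK path
LLM_MODEL_NAME=qwen-plus

# After (with Prompture installed): provider/model format, e.g.
# LLM_MODEL_NAME=lmstudio/local-model
# LLM_MODEL_NAME=moonshot/moonshot-v1-8k
```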

Changes

| File | What changed |
| --- | --- |
| backend/app/utils/llm_client.py | Dual-backend architecture: tries Prompture first, falls back to OpenAI SDK |
| .env.example | Added provider/model format examples for all supported providers |
| README.md | Added multi-provider table in Quick Start section |
| backend/requirements.txt | Added prompture as commented optional dependency |

Why

MiroFish currently only supports OpenAI-compatible APIs. Users who want to use local models (Ollama, LM Studio), Anthropic Claude, Groq, or other providers have to manually modify llm_client.py. This PR makes that a one-line config change.

Test plan

  • Verify existing OpenAI SDK path works unchanged (no Prompture installed)
  • Install Prompture and test with lmstudio/local-model
  • Test with moonshot/moonshot-v1-8k (Kimi)
  • Run full simulation pipeline end-to-end

Add optional Prompture integration for 12+ LLM providers (LM Studio,
Ollama, Claude, Groq, Kimi/Moonshot, etc.) as a drop-in backend.
Zero breaking changes — falls back to the existing OpenAI SDK client
when Prompture is not installed.

- Rewrite llm_client.py with dual-backend architecture
- Update .env.example with provider/model format examples
- Add multi-provider table to README Quick Start section
- Add prompture as optional dependency in requirements.txt

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 4, 2026 05:18
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Apr 4, 2026
@dosubot dosubot bot added the enhancement New feature or request label Apr 4, 2026

Copilot AI left a comment


Pull request overview

This PR adds optional multi-provider LLM support by integrating Prompture as an alternative backend, while keeping the existing OpenAI SDK-based client as the default when Prompture isn’t installed.

Changes:

  • Added a dual-backend LLMClient that uses Prompture when available and otherwise falls back to the OpenAI SDK.
  • Expanded documentation/examples to show provider/model configuration for multiple LLM providers.
  • Documented Prompture as an optional dependency in backend requirements.
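The import-based backend selection described above can be sketched as follows (a minimal illustration of the pattern, not the PR's exact code; the method bodies are stubbed):

```python
# Minimal sketch of the dual-backend pattern: prefer Prompture when it is
# importable, otherwise fall back to the OpenAI SDK path.
try:
    import prompture  # optional dependency
    _HAS_PROMPTURE = True
except ImportError:
    _HAS_PROMPTURE = False


class LLMClient:
    def __init__(self, model: str):
        self.model = model
        self.backend = "prompture" if _HAS_PROMPTURE else "openai"

    def chat(self, messages):
        if self.backend == "prompture":
            return self._chat_prompture(messages)
        return self._chat_openai(messages)

    # Stubs standing in for the real backend implementations.
    def _chat_prompture(self, messages): ...
    def _chat_openai(self, messages): ...
```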

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| backend/app/utils/llm_client.py | Adds Prompture import/initialization path and routes chat calls through Prompture when installed. |
| README.md | Documents optional Prompture install and provider/model configuration examples. |
| backend/requirements.txt | Notes Prompture as an optional commented dependency. |
| .env.example | Adds commented examples for Prompture provider/model configuration. |


Comment on lines 62 to +69

```python
self.api_key = api_key or Config.LLM_API_KEY
self.base_url = base_url or Config.LLM_BASE_URL
self.model = model or Config.LLM_MODEL_NAME

if _HAS_PROMPTURE:
    self._init_prompture()
else:
    self._init_openai()
```

Copilot AI Apr 4, 2026


When Prompture is installed, the client always selects the Prompture backend regardless of LLM_MODEL_NAME. This means users with Prompture installed but still using a plain model name (e.g., qwen-plus for an OpenAI-compatible base URL) will be routed through Prompture instead of the existing OpenAI SDK path. Consider choosing the backend based on the model string format (e.g., only use Prompture when LLM_MODEL_NAME contains a provider prefix like provider/...), or add an explicit config flag to force the OpenAI backend.
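A prefix-based check along these lines would address the concern (a hypothetical helper for illustration, not code from the PR):

```python
def wants_prompture(model_name: str) -> bool:
    """Route through Prompture only when the model string carries a
    provider prefix like 'ollama/llama3'; plain names such as 'qwen-plus'
    stay on the existing OpenAI SDK path."""
    return "/" in model_name
```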

Comment on lines +81 to +85

```python
self._env = ProviderEnvironment(**env_kwargs) if env_kwargs else None
self._driver_options: Dict[str, Any] = {}
if self.base_url:
    self._driver_options["base_url"] = self.base_url
```

Copilot AI Apr 4, 2026


_init_prompture() always forwards base_url into Prompture driver options. Because Config.LLM_BASE_URL has a non-empty default, self.base_url will almost always be set (even when the user didn’t intend to override endpoints). This can unintentionally force non-OpenAI providers (e.g., Claude/Groq/Google) to use an OpenAI-compatible base URL. Prefer only passing base_url to Prompture when it’s explicitly configured for OpenAI-compatible/local providers, or gate it based on the parsed provider.

Comment on lines +148 to +154

```python
if _HAS_PROMPTURE:
    response = self._chat_prompture(messages, temperature, max_tokens)
else:
    response = self._chat_openai(
        messages, temperature, max_tokens,
        response_format={"type": "json_object"},
    )
```

Copilot AI Apr 4, 2026


chat_json() enforces JSON mode only on the OpenAI backend (via response_format={"type": "json_object"}), but the Prompture path just calls _chat_prompture() without any equivalent JSON enforcement. This creates inconsistent behavior and will likely increase JSON parse failures when Prompture is installed. Add a Prompture-side option/parameter to request strict JSON output (or a schema) if supported, or fall back to the OpenAI backend for chat_json() when strict JSON mode isn’t available.
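One defensive pattern that does not assume any Prompture-specific JSON option exists: parse the reply strictly and raise on failure, so callers can retry or fall back to the OpenAI JSON-mode path (a sketch, not the PR's code):

```python
import json


def parse_json_reply(text: str) -> dict:
    """Best-effort JSON extraction for backends without a strict JSON
    mode. Strips a common ```json code-fence wrapper before parsing;
    raises ValueError/JSONDecodeError on failure so callers can retry."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop surrounding backticks, then a leading "json" language tag.
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]
    return json.loads(cleaned)
```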

Comment on lines +177 to +186

```python
# Inject system prompt
system_parts = [m["content"] for m in messages if m["role"] == "system"]
if system_parts:
    conv._messages.append({"role": "system", "content": "\n".join(system_parts)})

# Replay prior turns
non_system = [m for m in messages if m["role"] != "system"]
for msg in non_system[:-1]:
    conv._messages.append({"role": msg["role"], "content": msg["content"]})
```

Copilot AI Apr 4, 2026


_chat_prompture() mutates conv._messages directly. Because this is a private/internal attribute (leading underscore), it’s not part of a stable public API and may break on Prompture upgrades. Prefer using Prompture’s public methods for adding system/history messages (or constructing the conversation with initial messages) rather than appending to _messages directly.
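The message-ordering logic itself is plain list manipulation and can be kept outside the conversation object, so only a single hand-off to Prompture's public API is needed (whatever that call is). A standalone sketch of that logic:

```python
def build_history(messages: list[dict]) -> list[dict]:
    """Merge all system messages into one leading system turn, then
    replay every prior non-system turn. The final user turn is excluded
    here because it is sent as the actual prompt."""
    history: list[dict] = []
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    if system_parts:
        history.append({"role": "system", "content": "\n".join(system_parts)})
    non_system = [m for m in messages if m["role"] != "system"]
    history.extend(non_system[:-1])
    return history
```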

Install [Prompture](https://github.com/jhd3197/prompture) to unlock 12+ LLM providers beyond OpenAI-compatible APIs:

```bash
pip install prompture
```

Copilot AI Apr 4, 2026


README lists uv as the required Python package manager in Prerequisites, but the new optional Prompture instructions use pip install prompture. To keep setup instructions consistent, consider using uv pip install prompture (or mention both) so users don’t end up installing into a different environment than the one created by npm run setup:backend.

Suggested change:

```diff
-pip install prompture
+uv pip install prompture
```

…hand-rolled regexes

chat() and chat_json() now delegate think-tag stripping and JSON
cleanup to Prompture's built-in utilities (strip_think_tags,
clean_json_text).  Manual regexes are kept only in the OpenAI
fallback path.  Adds LM Studio integration test script.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
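The hand-rolled stripping kept on the OpenAI fallback path presumably looks something like this (a sketch; the PR's actual regex may differ):

```python
import re

# Remove <think>...</think> reasoning blocks that some local models
# (e.g. served via LM Studio or Ollama) emit before their final answer.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)


def strip_think_tags(text: str) -> str:
    """Drop reasoning blocks and surrounding whitespace, keeping only
    the model's final answer."""
    return THINK_TAG_RE.sub("", text).strip()
```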
