feat: add GitHub Copilot as LLM provider #383
Open
CodingIsBliss wants to merge 4 commits into 666ghj:main from
Conversation
Enable using a GitHub Copilot subscription to power MiroFish's LLM calls, eliminating the need for a separate API key.

How it works:
- Exchanges a GitHub token for a short-lived Copilot API token via `api.github.com/copilot_internal/v2/token` (same flow as OpenClaw)
- Derives the OpenAI-compatible base URL from the token's `proxy-ep` field
- Caches tokens in memory + disk, auto-refreshes before expiry
- Thread-safe for concurrent simulation agent calls

Configuration (in `.env`):
- `LLM_PROVIDER=github-copilot`
- `GITHUB_TOKEN=ghp_xxx` (or `GH_TOKEN` / `COPILOT_GITHUB_TOKEN`)
- `LLM_MODEL_NAME=gpt-4o` (or any Copilot-supported model)

Auto-detection: if no `LLM_API_KEY` is set but a GitHub token is found, Copilot mode activates automatically.

Files changed:
- NEW: `backend/app/utils/copilot_auth.py` — token exchange + caching
- MOD: `backend/app/utils/llm_client.py` — Copilot auto-detection + refresh
- MOD: `backend/app/config.py` — `LLM_PROVIDER` config key
- MOD: `backend/app/utils/__init__.py` — export new module
- MOD: `.env.example` — Copilot setup instructions
The GitHub Copilot API rejects requests that are missing the Editor-Version, User-Agent, and X-Github-Api-Version headers. Pass these via the OpenAI SDK's default_headers when in Copilot mode.
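A minimal sketch of wiring those headers through the SDK. The helper name `copilot_client_kwargs` and the specific header values (editor string, API version date) are illustrative assumptions, not values taken from the PR:

```python
# Extra headers Copilot's API expects, passed via the OpenAI SDK's
# `default_headers` constructor parameter. Values below are placeholders.
COPILOT_HEADERS = {
    "Editor-Version": "vscode/1.95.0",     # any editor/version string
    "User-Agent": "MiroFish/1.0",
    "X-Github-Api-Version": "2023-07-07",  # assumed API version date
}


def copilot_client_kwargs(api_token: str, base_url: str) -> dict:
    """Keyword arguments for openai.OpenAI(...) in Copilot mode.

    Usage (assuming openai>=1.0 is installed):
        client = openai.OpenAI(**copilot_client_kwargs(tok, url))
    """
    return {
        "api_key": api_token,      # the short-lived Copilot token
        "base_url": base_url,      # derived from the proxy-ep field
        "default_headers": COPILOT_HEADERS,
    }
```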
…gGenerator
These services create their own OpenAI clients directly, bypassing LLMClient. Add the same Copilot auto-detection and required headers to both.
The camel-ai/OASIS ModelFactory creates its own OpenAI clients that don't inherit custom headers. Setting openai.default_headers at module level ensures all OpenAI clients created in the process include the required Editor-Version, User-Agent, and X-Github-Api-Version headers for GitHub Copilot authentication.
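A hedged sketch of that module-level approach. The header values are placeholders, and whether a given client actually picks up module-level defaults depends on how it is constructed, so this mirrors the comment's claim rather than guaranteeing it:

```python
# Set default headers once at import time so code paths that read the
# openai module's defaults (such as camel-ai/OASIS's ModelFactory, per
# the comment above) include Copilot's required headers.
REQUIRED_COPILOT_HEADERS = {
    "Editor-Version": "vscode/1.95.0",     # placeholder value
    "User-Agent": "MiroFish/1.0",          # placeholder value
    "X-Github-Api-Version": "2023-07-07",  # placeholder value
}

try:
    import openai
    # openai>=1.0 exposes module-level defaults used by its global client.
    openai.default_headers = REQUIRED_COPILOT_HEADERS
except ImportError:
    # Sketch degrades gracefully when the SDK is not installed.
    openai = None
```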
koyouko added a commit to koyouko/MiroFish that referenced this pull request on Mar 30, 2026
… Copilot)
Replaces the single-provider LLM client with a unified 5-provider client. Adds thread-safety improvements to simulation_runner from upstream PR 666ghj#389, GitHub Copilot OAuth token support from upstream PR 666ghj#383, and translates all backend Python modules to English. Providers supported: Anthropic (Claude), OpenAI, GitHub Copilot (OAuth), Ollama (local), and the original MiniMax/GLM default.
Co-Authored-By: koyouko <koyouko@users.noreply.github.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
Enable using a GitHub Copilot subscription to power MiroFish's LLM calls — no separate API key needed.
Motivation
Many developers already have a GitHub Copilot subscription. This PR lets them use that same subscription to run MiroFish simulations, removing the need to set up and pay for a separate LLM API provider.
This uses the same token exchange mechanism as OpenClaw's built-in `github-copilot` provider.
How it works
- Exchanges a GitHub token for a short-lived Copilot API token via `api.github.com/copilot_internal/v2/token`
- Derives the OpenAI-compatible base URL from the token's `proxy-ep` field
- Activates when `LLM_PROVIDER=github-copilot` is set (or when no `LLM_API_KEY` exists but a GitHub token is detected)
Configuration
Also supports the `GH_TOKEN` and `COPILOT_GITHUB_TOKEN` env vars (priority: `COPILOT_GITHUB_TOKEN` > `GH_TOKEN` > `GITHUB_TOKEN`).
Files changed
- `backend/app/utils/copilot_auth.py`
- `backend/app/utils/llm_client.py`
- `backend/app/config.py` (`LLM_PROVIDER` config key)
- `backend/app/utils/__init__.py`
- `.env.example`
Notes
- Uses `urllib` for the token exchange (no new dependencies)
- The `copilot_internal` API is undocumented but stable (used by VS Code Copilot + OpenClaw)
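The env-var priority described above can be sketched as a simple first-match lookup; `resolve_github_token` is a hypothetical helper name, not the PR's actual function:

```python
# Sketch of the token lookup priority:
# COPILOT_GITHUB_TOKEN > GH_TOKEN > GITHUB_TOKEN.
import os
from typing import Optional

_TOKEN_VARS = ("COPILOT_GITHUB_TOKEN", "GH_TOKEN", "GITHUB_TOKEN")


def resolve_github_token(env=None) -> Optional[str]:
    """Return the first non-empty GitHub token, highest priority first."""
    env = os.environ if env is None else env
    for var in _TOKEN_VARS:
        value = env.get(var)
        if value:
            return value
    return None
```

Passing `env` explicitly keeps the helper testable without mutating the real process environment.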