feat: add GitHub Copilot as LLM provider#383

Open
CodingIsBliss wants to merge 4 commits into 666ghj:main from CodingIsBliss:feat/github-copilot-provider

Conversation

@CodingIsBliss

Summary

Enable using a GitHub Copilot subscription to power MiroFish's LLM calls — no separate API key needed.

Motivation

Many developers already have a GitHub Copilot subscription. This PR lets them use that same subscription to run MiroFish simulations, removing the need to set up and pay for a separate LLM API provider.

This uses the same token exchange mechanism as OpenClaw's built-in github-copilot provider.

How it works

  1. Exchanges a GitHub token for a short-lived Copilot API token via api.github.com/copilot_internal/v2/token
  2. Derives the OpenAI-compatible base URL from the token's proxy-ep field
  3. Caches tokens in memory + on disk, auto-refreshes before expiry
  4. Thread-safe for concurrent simulation agent calls
  5. Zero changes needed for existing users — only activates when LLM_PROVIDER=github-copilot is set (or when no LLM_API_KEY exists but a GitHub token is detected)
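The token-exchange flow above can be sketched roughly as follows. This is a minimal sketch based on the PR description, not the PR's actual code: the endpoint URL and the proxy-ep field are named in the PR, but the JSON field names (`token`, `expires_at`), the semicolon-separated key=value layout of the token string, and the 60-second refresh margin are assumptions.

```python
import json
import threading
import time
import urllib.request

# Undocumented endpoint named in the PR (same flow VS Code Copilot uses).
COPILOT_TOKEN_URL = "https://api.github.com/copilot_internal/v2/token"

_lock = threading.Lock()
_cache = {"token": None, "expires_at": 0.0}


def exchange_github_token(github_token: str) -> dict:
    """Exchange a GitHub token for a short-lived Copilot API token (stdlib only)."""
    req = urllib.request.Request(
        COPILOT_TOKEN_URL,
        headers={"Authorization": f"token {github_token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


def base_url_from_token(copilot_token: str) -> str:
    """Derive the OpenAI-compatible base URL from the token's proxy-ep field.

    Assumes the token is a semicolon-separated key=value string, e.g.
    'tid=...;exp=...;proxy-ep=proxy.individual.githubcopilot.com;...'.
    """
    for part in copilot_token.split(";"):
        key, _, value = part.partition("=")
        if key == "proxy-ep":
            return f"https://{value}"
    raise ValueError("proxy-ep field not found in Copilot token")


def get_copilot_token(github_token: str) -> str:
    """Return a cached Copilot token, refreshing 60 s before expiry (thread-safe)."""
    with _lock:
        if _cache["token"] and time.time() < _cache["expires_at"] - 60:
            return _cache["token"]
        data = exchange_github_token(github_token)
        _cache["token"] = data["token"]
        _cache["expires_at"] = float(data.get("expires_at", time.time() + 1500))
        return _cache["token"]
```

The lock around the cache is what makes concurrent simulation agent calls safe: only one thread performs the refresh, and the others reuse the cached token.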

Configuration

LLM_PROVIDER=github-copilot
GITHUB_TOKEN=ghp_xxx
LLM_MODEL_NAME=gpt-4o

Also supports GH_TOKEN and COPILOT_GITHUB_TOKEN env vars (priority: COPILOT_GITHUB_TOKEN > GH_TOKEN > GITHUB_TOKEN).

Files changed

  • backend/app/utils/copilot_auth.py — NEW: token exchange, caching, auto-refresh
  • backend/app/utils/llm_client.py — Copilot auto-detection + token refresh before each call
  • backend/app/config.py — added LLM_PROVIDER config key
  • backend/app/utils/__init__.py — export new module
  • .env.example — Copilot setup instructions

Notes

  • Uses only stdlib urllib for the token exchange (no new dependencies)
  • Copilot models available depend on the user's plan (Free/Pro/Business/Enterprise)
  • Rate limits are lower than dedicated LLM APIs — users should start with small simulations (<40 rounds)
  • The copilot_internal API is undocumented but stable (used by VS Code Copilot + OpenClaw)

@dosubot added labels (Mar 29, 2026): size:L (this PR changes 100-499 lines, ignoring generated files), enhancement (new feature or request), LLM API (any questions regarding the LLM API)
Follow-up commits

The GitHub Copilot API rejects requests missing the Editor-Version, User-Agent, and X-Github-Api-Version headers. These are passed via the OpenAI SDK's default_headers when in Copilot mode.

…gGenerator

These services create their own OpenAI clients directly, bypassing LLMClient. The same Copilot auto-detection and required headers are added to both.

The camel-ai/OASIS ModelFactory creates its own OpenAI clients that don't inherit custom headers. Setting openai.default_headers at module level ensures all OpenAI clients created in the process include the required Editor-Version, User-Agent, and X-Github-Api-Version headers for GitHub Copilot authentication.
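A sketch of the headers fix described above. The header *names* come from the commit messages; the example *values* are placeholders, not what the PR actually sends:

```python
# Header names are from the PR; these values are illustrative assumptions.
COPILOT_HEADERS = {
    "Editor-Version": "vscode/1.95.0",
    "User-Agent": "MiroFish/0.1",
    "X-Github-Api-Version": "2023-07-07",
}


def apply_copilot_headers() -> None:
    """Apply the required headers process-wide, per the PR's approach, so that
    OpenAI clients created by third-party code (e.g. the camel-ai/OASIS
    ModelFactory) also include them."""
    import openai  # imported lazily: only needed when Copilot mode is active

    openai.default_headers = COPILOT_HEADERS
```

Per-client, the same headers can instead be passed as `OpenAI(default_headers=COPILOT_HEADERS)`, which is the route the PR takes for clients it constructs itself.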
koyouko added a commit to koyouko/MiroFish referencing this pull request (Mar 30, 2026):

… Copilot)

Replaces the single-provider LLM client with a unified 5-provider client.
Adds thread-safety improvements to simulation_runner from upstream PR 666ghj#389,
GitHub Copilot OAuth token support from upstream PR 666ghj#383, and translates
all backend Python modules to English.

Providers supported: Anthropic (Claude), OpenAI, GitHub Copilot (OAuth),
Ollama (local), and the original MiniMax/GLM default.

Co-Authored-By: koyouko <koyouko@users.noreply.github.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
