Description
Feature: Add support for OpenRouter
Original request reference: https://www.reddit.com/r/mcp/comments/1mmmw16/comment/n81wiwe/
Goal
Add "openrouter" as a first-class LLM provider so users can route requests through OpenRouter’s unified API and access many upstream models (OpenAI, Anthropic, Google, open models, etc.) with a single key.
High-Level Overview
OpenRouter exposes an OpenAI-compatible chat completions endpoint at:
`POST https://openrouter.ai/api/v1/chat/completions`
Headers:
- `Authorization: Bearer <OPENROUTER_API_KEY>`
- `HTTP-Referer` (optional; see branding headers below)
- `X-Title` (optional)
- `Content-Type: application/json`
 
Body (OpenAI-style); `model` can be any supported id (e.g. `anthropic/claude-3.5-sonnet`, `google/gemini-2.5-flash`):

```json
{
  "model": "openai/gpt-4o",
  "messages": [{"role": "user", "content": "Hello"}],
  "temperature": 0.1,
  "max_tokens": 4000
}
```
If `model` is omitted, OpenRouter uses the account default (model routing). For the initial implementation we will REQUIRE a model to reduce ambiguity (aligns with existing provider defaults), but we'll supply a `defaultModel` constant.
Scope (MVP)
Supported in this first PR:
- New provider: `OpenRouterProvider` implementing `BaseLLMProvider`, using native `fetch` (no extra dependency) OR the OpenAI SDK with a `baseURL` override (choose native fetch for explicit control and a lighter footprint; see the sketch after this list).
- Config changes to allow `CONTEXT_OPT_LLM_PROVIDER=openrouter` and `CONTEXT_OPT_OPENROUTER_KEY`.
- Schema + validation updates (add provider union member + key). Fail fast if the key is missing.
 - Provider factory registration.
 - Basic request (non-streaming) returning first text completion.
 - Error normalization (network errors, HTTP non-2xx, malformed response, empty choices).
 - Tests (unit + integration style behind env guard).
 - Docs (README, API keys reference, changelog entry).
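
For reference, the SDK alternative we are not choosing would look roughly like this (a sketch only; assumes the `openai` npm package):

```typescript
// NOT the chosen approach: OpenAI SDK pointed at OpenRouter via a baseURL override
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.CONTEXT_OPT_OPENROUTER_KEY,
  baseURL: 'https://openrouter.ai/api/v1',
});

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
});
console.log(completion.choices[0]?.message?.content);
```

Native `fetch` keeps the dependency footprint lighter and the request/error handling explicit.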
 
Deferred (future issues):
- Streaming (SSE) support (`stream: true`).
- Dynamic model listing via `GET https://openrouter.ai/api/v1/models` with caching.
- Automatic retry on transient 5xx / rate-limit responses.
 - Usage/token accounting mapping to internal metrics.
 - Assistant prefill / multi-turn context management.
 - Passing through advanced parameters (top_p, frequency_penalty, etc.).
 
Files to Modify / Add
- `src/config/schema.ts`
  - Extend the `provider` union: `'gemini' | 'claude' | 'openai' | 'openrouter'`.
  - Add optional `openrouterKey?: string;` to the `llm` block.
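
  A minimal sketch of the result, assuming a plain interface-style schema (the existing key fields shown here are assumptions; match the real `llm` block):

  ```typescript
  // src/config/schema.ts (sketch; existing fields are assumed)
  export interface LLMConfig {
    provider: 'gemini' | 'claude' | 'openai' | 'openrouter'; // 'openrouter' added
    geminiKey?: string;
    claudeKey?: string;
    openaiKey?: string;
    openrouterKey?: string; // new
  }
  ```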
- `src/config/manager.ts`
  - Accept `openrouter` in the `getLLMProvider()` valid list and error messages.
  - Include `...(process.env.CONTEXT_OPT_OPENROUTER_KEY && { openrouterKey: process.env.CONTEXT_OPT_OPENROUTER_KEY })` when building config.
  - Update `validProviders` arrays to include `openrouter`.
  - Validation: ensure `openrouterKey` is required if the provider is `openrouter` (sketch below).
  - `getSanitizedConfig()`: add a `hasOpenrouterKey` boolean.
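
  The required-key check could mirror the existing per-provider validation, roughly as follows (the config object's shape and the error wording are assumptions):

  ```typescript
  // src/config/manager.ts (sketch; shape of existing validation is assumed)
  if (config.llm.provider === 'openrouter' && !config.llm.openrouterKey) {
    throw new Error(
      'CONTEXT_OPT_OPENROUTER_KEY must be set when CONTEXT_OPT_LLM_PROVIDER=openrouter'
    );
  }
  ```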
- `src/providers/openrouter.ts` (NEW)
  - Class `OpenRouterProvider extends BaseLLMProvider`.
  - `name = 'OpenRouter'`.
  - `defaultModel = 'openai/gpt-4o'` (rationale: widely available; can be adjusted later) OR a cheaper default like `openai/gpt-4o-mini` if cost-sensitive. (Pick `openai/gpt-4o-mini` to align with the existing OpenAI default style.)
  - `apiKeyUrl = 'https://openrouter.ai/'` (landing page where keys are managed).
  - `apiKeyPrefix = ''` (keys aren't standardized with a fixed prefix; leave empty / undefined if not meaningful).
  - `processRequest(prompt: string, model?: string, apiKey?: string)`:
    - Validate apiKey presence.
    - Construct the body using the `createStandardRequest` helper for consistency (but adapt property names: `max_tokens`, `messages`).
    - Use `fetch('https://openrouter.ai/api/v1/chat/completions', {...})` with method POST.
    - Headers: Authorization, Content-Type, and optionally `HTTP-Referer` + `X-Title` if the environment vars are present (define optional env vars `CONTEXT_OPT_APP_URL`, `CONTEXT_OPT_APP_NAME`; only send them if defined, and do NOT add them to the schema for now).
    - Parse JSON. Expected shape (subset): `{ choices: [{ message: { content: string } }] }`, similar to OpenAI. If the content is not found, return an error.
    - On non-2xx: attempt to parse the error JSON (likely shape `{ error: { message } }`), else fall back to the response text.
    - Return success/error via the helper methods.
  - Consider a small timeout (e.g., an AbortController with 60s). OPTIONAL: for MVP rely on global fetch and leave a TODO comment (sketch below).
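
  If the optional timeout is added later, an AbortController sketch (60s is an arbitrary value; `headers` and `body` as in the main flow):

  ```typescript
  // Optional timeout (deferred for MVP): abort the request after 60s
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 60_000);
  try {
    const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers,
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    // ...handle the response exactly as in the non-timeout path
  } finally {
    clearTimeout(timer);
  }
  ```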
- `src/providers/factory.ts`
  - Add a case `'openrouter'` mapping to the new provider (sketch below).
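
  Sketch of the change (the existing factory/switch shape is assumed):

  ```typescript
  // src/providers/factory.ts (sketch; other cases are assumed to exist)
  import { OpenRouterProvider } from './openrouter';

  export class LLMProviderFactory {
    static createProvider(provider: string) {
      switch (provider) {
        // ...existing cases ('gemini', 'claude', 'openai')
        case 'openrouter':
          return new OpenRouterProvider();
        default:
          throw new Error(`Unknown LLM provider: ${provider}`);
      }
    }
  }
  ```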
 
- Tests:
  - `test/openrouter.test.ts` (unit; sketch below):
    - Mocks `global.fetch` to return a sample success JSON.
    - Tests error when the API key is missing.
    - Tests the error path when the response has no content.
    - Tests non-2xx status handling.
  - `test/openrouter.integration.test.ts` (optional), gated behind `process.env.CONTEXT_OPT_OPENROUTER_KEY` presence and maybe a `TEST_LIVE_OPENROUTER` flag. Skip if not set.
  - Update `test/config-test.ts` if it asserts provider lists.
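
  A sketch of the mock/restore pattern (Jest-style globals are an assumption; adapt to whatever the existing provider tests use):

  ```typescript
  // test/openrouter.test.ts (sketch)
  import { OpenRouterProvider } from '../src/providers/openrouter';

  describe('OpenRouterProvider', () => {
    const originalFetch = global.fetch;
    afterEach(() => {
      global.fetch = originalFetch; // always restore the real fetch
    });

    it('returns the first choice content on success', async () => {
      global.fetch = (async () =>
        new Response(
          JSON.stringify({ choices: [{ message: { content: 'Test reply' } }] }),
          { status: 200 }
        )) as unknown as typeof fetch;

      const result = await new OpenRouterProvider().processRequest('Hi', undefined, 'key');
      expect(result.success).toBe(true);
    });

    it('fails cleanly when the API key is missing', async () => {
      const result = await new OpenRouterProvider().processRequest('Hi');
      expect(result.success).toBe(false);
    });
  });
  ```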
- Docs:
  - `README.md`: add OpenRouter to the provider list + a quick-start env var snippet.
  - `docs/reference/api-keys.md`: add an "OpenRouter" section with instructions to obtain a key & a note on the optional headers.
  - `docs/reference/changelog.md`: new entry, e.g. `Added OpenRouter provider (#1)`.
  - (Optional) `docs/architecture.md`: brief note that providers are pluggable and now include OpenRouter.
 
Environment Variables (New / Updated)
Required when using OpenRouter:
- `CONTEXT_OPT_LLM_PROVIDER=openrouter`
- `CONTEXT_OPT_OPENROUTER_KEY=<your key>`

Optional (if we choose to support the branding headers):
- `CONTEXT_OPT_APP_URL=https://your-site.example` -> sent as `HTTP-Referer`
- `CONTEXT_OPT_APP_NAME=Context Optimizer` -> sent as `X-Title`
(No changes needed to existing keys for other providers.)
Acceptance Criteria
- Selecting the `openrouter` provider with a valid key returns model output for a simple prompt.
- Missing key triggers a clear configuration error on startup.
- Invalid HTTP response returns a structured error (`success=false`, `error` populated) without throwing unhandled exceptions.
- Factory can instantiate the OpenRouter provider via `LLMProviderFactory.createProvider('openrouter')`.
- All existing tests still pass; new tests added and green.
 - Documentation updated (README + api-keys + changelog).
 - No sensitive key values logged (sanitized config shows only boolean flags).
 
Implementation Steps
- Update config schema (`src/config/schema.ts`): add `'openrouter'` to the provider union, plus `openrouterKey?: string;`.
- Update configuration manager (`src/config/manager.ts`):
  - Add an environment variable load line for `CONTEXT_OPT_OPENROUTER_KEY`.
  - Update provider validation arrays to include `openrouter`.
  - Ensure `openrouterKey` is required when the provider is `openrouter` (mirrors existing logic).
  - Add `hasOpenrouterKey` to the `getSanitizedConfig` output.
- Create the new provider file `src/providers/openrouter.ts` implementing the class as described.
- Add provider registration in the `src/providers/factory.ts` switch.
- Write unit tests:
  - Create `test/openrouter.test.ts`.
  - Mock `global.fetch` (store the original, restore after). Provide sample JSON: `{ choices: [{ message: { content: "Test reply" } }] }`.
  - Test: success case returns `success=true` and the expected content.
  - Test: missing apiKey returns the provider's error message.
  - Test: non-2xx (e.g., 400) returns a structured error (simulate `{ error: { message: 'Bad Request' } }`).
  - Test: malformed JSON (e.g., an empty object) returns the error `No response from OpenRouter`.
- Integration test (optional in this PR; can skip if no live key): if `CONTEXT_OPT_OPENROUTER_KEY` is set, perform a real request with a minimal prompt to ensure the pipeline works. Mark with `it.skip` if not defined.
- Update docs & changelog.
- Run the test suite and ensure all tests pass.
- Self-review for style consistency (naming, error messages match patterns in other providers).
- Open a PR referencing this issue and summarizing the changes.
 
Sample Provider Implementation (Skeleton)
```typescript
// src/providers/openrouter.ts
import { BaseLLMProvider, LLMResponse } from './base';

export class OpenRouterProvider extends BaseLLMProvider {
  readonly name = 'OpenRouter';
  readonly defaultModel = 'openai/gpt-4o-mini';
  readonly apiKeyUrl = 'https://openrouter.ai/';
  readonly apiKeyPrefix = undefined; // Not standardized

  async processRequest(prompt: string, model?: string, apiKey?: string): Promise<LLMResponse> {
    if (!apiKey) {
      return this.createErrorResponse('OpenRouter API key not configured');
    }
    try {
      const body = this.createStandardRequest(prompt, model || this.defaultModel);
      const headers: Record<string, string> = {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      };
      // Optional branding headers; only sent when the env vars are defined
      if (process.env.CONTEXT_OPT_APP_URL) headers['HTTP-Referer'] = process.env.CONTEXT_OPT_APP_URL;
      if (process.env.CONTEXT_OPT_APP_NAME) headers['X-Title'] = process.env.CONTEXT_OPT_APP_NAME;
      // TODO: add an AbortController-based timeout (deferred; see notes above)
      const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
        method: 'POST',
        headers,
        body: JSON.stringify(body)
      });
      if (!res.ok) {
        let errorMsg = `HTTP ${res.status}`;
        try {
          const errJson: any = await res.json();
          errorMsg = errJson?.error?.message || errorMsg;
        } catch { /* ignore parse failure, keep the HTTP status message */ }
        return this.createErrorResponse(`OpenRouter request failed: ${errorMsg}`);
      }
      const json: any = await res.json();
      const content = json?.choices?.[0]?.message?.content;
      if (!content) {
        return this.createErrorResponse('No response from OpenRouter');
      }
      return this.createSuccessResponse(content);
    } catch (e: any) {
      return this.createErrorResponse(`OpenRouter processing failed: ${e?.message || 'Unknown error'}`);
    }
  }
}
```

Testing Notes
- Follow the existing test style (see the `openai` or `claude` provider tests for patterns). Provider tests already exist; mimic their structure.
- Ensure the fetch mock counts invocations and that headers include Authorization (but DO NOT assert the exact key value; just a presence pattern if needed, as sketched below).
 - Validate error messaging consistency with other providers (prefix with provider name in failure path).
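
For the Authorization check, a pattern match is enough (`capturedHeaders` is a hypothetical object recorded by the fetch mock):

```typescript
// Assert the header is present without asserting (or logging) the key itself
expect(capturedHeaders['Authorization']).toMatch(/^Bearer .+/);
```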
 
Security / Privacy Considerations
- Never log the raw API key (only existence booleans).
 - Keep calls server-side (no exposure to client code).
- Provide guidance in the docs that the optional headers (`HTTP-Referer`, `X-Title`) are purely metadata and safe to include.
Future Enhancements (Follow-up Issues)
- Streaming support using EventSource or manual SSE parsing: split lines, ignore lines starting with `:` (comments), and assemble `delta` tokens (see the sketch after this list).
- Model metadata cache (`GET /api/v1/models`) with a refresh interval (e.g., 24h) and filtering by supported parameters.
- Retry/backoff on 5xx or rate limits (respect the `Retry-After` header if provided).
- Parameter passthrough (temperature, top_p, stop, etc.) via configuration or request options.
 - Usage stats surfaced in responses (token counts) for user display or logging.
 
Definition of Done
- Code merged to main with green CI.
 - Documentation updated and published.
 - Changelog entry present.
 - Able to run a manual prompt using OpenRouter provider and receive coherent output.
 
Open Questions
- Default model final choice (`openai/gpt-4o-mini` vs a cheaper open model). (Assume `openai/gpt-4o-mini` unless directed otherwise.)
- Include the optional branding headers now? (Plan: yes, conditionally if the env vars are present.)