An MCP (Model Context Protocol) server that enables LLM-to-LLM communication through OpenRouter and Google Gemini APIs.
- Dual Provider Support: Integrates with both OpenRouter and Google Gemini.
- Multi-Turn Conversations: Maintains context for extended dialogues.
- Configurable Presets: Easily switch between settings optimized for general use and coding.
- High Token Limits: Supports up to 8192 tokens for handling large codebases and complex discussions.
After attaching the server to your terminal or code editor, you can simply send a prompt such as: "Call the llm-bridge MCP server to ask another LLM for its opinion, use [model name] (optional), and find the optimal solution." If not guided explicitly, the LLM will decide which tool to use.
Add the server to your MCP client's configuration. The recommended way is to use npx to run the latest version directly from npm.
Example for RovoDev / RooCode:
{
"llm-bridge": {
"command": "npx",
"args": [
"-y",
"@dav1lex/server-llm-bridge"
]
}
}
For local development:
{
"llm-bridge": {
"command": "node",
"args": [
"/path/to/your/llm-bridge/dist/index.js"
]
}
}
Important: You need at least one API key (OpenRouter or Gemini) for the server to function. The server will start without keys, but all tools will return errors. You can provide API keys in several ways:
Option A: Create a .env file in your project root:
# OpenRouter Configuration
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENROUTER_HTTP_REFERER=http://localhost:3000
OPENROUTER_X_TITLE=LLM Bridge MCP
# Google Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key_here
Option B: Set system environment variables:
# Windows
set OPENROUTER_API_KEY=your_openrouter_api_key_here
set GEMINI_API_KEY=your_gemini_api_key_here
# Linux/Mac
export OPENROUTER_API_KEY=your_openrouter_api_key_here
export GEMINI_API_KEY=your_gemini_api_key_here
Option C: Set in your shell profile (permanent):
Add to your .bashrc, .zshrc, or equivalent:
export OPENROUTER_API_KEY=your_openrouter_api_key_here
export GEMINI_API_KEY=your_gemini_api_key_here
Send a single prompt to an OpenRouter model.
- prompt (required): The text to send.
- model (optional): e.g., deepseek/deepseek-chat-v3.1:free.
- preset (optional): Use a predefined configuration (general or coding).
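A minimal sketch of the tool arguments, based on the parameters above (the prompt text is illustrative and the model name is just an example):
{
  "prompt": "Review this function and suggest a simpler implementation.",
  "model": "deepseek/deepseek-chat-v3.1:free",
  "preset": "coding"
}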
Send a single prompt to a Gemini model.
- prompt (required): The text to send.
- model (optional): e.g., gemini-2.5-pro.
- preset (optional): Use a predefined configuration (general or coding).
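A call to the Gemini tool takes the same shape; for example (prompt text is illustrative):
{
  "prompt": "What are the trade-offs between REST and gRPC for internal services?",
  "model": "gemini-2.5-pro",
  "preset": "general"
}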
Have a multi-turn conversation, maintaining context.
- messages (required): An array of the conversation history.
- provider (optional): openrouter or gemini.
- model (optional): The model to use.
Example:
{
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
},
{
"role": "assistant",
"content": "The capital of France is Paris."
},
{
"role": "user",
"content": "What is a famous landmark there?"
}
],
"provider": "openrouter",
"model": "deepseek/deepseek-chat-v3.1:free"
}
If you've cloned this repository and want to run your local version:
- Install dependencies: npm install
- Build the code: npm run build
- Run in development mode: npm run dev (auto-rebuilds on changes)
Then, update your MCP client to run the local version with node:
{
"llm-bridge": {
"command": "node",
"args": [
"c:/Users/admin/LLM-Bridge/dist/index.js"
]
}
}
MIT License.