Prompt Optimizer supports the Model Context Protocol (MCP), enabling integration with MCP-compatible AI applications such as Claude Desktop. The MCP server exposes three tools:
- `optimize-user-prompt`: Optimize user prompts to improve LLM performance
- `optimize-system-prompt`: Optimize system prompts to improve LLM performance
- `iterate-prompt`: Iteratively improve mature prompts based on specific requirements
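A client invokes these through the standard MCP `tools/call` request. A minimal sketch of such a call (the `prompt` argument name here is an assumption; check the actual schema the server reports via `tools/list`):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "optimize-user-prompt",
    "arguments": {
      "prompt": "Write a short poem about the sea"
    }
  }
}
```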
Docker is the simplest deployment method; the Web interface and the MCP server start together:
# Basic deployment
docker run -d -p 8081:80 \
-e VITE_OPENAI_API_KEY=your-openai-key \
-e MCP_DEFAULT_MODEL_PROVIDER=openai \
--name prompt-optimizer \
linshen/prompt-optimizer
# Access URLs
# Web Interface: http://localhost:8081
# MCP Server: http://localhost:8081/mcp
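If you prefer Compose, the same deployment can be sketched as a docker-compose.yml (assuming the image and variables shown above; add further VITE_*_API_KEY entries as needed):

```yaml
services:
  prompt-optimizer:
    image: linshen/prompt-optimizer
    container_name: prompt-optimizer
    ports:
      - "8081:80"   # Web interface and MCP server share this port
    environment:
      - VITE_OPENAI_API_KEY=your-openai-key
      - MCP_DEFAULT_MODEL_PROVIDER=openai
    restart: unless-stopped
```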
Note: This method is only for developers, for development and debugging. Regular users should use the Docker deployment.

# 1. Clone the project
git clone https://github.com/your-repo/prompt-optimizer.git
cd prompt-optimizer
# 2. Install dependencies
pnpm install
# 3. Configure environment variables (copy and edit .env.local)
cp env.local.example .env.local
# 4. Start MCP server
pnpm mcp:dev

The server will start at http://localhost:3000/mcp. Developers can refer to the Developer Documentation for more development-related information.
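To quickly verify the server is reachable, you can POST a minimal MCP initialize request to the endpoint (a sketch; the protocolVersion string may differ for your build):

```bash
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  --data '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"healthcheck","version":"0.0.0"}}}'
```

A JSON-RPC result (rather than a connection error) indicates the MCP server is up.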
At least one API key must be configured:
# Choose one or more API keys
VITE_OPENAI_API_KEY=your-openai-key
VITE_GEMINI_API_KEY=your-gemini-key
VITE_DEEPSEEK_API_KEY=your-deepseek-key
VITE_SILICONFLOW_API_KEY=your-siliconflow-key
VITE_ZHIPU_API_KEY=your-zhipu-key
# Custom API (e.g., Ollama)
VITE_CUSTOM_API_KEY=your-custom-key
VITE_CUSTOM_API_BASE_URL=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL=qwen2.5:0.5b

# Preferred model provider (when multiple API keys are configured)
# Options: openai, gemini, deepseek, siliconflow, zhipu, custom
MCP_DEFAULT_MODEL_PROVIDER=openai
# Log level (optional, default: debug)
# Options: debug, info, warn, error
MCP_LOG_LEVEL=info
# HTTP port (optional, default: 3000, not needed for Docker deployment)
MCP_HTTP_PORT=3000
# Default language (optional, default: zh)
# Options: zh, en
MCP_DEFAULT_LANGUAGE=zh

Locate the Claude Desktop configuration directory:
- Windows: `%APPDATA%\Claude\services`
- macOS: `~/Library/Application Support/Claude/services`
- Linux: `~/.config/Claude/services`
Create or edit the services.json file:
{
"services": [
{
"name": "Prompt Optimizer",
"url": "http://localhost:8081/mcp"
}
]
}

Note: If you are using the developer local deployment (port 3000), change the URL to http://localhost:3000/mcp.
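On macOS, for example, the file can be created from the shell (a sketch assuming the Docker deployment and that services.json lives inside the services directory listed above):

```bash
mkdir -p "$HOME/Library/Application Support/Claude/services"
cat > "$HOME/Library/Application Support/Claude/services/services.json" <<'EOF'
{
  "services": [
    {
      "name": "Prompt Optimizer",
      "url": "http://localhost:8081/mcp"
    }
  ]
}
EOF
```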
The MCP server supports the standard MCP protocol and can be used by any compatible client:
- Connection URLs:
  - Docker deployment: http://localhost:8081/mcp
  - Local deployment: http://localhost:3000/mcp
- Protocol: Streamable HTTP
- Transport: HTTP or stdio
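For reference, the raw Streamable HTTP flow looks roughly like this (a sketch following the MCP spec: initialize, echo the session ID back, send the initialized notification, then call tools; details may vary by server build):

```bash
# 1. Initialize and capture the session ID from the response headers
SESSION_ID=$(curl -si -X POST http://localhost:8081/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.1"}}}' \
  | grep -i '^mcp-session-id:' | tr -d '\r' | awk '{print $2}')

# 2. Acknowledge initialization
curl -s -X POST http://localhost:8081/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: $SESSION_ID" \
  -d '{"jsonrpc":"2.0","method":"notifications/initialized"}'

# 3. List the available tools within the session
curl -s -X POST http://localhost:8081/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: $SESSION_ID" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
```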
MCP Inspector is the official testing tool:
# 1. Start MCP server
pnpm mcp:dev
# 2. Start Inspector in another terminal
npx @modelcontextprotocol/inspector

In the Inspector Web UI:
- Select transport method: Streamable HTTP
- Server URL: http://localhost:3000/mcp
- Click "Connect" to connect to the server
- Test the available tools
Error: listen EADDRINUSE: address already in use
Solution: The port is occupied; change the port or stop the occupying process:
# Check port usage (Windows)
netstat -ano | findstr :3000
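# On macOS/Linux (equivalent check)
lsof -i :3000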
# Change port
MCP_HTTP_PORT=3001 pnpm mcp:dev

Error: No enabled models found
Solution: Check API key configuration
# Ensure at least one valid API key is configured
echo $VITE_OPENAI_API_KEY

Error: The wrong model is being used
Solution: Check MCP_DEFAULT_MODEL_PROVIDER configuration
# Ensure provider name is correct
MCP_DEFAULT_MODEL_PROVIDER=openai  # not "OpenAI"

Issue: Claude Desktop cannot connect to the MCP server
Solution Steps:
- Confirm the MCP server is running
- Check that the URL is correct
- Check firewall settings
- Check the Claude Desktop logs
Enable verbose logging:
# Development environment
MCP_LOG_LEVEL=debug pnpm mcp:dev
# Docker environment
docker run -e MCP_LOG_LEVEL=debug ...

If you encounter issues:
- Check the troubleshooting section in this document
- Check the project's existing Issues
- Submit a new Issue describing the problem
- Contact the development team