This guide helps you resolve common issues with the Ollama MCP Server.
# Error: Cannot find module 'express'
# Error: Cannot find module 'crypto'

Solution:

```bash
# Clean and reinstall
npm run clean
rm -rf node_modules package-lock.json
npm install
npm run build
```

# Error: Module not found

Solution:
Check your tsconfig.json:

```json
{
  "compilerOptions": {
    "module": "ESNext",
    "moduleResolution": "Node",
    "allowSyntheticDefaultImports": true
  }
}
```

# Error: Cannot find module './server/mcp-server.js'

Solution:
```bash
# Ensure you're running from the correct directory
cd /path/to/ollama-mcp
npm run build
npm start
```

# Error: Failed to list models: HTTP error! status: 500

Solutions:
- Check if Ollama is running:

  ```bash
  ollama list
  ```

- Start the Ollama service:

  ```bash
  ollama serve
  ```

- Check the Ollama URL:

  ```bash
  # Default is http://localhost:11434
  # For Railway: http://127.0.0.1:11434
  export OLLAMA_BASE_URL=http://localhost:11434
  ```

- Verify Ollama is accessible:

  ```bash
  curl http://localhost:11434/api/tags
  ```
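Rather than re-running `curl` by hand, a small poll loop can wait for the API to come up. A minimal sketch, assuming a POSIX shell; `wait_for_url` is a hypothetical helper, not part of the project:

```bash
# wait_for_url: hypothetical helper that polls a URL until it responds
wait_for_url() {
  url=$1; attempts=${2:-10}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # -f: treat HTTP errors as failures; --max-time: cap each probe at 2s
    if curl -fsS --max-time 2 "$url" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

wait_for_url http://localhost:11434/api/tags 3 \
  && echo "Ollama is up" \
  || echo "Ollama is not reachable on :11434" >&2
```

This is handy in startup scripts that must not launch the MCP server before Ollama is ready.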
# Error: Transport not initialized

Solution:

- Ensure you're not running in HTTP mode
- Check that no other process is using stdio
- Restart the server

# Error: Invalid HTTP port specified

Solutions:
- Check the port configuration:

  ```bash
  echo $MCP_HTTP_PORT  # Should be a valid number
  echo $PORT           # Railway uses this
  ```

- Use a valid port number:

  ```bash
  MCP_HTTP_PORT=8080 npm start
  ```

- Check port availability:

  ```bash
  lsof -i :8080  # Check if the port is in use
  ```
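A quick guard can also catch a bad value before the server ever starts. A sketch in POSIX shell; `is_valid_port` is a hypothetical helper, not part of the Ollama MCP Server:

```bash
# is_valid_port: hypothetical helper; accepts only integers in 1..65535
is_valid_port() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric values
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

if is_valid_port "${MCP_HTTP_PORT:-8080}"; then
  echo "port looks valid"
else
  echo "set MCP_HTTP_PORT to a number between 1 and 65535" >&2
fi
```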
# Error: CORS policy blocked

Solution:

```bash
# Add allowed origins
export MCP_HTTP_ALLOWED_ORIGINS="http://localhost:3000,https://yourdomain.com"
```

# Error: failed to solve: process did not complete successfully

Solutions:
- Check the Dockerfile syntax:

  ```bash
  docker build -t test-build .
  ```

- Verify the Ollama download URL:
  - Check if the URL in the Dockerfile is still valid
  - Update OLLAMA_VERSION if needed

- Check the Railway logs:

  ```bash
  railway logs
  ```
# Error: Volume not found

Solution:

```bash
# Create the volume first
railway volume add --mount-path /data/ollama

# Then deploy
railway up
```

# Error: Invalid configuration

Solution:

```bash
# Check Railway variables
railway variables

# Set the required variables
railway variables set MCP_TRANSPORT=http
railway variables set MCP_HTTP_PORT=8080
```

# Error: model not found

Solutions:
- List available models:

  ```bash
  ollama list
  ```

- Pull the model:

  ```bash
  ollama pull llama2
  ```

- Check the model name spelling:

  ```bash
  # Use the exact model name shown by `ollama list`
  ollama run llama2:latest
  ```
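The spelling check above can be scripted: scan the `ollama list` output for the exact tag before trying to run it. A sketch; `have_model` is a hypothetical helper, not part of the project:

```bash
# have_model: hypothetical helper; expects `ollama list` output on stdin
have_model() {
  grep -q "^$1"
}

# Only attempt the check when the ollama CLI is installed
if command -v ollama >/dev/null 2>&1; then
  ollama list | have_model "llama2:latest" \
    || echo "run: ollama pull llama2:latest" >&2
fi
```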
# Error: Failed to pull model

Solutions:

- Check your internet connection

- Verify the model exists:

  ```bash
  # Browse the Ollama registry
  curl https://ollama.com/library
  ```

- Try a different model:

  ```bash
  ollama pull mistral:7b
  ```
# Models take too long to load

Solutions:

- Use smaller models for testing:

  ```bash
  ollama pull tinyllama
  ```

- Check available memory:

  ```bash
  free -h
  ```

- Use CPU-only mode:

  ```bash
  export OLLAMA_HOST=0.0.0.0:11434
  ```
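`free -h` is fine for eyeballing; to script the memory check, `/proc/meminfo` can be parsed directly (Linux only). A sketch; `mem_available_kb` is a hypothetical helper, not part of the project:

```bash
# mem_available_kb: hypothetical helper; parses /proc/meminfo-style input on stdin
mem_available_kb() {
  awk '/^MemAvailable:/ {print $2}'
}

# Guarded so the snippet is a no-op on non-Linux systems
if [ -r /proc/meminfo ]; then
  echo "available memory: $(mem_available_kb < /proc/meminfo) kB"
fi
```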
# Server uses too much memory

Solutions:

- Limit concurrent models:

  ```bash
  export OLLAMA_MAX_LOADED_MODELS=1
  ```

- Use a quantized model:

  ```bash
  ollama pull llama2:7b-q4_0  # Quantized version
  ```
```bash
# Ollama debug logging
export OLLAMA_DEBUG=1
ollama serve
```

```bash
# MCP server debug logging
export DEBUG=mcp:*
npm start
```

```bash
# Health check (HTTP mode)
curl http://localhost:8080/healthz

# Check the Ollama API
curl http://localhost:11434/api/tags
```

```bash
# Railway logs
railway logs --follow

# Local logs
npm start 2>&1 | tee server.log
```
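Once a log has been captured with `tee`, the error lines can be filtered out before reading it or filing a report. A sketch; `extract_errors` is a hypothetical helper, not part of the project:

```bash
# extract_errors: hypothetical helper; keeps only error-ish lines from stdin
extract_errors() {
  grep -iE 'error|failed' || true   # an empty result is not a failure
}

# Against a captured log, e.g.:
#   extract_errors < server.log | tail -n 20
printf 'server started\nError: transport not initialized\n' | extract_errors
```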
- Check this troubleshooting guide

- Verify your environment:

  ```bash
  node --version    # Should be 18+
  npm --version
  ollama --version
  ```

- Check the logs for error messages

- Try the basic setup:

  ```bash
  npm run clean
  npm install
  npm run build
  npm start
  ```
When reporting an issue, include:
- Operating system and version
- Node.js version
- Ollama version
- Complete error message
- Steps to reproduce
- Log output
```bash
# Full reset
npm run clean
rm -rf node_modules package-lock.json
npm install
npm run build
```

```bash
# Test basic functionality
ollama list
curl http://localhost:11434/api/tags
```

```bash
# Test the MCP server
npm start
```

```bash
# In another terminal:
curl -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}'
```