PITI is an autonomous AI personal trainer that lives on Telegram. It provides expert fitness coaching, nutrition advice, and health guidance through natural conversation, with persistent memory, multi-language support, and vision capabilities for exercise form analysis.
Each user gets an isolated Docker container running their own AI agent instance, with conversation history, long-term memories, and token usage tracked independently in PostgreSQL.
- Multi-user isolation -- Each user gets a dedicated Docker container, auto-created on first message, auto-destroyed after idle timeout
- Two-tier model routing -- A cheap router model (e.g., Gemini Flash) classifies messages as simple/complex/off-topic, then routes to the appropriate model. Complex queries and media go to a smart model (e.g., Gemini Pro)
- Vision capabilities -- Send photos for form checks, meal analysis, or progress tracking. Send videos and PITI extracts frames with ffmpeg for movement analysis
- MCP integration -- Extensible tool system via Model Context Protocol. Ships with DuckDuckGo web search; add new tools with 3 lines of config
- Long-term memory -- Automatically extracts and stores facts about each user (goals, injuries, preferences, PRs) across conversations
- Token usage tracking -- Per-user, per-model tracking of input/output tokens for chat, classification, and memory extraction
- MCP call tracking -- Every tool call logged with timing, arguments, and server info
- Multi-language support -- Auto-detects user language on first message, supports 11+ languages with per-language refusal messages
- Topic enforcement -- Two-layer guard system (regex heuristics + LLM classification) keeps the bot strictly on fitness/nutrition/health topics
- Local HTTP API -- REST API for testing and building alternative frontends, with user mapping to share data with Telegram accounts
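The two-layer topic guard mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation in `packages/agent/src/agent/guard.ts`: the regex patterns, names, and verdict values are all made up for the example, and the LLM layer is stubbed out.

```typescript
// Hypothetical sketch of the two-layer topic guard: a cheap regex
// heuristic runs first, and only ambiguous messages fall through to
// an LLM classification call (stubbed out here).

const ON_TOPIC = /\b(workout|exercise|protein|calorie|squat|diet|injur|stretch)\w*/i;
const OFF_TOPIC = /\b(crypto|stock|homework|politic)\w*/i;

type Verdict = "allow" | "refuse" | "ask_llm";

// Layer 1: fast heuristic. Returns "ask_llm" when the regexes are inconclusive.
function heuristicGuard(message: string): Verdict {
  if (OFF_TOPIC.test(message)) return "refuse";
  if (ON_TOPIC.test(message)) return "allow";
  return "ask_llm";
}

// Layer 2 would call the router model; stubbed as always-allow here.
async function llmGuard(_message: string): Promise<Verdict> {
  return "allow"; // placeholder for the real LLM classification call
}

async function guard(message: string): Promise<Verdict> {
  const verdict = heuristicGuard(message);
  return verdict === "ask_llm" ? llmGuard(message) : verdict;
}
```

The point of the layering is cost: most messages are settled by the regex pass for free, and only the ambiguous remainder pays for an LLM call.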
```
                                  ┌───────────────────────────────────┐
                                  │              Server               │
                                  │                                   │
┌──────────┐     Telegram API     │   ┌───────────────────────────┐   │
│ Telegram │ <──────────────────> │   │ Gateway (Node.js)         │   │
└──────────┘                      │   │                           │   │
                                  │   │ - Telegram bot (telegraf) │   │
┌──────────┐      HTTP :3000      │   │ - HTTP API (fastify)      │   │
│ HTTP API │ <──────────────────> │   │ - Container orchestrator  │   │
│ clients  │                      │   │ - User/memory DB access   │   │
└──────────┘                      │   └─────────┬─────────────────┘   │
                                  │             │ creates per-user    │
                                  │             v                     │
                                  │   ┌───────────────────────────┐   │
                                  │   │ Agent Container (Docker)  │   │
                                  │   │ - One per active user     │   │
                                  │   │ - Fastify HTTP server     │   │
                                  │   │ - Calls LLM providers     │   │
                                  │   │ - Uses MCP tools via HTTP │   │
                                  │   └─────────┬─────────────────┘   │
                                  │             │ HTTP :5100          │
                                  │             v                     │
                                  │   ┌───────────────────────────┐   │
                                  │   │ MCP Bridge (Python)       │   │
                                  │   │ - FastAPI HTTP server     │   │
                                  │   │ - Spawns MCP servers      │   │
                                  │   │ - stdio --> DuckDuckGo    │   │
                                  │   │ - stdio --> (extensible)  │   │
                                  │   └───────────────────────────┘   │
                                  │                                   │
                                  │   ┌─────────────┐  ┌───────────┐  │
                                  │   │ PostgreSQL  │  │   Redis   │  │
                                  │   │ (pgvector)  │  │  (state)  │  │
                                  │   └─────────────┘  └───────────┘  │
                                  └───────────────────────────────────┘
```
| Component | Technology |
|---|---|
| Gateway | Node.js, TypeScript, Telegraf, Fastify |
| Agent | Node.js, TypeScript, Vercel AI SDK, Fastify |
| MCP Bridge | Python, FastAPI, MCP SDK |
| Database | PostgreSQL 16 with pgvector |
| State/Registry | Redis 7 |
| Containers | Docker (dockerode) |
| LLM Providers | OpenRouter, Anthropic, Kimi |
| Process Manager | PM2 |
| Package Manager | pnpm (monorepo workspaces) |
| Testing | Vitest |
- Node.js 20+
- pnpm 9+
- Docker (with Docker Compose)
- ffmpeg (for video frame extraction)
```bash
git clone <repo-url> piti
cd piti
pnpm install
```

```bash
cp config.example.yaml config.yaml
```

Edit config.yaml and fill in:

- `telegram.token` -- Your Telegram bot token from @BotFather
- `telegram.allowed_users` -- Array of Telegram user IDs allowed to use the bot (empty = allow all)
- `llm.providers.openrouter.api_key` -- Your OpenRouter API key (or configure another provider)

```bash
docker compose up -d postgres redis
```

This starts PostgreSQL (port 5433) and Redis (port 6379).

```bash
docker exec -i piti-postgres-1 psql -U piti -d piti < scripts/init-db.sql
pnpm --filter @piti/shared build
docker build -t piti-agent -f packages/agent/Dockerfile .
docker compose build mcp-bridge
```

```bash
# Using PM2 (recommended)
pnpm pm2 start ecosystem.config.cjs

# Or directly for development
pnpm dev:gateway
```

Send /status to your Telegram bot. You should see your user info, provider settings, and MCP service status.
The config.yaml file controls all aspects of the system.
```yaml
telegram:
  token: "YOUR_TELEGRAM_BOT_TOKEN"   # Bot token from @BotFather
  allowed_users: []                  # Telegram user IDs; empty = allow everyone
```

```yaml
database:
  url: "postgresql://piti:piti_secret@localhost:5433/piti"
  agent_url: "postgresql://piti:piti_secret@host.docker.internal:5433/piti"
```

Two connection strings are needed because the gateway runs on the host while agent containers run inside Docker. The `agent_url` uses `host.docker.internal` to reach PostgreSQL from within a container.
```yaml
redis:
  url: "redis://localhost:6379"
```

Redis is used for the container registry (tracking which user has which container/port) and for port allocation.
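To illustrate what the port allocator has to do, here is a hypothetical, in-memory sketch of picking a port from the configured `docker.port_range`. The function name and shape are made up for the example; the real gateway keeps this state in Redis so allocations survive restarts.

```typescript
// Hypothetical in-memory sketch of the port allocator. The real
// gateway stores allocations in Redis; this only shows the logic of
// claiming the first free port in the configured range.

function allocatePort(range: [number, number], used: Set<number>): number | null {
  const [start, end] = range;
  for (let port = start; port <= end; port++) {
    if (!used.has(port)) {
      used.add(port); // mark the port as allocated
      return port;
    }
  }
  return null; // pool exhausted: max concurrent users reached
}
```

With `port_range: [4000, 4100]`, exhausting the pool is what caps the system at roughly 100 concurrent agent containers.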
```yaml
docker:
  agent_image: "piti-agent"      # Docker image name for agent containers
  port_range: [4000, 4100]       # Port pool for agent containers (max 100 concurrent users)
  idle_timeout_ms: 3600000       # Destroy idle containers after 1 hour (ms)
```

```yaml
llm:
  default_provider: "openrouter"             # Provider for new users
  default_model: "google/gemini-2.5-flash"   # Default chat model
  router_model: "google/gemini-2.5-flash"    # Cheap model for classification + memory extraction
  smart_model: "google/gemini-2.5-pro"       # Expensive model for complex queries + media
  default_language: "italian"                # Default response language for new users
  providers:
    openrouter:
      api_key: "YOUR_OPENROUTER_API_KEY"
    anthropic:
      api_key: ""
    kimi:
      api_key: ""
```

Supported providers and models:
| Provider | Models |
|---|---|
| openrouter | google/gemini-2.5-flash, google/gemini-2.5-pro, anthropic/claude-sonnet-4-20250514, openai/gpt-4o |
| claude | claude-sonnet-4-20250514, claude-haiku-4-5-20251001 |
| kimi | kimi-for-coding, kimi-k2-thinking-turbo |
How routing works: every incoming message is first classified by the `router_model` (cheap, fast) into one of three categories:

- SIMPLE -- handled by the `router_model` itself (greetings, basic questions, simple facts)
- COMPLEX -- escalated to the `smart_model` (workout plans, meal plans, detailed advice)
- OFF-TOPIC -- rejected with a localized refusal message

Messages with media (photos/videos) always use the `smart_model`.
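As a compact sketch, the selection rule above amounts to something like the following. All names here are illustrative, not the actual gateway code, and the precedence of media over the off-topic verdict is an assumption of this sketch.

```typescript
// Hypothetical sketch of the routing decision: media always escalates
// to the smart model; otherwise the router model's classification decides.

type Classification = "SIMPLE" | "COMPLEX" | "OFF_TOPIC";

interface RoutingConfig {
  router_model: string; // e.g. "google/gemini-2.5-flash"
  smart_model: string;  // e.g. "google/gemini-2.5-pro"
}

// Returns the model that should answer, or null for a localized refusal.
function pickModel(
  classification: Classification,
  hasMedia: boolean,
  cfg: RoutingConfig,
): string | null {
  if (hasMedia) return cfg.smart_model;            // photos/videos always go smart
  if (classification === "OFF_TOPIC") return null; // refuse in the user's language
  return classification === "COMPLEX" ? cfg.smart_model : cfg.router_model;
}
```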
```yaml
api:
  enabled: true   # Enable the local HTTP API
  port: 3000      # API port
  user_map:
    local: 0      # Map the "local" API user to a Telegram user ID
```

The `user_map` connects API user keys to Telegram user IDs so they share conversation history and memories. Set the value to your actual Telegram user ID to link them. See docs/api.md for full API documentation.
```yaml
mcp:
  search:
    enabled: true
    package: "duckduckgo-mcp-server"
    command: ["python", "-m", "duckduckgo_mcp_server.server"]
```

Each MCP server entry has:

- `enabled` -- toggle the server without removing its config
- `package` -- pip package name, installed dynamically at bridge startup
- `command` -- how to spawn the MCP server process (stdio transport)
See docs/mcp.md for detailed MCP documentation, including how to add new servers.
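For instance, a second server could be enabled with an entry of the same shape. The package and module names below are purely illustrative placeholders, not a real pip package:

```yaml
mcp:
  search:
    enabled: true
    package: "duckduckgo-mcp-server"
    command: ["python", "-m", "duckduckgo_mcp_server.server"]
  # Hypothetical second server -- package/module names are placeholders
  example_tool:
    enabled: true
    package: "example-mcp-server"
    command: ["python", "-m", "example_mcp_server"]
```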
| Command | Description |
|---|---|
| `/start` | Welcome message and feature overview |
| `/help` | List all available commands |
| `/profile` | View your fitness profile (built from conversation) |
| `/provider <name> [model]` | Switch LLM provider and model |
| `/language <name>` | Set your preferred response language |
| `/memories` | View what PITI remembers about you |
| `/reset` | Clear conversation history (memories are preserved) |
| `/status` | View agent status, token usage stats, and MCP service info |
Supported languages: english, italian, french, spanish, german, portuguese, chinese, japanese, korean, russian, arabic.
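A minimal sketch of how per-language refusal messages might be looked up, with a fallback to the configured default language. The map, strings, and function here are invented for illustration and are not the bot's actual messages:

```typescript
// Hypothetical sketch: per-language refusal messages with a fallback
// to the configured default language. Strings are illustrative only.

const REFUSALS: Record<string, string> = {
  english: "I can only help with fitness, nutrition, and health topics.",
  italian: "Posso aiutarti solo con fitness, nutrizione e salute.",
  // ...one entry per supported language in the real bot
};

function refusalFor(language: string, defaultLanguage = "english"): string {
  return REFUSALS[language] ?? REFUSALS[defaultLanguage];
}
```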
PITI includes a local HTTP API for testing and building alternative frontends. See docs/api.md for the full reference.
Quick example:
```bash
# Health check
curl http://localhost:3000/health

# Send a message
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Create a 3-day workout plan for a beginner"}'

# Check user status
curl http://localhost:3000/status/local
```

```bash
# Start infrastructure
docker compose up -d postgres redis

# Build shared types
pnpm --filter @piti/shared build

# Run gateway with hot reload
pnpm dev:gateway
```

```bash
pnpm test
```

Tests use Vitest and cover guards, system prompt generation, config validation, auth middleware, language detection, and provider configuration.
piti/
├── config.example.yaml # Configuration template
├── config.yaml # Your local config (gitignored)
├── docker-compose.yml # PostgreSQL, Redis, MCP Bridge
├── ecosystem.config.cjs # PM2 process config
├── vitest.config.ts # Test configuration
├── scripts/
│ ├── init-db.sql # Database schema setup
│ └── cleanup-containers.sh # Remove stale agent containers
├── docs/
│ ├── api.md # HTTP API documentation
│ └── mcp.md # MCP integration documentation
├── packages/
│ ├── shared/ # Shared types and utilities
│ │ └── src/
│ │ ├── types/
│ │ │ ├── agent.ts # AgentRequest, AgentResponse, TokenUsage, McpCall
│ │ │ ├── config.ts # GatewayConfig, AgentEnv, LLM_PROVIDERS, LLM_MODELS
│ │ │ ├── message.ts # ChatMessage, Memory
│ │ │ └── user.ts # UserProfile
│ │ └── utils/
│ │ ├── env.ts # Environment validation
│ │ └── logger.ts # Structured logging
│ ├── gateway/ # Main process (Telegram + API + orchestration)
│ │ └── src/
│ │ ├── index.ts # Entry point: loads config, starts all services
│ │ ├── api/
│ │ │ └── server.ts # Local HTTP API (Fastify + CORS)
│ │ ├── bot/
│ │ │ ├── bot.ts # Telegraf bot setup
│ │ │ ├── handlers/
│ │ │ │ ├── command.ts # /start, /help, /provider, /language, etc.
│ │ │ │ └── message.ts # Text, photo, and video message handling
│ │ │ └── middleware/
│ │ │ └── auth.ts # User allowlist middleware
│ │ ├── db/
│ │ │ ├── client.ts # Drizzle ORM client
│ │ │ └── schema.ts # DB schema definitions
│ │ └── orchestrator/
│ │ ├── containerManager.ts # Docker container lifecycle + health checks
│ │ ├── dispatcher.ts # Request pipeline (user, history, dispatch, save)
│ │ └── mcpManager.ts # MCP bridge container management
│ ├── agent/ # AI agent (runs inside Docker containers)
│ │ └── src/
│ │ ├── index.ts # Entry point
│ │ ├── server.ts # Fastify HTTP server (/health, /chat)
│ │ ├── agent/
│ │ │ ├── trainer.ts # Chat handler with routing + memory extraction
│ │ │ ├── guard.ts # Off-topic detection (heuristic patterns)
│ │ │ └── systemPrompt.ts # Dynamic system prompt with user context
│ │ ├── llm/
│ │ │ └── provider.ts # LLM provider factory (Anthropic, OpenRouter, Kimi)
│ │ └── mcp/
│ │ └── client.ts # MCP Bridge HTTP client, AI SDK tool generation
│ └── mcp-bridge/ # Python MCP server bridge
│ ├── main.py # FastAPI app: spawns MCP servers, exposes HTTP
│ ├── requirements.txt # Python dependencies
│ └── Dockerfile
└── tests/
└── unit/ # Vitest unit tests
├── guard.test.ts
├── systemPrompt.test.ts
├── config.test.ts
├── auth.test.ts
├── language.test.ts
└── provider.test.ts
PITI uses PostgreSQL 16 with pgvector. Schema is initialized from scripts/init-db.sql.
| Table | Purpose |
|---|---|
| `users` | User profiles, LLM preferences, language settings |
| `messages` | Conversation history (user + assistant turns) |
| `memories` | Long-term facts per user, categorized (goal, injury, preference, etc.), with optional vector embedding |
| `token_usage` | Per-call token counts by provider, model, and purpose (chat / classification / memory_extraction) |
| `mcp_calls` | MCP tool invocations with arguments, timing, and server info |
Private.