feat: Add Ollama/Local Model Provider Support#158
doozie-akshay wants to merge 8 commits into Kilo-Org:dev
Conversation
Implements local LLM support via Ollama integration:
- Auto-detects running Ollama instance at localhost:11434
- Fetches available models dynamically from /api/tags endpoint
- Supports custom baseURL configuration via OLLAMA_HOST env or config
- Zero cost for all Ollama models (local inference)
- Comprehensive documentation and test coverage

Closes Kilo-Org#154
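A minimal sketch of the detection flow described in this commit, assuming Node's global `fetch`; the function name `detectOllamaModels` is hypothetical and not necessarily what the PR uses:

```ts
// Sketch only: probes a local Ollama instance and lists its pulled models.
async function detectOllamaModels(
  baseURL: string = process.env["OLLAMA_HOST"] ?? "http://localhost:11434",
): Promise<string[]> {
  try {
    // Native Ollama endpoint that lists locally available models.
    const res = await fetch(`${baseURL}/api/tags`)
    if (!res.ok) return []
    const body = (await res.json()) as { models?: { name: string }[] }
    return (body.models ?? []).map((m) => m.name)
  } catch {
    // No instance reachable at this address.
    return []
  }
}
```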
- Support API key authentication via OLLAMA_API_KEY env or config
- Try both /api/tags (native) and /v1/models (OpenAI-compatible) endpoints
- Handle baseURL with or without /v1 suffix
- Better error handling for remote/secured instances

This enables using Ollama behind reverse proxies or via SSH tunnels with authentication.
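One way the dual-endpoint fallback and API key handling could look, sketched with illustrative names; the actual implementation in the PR may differ:

```ts
// Sketch: try the native endpoint first, then the OpenAI-compatible one.
// Works whether the configured baseURL ends in /v1 or not.
async function listModels(rawBaseURL: string, apiKey?: string): Promise<string[]> {
  const base = rawBaseURL.replace(/\/v1\/?$/, "")
  const headers: Record<string, string> = apiKey ? { Authorization: `Bearer ${apiKey}` } : {}

  // Native Ollama endpoint.
  const native = await fetch(`${base}/api/tags`, { headers }).catch(() => undefined)
  if (native?.ok) {
    const body = (await native.json()) as { models?: { name: string }[] }
    return (body.models ?? []).map((m) => m.name)
  }

  // OpenAI-compatible endpoint, useful behind reverse proxies that only expose /v1.
  const compat = await fetch(`${base}/v1/models`, { headers }).catch(() => undefined)
  if (compat?.ok) {
    const body = (await compat.json()) as { data?: { id: string }[] }
    return (body.data ?? []).map((m) => m.id)
  }

  throw new Error(`Unable to reach Ollama at ${base}`)
}
```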
- Always register Ollama provider with default models (llama3.2, llama3.1, mistral)
- Dynamically fetch actual models in background after connection
- Prevents repeated API key prompts
- Better UX: provider visible immediately, models update after auth
- Works both with and without API key configuration
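Roughly, the always-visible registration could be structured like the following sketch, reusing the `listModels` helper from the previous sketch; `register` and `update` are illustrative callbacks, not the provider API in this PR:

```ts
// Sketch: register immediately with defaults, then refresh in the background.
const DEFAULT_OLLAMA_MODELS = ["llama3.2", "llama3.1", "mistral"]

function registerOllamaProvider(
  register: (models: string[]) => void,
  update: (models: string[]) => void,
) {
  // Provider shows up in the list right away, even before any request succeeds.
  register(DEFAULT_OLLAMA_MODELS)

  // Swap in the real model list once it arrives; failures keep the defaults
  // instead of re-prompting for an API key.
  void listModels(
    process.env["OLLAMA_HOST"] ?? "http://localhost:11434",
    process.env["OLLAMA_API_KEY"],
  )
    .then((models) => {
      if (models.length > 0) update(models)
    })
    .catch(() => {})
}
```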
When running `kilo auth login ollama`, users are now prompted for:
1. Host URL (e.g., http://localhost:11434 or remote address)
2. Whether API key is required
3. API key (if required)

Configuration is saved to both auth.json and opencode.json for easy setup. Improves UX for configuring remote/secured Ollama instances.
- Add section about 'kilo auth login ollama' command
- Include examples for local and remote setups
- Document the interactive prompts for host and API key
- When baseURL is set in config, synchronously fetch real models
- Only use default models for unconfigured localhost instances
- Fix config loading to use correct path
- Remote models now populate immediately instead of showing defaults
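The conditional described here might look something like this sketch, again reusing `listModels` and `DEFAULT_OLLAMA_MODELS` from the earlier sketches; names and the config shape are assumptions:

```ts
// Sketch: configured instances are resolved up front; bare localhost keeps defaults.
async function resolveOllamaModels(config: { baseURL?: string; apiKey?: string }): Promise<string[]> {
  if (config.baseURL) {
    // Explicit baseURL in config: fetch the real model list immediately (awaited here).
    return listModels(config.baseURL, config.apiKey ?? process.env["OLLAMA_API_KEY"])
  }
  // Unconfigured localhost instance: defaults now, background refresh later.
  return DEFAULT_OLLAMA_MODELS
}
```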
Model options (including baseURL) were not being passed to the AI SDK, which caused API calls to go to the wrong endpoint without the `/v1` prefix.

Changed the options merge from:

`const options = { ...provider.options }`

to:

`const options = { ...provider.options, ...model.options }`

This ensures Ollama's `baseURL` and `apiKey` are properly used.
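For context, here is how the merged options would typically feed into an OpenAI-compatible client; this sketch uses `createOpenAI` from `@ai-sdk/openai` and stand-in config objects, and is not necessarily the exact SDK wiring in this PR:

```ts
import { createOpenAI } from "@ai-sdk/openai"

// Illustrative stand-ins for the loaded provider/model config objects.
const provider = { options: { baseURL: "http://localhost:11434/v1" } }
const model = { id: "llama3.2", options: { apiKey: process.env["OLLAMA_API_KEY"] } }

// The fix: model-level options now override provider-level ones.
const options = { ...provider.options, ...model.options }

const ollama = createOpenAI({
  baseURL: options.baseURL, // without the /v1 suffix, requests miss the OpenAI-compatible routes
  apiKey: options.apiKey ?? "ollama", // local Ollama ignores the key, but the SDK expects one
})

const languageModel = ollama(model.id)
```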
This PR is linked to issue #154, which already covers this feature request.
Hi! Thank you for taking the time to contribute to this project—we really appreciate it. 🙏

We are currently working on re-platforming the core of our VS Code and JetBrains extensions to be based on our new Kilo CLI, with a complete rebuild based on OpenCode as our new foundation, and the moment has come to promote this repository to become the main repository. To do that, we moved the code from this repository to the kilocode repository. This unfortunately means we cannot merge this branch here anymore.

Please add https://github.com/Kilo-Org/kilocode.git as a remote, push your branch there, and create a new PR in https://github.com/Kilo-Org/kilocode. We unfortunately cannot do this for you, as then the PR would not be in your name anymore. If you need any help, feel free to ask on our Discord in #kilo-dev-contributors.

Sorry for the inconvenience, and thank you for contributing to Kilo!
Summary
This PR adds Ollama as a first-class provider, enabling users to run local AI models alongside cloud providers.
Changes
Core Implementation
Features
✅ Auto-detect local Ollama at `http://localhost:11434`
✅ Support remote Ollama via SSH tunnel or direct connection
✅ API key authentication for secured instances
✅ Dual endpoint support (`/api/tags` native + `/v1/models` OpenAI-compatible)
✅ Always-visible provider with dynamic model fetching
✅ Interactive setup via `kilo auth login ollama`

Usage
Quick Start (Local):
`kilo`  # Select Ollama from provider list

Remote with Auth:
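An example of the remote setup, assuming the environment variables named in the commits (`OLLAMA_HOST`, `OLLAMA_API_KEY`); the interactive route via `kilo auth login ollama` achieves the same result:

```bash
# Remote, secured instance (sketch; host and key are placeholders)
export OLLAMA_HOST="https://ollama.example.com"
export OLLAMA_API_KEY="<your-api-key>"
kilo   # Ollama appears in the provider list with the remote models

# Or configure interactively (prompts for host and optional API key):
kilo auth login ollama
```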
Testing
✅ Tested with local Ollama (macOS)
✅ Tested with remote Ollama via SSH tunnel
✅ Tested with API key authentication
✅ All typecheck tests pass
Documentation
packages/opencode/docs/providers/ollama.md

Related Issue
Closes #154
Notes
`kilocode_change` markers are used for upstream compatibility