
feat: Add Ollama/Local Model Provider Support#158

Closed
doozie-akshay wants to merge 8 commits into Kilo-Org:dev from doozie-akshay:feature/154-ollama-provider

Conversation

@doozie-akshay

Summary

This PR adds Ollama as a first-class provider, enabling users to run local AI models alongside cloud providers.

Changes

Core Implementation

  • packages/opencode/src/provider/models.ts: Dynamic Ollama provider injection with model fetching
  • packages/opencode/src/provider/provider.ts: Fixed model options merging (baseURL/apiKey now properly passed to SDK)
  • packages/opencode/src/cli/cmd/auth.ts: Interactive Ollama setup command

Features

✅ Auto-detect local Ollama at http://localhost:11434
✅ Support remote Ollama via SSH tunnel or direct connection
✅ API key authentication for secured instances
✅ Dual endpoint support (/api/tags native + /v1/models OpenAI-compatible)
✅ Always-visible provider with dynamic model fetching
✅ Interactive setup via kilo auth login ollama
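
The dual-endpoint model discovery described above could be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: the helper names (`fetchOllamaModels`, the parse functions) and exact fallback order are assumptions, though the two response shapes match Ollama's documented `/api/tags` and OpenAI-compatible `/v1/models` endpoints.

```typescript
// Response shape of Ollama's native GET /api/tags
interface NativeTagsResponse { models: { name: string }[] }
// Response shape of the OpenAI-compatible GET /v1/models
interface OpenAIModelsResponse { data: { id: string }[] }

function parseNativeTags(body: NativeTagsResponse): string[] {
  return body.models.map((m) => m.name);
}

function parseOpenAIModels(body: OpenAIModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// Hypothetical helper: try the native endpoint first, then fall back to the
// OpenAI-compatible one, sending a Bearer token for secured instances.
async function fetchOllamaModels(baseURL: string, apiKey?: string): Promise<string[]> {
  const headers: Record<string, string> = apiKey ? { Authorization: `Bearer ${apiKey}` } : {};
  try {
    const res = await fetch(`${baseURL}/api/tags`, { headers });
    if (res.ok) return parseNativeTags(await res.json());
  } catch {
    // Network error on the native endpoint: fall through to /v1/models.
  }
  const res = await fetch(`${baseURL}/v1/models`, { headers });
  if (!res.ok) throw new Error(`Ollama model listing failed: ${res.status}`);
  return parseOpenAIModels(await res.json());
}
```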

Usage

Quick Start (Local):

```sh
kilo
# Select Ollama from the provider list
```

Remote with Auth:

Option 1: Interactive setup:

```sh
kilo auth login ollama
```

Option 2: Config file (~/.opencode/config.json):

```json
{
  "provider": {
    "ollama": {
      "options": {
        "baseURL": "http://127.0.0.1:11434",
        "apiKey": "sk-your-key"
      }
    }
  }
}
```
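
Settings can come from the config file above or from environment variables (`OLLAMA_HOST`, `OLLAMA_API_KEY`, both mentioned in the commit messages below). A minimal sketch of the resolution, assuming config takes precedence over the environment (the real precedence in the PR is not stated):

```typescript
interface OllamaSettings { baseURL: string; apiKey?: string }

// Hypothetical helper: resolve connection settings, preferring explicit
// config, then environment variables, then the default local address.
function resolveOllamaSettings(
  config: { baseURL?: string; apiKey?: string },
  env: Record<string, string | undefined>, // e.g. process.env
): OllamaSettings {
  return {
    baseURL: config.baseURL ?? env.OLLAMA_HOST ?? "http://localhost:11434",
    apiKey: config.apiKey ?? env.OLLAMA_API_KEY,
  };
}
```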

Testing

✅ Tested with local Ollama (macOS)
✅ Tested with remote Ollama via SSH tunnel
✅ Tested with API key authentication
✅ All typecheck tests pass

Documentation

  • Added comprehensive docs at packages/opencode/docs/providers/ollama.md
  • Includes local setup, remote configuration, and troubleshooting

Related Issue

Closes #154

Notes

  • Uses kilocode_change markers for upstream compatibility
  • Minimal changes to shared code paths
  • Follows existing provider patterns

Commit messages:

Implements local LLM support via Ollama integration:
- Auto-detects a running Ollama instance at localhost:11434
- Fetches available models dynamically from the /api/tags endpoint
- Supports custom baseURL configuration via OLLAMA_HOST env or config
- Zero cost for all Ollama models (local inference)
- Comprehensive documentation and test coverage

Closes Kilo-Org#154

- Support API key authentication via OLLAMA_API_KEY env or config
- Try both /api/tags (native) and /v1/models (OpenAI-compatible) endpoints
- Handle baseURL with or without the /v1 suffix
- Better error handling for remote/secured instances

This enables using Ollama behind reverse proxies or via SSH tunnels with authentication.
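
Handling a baseURL "with or without the /v1 suffix" amounts to normalizing the configured URL into both the native root and the OpenAI-compatible prefix. A sketch under that assumption (the helper name is hypothetical):

```typescript
// Normalize a configured Ollama base URL: strip trailing slashes and an
// optional trailing /v1, then derive both endpoint roots from it.
function normalizeOllamaBaseURL(raw: string): { root: string; openai: string } {
  const trimmed = raw.replace(/\/+$/, "");
  const root = trimmed.endsWith("/v1") ? trimmed.slice(0, -3) : trimmed;
  return { root, openai: `${root}/v1` };
}
```

With this, `http://localhost:11434`, `http://localhost:11434/`, and `http://localhost:11434/v1` all resolve to the same pair of endpoints.
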

- Always register the Ollama provider with default models (llama3.2, llama3.1, mistral)
- Dynamically fetch the actual models in the background after connection
- Prevents repeated API key prompts
- Better UX: the provider is visible immediately; models update after auth
- Works both with and without API key configuration
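
The "defaults first, refresh in background" pattern this commit describes can be sketched as below. The class shape and `refresh` signature are assumptions for illustration, not the PR's actual registry code:

```typescript
// Default models shown immediately, per the commit message above.
const DEFAULT_MODELS = ["llama3.2", "llama3.1", "mistral"];

class OllamaProvider {
  models: string[] = [...DEFAULT_MODELS]; // provider is visible right away

  // Swap in the real model list once it arrives; on failure (instance down,
  // auth missing) keep the current list so the provider entry never blanks.
  async refresh(list: () => Promise<string[]>): Promise<void> {
    try {
      const fetched = await list();
      if (fetched.length > 0) this.models = fetched;
    } catch {
      // leave the existing list in place
    }
  }
}
```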

When running 'kilo auth login ollama', users are now prompted for:

1. Host URL (e.g., http://localhost:11434 or a remote address)
2. Whether an API key is required
3. The API key (if required)

Configuration is saved to both auth.json and opencode.json for easy setup. This improves the UX for configuring remote or secured Ollama instances.

- Add a section about the 'kilo auth login ollama' command
- Include examples for local and remote setups
- Document the interactive prompts for host and API key

- When baseURL is set in config, synchronously fetch the real models
- Only use default models for unconfigured localhost instances
- Fix config loading to use the correct path
- Remote models now populate immediately instead of showing defaults

Model options (including baseURL) were not being passed to the AI SDK, so API calls went to the wrong endpoint, missing the /v1 prefix.

Changed the options merge from:

  const options = { ...provider.options }

to:

  const options = { ...provider.options, ...model.options }

This ensures Ollama's baseURL and apiKey are properly used.
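
The effect of that one-line fix follows from spread order in object literals: later spreads override earlier ones where keys overlap. A minimal standalone illustration (the option values are made up; only `baseURL`/`apiKey` come from the PR's description):

```typescript
interface SdkOptions { baseURL?: string; apiKey?: string }

// Provider-level defaults and per-model overrides (illustrative values).
const provider: { options: SdkOptions } = { options: { baseURL: "https://api.example.com" } };
const model: { options: SdkOptions } = {
  options: { baseURL: "http://127.0.0.1:11434/v1", apiKey: "sk-local" },
};

// Before the fix: model.options dropped entirely, so the SDK saw the
// provider default and called the wrong endpoint.
const beforeFix: SdkOptions = { ...provider.options };

// After the fix: model.options spread last, so its keys win.
const afterFix: SdkOptions = { ...provider.options, ...model.options };
```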
@doozie-akshay
Author

This PR is linked to issue #154 which already covers this feature request.

@markijbema
Contributor

Hi! Thank you for taking the time to contribute to this project—we really appreciate it. 🙏

We are currently re-platforming the core of our VS Code and JetBrains extensions onto our new Kilo CLI, a complete rebuild with OpenCode as its foundation, and the moment has come to promote that work to the main repository. To do that, we moved the code from this repository to the kilocode repository.

This unfortunately means we can no longer merge this branch here. Please add https://github.com/Kilo-Org/kilocode.git as a remote, push your branch there, and open a new PR in https://github.com/Kilo-Org/kilocode. We cannot do this for you, as the PR would then no longer be in your name. If you need any help, feel free to ask on our Discord in #kilo-dev-contributors.

Sorry for the inconvenience and thank you for contributing to Kilo!

@markijbema markijbema closed this Feb 22, 2026
