feat(openclaw): turnkey Open WebUI integration via HTTP API #438
nt1412 wants to merge 5 commits into Light-Heart-Labs:main
Conversation
Enable OpenClaw agents as a selectable model in Open WebUI with zero manual configuration. When the openclaw extension is enabled, users see an "openclaw" model in the Open WebUI dropdown alongside the direct LLM — providing file I/O, shell commands, code execution, and sub-agent capabilities through the familiar chat interface.

Changes:

inject-token.js:
- Enable /v1/chat/completions HTTP endpoint on the gateway
- Spawn OpenAI-compat shim on port 18790 (serves /v1/models, proxies chat completions to the gateway) — needed because OpenClaw doesn't natively serve /v1/models, which Open WebUI requires
- Fix provider baseUrl in merged config using OLLAMA_URL env var (resolves macOS, where the LLM runs natively on the host, not in Docker)
- Add OPENCLAW_LLM_URL support for optional Token Spy monitoring
- Log Control UI URL with token for Docker users

compose.yaml:
- Update entrypoint to use merged config with HTTP API
- Pass OPENCLAW_LLM_URL env var into container
- Add open-webui overlay: sets OPENAI_API_BASE_URLS (plural) with both llama-server and the OpenClaw shim as backends

manifest.yaml:
- Add apple to gpu_backends (was [amd, nvidia], now includes macOS)

Tested on macOS M4 (32GB) with Qwen3-8B:
- File write + read: proven
- Shell command execution: proven
- Write + execute code: proven
- Sub-agent spawn: proven (slow on 8B, ~3 min due to serialized LLM)

Known requirement: CTX_SIZE must be >= 32768 for OpenClaw agent prompts. The default 16384 causes "Context size exceeded" errors.

TODO: Sub-agents need a larger model or multiple LLM instances for responsive performance. A single 8B model serializes main + sub-agent turns, causing timeouts on complex parallel tasks.

Cross-platform: all changes run inside Docker; no host OS dependency.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Lightheartdevs
left a comment
Review: Needs Work
The integration approach is reasonable, but a few things need addressing:
1. Unconditional HTTP endpoint enablement
The PR forces config.gateway.http.endpoints.chatCompletions = { enabled: true } for ALL OpenClaw installs, not just those wanting Open WebUI integration. This should be opt-in — gate it behind an env var like OPENCLAW_HTTP_API=true.
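The requested gate could look something like this (a sketch only — function names and config shape are assumptions, not the PR's actual code):

```javascript
// Opt-in gate: the HTTP endpoint is only enabled when the user
// explicitly sets OPENCLAW_HTTP_API=true; unset or "false" means off.
function httpApiEnabled(env) {
  return env.OPENCLAW_HTTP_API === 'true';
}

// Apply the endpoint to the merged config only when opted in,
// leaving the config untouched otherwise.
function applyHttpApi(config, env) {
  if (!httpApiEnabled(env)) return config;
  config.gateway = config.gateway || {};
  config.gateway.http = config.gateway.http || {};
  config.gateway.http.endpoints = config.gateway.http.endpoints || {};
  config.gateway.http.endpoints.chatCompletions = { enabled: true };
  return config;
}
```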
2. No supervision for the openai-shim
The Node.js proxy on port 18790 is spawned with child.unref(). If it crashes, Open WebUI integration silently breaks with no indication to the user. Consider either:
- A healthcheck in compose.yaml that hits the shim
- A restart loop wrapper
- At minimum, logging the crash
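The healthcheck option could be sketched roughly like this (service name, probe tool, and timings are illustrative assumptions, not the repo's actual compose.yaml):

```yaml
# Hypothetical compose.yaml fragment: probe the shim so a crash is visible.
services:
  openclaw:
    healthcheck:
      # If the shim dies, the probe fails and Docker marks the
      # container unhealthy, surfacing the breakage to the user.
      test: ["CMD", "curl", "-fsS", "http://localhost:18790/v1/models"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: unless-stopped
```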
3. Unrelated change bundled
Adding apple to gpu_backends in manifest.yaml is unrelated to the HTTP API feature. Should be a separate commit/PR.
What's good
- Security model is sound: shim is container-internal only (no host port mapping), API key auth preserved
- The inject-token.js pattern for config merging is clean
- Open WebUI env var wiring is correct
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…anifest

Addresses all three points from maintainer review:

1. HTTP API is opt-in via the OPENCLAW_HTTP_API env var (default: true in compose, so it's turnkey when the extension is enabled, but overridable with OPENCLAW_HTTP_API=false). Both inject-token.js Parts 1/3/4 check the flag before enabling chatCompletions or starting the shim.
2. Healthcheck now hits the shim (:18790) when the HTTP API is enabled, and falls back to the gateway (:18789) when disabled. If the shim crashes, Docker marks the container unhealthy — visible in the dashboard. A container restart recovers the shim automatically.
3. Reverted apple from gpu_backends in manifest.yaml — will be filed as a separate PR.

Tested:
- OPENCLAW_HTTP_API=true: shim starts, /v1/models works, chat works
- Shim killed: healthcheck fails → container goes unhealthy
- Container restart: shim respawns → healthy again
- Auth rejection: 401 on wrong token
- Opt-out: OPENCLAW_HTTP_API != 'true' → no shim, no HTTP endpoint

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Maintainer asked for opt-in, not opt-out. Users must explicitly set OPENCLAW_HTTP_API=true in .env to enable the Open WebUI integration. Without it, OpenClaw runs normally via Control UI with no shim. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Log said "created merged config with HTTP API" even when HTTP API was not enabled. Changed to neutral "created merged config". Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
nt1412
left a comment
Thanks for the review. All three points addressed:
1. Opt-in gating: HTTP API + shim only activate when OPENCLAW_HTTP_API=true is explicitly set. Default is empty (off). OpenClaw runs normally without it.
2. Shim supervision: Healthcheck now hits the shim (:18790) when HTTP API is enabled, falls back to gateway (:18789) when disabled. Shim crash → container goes unhealthy → restart recovers it. Tested the full cycle.
3. Manifest split: Reverted apple from gpu_backends — will file separately.
Truth table
| openclaw enabled | OPENCLAW_HTTP_API | HTTP endpoint | Shim | Open WebUI sees openclaw | Healthcheck target | Result |
|---|---|---|---|---|---|---|
| No | — | off | off | no | — | No container |
| Yes | unset | off | off | no | :18789 (gateway) | Control UI only |
| Yes | false | off | off | no | :18789 (gateway) | Control UI only |
| Yes | true | on | on | yes | :18790 (shim) | Full integration |
Addresses maintainer's three suggestions for shim supervision:
1. Healthcheck: compose healthcheck hits :18790 when HTTP API enabled
(already in previous commit)
2. Restart loop: server.on('error') retries up to 5 times with
exponential backoff (handles EADDRINUSE, socket errors)
3. Crash logging: uncaughtException and SIGTERM handlers log to
stderr (visible in docker logs)
Tested full cycle: startup → kill → logged → unhealthy → restart → recovered.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
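The retry-with-backoff behavior described in point 2 above can be sketched as follows (an assumed shape, not the commit's literal code; constants and function names are illustrative):

```javascript
// Sketch: on a bind error such as EADDRINUSE, retry listen() up to
// MAX_RETRIES times with exponential backoff, logging each failure
// to stderr so it appears in `docker logs`.
const MAX_RETRIES = 5;
const BASE_DELAY_MS = 500;

// Exponential backoff: 500ms, 1s, 2s, 4s, 8s for attempts 0..4.
function backoffDelay(attempt) {
  return BASE_DELAY_MS * 2 ** attempt;
}

function listenWithRetry(server, port, attempt = 0) {
  server.once('error', (err) => {
    if (attempt + 1 >= MAX_RETRIES) {
      console.error(`shim: giving up after ${MAX_RETRIES} attempts:`, err.code);
      process.exit(1);
    }
    const delay = backoffDelay(attempt);
    console.error(`shim: listen failed (${err.code}), retrying in ${delay}ms`);
    setTimeout(() => listenWithRetry(server, port, attempt + 1), delay);
  });
  server.listen(port);
}

// Crash logging so failures are visible in `docker logs`:
process.on('uncaughtException', (err) => {
  console.error('shim: uncaught exception:', err);
  process.exit(1);
});
process.on('SIGTERM', () => {
  console.error('shim: SIGTERM received, shutting down');
  process.exit(0);
});
```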
… from #438) (#616)

Resolves merge conflicts from #438 against current main (post-Lemonade). Original work by @nt1412.

- OpenAI-compatible shim on port 18790 serves /v1/models and proxies /v1/chat/completions to the OpenClaw gateway
- Open WebUI auto-discovers "openclaw" as a selectable model
- Opt-in via OPENCLAW_HTTP_API=true env var
- OPENCLAW_LLM_URL override for Token Spy monitoring
- Merged config created at /tmp/openclaw-config.json
- Preserves all Lemonade changes (OLLAMA_URL, LITELLM_KEY patching)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>



Summary
When the OpenClaw extension is enabled, OpenClaw agents automatically appear as a selectable `openclaw` model in Open WebUI — zero manual configuration. Users get file I/O, shell commands, code execution, and sub-agent capabilities through the familiar chat interface.

- inject-token.js: enable the `/v1/chat/completions` HTTP endpoint, spawn the OpenAI-compat shim (serves `/v1/models`, which OpenClaw doesn't natively expose), fix provider `baseUrl` for macOS/native LLM, add `OPENCLAW_LLM_URL` for Token Spy monitoring
- compose.yaml: `open-webui` overlay adds OpenClaw as a second backend via `OPENAI_API_BASE_URLS`
- manifest.yaml: add `apple` to `gpu_backends` (was `[amd, nvidia]` — macOS was excluded)

Cross-platform: all changes run inside Docker containers, no host OS dependency.

Proven use cases (impossible with raw LLM)
- Wrote `proof.txt`, read back contents
- Ran `ls | wc -l`, returned file count
- Wrote `sum.py`, ran it, returned `5050`

Known limitations
- `CTX_SIZE=16384` causes "Context size exceeded" errors; OpenClaw agent prompts need a larger context.
- `/v1/models` shim: OpenClaw doesn't natively serve `/v1/models` (only `/v1/chat/completions`). A 15-line Node.js shim inside the container bridges this gap. Will become unnecessary if OpenClaw adds native support upstream.
- `OPENAI_API_BASE_URLS` is only read on first DB creation. Existing installs need to add the OpenClaw connection via Admin UI > Settings > Connections.

Test plan
- Verify the `openclaw` model appears in the Open WebUI dropdown
- Chat with the `openclaw` model, verify the agent responds

🤖 Generated with Claude Code