Releases: BlockRunAI/ClawRouter
v0.12.92 — Fix multi-turn chat for reasoning models (continue.dev #135)
Bug Fix: Existing chat always fails in continue.dev (#135)
Root cause
moonshot/kimi-k2.5 (primary MEDIUM-tier model in blockrun/auto) is a reasoning model that requires reasoning_content on all assistant messages in multi-turn history — not just tool-call messages. When continue.dev sent an existing chat, the plain-text assistant message from the previous turn was missing reasoning_content, causing a 400 from the model.
Since that 400 didn't match any `PROVIDER_ERROR_PATTERNS`, `isProviderError` was false and the fallback loop broke on the first attempt. All models failed → SSE error sent → the OpenAI SDK in continue.dev threw "Unexpected error".
New chats (no assistant history) were unaffected — only existing chats broke.
Fixes
- `normalizeMessagesForThinking` — now adds `reasoning_content: ""` to all assistant messages (not just tool-call ones) when targeting a reasoning model
- SSE error format — error events now always use the `{"error":{...}}` OpenAI wrapper; raw upstream JSON was previously forwarded as-is, hiding the real error message
- `PROVIDER_ERROR_PATTERNS` — added `reasoning_content.*missing` as a safety net for proper fallback
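The normalization fix can be sketched roughly as follows. Only the function name and the `reasoning_content: ""` backfill come from the notes above; the message type and signature are assumptions for illustration:

```typescript
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  reasoning_content?: string;
  tool_calls?: unknown[];
};

// Reasoning models (e.g. moonshot/kimi-k2.5) reject multi-turn history in
// which an assistant message lacks reasoning_content, so backfill an empty
// string on every assistant message, not just tool-call ones.
function normalizeMessagesForThinking(
  messages: ChatMessage[],
  isReasoningModel: boolean
): ChatMessage[] {
  if (!isReasoningModel) return messages;
  return messages.map((m) =>
    m.role === "assistant" && m.reasoning_content === undefined
      ? { ...m, reasoning_content: "" }
      : m
  );
}
```

Non-assistant messages and non-reasoning models pass through untouched, which keeps the transform safe to apply to every request.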
Verification
- E2E: 3-turn SSE streaming test passed (turn 2 was the broken case)
- Unit: 7 new regression tests for `normalizeMessagesForThinking`
- Full suite: 364/364 passing
Update
npx @blockrun/clawrouter@latest
v0.12.90 — Fix empty-turn fallback for eco/agentic requests
What's Fixed
Empty turn responses now trigger model fallback
Problem: Under the eco profile (and sometimes auto), agentic clients like Roo Code would receive silent empty responses — the model returned HTTP 200 with no content and no tool calls, just finish_reason: stop. The proxy treated this as success and forwarded the empty turn to the client, causing the agent to loop or stall.
Root cause: Models like gemini-3.1-flash-lite sometimes refuse complex agentic requests (large Roo Code tool schemas) by producing a zero-output response instead of an error. ClawRouter's degraded-response detector didn't catch this pattern, so it never fell back to the next model.
Fix: `detectDegradedSuccessResponse` now flags responses where:
- `choices[0].message.content` is empty, and
- no `tool_calls`, and
- `finish_reason === "stop"`
These are treated as a degraded response: empty turn → fallback fires → next model in the chain is tried automatically.
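The three conditions above can be sketched as a single predicate. The function name matches the notes; the `Choice` type and exact empty-content check are assumptions:

```typescript
type Choice = {
  message: { content: string | null; tool_calls?: unknown[] };
  finish_reason: string;
};

// Flag a "successful" response that carries no usable output: empty content,
// no tool calls, and finish_reason "stop". Such empty turns trigger fallback
// to the next model in the chain instead of being forwarded to the client.
function detectDegradedSuccessResponse(choices: Choice[]): boolean {
  const c = choices[0];
  if (!c) return true; // no choices at all is also an empty turn
  const emptyContent = !c.message.content || c.message.content.trim() === "";
  const noToolCalls = !c.message.tool_calls || c.message.tool_calls.length === 0;
  return emptyContent && noToolCalls && c.finish_reason === "stop";
}
```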
Upgrade
npm i -g @blockrun/clawrouter@latest
v0.12.87
v0.12.70
Fixed
- Plugin crash on string model config — ClawRouter crashed during OpenClaw plugin registration with `TypeError: Cannot create property 'primary' on string 'blockrun/auto'`. This happened when `agents.defaults.model` in the OpenClaw config was a plain string instead of the expected object `{ primary: "blockrun/auto" }`. Now auto-converts string/array/non-object model values to the correct object form.
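One plausible shape for that auto-conversion is sketched below. The `{ primary: "blockrun/auto" }` target shape and the string/array/non-object cases come from the notes; the function name, the `fallbacks` field, and the default value are hypothetical:

```typescript
type ModelConfig = { primary: string; fallbacks?: string[] };

// Accept a plain string, an array, or a partial object from the OpenClaw
// config and coerce it into the { primary, ... } object form the plugin
// expects, so string configs like "blockrun/auto" no longer crash.
function coerceModelConfig(value: unknown): ModelConfig {
  if (typeof value === "string") return { primary: value };
  if (Array.isArray(value) && value.length > 0) {
    return { primary: String(value[0]), fallbacks: value.slice(1).map(String) };
  }
  if (value && typeof value === "object" && "primary" in value) {
    return value as ModelConfig;
  }
  return { primary: "blockrun/auto" }; // hypothetical default for junk values
}
```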
Install / Update
npm i -g @blockrun/clawrouter@latest
v0.12.69
What's Changed
✨ Features
- feat: add GPT-5.4 Nano + Gemini 3.1 Flash Lite, SOL→USDC swap hint
- feat: GEO optimize README with definition and FAQ section
🐛 Bug Fixes
- fix: config duplication on update — full model allowlist reconciliation (#112)
- fix: /stats under-reporting costs — use actual x402 payment amounts
- fix: save npm install error log to ~/clawrouter-npm-install.log
- fix: remove redundant deprecated models list from reinstall step 2
- fix: sync package-lock.json (missing opusscript after viem merge)
- fix: lint errors — unused import and no-useless-assignment
📚 Documentation
- docs: add illustrations to OpenRouter comparison article
🔧 Maintenance
- chore(deps): bump viem from 2.46.3 to 2.47.6 (#115)
- style: fix prettier formatting
Full Changelog: v0.12.66...v0.12.69
v0.12.66
Bug Fixes
Fix: payment settlement failure now falls back to free model
When payment settlement fails on-chain (insufficient funds, simulation failure), ClawRouter now skips all remaining paid models and falls back to nvidia/gpt-oss-120b (free). Previously, payment settlement errors returned non-standard HTTP codes that weren't recognized as provider errors, so the fallback loop broke immediately with "Payment settlement failed" — even though the free model was available.
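The skip-to-free behavior can be sketched as a filter over the fallback chain. The free model ID comes from the notes above; the `Model` type and function name are assumptions:

```typescript
type Model = { id: string; free: boolean };

// After an on-chain settlement failure, no remaining paid model can be
// charged either, so drop them all and retry with free models only
// (e.g. nvidia/gpt-oss-120b) instead of aborting the fallback loop.
function modelsAfterSettlementFailure(chain: Model[]): Model[] {
  return chain.filter((m) => m.free);
}
```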
Fix: pre-auth cache key includes model ID (v0.12.65)
Cached payment requirements from a paid model were incorrectly applied to free model requests.
npm install -g @blockrun/clawrouter@0.12.66
v0.12.65
Bug Fix
Fix: payment error when using free model — The pre-auth cache keyed on URL path alone, so cached payment requirements from a paid model (e.g. sonnet) would be applied to nvidia/gpt-oss-120b. Users switching to free model with an empty wallet got payment errors even though the server never charged for free requests. Cache key now includes model ID.
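A minimal sketch of the corrected cache keying, assuming a simple in-memory map (the helper names and the `::` separator are hypothetical; the path-plus-model key is what the fix describes):

```typescript
// Pre-auth cache keyed on path AND model ID, so payment requirements
// cached for a paid model are never replayed for a free one.
const preAuthCache = new Map<string, unknown>();

function preAuthCacheKey(path: string, modelId: string): string {
  return `${path}::${modelId}`;
}

function cachePaymentRequirements(path: string, modelId: string, reqs: unknown): void {
  preAuthCache.set(preAuthCacheKey(path, modelId), reqs);
}

function getCachedPaymentRequirements(path: string, modelId: string): unknown {
  return preAuthCache.get(preAuthCacheKey(path, modelId));
}
```

With the old path-only key, a lookup for `nvidia/gpt-oss-120b` would have hit the entry stored for a paid model on the same path; with the composite key it misses, and the free request proceeds unpaid.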
npm install -g @blockrun/clawrouter@0.12.65
v0.12.64
What's New
Cost Visibility & Routing Transparency
- Cost headers on every response: `x-clawrouter-cost` and `x-clawrouter-savings` show per-request cost and savings percentage
- SSE cost comment: Streaming responses include a cost summary before `[DONE]`
- Model field injection: SSE chunks and non-streaming responses now show the actual routed model (not the upstream provider's name)
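The cost headers could be attached along these lines. The two header names are from the notes above; the function name and the exact number formatting are assumptions:

```typescript
// Attach per-request cost headers to a response's header map.
// x-clawrouter-cost: dollar cost of this request; x-clawrouter-savings:
// savings percentage versus the baseline model (formatting is assumed).
function withCostHeaders(
  headers: Record<string, string>,
  costUsd: number,
  savingsPct: number
): Record<string, string> {
  return {
    ...headers,
    "x-clawrouter-cost": costUsd.toFixed(6),
    "x-clawrouter-savings": `${savingsPct.toFixed(1)}%`,
  };
}
```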
Improved Error Handling
- Structured fallback error: When all models fail, error message lists every attempted model and failure reason:
"All 6 models failed. Tried: gemini-2.5-flash (rate_limited), deepseek-chat (server_error), ..."
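A message in that format can be built from the list of attempts. The output shape matches the example above; the `Attempt` type and function name are assumptions:

```typescript
type Attempt = { model: string; reason: string };

// Build the structured all-models-failed message, listing every attempted
// model with its failure reason in the order tried.
function fallbackErrorMessage(attempts: Attempt[]): string {
  const tried = attempts.map((a) => `${a.model} (${a.reason})`).join(", ");
  return `All ${attempts.length} models failed. Tried: ${tried}`;
}
```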
Model Allowlist Expansion
- Expanded to 33 models in the OpenClaw model picker allowlist
Code Quality
- Removed dead code (`isProviderError` function, `FALLBACK_STATUS_CODES` constant)
- Fixed Prettier formatting across 11 files
- Renamed blog docs to SEO-friendly slugs
Documentation
- New article: ClawRouter vs OpenRouter — LLM Routing Comparison (100 OpenClaw issues analyzed)