Releases: BlockRunAI/ClawRouter

v0.12.92 — Fix multi-turn chat for reasoning models (continue.dev #135)

31 Mar 21:33
0da922b

Bug Fix: Existing chat always fails in continue.dev (#135)

Root cause

moonshot/kimi-k2.5 (primary MEDIUM-tier model in blockrun/auto) is a reasoning model that requires reasoning_content on all assistant messages in multi-turn history — not just tool-call messages. When continue.dev sent an existing chat, the plain-text assistant message from the previous turn was missing reasoning_content, causing a 400 from the model.

Since that 400 didn't match any PROVIDER_ERROR_PATTERNS, isProviderError came back false and the fallback loop stopped after the first attempt: all models failed → SSE error sent → the OpenAI SDK in continue.dev threw "Unexpected error".

New chats (no assistant history) were unaffected — only existing chats broke.

Fixes

  • normalizeMessagesForThinking — now adds reasoning_content: "" to all assistant messages (not just tool-call ones) when targeting a reasoning model
  • SSE error format — error events now always use {"error":{...}} OpenAI wrapper; raw upstream JSON was previously forwarded as-is, hiding the real error message
  • PROVIDER_ERROR_PATTERNS — added reasoning_content.*missing as a safety net for proper fallback
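The first fix can be sketched roughly as follows. This is a minimal TypeScript illustration, not the actual implementation: the message shape is simplified, and the real `normalizeMessagesForThinking` presumably handles tool-call messages and other fields as well.

```typescript
// Simplified message shape for illustration.
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  tool_calls?: unknown[];
  reasoning_content?: string;
};

// When targeting a reasoning model, ensure every assistant message in the
// history carries a reasoning_content field (empty string if absent), not
// just tool-call messages. Existing values are preserved.
function normalizeMessagesForThinking(
  messages: ChatMessage[],
  isReasoningModel: boolean
): ChatMessage[] {
  if (!isReasoningModel) return messages;
  return messages.map((m) =>
    m.role === "assistant" && m.reasoning_content === undefined
      ? { ...m, reasoning_content: "" }
      : m
  );
}
```

With this in place, the plain-text assistant message from a previous turn no longer triggers the upstream 400.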

Verification

  • E2E: 3-turn SSE streaming test passed (turn 2 was the broken case)
  • Unit: 7 new regression tests for normalizeMessagesForThinking
  • Full suite: 364/364 passing

Update

npx @blockrun/clawrouter@latest

v0.12.90 — Fix empty-turn fallback for eco/agentic requests

31 Mar 17:01
38bc08b

What's Fixed

Empty turn responses now trigger model fallback

Problem: Under the eco profile (and sometimes auto), agentic clients like Roo Code would receive silent empty responses — the model returned HTTP 200 with no content and no tool calls, just finish_reason: stop. The proxy treated this as success and forwarded the empty turn to the client, causing the agent to loop or stall.

Root cause: Models like gemini-3.1-flash-lite sometimes refuse complex agentic requests (large Roo Code tool schemas) by producing a zero-output response instead of an error. ClawRouter's degraded-response detector didn't catch this pattern, so it never fell back to the next model.

Fix: detectDegradedSuccessResponse now flags responses where:

  • choices[0].message.content is empty and
  • no tool_calls and
  • finish_reason === "stop"

Such a response is treated as a degraded "empty turn": fallback fires and the next model in the chain is tried automatically.
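The condition detectDegradedSuccessResponse now flags can be sketched like this; the response type is simplified and the helper name `isEmptyTurn` is hypothetical (the real detector covers other degraded patterns too).

```typescript
// Simplified completion shape for illustration.
type Completion = {
  choices: Array<{
    message: { content: string | null; tool_calls?: unknown[] };
    finish_reason: string;
  }>;
};

// An "empty turn": HTTP 200, no content, no tool calls, finish_reason "stop".
function isEmptyTurn(resp: Completion): boolean {
  const choice = resp.choices[0];
  if (!choice) return false;
  const noContent = !choice.message.content || choice.message.content.trim() === "";
  const noToolCalls = !choice.message.tool_calls || choice.message.tool_calls.length === 0;
  return noContent && noToolCalls && choice.finish_reason === "stop";
}
```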

Upgrade

npm i -g @blockrun/clawrouter@latest

v0.12.87

30 Mar 20:03
b6cf956

feat: add Predexon prediction market skill + extend partner proxy to …

v0.12.70

24 Mar 02:26
b2b3697

Fixed

  • Plugin crash on string model config — ClawRouter crashed during OpenClaw plugin registration with TypeError: Cannot create property 'primary' on string 'blockrun/auto'. This happened when agents.defaults.model in the OpenClaw config was a plain string instead of the expected object { primary: "blockrun/auto" }. Now auto-converts string/array/non-object model values to the correct object form.
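The auto-conversion described above might look roughly like this sketch. The helper name `coerceModelConfig` and the fallback default are assumptions; only the `{ primary: ... }` target shape comes from the error message.

```typescript
type ModelConfig = { primary: string; fallbacks?: string[] };

// Accept string, array, or object forms of agents.defaults.model and
// coerce them to the { primary: "..." } shape the plugin expects.
function coerceModelConfig(value: unknown): ModelConfig {
  if (typeof value === "string") return { primary: value };
  if (Array.isArray(value) && value.length > 0)
    return { primary: String(value[0]), fallbacks: value.slice(1).map(String) };
  if (value && typeof value === "object" && "primary" in value)
    return value as ModelConfig;
  return { primary: "blockrun/auto" }; // assumed default
}
```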

Install / Update

npm i -g @blockrun/clawrouter@latest

v0.12.69

23 Mar 22:33
ec60011

What's Changed

✨ Features

  • feat: add GPT-5.4 Nano + Gemini 3.1 Flash Lite, SOL→USDC swap hint
  • feat: GEO optimize README with definition and FAQ section

🐛 Bug Fixes

  • fix: config duplication on update — full model allowlist reconciliation (#112)
  • fix: /stats under-reporting costs — use actual x402 payment amounts
  • fix: save npm install error log to ~/clawrouter-npm-install.log
  • fix: remove redundant deprecated models list from reinstall step 2
  • fix: sync package-lock.json (missing opusscript after viem merge)
  • fix: lint errors — unused import and no-useless-assignment

📚 Documentation

  • docs: add illustrations to OpenRouter comparison article

🔧 Maintenance

  • chore(deps): bump viem from 2.46.3 to 2.47.6 (#115)
  • style: fix prettier formatting

Full Changelog: v0.12.66...v0.12.69

v0.12.66

21 Mar 03:27
61f0e3d

Bug Fixes

Fix: payment settlement failure now falls back to free model

When payment settlement fails on-chain (insufficient funds, simulation failure), ClawRouter now skips all remaining paid models and falls back to nvidia/gpt-oss-120b (free). Previously, payment settlement errors returned non-standard HTTP codes that weren't recognized as provider errors, so the fallback loop broke immediately with "Payment settlement failed" — even though the free model was available.
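The skip-to-free behavior can be sketched as below; the helper name and chain shape are assumptions, only the free model ID comes from the note above.

```typescript
const FREE_MODEL = "nvidia/gpt-oss-120b";

// After an on-chain settlement failure, every remaining paid model would
// fail the same way, so only the free model is still viable.
function modelsAfterSettlementFailure(remaining: string[]): string[] {
  return remaining.filter((m) => m === FREE_MODEL);
}
```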

Fix: pre-auth cache key includes model ID (v0.12.65)

Cached payment requirements from a paid model were incorrectly applied to free model requests.

npm install -g @blockrun/clawrouter@0.12.66

v0.12.65

21 Mar 03:22
c76a82d

Bug Fix

Fix: payment error when using free model — The pre-auth cache keyed on URL path alone, so cached payment requirements from a paid model (e.g. sonnet) would be applied to nvidia/gpt-oss-120b. Users switching to free model with an empty wallet got payment errors even though the server never charged for free requests. Cache key now includes model ID.
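The fix boils down to widening the cache key; a minimal sketch (helper name hypothetical, separator arbitrary):

```typescript
// Keying on the URL path alone let a paid model's cached payment
// requirements leak into free-model requests; including the model ID
// isolates the cache entries per model.
function preAuthCacheKey(path: string, modelId: string): string {
  return `${path}::${modelId}`;
}
```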

npm install -g @blockrun/clawrouter@0.12.65

v0.12.64

21 Mar 02:50
ce4bba9

What's New

Cost Visibility & Routing Transparency

  • Cost headers on every response: x-clawrouter-cost and x-clawrouter-savings show per-request cost and savings percentage
  • SSE cost comment: Streaming responses include a cost summary before [DONE]
  • Model field injection: SSE chunks and non-streaming responses now show the actual routed model (not the upstream provider's name)
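A client can read the new cost headers like this sketch; the header names come from the notes above, but the value formats (numeric cost, percentage string) are assumptions.

```typescript
// Parse ClawRouter's per-request cost headers from a response.
// Header values are strings; missing headers yield null.
function parseCostHeaders(headers: Record<string, string | undefined>): {
  cost: number | null;
  savings: string | null;
} {
  const cost = headers["x-clawrouter-cost"];
  return {
    cost: cost !== undefined ? Number(cost) : null,
    savings: headers["x-clawrouter-savings"] ?? null,
  };
}
```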

Improved Error Handling

  • Structured fallback error: When all models fail, error message lists every attempted model and failure reason: "All 6 models failed. Tried: gemini-2.5-flash (rate_limited), deepseek-chat (server_error), ..."
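Assembling that message is straightforward; a sketch matching the quoted format (the `Attempt` shape and helper name are assumptions):

```typescript
type Attempt = { model: string; reason: string };

// Build the "All N models failed. Tried: ..." summary from the
// per-model failure reasons collected during the fallback loop.
function fallbackErrorMessage(attempts: Attempt[]): string {
  const tried = attempts.map((a) => `${a.model} (${a.reason})`).join(", ");
  return `All ${attempts.length} models failed. Tried: ${tried}`;
}
```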

Model Allowlist Expansion

  • Expanded to 33 models in the OpenClaw model picker allowlist

Code Quality

  • Removed dead code (isProviderError function, FALLBACK_STATUS_CODES constant)
  • Fixed Prettier formatting across 11 files
  • Renamed blog docs to SEO-friendly slugs

v0.12.56

17 Mar 02:01
c8857f3

  • Add zai/glm-5 and zai/glm-5-turbo models to model picker
  • Aliases: glm, glm-5, glm-5-turbo

v0.12.30

10 Mar 01:06
5c87beb

chore: bump version to 0.12.30