
fix: normalize input_text content blocks in Claude-to-OpenAI conversion#2968

Open
0-don wants to merge 1 commit into QuantumNous:main from 0-don:fix/claude-input-text-content-block

Conversation


@0-don 0-don commented Feb 18, 2026

  • Map input_text → text in ClaudeToOpenAIRequest so clients sending Responses API content types via /v1/messages don't get silently dropped
  • Fixes Invalid value: 'input_text' errors from upstream providers

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of text messages so content is converted consistently across formats.
    • Message text now distinguishes sender role (assistant vs. others) to ensure correct interpretation of incoming and outgoing text.


coderabbitai bot commented Feb 18, 2026

Walkthrough

Claude-to-OpenAI conversion now treats "input_text" the same as "text" for media content, and response conversion emits "output_text" for assistant messages while non-assistant messages use "input_text".

Changes

Cohort / File(s) — Summary

Claude → OpenAI conversion (service/convert.go):
The switch now treats "input_text" the same as "text", producing MediaContent with Type: "text" and using mediaMsg.GetText() in both cases.

Chat → Responses mapping (service/openaicompat/chat_to_responses.go):
When converting chat completions to responses, text parts now use type "output_text" for the assistant role and "input_text" for other roles (replacing the prior constant input_text).
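The two changes can be sketched as follows. This is a simplified illustration, not the project's actual code: the `MediaContent` shape, field names, and helper functions here are assumptions paraphrased from the summary above.

```go
package main

import "fmt"

// MediaContent mirrors the shape described in the summary above
// (an assumption; the real dto.MediaContent has more fields).
type MediaContent struct {
	Type string
	Text string
}

// convertBlock sketches the service/convert.go switch: "input_text"
// is now treated exactly like "text", so the block is no longer dropped.
func convertBlock(blockType, text string) (MediaContent, bool) {
	switch blockType {
	case "text", "input_text":
		return MediaContent{Type: "text", Text: text}, true
	default:
		return MediaContent{}, false // unknown types are still dropped
	}
}

// responsesTextType sketches the chat_to_responses.go change: assistant
// text becomes "output_text", every other role stays "input_text".
func responsesTextType(role string) string {
	if role == "assistant" {
		return "output_text"
	}
	return "input_text"
}

func main() {
	mc, ok := convertBlock("input_text", "hello")
	fmt.Println(mc.Type, mc.Text, ok) // "input_text" normalizes to "text"
	fmt.Println(responsesTextType("assistant"), responsesTextType("user"))
}
```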

Sequence Diagram(s)

(Skipped — changes are localized conversions without multi-component sequential flow requiring visualization.)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

A rabbit hops through code tonight, 🐇
Swapping types until they fit just right.
"input_text" and "text" now sing as one,
Assistant replies mark "output" when done.
I nibble bugs and bound away — hooray! ✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title accurately describes the main change: normalizing input_text content blocks during Claude-to-OpenAI conversion to prevent content from being silently dropped.



@seefs001 (Collaborator) commented:

This looks like the wrong place for the fix — the Claude docs don't define this type; the error is probably caused by a conversion mistake somewhere else.

@0-don (Author) commented Feb 18, 2026

input_text is not in the Anthropic spec, it's from the Responses API. OpenClaw sends it through /v1/messages when using codex models — known bugs on their side (openclaw/openclaw#13189, openclaw/openclaw#18787). Same structure as text, just a different type string. Without this, the content gets silently dropped and upstream returns errors.

Clients like OpenClaw send input_text content blocks (a Responses API
type) through /v1/messages. The Claude-to-OpenAI converter silently
drops unknown types, so the message arrives empty at the upstream,
causing "Invalid value: 'input_text'" errors.

Map input_text to text since they share the same structure.
@0-don 0-don force-pushed the fix/claude-input-text-content-block branch from 20f3962 to e8e94e9 on February 19, 2026 at 21:29

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
service/convert.go (1)

129-136: ⚠️ Potential issue | 🟡 Minor

output_text not handled — symmetric silent-drop for assistant messages

The companion change in chat_to_responses.go now emits "output_text" for assistant-role text content when converting Chat → Responses API. The OpenAI Responses API spec documents output_text as the type always used for model-generated text. A community report confirms that using input_text for the assistant role returns an API error, making output_text the required type for assistant turns.

If a client that sends output_text blocks (e.g., when forwarding a previous Responses API assistant turn verbatim) calls /v1/messages, those blocks fall through the switch without a matching case and are silently dropped — the exact same class of bug this PR fixes for input_text.

🐛 Proposed fix — add output_text to the case
-               case "text", "input_text":
+               case "text", "input_text", "output_text":
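With the suggested addition, all three text variants would normalize to the same block. The following is a simplified sketch of that behavior, not the project's code — the real switch builds a dto.MediaContent with more fields:

```go
package main

import "fmt"

// normalizeType maps every text-block variant to the single "text"
// type used on the OpenAI chat side; unknown types report false,
// matching the converter's drop behavior.
func normalizeType(blockType string) (string, bool) {
	switch blockType {
	case "text", "input_text", "output_text": // "output_text" added per the review
		return "text", true
	default:
		return "", false
	}
}

func main() {
	for _, t := range []string{"text", "input_text", "output_text", "tool_use"} {
		norm, ok := normalizeType(t)
		fmt.Printf("%s -> %q preserved=%v\n", t, norm, ok)
	}
}
```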
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@service/convert.go` around lines 129 - 136, The switch in service/convert.go
that handles mediaMsg.Type currently matches "text" and "input_text" but omits
"output_text", causing assistant-originated Response API blocks to be dropped;
update the switch in the conversion function that builds dto.MediaContent (the
block that creates message := dto.MediaContent{ Type: "text", Text:
mediaMsg.GetText(), CacheControl: mediaMsg.CacheControl } and appends to
mediaMessages) to also match "output_text" (either by adding "output_text" to
the same case list or by adding a case that maps "output_text" to the same
dto.MediaContent shape) so assistant output_text blocks are preserved.
