feat(agent): strip trailing assistant prefill for proxy provider compat#258
Closed
xuandung38 wants to merge 3 commits into nextlevelbuilder:main from
Conversation
…LM call
Some LLM providers and models do not support assistant message prefill
— requests ending with an assistant-role message get rejected with
HTTP 400 ("This model does not support assistant message prefill").
This happens when the system injects assistant messages to guide model
behavior or establish context.
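For illustration, a request that triggers this rejection ends with an assistant-role message. An OpenAI-style chat payload of that shape might look like this (model name and message contents are made up, not from the PR):

```json
{
  "model": "some-proxy-model",
  "messages": [
    {"role": "user", "content": "Summarize the report."},
    {"role": "assistant", "content": "Here is the summary:"}
  ]
}
```

The final assistant message is the "prefill" that the affected providers reject with HTTP 400.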
Add a per-agent `strip_assistant_prefill` toggle (stored in `other_config`)
that removes the trailing assistant message before constructing the
`ChatRequest`. The option is configurable in the LLM Configuration
section of the agent General tab.
- Add `ParseStripAssistantPrefill()` to `AgentData` (reads from `other_config`)
- Add `stripAssistantPrefill` field to `Loop`/`LoopConfig`
- Wire through resolver → loop
- Add checkbox UI in LLM Config section with i18n (en/vi/zh)
Add .gemini/, .claude/, .opencode/ to .gitignore to prevent committing user-specific AI tool configurations.
…prefill

# Conflicts:
#	internal/agent/loop_types.go
#	ui/web/src/pages/agents/agent-detail/agent-general-tab.tsx
#	ui/web/src/pages/agents/agent-detail/general-sections/llm-config-section.tsx
Contributor
Thank you for identifying this issue with proxy providers rejecting assistant prefill messages! Great catch. After tracing the full message flow, we found the root cause: team task reminders (lead + member) were injected after the user message, leaving a trailing assistant message. Instead of a per-agent config toggle, we addressed this at the source:
This approach requires zero configuration — all agents using proxy providers automatically benefit without needing to toggle a setting. Resolved in main via 11673da.
Contributor
Author
The approach makes sense, thanks sir.
Summary
Some LLM providers and models (via LiteLLM, OpenRouter, etc.) do not support assistant message prefill — requests ending with an assistant-role message get rejected with HTTP 400 ("This model does not support assistant message prefill"). This happens when the system injects trailing assistant messages to guide model behavior or establish context continuity.
Changes
Backend: Add a per-agent `strip_assistant_prefill` toggle (stored in `other_config`)
- `ParseStripAssistantPrefill()` on `AgentData` reads the flag from `other_config`
- `stripAssistantPrefill` field on `Loop`/`LoopConfig` structs
- Wiring from `resolver.go` → loop initialization
- When constructing the `ChatRequest`, if enabled, remove the last message when it has `role: "assistant"`

Frontend: Checkbox in LLM Configuration section of the agent General tab
Config: Added `.gemini/` and `.opencode/` to `.gitignore` (AI tool config dirs)

Files Changed
- `internal/agent/loop_types.go` — `stripAssistantPrefill` field + wiring in `LoopConfig`
- `internal/agent/resolver.go` — reads `StripAssistantPrefill` from agent config
- `ui/web/.../agent-general-tab.tsx`
- `ui/web/.../llm-config-section.tsx`
- `.gitignore`

When to Use
Enable `strip_assistant_prefill` on agents that use proxy LLM providers which reject assistant-role prefill messages. This is a per-agent setting — only enable it for agents connected to providers that exhibit this issue.

Test Plan
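A minimal manual check: enable the flag on a test agent via its `other_config` (the field name comes from this PR; the surrounding JSON structure is assumed), then confirm that outgoing requests to the proxy provider no longer end with an assistant-role message and the HTTP 400 disappears:

```json
{
  "strip_assistant_prefill": true
}
```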