Bug Description
When AutoContextMemory triggers Strategy 6 (summaryCurrentRoundMessages),
it compresses [tool_use + tool_result] message pairs into a single plain
ASSISTANT text message. This destroys the structural information that the LLM
relies on to recognize a completed tool execution, causing it to re-invoke the
same tool in a loop.
Version
agentscope-extensions-autocontext-memory: 1.0.9
Steps to Reproduce
- Configure AutoContextMemory with a low msgThreshold (e.g. 6) so that Strategy 6 is triggered after a few rounds.
- Call a tool that returns a large result (e.g. a search or file-read tool).
- Trigger compression — Strategy 6 fires and compresses the current round.
- On the next user turn, the LLM re-invokes the same tool with the same
arguments, even though the result already exists in history.
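The trigger condition can be sketched roughly as follows; only msgThreshold comes from this report, while the class and constructor shape are assumptions for illustration, not the extension's real API:

```typescript
// Hypothetical configuration sketch: only `msgThreshold` is taken from the
// report; the constructor and method shapes are assumptions for illustration.
interface AutoContextMemoryOptions {
  msgThreshold: number; // compression triggers once history exceeds this count
}

class AutoContextMemoryStub {
  constructor(private readonly opts: AutoContextMemoryOptions) {}

  // Strategy 6 fires once the message count crosses the threshold.
  shouldCompress(messageCount: number): boolean {
    return messageCount > this.opts.msgThreshold;
  }
}

const memory = new AutoContextMemoryStub({ msgThreshold: 6 });
// A single tool round adds several messages (tool_use, tool_result, reply),
// so a threshold of 6 is crossed within a few user turns.
const triggers = memory.shouldCompress(7);
```

With the threshold this low, one tool call returning a large result is enough to push the round over the limit, which makes the bug reproducible quickly.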
Root Cause
In summaryCurrentRoundMessages, the compressed output is written as a single
ASSISTANT role message:
Before compression (valid ReAct structure):
ASSISTANT → { type: tool_use, call_id: "abc", name: "search", input: {...} }
USER → { type: tool_result, call_id: "abc", content: "" }
After Strategy 6 compression (structure destroyed):
ASSISTANT → "I called the search tool, and it returned:"
The LLM no longer sees a tool_use / tool_result pair. From its perspective,
no tool call has been made in the current context, so it issues a new tool_use
request — triggering an infinite loop.
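The failure mode can be sketched like this; the message model and the lossyCompressRound helper are illustrative assumptions, not the extension's actual code:

```typescript
// Minimal message model mirroring the ReAct structure shown above.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; call_id: string; name: string; input: unknown }
  | { type: "tool_result"; call_id: string; content: string };

interface Msg {
  role: "assistant" | "user";
  content: ContentBlock[];
}

// Sketch of what Strategy 6 effectively does today (hypothetical helper,
// not the library's code): the whole round collapses into one plain-text
// ASSISTANT message, erasing the tool_use / tool_result pair.
function lossyCompressRound(round: Msg[], summary: string): Msg[] {
  return [{ role: "assistant", content: [{ type: "text", text: summary }] }];
}

const round: Msg[] = [
  {
    role: "assistant",
    content: [{ type: "tool_use", call_id: "abc", name: "search", input: { q: "x" } }],
  },
  {
    role: "user",
    content: [{ type: "tool_result", call_id: "abc", content: "large search result ..." }],
  },
];

const compressed = lossyCompressRound(round, "I called the search tool, and it returned: ...");
// No tool_use block survives, so the model cannot see a completed call.
const sawToolUse = compressed.some((m) => m.content.some((b) => b.type === "tool_use"));
```

After compression there is no block with `type: "tool_use"` left in the context, which is exactly the state in which the model decides it still needs to call the tool.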
Expected Behavior
The message role structure should be preserved after compression. Only the
content of the tool_result should be replaced with a reference/summary:
ASSISTANT → { type: tool_use, call_id: "abc", name: "search", input: {...} }
USER → { type: tool_result, call_id: "abc", content: "" }
This way the LLM still recognizes the completed tool invocation and does not
repeat it.
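A structure-preserving variant could look like this sketch; the message model and helper name are assumptions, and only the before/after shapes come from this report:

```typescript
// Same minimal message model as the structures shown above.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; call_id: string; name: string; input: unknown }
  | { type: "tool_result"; call_id: string; content: string };

interface Msg {
  role: "assistant" | "user";
  content: ContentBlock[];
}

// Hypothetical fix: keep every message and every block in place, and
// rewrite only the tool_result payload to a short summary reference.
function compressRoundPreservingStructure(round: Msg[], summary: string): Msg[] {
  return round.map((msg) => ({
    role: msg.role,
    content: msg.content.map((block) =>
      block.type === "tool_result" ? { ...block, content: summary } : block,
    ),
  }));
}

const round: Msg[] = [
  {
    role: "assistant",
    content: [{ type: "tool_use", call_id: "abc", name: "search", input: { q: "x" } }],
  },
  {
    role: "user",
    content: [{ type: "tool_result", call_id: "abc", content: "large search result ..." }],
  },
];

const compressed = compressRoundPreservingStructure(round, "[summary of search result]");
// The tool_use / tool_result pair survives; only the payload shrinks.
const pairSurvives =
  compressed.some((m) => m.content.some((b) => b.type === "tool_use")) &&
  compressed.some((m) => m.content.some((b) => b.type === "tool_result"));
```

Because the `call_id` pairing between tool_use and tool_result is untouched, the model still sees a completed invocation and only loses the bulky payload.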
Current Workaround
Adding explicit instructions to the system prompt asking the LLM to treat
compressed messages as completed tool executions. This is fragile and
model-dependent.
