Bug Description
When a user tells OpenClaw a fact (e.g., "My grandmother's name is Li Jiulin"), the afterTurn hook in the OpenViking plugin only captures the assistant's reply, not the user's original message.
As a result, the text sent to OpenViking for extraction is:
[assistant]: Got it, I've noted that down. Your grandmother's name is Li Jiulin...
The VLM extractor returns 0 memories from this, because the information appears in an [assistant]-role message (the model treats it as a recap/acknowledgment rather than a user-provided fact), even though the content itself contains extractable facts.
The root cause is that prePromptMessageCount — passed by the OpenClaw host to afterTurn — points to an index that excludes the user's current-turn message, leaving only the assistant reply in the "new messages" window.
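The windowing bug can be sketched as follows. This is an illustrative reconstruction, not OpenViking's actual code: the message list, the slicing helper, and the count value are assumptions; only the names extractNewTurnTexts and prePromptMessageCount come from the report.

```python
# Minimal sketch of the suspected windowing bug. The implementation is
# assumed; only the identifier names come from the bug report.

def extract_new_turn_texts(messages, pre_prompt_message_count):
    """Return the 'new messages' window as role-tagged lines."""
    new_messages = messages[pre_prompt_message_count:]
    return [f"[{m['role']}]: {m['content']}" for m in new_messages]

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "My grandmother's name is Li Jiulin."},
    {"role": "assistant", "content": "Got it, I've noted that down."},
]

# If the host computes the count *after* appending the current user
# message, the window starts at the assistant reply, so the user's
# fact falls outside the "new messages" slice.
buggy_window = extract_new_turn_texts(history, pre_prompt_message_count=3)
```

With this count, buggy_window contains only the assistant recap, matching the newMsgCount=1 log line below.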
Steps to Reproduce
- Configure OpenClaw with the OpenViking plugin (autoCapture: true, mode: remote)
- In a conversation, tell the agent a personal fact:
"My grandmother's name is Li Jiulin, her birthday is the 7th day of the 7th lunar month."
- The agent replies: "Got it, I'll add this to long-term memory."
- Check OpenClaw gateway logs:
openviking: capture-check ... newMsgCount=1 text="[assistant]: Got it, I've noted that down..."
openviking: auto-captured 1 new messages, extracted 0 memories
- Query the memory later — the fact is never stored.
Expected Behavior
afterTurn should capture both the user message and the assistant reply from the current turn (or at minimum the user message), so the VLM extractor receives:
[user]: My grandmother's name is Li Jiulin, her birthday is the 7th day of the 7th lunar month.
[assistant]: Got it, I'll add this to long-term memory.
With the full turn, the VLM correctly extracts 1 memory (verified by direct API test).
Actual Behavior
Only the assistant message is captured. The user's original input is excluded because prePromptMessageCount places it outside the "new messages" window in extractNewTurnTexts.
Direct VLM test confirms the issue is role-dependent:
- Input: [assistant]: Got it. Your grandmother's name is Li Jiulin... → {"memories": []}
- Input: [user]: My grandmother's name is Li Jiulin... → 1 memory extracted
- Input: both user + assistant → 1 memory extracted
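One possible fix is to rewind the window start from the host-supplied index back to the user message that opened the current turn. This is a sketch under assumptions: the function name and the walk-back heuristic are hypothetical, not OpenViking's actual API.

```python
# Sketch of a possible fix (function name and heuristic are assumed):
# walk back over any non-assistant messages immediately before the
# host-supplied index, so the current turn's user input is captured.

def window_start_including_user_turn(messages, pre_prompt_message_count):
    """Return a window start that includes the current turn's user message."""
    start = pre_prompt_message_count
    while start > 0 and messages[start - 1]["role"] != "assistant":
        start -= 1
    return start

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "My grandmother's name is Li Jiulin."},
    {"role": "assistant", "content": "Got it, I'll add this to long-term memory."},
]

start = window_start_including_user_turn(history, pre_prompt_message_count=3)
turn = [f"[{m['role']}]: {m['content']}" for m in history[start:]]
# turn now holds both the user fact and the assistant reply
```

With the rewound start, the extractor receives the full turn, which the direct VLM tests above show is sufficient to extract the memory.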
Minimal Reproducible Example
Error Logs
OpenViking Version
0.2.8
Python Version
3.13
Operating System
Linux
Model Backend
Other
Additional Context
Model
Kimi K2