
[Bug]: afterTurn auto-capture only captures assistant message, user message is lost, leading to 0 memories extracted #1248

@a31c8j

Description


Bug Description

When a user tells OpenClaw a fact (e.g., "My grandmother's name is Li Jiulin"), the afterTurn hook in the OpenViking plugin only captures the assistant's reply, not the user's original message.

As a result, the text sent to OpenViking for extraction is:

[assistant]: Got it, I've noted that down. Your grandmother's name is Li Jiulin...

The VLM extractor returns 0 memories from this, because the information appears in an [assistant]-role message (the model treats it as a recap/acknowledgment rather than a user-provided fact), even though the content itself contains extractable facts.

The root cause is that prePromptMessageCount — passed by the OpenClaw host to afterTurn — points to an index that excludes the user's current-turn message, leaving only the assistant reply in the "new messages" window.
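The off-by-one window described above can be sketched as follows. This is a minimal reconstruction, not the actual OpenViking source; `extract_new_turn_texts` and `pre_prompt_message_count` mirror the names in this report (`extractNewTurnTexts` / `prePromptMessageCount`), snake-cased for Python.

```python
def extract_new_turn_texts(messages, pre_prompt_message_count):
    """Format messages after the pre-prompt index as '[role]: text' lines.

    Sketch of the buggy behavior: the slice trusts the host-provided index,
    so whatever falls before it is silently excluded from capture.
    """
    new_messages = messages[pre_prompt_message_count:]
    return [f"[{m['role']}]: {m['content']}" for m in new_messages]


history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "My grandmother's name is Li Jiulin."},
    {"role": "assistant", "content": "Got it, I've noted that down."},
]

# The host passes an index that already counts the current user message
# (index 3 here), so only the assistant reply survives the slice:
texts = extract_new_turn_texts(history, pre_prompt_message_count=3)
# texts == ["[assistant]: Got it, I've noted that down."]
```

This matches the `newMsgCount=1` seen in the gateway log: the user's fact never reaches the extractor.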

Steps to Reproduce

  1. Configure OpenClaw with the OpenViking plugin (autoCapture: true, mode: remote)
  2. In a conversation, tell the agent a personal fact:

    "My grandmother's name is Li Jiulin, her birthday is the 7th day of the 7th lunar month."

  3. The agent replies: "Got it, I'll add this to long-term memory."
  4. Check OpenClaw gateway logs:
openviking: capture-check ... newMsgCount=1 text="[assistant]: Got it, I've noted that down..."
openviking: auto-captured 1 new messages, extracted 0 memories
  5. Query the memory later — the fact is never stored.

Expected Behavior

afterTurn should capture both the user message and the assistant reply from the current turn (or at minimum the user message), so the VLM extractor receives:

[user]: My grandmother's name is Li Jiulin, her birthday is the 7th day of the 7th lunar month.
[assistant]: Got it, I'll add this to long-term memory.

With the full turn, the VLM correctly extracts 1 memory (verified by direct API test).
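One possible fix, sketched under the assumption that the plugin adjusts the window itself rather than waiting for the host to change the index it passes: walk the start index back to include the user message(s) that opened the current turn. Names here are hypothetical, not the actual OpenViking implementation.

```python
def extract_new_turn_texts_fixed(messages, pre_prompt_message_count):
    """Like the buggy version, but widen the window backwards so the
    user message(s) immediately preceding the pre-prompt index are
    included in the captured turn."""
    start = pre_prompt_message_count
    while start > 0 and messages[start - 1]["role"] == "user":
        start -= 1
    return [f"[{m['role']}]: {m['content']}" for m in messages[start:]]


history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "My grandmother's name is Li Jiulin."},
    {"role": "assistant", "content": "Got it, I'll add this to long-term memory."},
]

texts = extract_new_turn_texts_fixed(history, pre_prompt_message_count=3)
# texts now contains both the user fact and the assistant reply:
# ["[user]: My grandmother's name is Li Jiulin.",
#  "[assistant]: Got it, I'll add this to long-term memory."]
```

The backward walk stops at the first non-user message, so earlier turns are still excluded and only the current turn is re-captured.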

Actual Behavior

Only the assistant message is captured. The user's original input is excluded because prePromptMessageCount places it outside the "new messages" window in extractNewTurnTexts.

Direct VLM test confirms the issue is role-dependent:

  • Input: [assistant]: Got it. Your grandmother's name is Li Jiulin... → {"memories": []} (0 memories extracted)
  • Input: [user]: My grandmother's name is Li Jiulin... → 1 memory extracted
  • Input: both user + assistant → 1 memory extracted

Minimal Reproducible Example

Error Logs

OpenViking Version

0.2.8

Python Version

3.13

Operating System

Linux

Model Backend

Other

Additional Context

Model

Kimi K2

Metadata

Assignees: no one assigned
Labels: bug (Something isn't working)
Status: In progress
Milestone: no milestone
Development: no branches or pull requests