
Share LLM conversation context between chatbot/workflow runtimes during debug via conversation ID mapping #334

Open
forhad-hosain wants to merge 3 commits into dev from fix/cross-runtime-llm-context

Conversation


forhad-hosain (Contributor) commented Feb 13, 2026

📝 Description

Problem: When a GenAILLM component has Use Context Window enabled, the Chatbot debugger fails to access the context. The Chatbot and the Debugger run on different runtime instances, so the context held by one instance is lost to the other.

Solution: We now link the conversationId with a debugId. LLM messages are stored keyed by the conversationId; when a request comes from the debugger, we retrieve the missing conversationId from the cache so the message history loads correctly.
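The mapping described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual implementation: the function names (`registerDebugSession`, `resolveConversationId`), the key prefix, and the in-memory `Map` standing in for the shared cache are all assumptions.

```typescript
// In-memory stand-in for the shared cache both runtimes can reach
// (the real code presumably uses a distributed cache, e.g. Redis).
const sharedCache = new Map<string, string>();

// Chatbot runtime: when a debug session starts, persist the
// debugId -> conversationId mapping so the workflow runtime can
// locate the same conversation. Names here are hypothetical.
function registerDebugSession(debugId: string, conversationId: string): void {
  sharedCache.set(`debug-session:${debugId}`, conversationId);
}

// Workflow runtime: a debugger request carries only the debugId;
// resolve it back to the conversationId before loading LLM history.
function resolveConversationId(debugId: string): string | undefined {
  return sharedCache.get(`debug-session:${debugId}`);
}
```

With this in place, both runtimes derive the LLM cache key from the same conversationId, so the GenAILLM context window sees one continuous history.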

🔗 Clickup Ticket

https://app.clickup.com/t/86ev54q0k

  • Fixes #
  • Relates to #

🔧 Type of Change

  • 🐛 Bug fix (non-breaking change that fixes an issue)
  • ✨ New feature (non-breaking change that adds functionality)
  • 📚 Documentation update
  • 🔧 Code refactoring (no functional changes)
  • 🧪 Test improvements
  • 🔨 Build/CI changes

✅ Checklist

  • Self-review performed
  • Tests added/updated
  • Documentation updated (if needed)

forhad-hosain and others added 3 commits February 14, 2026 02:36
Allow chatbot and workflow runtimes to share LLM conversation context
during debug sessions. The chatbot runtime stores a debug-to-conversation
mapping in cache, which the workflow runtime retrieves to use the same
LLM cache. This enables the GenAILLM "Use Context Window" feature to
maintain conversation history across runtime boundaries.

Key changes:
- AgentRuntime: init LLM cache with conversation-based cache IDs, store
  debug session mappings for cross-runtime retrieval
- LLMCache: add static generateCacheId for consistent cache key format
- LLMContext: accept optional conversationId for cache key generation
- Conversation/Chat: propagate conversationId through the stack

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
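The "static generateCacheId" change above can be sketched as follows. This is a hypothetical reconstruction: the class shape and key format (`llm-cache:` prefix) are assumptions, not taken from the diff.

```typescript
// Illustrative sketch of a consistent cache-key helper. Keeping the
// key format in one static method means the chatbot runtime (writing
// messages) and the workflow runtime (reading them during debug)
// cannot drift apart in how they derive the key.
class LLMCache {
  // Hypothetical key format; the real prefix may differ.
  static generateCacheId(conversationId: string): string {
    return `llm-cache:${conversationId}`;
  }
}
```

Both runtimes call the same helper, so a conversationId resolved from a debug-session mapping produces exactly the key the chatbot runtime wrote under.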
