feat: Support LLMMessages to be returned from invoke (#115)
Updates the LLMChat.invoke method signature and its usage in respond to support returning llm_messages.LLMMessage directly.
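The widened return contract can be illustrated with a minimal, self-contained sketch. All names here (`LLMMessage`, `LLMResponse`, `respond`, and their fields) are hypothetical stand-ins for illustration, not the real `kaggle_benchmarks` API:

```python
# Sketch of a respond() that accepts str, Iterator[str], or LLMMessage
# from invoke(). Hypothetical types, assumed for illustration only.
from collections.abc import Iterator
from dataclasses import dataclass
from typing import Union


@dataclass
class LLMMessage:
    content: str  # raw text produced by the model


class LLMResponse:
    def __init__(self) -> None:
        self.content = ""

    def stream(self, chunks: Iterator[str]) -> None:
        # Accumulate streamed chunks into the final content.
        self.content = "".join(chunks)


def respond(invoke_response: Union[str, Iterator[str], "LLMMessage"]) -> LLMResponse:
    response = LLMResponse()
    if isinstance(invoke_response, str):
        response.content = invoke_response
    elif isinstance(invoke_response, LLMMessage):
        # New path: adopt the message's raw text directly;
        # downstream schema parsing still runs on it.
        response.content = invoke_response.content
    elif isinstance(invoke_response, Iterator):
        response.stream(invoke_response)
    return response
```

In this sketch the `LLMMessage` branch does not return early, mirroring the discussion below: the raw text it carries still flows through whatever parsing follows.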
Can you please add some tests to test_llm_chats.py? They will help me and other people understand the code change and safeguard it. Thanks!
```python
# --- Tests for LLMMessage return path from invoke() (PR #115) ---
from kaggle_benchmarks import llm_messages
from kaggle_benchmarks.usage import Usage


class LLMMessageReturnLLM(actors.LLMChat):
    """A mock LLM that returns an LLMMessage directly from invoke()."""

    def __init__(self, content="hello from LLMMessage", **kwargs):
        super().__init__(name="LLMMessageReturnLLM", **kwargs)
        self._content = content

    def invoke(self, messages, system=None, **kwargs):
        return llm_messages.LLMMessage(
            content=self._content,
            sender=self,
            usage=Usage(input_tokens=42, output_tokens=17),
        )


def test_llm_message_return_prompt_integration():
    """Basic smoke test: prompt() works when invoke() returns LLMMessage (schema=str)."""
    llm = LLMMessageReturnLLM(content="prompt answer")
    assert llm.prompt("Hello there") == "prompt answer"


# This will fail now
def test_llm_message_return_with_schema():
    """prompt(schema=int) should return the LLMMessage content as-is when already finalized."""
    llm = LLMMessageReturnLLM(content='{"value": 42}')
    assert llm.prompt("What is the answer?", schema=int) == 42
```
```python
# This will fail now
def test_llm_message_return_with_schema():
    """prompt(schema=int) should return the LLMMessage content as-is when already finalized."""
    llm = LLMMessageReturnLLM(content="42")
    assert llm.prompt("What is the answer?", schema=int) == 42
```
This will (and rightly so) still fail even if LLMResponse is used. The expected content is `{"value": 42}`.
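A hedged sketch of why `"42"` fails while `'{"value": 42}'` passes. This assumes the schema finalizer parses a JSON object with a `"value"` field; `finalize_with_schema` is a hypothetical name for illustration, not the actual implementation:

```python
import json


def finalize_with_schema(content: str, schema):
    # Hypothetical finalizer: assumes the model returns a JSON object
    # whose "value" field is cast to the requested schema type.
    data = json.loads(content)
    return schema(data["value"])


# Passes: content is a JSON object carrying the value under "value".
assert finalize_with_schema('{"value": 42}', int) == 42

# Fails: json.loads("42") yields a bare int, so data["value"] raises TypeError.
```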
Also, I'm unsure why I should check for a double append here, as it is unrelated to my changes. In general we should avoid testing implementation details over contracts.
#116 introduces an improved LLMMessageReturnLLM that covers the new logic and will be integrated into test_llm_chat.
Ah, I see. I keep forgetting what the content should be. Yes, please consider making the test clearer, either here or in PR #116, since it will document the behavior for others to understand more easily.
The double append is irrelevant; I forgot to remove it.
```python
elif isinstance(invoke_response, Iterator):
    response.stream(invoke_response)
elif isinstance(invoke_response, llm_messages.LLMMessage):
    response = invoke_response
```
Shall we return early here? Otherwise response will go through the parsing below and fail. Please see the suggested unit test above.
This is the intended behavior; invoke should only output raw text (LLMMessage[str]).
dolaameng left a comment
LGTM. Please consider making the tests clearer to document the expected behavior. Thanks!