feat: Support LLMMessages to be returned from invoke #115

Merged

s-alexey merged 1 commit into ci from alexey/llm_messages on Apr 3, 2026

Conversation

@s-alexey (Contributor) commented Apr 2, 2026

Updates the LLMChat.invoke method signature and its usage in respond to support returning llm_messages.LLMMessage directly.
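The shape of the change can be sketched as a widened return type on `invoke`. The names below are hypothetical stand-ins modeled only on the identifiers quoted in this PR, not the actual implementation in the repo:

```python
from dataclasses import dataclass
from typing import Iterator, Union


# Hypothetical stand-in for kaggle_benchmarks.llm_messages.LLMMessage.
@dataclass
class LLMMessage:
    content: str


# Before this PR, invoke() returned raw text or a stream of text chunks;
# the change widens the return type so an LLMMessage can come back directly.
InvokeResponse = Union[str, Iterator[str], LLMMessage]


def to_text(invoke_response: InvokeResponse) -> str:
    """Collapse any invoke() result into raw text, as respond() would
    before schema parsing (a sketch, not the real implementation)."""
    if isinstance(invoke_response, LLMMessage):
        return invoke_response.content
    if isinstance(invoke_response, str):
        return invoke_response
    # Remaining case: an iterator of streamed text chunks.
    return "".join(invoke_response)
```

The key point for reviewers: the returned `LLMMessage` is unwrapped to text, so downstream handling stays uniform.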

@s-alexey requested a review from dolaameng on April 2, 2026 14:34
@s-alexey force-pushed the alexey/llm_messages branch from 290db71 to 6a98f64 on April 2, 2026 14:40
@dolaameng (Collaborator) commented Apr 2, 2026

Can you please add some tests to test_llm_chats.py? These will help me and other people understand the code change and safeguard it. Thanks!

# --- Tests for LLMMessage return path from invoke() (PR #115) ---


class LLMMessageReturnLLM(actors.LLMChat):
    """A mock LLM that returns an LLMMessage directly from invoke()."""

    def __init__(self, content="hello from LLMMessage", **kwargs):
        super().__init__(name="LLMMessageReturnLLM", **kwargs)
        self._content = content

    def invoke(self, messages, system=None, **kwargs):
        from kaggle_benchmarks import llm_messages
        from kaggle_benchmarks.usage import Usage

        return llm_messages.LLMMessage(
            content=self._content,
            sender=self,
            usage=Usage(input_tokens=42, output_tokens=17),
        )


def test_llm_message_return_prompt_integration():
    """Basic smoke test: prompt() works when invoke() returns LLMMessage (schema=str)."""
    llm = LLMMessageReturnLLM(content="prompt answer")
    assert llm.prompt("Hello there") == "prompt answer"

# This will fail now
def test_llm_message_return_with_schema():
    """prompt(schema=int) should return the LLMMessage content as-is when already finalized."""
    llm = LLMMessageReturnLLM(content='{"value": 42}')
    assert llm.prompt("What is the answer?", schema=int) == 42

@s-alexey (Contributor, author) commented Apr 3, 2026

# This will fail now
def test_llm_message_return_with_schema():
    """prompt(schema=int) should return the LLMMessage content as-is when already finalized."""
    llm = LLMMessageReturnLLM(content="42")
    assert llm.prompt("What is the answer?", schema=int) == 42

This will (and rightly so) still fail even if LLMResponse is used. The expected content is {"value": 42}.
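The author's point is that scalar schemas are parsed from an object payload, not from a bare scalar. A minimal illustration: the `{"value": ...}` wrapping is taken from the comment above, while the parser itself is a hypothetical stand-in for the repo's actual schema machinery:

```python
import json


def parse_with_schema(content: str, schema):
    """Illustrative parse step: scalar schemas like int are assumed to
    arrive wrapped as {"value": <scalar>} (assumption from this thread)."""
    payload = json.loads(content)
    if not isinstance(payload, dict) or "value" not in payload:
        raise ValueError(f"expected an object with a 'value' key, got {content!r}")
    return schema(payload["value"])
```

Under this sketch, `parse_with_schema('{"value": 42}', int)` yields `42`, while the bare string `"42"` is rejected, matching the corrected expectation in this comment.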

@s-alexey (Contributor, author) commented

Also, I'm unsure why I should check for double append here, as it is unrelated to my changes. In general, we should avoid testing implementation details over contracts.

#116 introduces an improved LLMMessageReturnLLM that covers the new logic and will be integrated into test_llm_chat.

@dolaameng (Collaborator) commented

Ah, I see — I keep forgetting what the content should be. Yes, please consider making the test clearer, either here or in PR #116, since that will document the behavior for others to understand more easily.

The double append is irrelevant; I forgot to remove it.

elif isinstance(invoke_response, Iterator):
    response.stream(invoke_response)
elif isinstance(invoke_response, llm_messages.LLMMessage):
    response = invoke_response
@dolaameng (Collaborator) commented

Shall we return early here? Otherwise, response will go through the parsing below and fail. Please see the suggested unit test above.

@s-alexey (Contributor, author) commented

This is the intended behavior; invoke should only output raw text (LLMMessage[str]).
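That contract can be sketched end to end: even when invoke() returns an LLMMessage, its content is raw text and still flows through schema parsing, with no early return. Helper and payload names below are assumptions from this thread, not the real respond() implementation:

```python
import json
from dataclasses import dataclass


@dataclass
class LLMMessage:
    # Stand-in for llm_messages.LLMMessage; the real class is in kaggle_benchmarks.
    content: str


def respond_sketch(invoke_response, schema=None):
    """No early return: an LLMMessage is unwrapped to its raw text, and the
    text then goes through the same parsing as any other invoke() output."""
    text = (invoke_response.content
            if isinstance(invoke_response, LLMMessage)
            else invoke_response)
    if schema is None:
        return text
    # Assumed payload shape from this thread: {"value": <scalar>}.
    return schema(json.loads(text)["value"])
```

This is why the schema test above needs `{"value": 42}` as content: the message is not returned as-is, so its text must survive the parsing step.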

@dolaameng (Collaborator) left a review

LGTM. Please consider making the tests clearer to document the expected behavior. Thanks!


@s-alexey merged commit 87bcc82 into ci on Apr 3, 2026
4 checks passed
@s-alexey deleted the alexey/llm_messages branch on April 3, 2026 16:46