develra left a comment:
LGTM. As someone fairly unfamiliar with this code, it's hard for me to tell whether the test coverage is sufficient to be confident these changes are safe. It would be good to think through what might break as a result of these changes and make sure we have test coverage for it, especially given the somewhat sensitive timing of a new launch.
Major refactor of the LLM chat architecture to improve code organization, maintainability, and type safety.

Key Changes:
- Split `LLMChat` subclasses into distinct Non-Streaming and Streaming implementations. Streaming logic (primarily for notebooks) was complicating the core classes; this split makes the primary actors more concise and less error-prone.
- Moved provider-specific implementations into separate files: `openai.py` and `genai.py`.
- Replaced the generic `LLMResponse` with a strictly typed version, specifically enforcing types for `tool_usage` and `token_usage`.
- Updated the `invoke` method to accept explicit arguments.
- Migrated the OpenAI integration from the `completion` API to the more user-friendly `responses` API.

Testing:
- Added coverage for common use cases using real APIs (tests run conditionally if environment keys are present).
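The strictly typed response described above could look roughly like the following sketch. All class and field names here are illustrative stand-ins, not the PR's actual definitions:

```python
from dataclasses import dataclass

# Illustrative sketch only; the PR's real classes and fields may differ.
@dataclass(frozen=True)
class TokenUsage:
    input_tokens: int
    output_tokens: int

@dataclass(frozen=True)
class ToolUsage:
    name: str
    arguments: dict

@dataclass(frozen=True)
class LLMResponse:
    text: str
    tool_usage: list[ToolUsage]
    token_usage: TokenUsage

resp = LLMResponse(
    text="4",
    tool_usage=[ToolUsage(name="add", arguments={"a": 2, "b": 2})],
    token_usage=TokenUsage(input_tokens=12, output_tokens=1),
)
```

Frozen dataclasses like these make `tool_usage` and `token_usage` explicit at construction time instead of leaving them as loosely typed dict fields.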
yield tool_utils.ToolInvocationResult(
    name=part.function_response.name,
    call_id=f"call_{part.function_response.name}",
    arguments=calls.pop(0).args,
qq: If this is None, will it throw a TypeError in invoke_tool for functions without arguments? If so, we should use `calls.pop(0).args or {}` instead.
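A minimal sketch of the concern (the `invoke_tool` below is a hypothetical stand-in, not the PR's actual implementation): unpacking `None` with `**` raises a TypeError, while the `or {}` fallback keeps no-argument tools working.

```python
def invoke_tool(func, arguments):
    # `arguments` may be None when the model calls a tool that takes no
    # parameters; `func(**None)` raises TypeError, so fall back to {}.
    return func(**(arguments or {}))

def ping() -> str:
    return "pong"

print(invoke_tool(ping, None))  # prints "pong" instead of raising
```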
This class may include workarounds for specific proxy behaviors.
"""

def __init__(self, client: genai.Client, model: str, **kwargs):
If support_tool_calling is True, would the tool be called twice, once by the API's native support and once by our "simulated call"? For example, would the following test pass?
# %%
# --- Test Case: Tool called twice ---
COUNTER = 0
def increment_counter() -> int:
global COUNTER
COUNTER += 1
return COUNTER
@benchmark_test(include=[
"google/gemini-2.5-pro",
])
@kbench.task()
def test_stateful_tool_double_execution(llm):
global COUNTER
COUNTER = 0 # Reset for each test run
llm.prompt("Call the increment_counter tool.", tools=[increment_counter])
# If the bug exists, this will fail because COUNTER will be 2 (or more).
kbench.assertions.assert_equal(
1, COUNTER, expectation="Tool should be executed exactly once."
)
call = message.content
return [
    {
        "role": self.roles_mapping.get("system", "system"),
It seems the "system" role will NOT be recognized as a "tool result" by some models like Gemini. This causes tools to be called in an infinite loop until ToolInvocationLimitExhausted is reached. Shall we change the role to "user"?
For example, this seems to fail the same test as in https://github.com/Kaggle/kaggle-benchmarks/pull/12/changes#r3018051338
yield tool_utils.ToolInvocationResult(
    name=part.function_response.name,
    call_id=f"call_{part.function_response.name}",
    arguments=calls.pop(0).args,
Same line here: shall we match function_response to function_call by id (or name) instead of pop(0)? See here; it seems to be the intended way to associate function results with function calls.
Otherwise we might get misaligned results (I remember seeing this before):
function_call(add, 2, 3)
function_call(times, 4, 5)
function_response(times, 20) # this will be mis-assigned to add
function_response(add, 5)
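A sketch of the name-based matching being suggested. The dataclasses below are hypothetical stand-ins for the genai SDK types, and matching by a unique call id would be more robust when the same tool is called twice in one turn:

```python
from dataclasses import dataclass

@dataclass
class FunctionCall:
    name: str
    args: dict

@dataclass
class FunctionResponse:
    name: str
    response: object

def match_by_name(responses, calls):
    # Pair each response with the pending call of the same name, so
    # out-of-order responses are no longer mis-assigned by pop(0).
    pending = {c.name: c for c in calls}
    return [(resp, pending.pop(resp.name)) for resp in responses]

calls = [FunctionCall("add", {"a": 2, "b": 3}),
         FunctionCall("times", {"a": 4, "b": 5})]
responses = [FunctionResponse("times", 20), FunctionResponse("add", 5)]

for resp, call in match_by_name(responses, calls):
    # "times" now pairs with the times call instead of add.
    print(resp.name, call.args)
```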
Major refactoring of the LLM interaction layer, significantly enhancing the `llm.prompt` method to establish it as the primary, unified entry point for all model communications. The goal is to abstract away model-specific logic, providing seamless support for structured outputs, automatic tool calling, and vision capabilities across all integrated models. This simplifies task definitions and enhances the user experience by providing a consistent, high-level API.
Enhanced `llm.prompt` with automatic tool calling:
Automatic tool-calling emulation:
Refactored Actor model:
The `llms.py` module has been streamlined. API-specific logic has been moved into dedicated `actors/genai.py` and `actors/openai.py` modules. Streaming logic was split into dedicated classes (`StreamingGoogleGenAI`, `StreamingOpenAIResponsesAPI`) to isolate it from the core API used for scheduled runs.
Improved vision and image support:
Enhanced support for multimodal inputs, particularly for the Gemini API. The framework now correctly handles image content, including captions and various data formats (URLs and base64).
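As an illustration of the kind of normalization involved (a hypothetical helper, not the framework's actual code), image inputs in either URL or raw-bytes form can be reduced to one uniform shape:

```python
import base64

def normalize_image(image) -> dict:
    # Hypothetical helper: accept a URL string or raw bytes and return a
    # uniform content part; the framework's real representation may differ.
    if isinstance(image, str) and image.startswith(("http://", "https://")):
        return {"kind": "image_url", "url": image}
    if isinstance(image, (bytes, bytearray)):
        data = base64.b64encode(bytes(image)).decode("ascii")
        return {"kind": "image_b64", "data": data}
    raise TypeError(f"unsupported image input: {type(image).__name__}")

print(normalize_image("https://example.com/cat.png"))
print(normalize_image(b"\x89PNG")["kind"])  # image_b64
```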
New agentic assertion:
Added `assert_tool_was_invoked` to allow for testing and evaluation of agentic behavior by verifying that a specific tool was used during a task.
Updated Examples & Tests:
Examples were updated to the `llm.prompt` API for tool use. Added integration tests (`test_api_integration.py`) that run against live OpenAI, Google, and Model Proxy endpoints (when API keys are available) to ensure cross-model consistency.
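A plausible shape for such an assertion, sketched here under assumed conventions (the actual kbench helper and its invocation-record format may differ):

```python
def assert_tool_was_invoked(invocations, tool_name: str) -> None:
    # Assumed shape: `invocations` is the list of tool names recorded
    # while the task ran; fail loudly if the expected tool never appeared.
    if tool_name not in invocations:
        raise AssertionError(
            f"expected tool {tool_name!r} to be invoked, saw {invocations!r}"
        )

assert_tool_was_invoked(["increment_counter"], "increment_counter")  # passes
```

This kind of check lets a task assert on agentic behavior (which tools ran) rather than only on the model's final text output.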