core[patch]: Fix unit test that tests error callbacks on chat models #30228
Conversation
```python
llm = FakeListChatModel(
    responses=[message],
    error_on_chunk_number=i,
)
cb_async = FakeAsyncCallbackHandler()
llm_astream = llm.astream("Dummy message", config={"callbacks": [cb_async]})
```
`llm.astream("Dummy message", callbacks=[cb_async])` was incorrect: the callbacks need to be passed through the run config, not as a keyword argument. Changed it to `llm.astream("Dummy message", config={"callbacks": [cb_async]})`.
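For context, a minimal self-contained sketch of the corrected pattern. `FakeListChatModel` is the real fake chat model from `langchain_core`; `RecordingHandler` is a hypothetical stand-in for the test's `FakeAsyncCallbackHandler`, not from the PR:

```python
# Sketch: callbacks ride along in the RunnableConfig, not as a keyword
# argument of astream(). RecordingHandler is a hypothetical handler.
import asyncio

from langchain_core.callbacks import AsyncCallbackHandler
from langchain_core.language_models import FakeListChatModel


class RecordingHandler(AsyncCallbackHandler):
    """Hypothetical handler that records which callback hooks fired."""

    def __init__(self) -> None:
        self.events: list[str] = []

    async def on_llm_start(self, *args, **kwargs) -> None:
        self.events.append("start")

    async def on_llm_end(self, *args, **kwargs) -> None:
        self.events.append("end")


async def main() -> None:
    llm = FakeListChatModel(responses=["test"])
    handler = RecordingHandler()
    # Correct: pass callbacks through the config dict.
    async for _ in llm.astream("Dummy message", config={"callbacks": [handler]}):
        pass
    print(handler.events)  # expect: ["start", "end"]


asyncio.run(main())
```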
```python
cb_async = FakeAsyncCallbackHandler()
async for _ in llm.astream("Dummy message", callbacks=[cb_async]):
    pass
eval_response(cb_async, i)
```
This code was actually never executed before: once `FakeListChatModelError` was raised, execution left the `pytest.raises` block, so everything after the raising statement was skipped.
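To make the pitfall concrete, a hypothetical example (not the PR's code): any statement placed after the raising call inside a `pytest.raises` block is dead code.

```python
# Hypothetical illustration of the pytest.raises pitfall described above.
import pytest


class FakeError(Exception):
    pass


def boom() -> None:
    raise FakeError


def test_assertion_after_raise_never_runs() -> None:
    with pytest.raises(FakeError):
        boom()
        # Never reached: the exception exits the with-block immediately,
        # so even a failing assertion here would go unnoticed.
        raise AssertionError("dead code")
    # Assertions that must actually run belong after the with-block.
```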
```diff
@@ -93,34 +96,40 @@ async def test_async_batch_size() -> None:
     assert (cb.traced_runs[0].extra or {}).get("batch_size") == 1


-async def test_stream_error_callback() -> None:
-    message = "test"
+async def test_error_callback() -> None:
```
It's no use to test the error callback in streaming; the error is set in `LLM.generate()`.
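As a sketch of what that means in practice, the error path can be exercised through the non-streaming entry point instead. Everything below is illustrative: `AlwaysFailLLM` and `ErrorRecorder` are hypothetical names, not part of the PR.

```python
# Hypothetical sketch: the error surfaces in LLM.generate(), so on_llm_error
# can be exercised via invoke() with no streaming involved.
import pytest

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.language_models import LLM


class AlwaysFailLLM(LLM):
    """Hypothetical fake LLM whose _call always raises."""

    @property
    def _llm_type(self) -> str:
        return "always-fail"

    def _call(self, prompt, stop=None, run_manager=None, **kwargs):
        raise ValueError("boom")


class ErrorRecorder(BaseCallbackHandler):
    def __init__(self) -> None:
        self.errors: list[BaseException] = []

    def on_llm_error(self, error: BaseException, **kwargs) -> None:
        self.errors.append(error)


def test_error_callback_fires_on_invoke() -> None:
    recorder = ErrorRecorder()
    with pytest.raises(ValueError):
        AlwaysFailLLM().invoke("hi", config={"callbacks": [recorder]})
    # Checked outside the raises-block (see the pitfall noted earlier).
    assert len(recorder.errors) == 1
```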
```diff
@@ -83,9 +83,6 @@ class FakeStreamingListLLM(FakeListLLM):
     chunks in a streaming implementation.
     """

-    error_on_chunk_number: Optional[int] = None
```
I don't think we want to make this change: it breaks the API, and users may already be relying on the fake models in their own unit tests.
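To illustrate the concern, here is a hypothetical downstream test of the kind that would break if `error_on_chunk_number` were removed. The test itself is invented; it assumes the fake raises once the chunk index reaches the configured value.

```python
# Hypothetical downstream test that relies on error_on_chunk_number to
# inject a mid-stream failure; deleting the field would break it.
import pytest

from langchain_core.language_models import FakeListChatModel


def test_consumer_sees_chunks_before_midstream_failure() -> None:
    llm = FakeListChatModel(
        responses=["hello world"],
        error_on_chunk_number=2,  # assumption: raises at chunk index 2
    )
    received = []
    with pytest.raises(Exception):
        for chunk in llm.stream("Dummy message"):
            received.append(chunk)
    # Chunks 0 and 1 were emitted before the injected failure.
    assert len(received) == 2
```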
The PR seems to contain a breaking API change that isn't necessary for fixing the unit test.
I rolled back the changes on FakeListLLM. PTAL.
Fixes #29436