@mikegros Could you review this PR? Only one piece of existing source code was changed; the other changes are new files. So this PR introduces new message-logging abilities and does not introduce breaking changes.
I'll try to review before our meeting. Otherwise, later today.
```python
    "using Monte Carlo; use standard lib only."
    "Plan at most two steps."
)
llm.save_messages(Path(f"messages.json"), indent=2)
```
Since this is connected to the LLM, not a specific agent, how are the messages sorted here? It would be nice to make it clear to a user in the documentation how that works. Maybe just with a small comment.
"sorted" as in what order the messages appear?
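For illustration, here is a minimal, self-contained sketch (hypothetical names, not the PR's actual classes) of the behavior in question: if messages are simply appended as calls happen, the saved file reflects chronological call order.

```python
import json
from pathlib import Path

class MessageLog:
    """Hypothetical sketch: messages are stored in the order the LLM was
    called, so the saved JSON file reflects chronological call order."""

    def __init__(self):
        self._messages = []

    def append(self, role, content):
        # Each call appends to the end; nothing is re-sorted later.
        self._messages.append({"role": role, "content": content})

    def save_messages(self, path, indent=2):
        path.write_text(json.dumps(self._messages, indent=indent))

log = MessageLog()
log.append("user", "first prompt")
log.append("assistant", "first reply")
log.append("user", "second prompt")
log.save_messages(Path("messages.json"))
```

If the actual implementation works this way, a one-line docstring note ("messages are saved in call order") would answer the question above.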
```python
parent = cast(BaseChatModel, super())

output = parent.invoke(input, config=config, **kwargs)
self._append_message(output)
```
This appends at each LLM call, which is really smooth, but I am a little worried that the message log could get messy in multiagent settings. Specifically, when we instantiate one LLM object and then hand it to multiple agents (or even agents that can be used as tools of other agents).
I think this will lead to conversation histories that are mixed into each other. It might be nice to have some meta_data tag recording which agent made the call.
This might be a "future work" effort instead of something to merge in here, but I wanted to bring it up just in case.
Implementing code to add agent info will indeed be a heavier lift and could easily break existing code. Could we revisit after this PR?
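To make the "future work" idea above concrete, here is a rough sketch of what agent-tagged logging could look like. Everything here (`TaggedLog`, `for_agent`, the `metadata` key) is hypothetical and not part of this PR.

```python
class TaggedLog:
    """Sketch: tag each logged message with the calling agent so
    interleaved multi-agent histories can be disentangled later."""

    def __init__(self):
        self.messages = []

    def append(self, content, agent=None):
        entry = {"content": content}
        if agent is not None:
            # Hypothetical metadata tag identifying the caller.
            entry["metadata"] = {"agent": agent}
        self.messages.append(entry)

    def for_agent(self, agent):
        # Filter the shared log down to one agent's history.
        return [m for m in self.messages
                if m.get("metadata", {}).get("agent") == agent]

log = TaggedLog()
log.append("plan the task", agent="planner")
log.append("run step 1", agent="executor")
log.append("revise plan", agent="planner")
```

The design question for a follow-up PR would be how the wrapper learns which agent is calling, since the shared LLM object has no built-in notion of its caller.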
@ndebard Let's chat about this draft regarding #201 on 3/19.
This draft PR implements `TracedChatOpenAI` and `TracedChatOllama`. Their function is to wrap `.invoke` (of `ChatOpenAI` and `ChatOllama`) in logic to save text to/from the LLM, and metadata from LLM calls if available. In particular, reasoning summaries are saved when present.
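The wrapper pattern described above can be sketched without any LangChain dependency. `BaseModel` and `TracedModel` below are illustrative stand-ins for the real classes; only the structure (subclass intercepts `invoke`, delegates to the parent, records input and output) mirrors the PR.

```python
class BaseModel:
    """Stand-in for an underlying chat model (e.g. ChatOpenAI)."""

    def invoke(self, prompt):
        # A real model would call the LLM here.
        return f"echo: {prompt}"

class TracedModel(BaseModel):
    """Stand-in for the traced wrappers: same interface as the parent,
    but every invoke() appends the exchange to an in-memory trace."""

    def __init__(self):
        self.trace = []

    def invoke(self, prompt):
        self.trace.append({"role": "user", "content": prompt})
        output = super().invoke(prompt)  # delegate to the wrapped model
        self.trace.append({"role": "assistant", "content": output})
        return output

llm = TracedModel()
result = llm.invoke("hello")
```

Because the wrapper preserves the parent's `invoke` signature, existing call sites should not need changes, which is consistent with the claim above that this PR is non-breaking.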