- The default `chat_openai()` model is now GPT-4o.
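  A quick sketch of pinning a different model if you don't want the new default (the alternative model name is illustrative, and an `OPENAI_API_KEY` is assumed):

  ```r
  # GPT-4o is now the default; pass `model` explicitly to use something else.
  chat <- chat_openai(model = "gpt-4o-mini")
  chat$chat("Say hello in one word.")
  ```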
- New `Chat$set_turns()` to set turns. `Chat$turns()` is now `Chat$get_turns()`. `Chat$system_prompt()` is replaced with `Chat$set_system_prompt()` and `Chat$get_system_prompt()`.
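  A sketch of the renamed accessors, using an OpenAI chat purely for illustration:

  ```r
  chat <- chat_openai()
  chat$set_system_prompt("Reply in one short sentence.")
  chat$get_system_prompt()
  chat$chat("Name a planet.")

  # Copy the accumulated turns into a fresh chat.
  turns <- chat$get_turns()
  chat2 <- chat_openai()
  chat2$set_turns(turns)
  ```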
- Async and streaming async chat are now event-driven and use `later::later_fd()` to wait efficiently on curl socket activity (#157).
- New `chat_bedrock()` to chat with AWS Bedrock models (#50).
- New `chat$extract_data()` uses the structured data API where available (and tool calling otherwise) to extract data structured according to a known type specification. You can create specs with the functions `type_boolean()`, `type_integer()`, `type_number()`, `type_string()`, `type_enum()`, `type_array()`, and `type_object()` (#31).
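  A sketch of structured extraction, assuming an OpenAI chat; the object fields and descriptions are only examples:

  ```r
  chat <- chat_openai()

  type_person <- type_object(
    "A person mentioned in the text.",
    name = type_string("The person's full name."),
    age = type_integer("The person's age in years.")
  )

  chat$extract_data(
    "My name is Susan and I'm 13 years old.",
    type = type_person
  )
  ```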
- The general `ToolArg()` has been replaced by the more specific `type_*()` functions. `ToolDef()` has been renamed to `tool()`.
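  A sketch of the new style, where a tool's arguments are described with `type_*()` specs (the descriptions are illustrative):

  ```r
  tool_rnorm <- tool(
    rnorm,
    "Draws numbers from a random normal distribution.",
    n = type_integer("Number of observations. Must be a positive integer."),
    mean = type_number("Mean of the distribution."),
    sd = type_number("Standard deviation. Must be non-negative.")
  )
  ```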
- `content_image_url()` will now create inline images when given a data URL (#110).
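  A rough sketch, where `path` is a placeholder for a local PNG and `base64enc` is used only to build the data URL:

  ```r
  data_url <- paste0(
    "data:image/png;base64,",
    base64enc::base64encode(path)
  )
  chat <- chat_openai()
  chat$chat("What is in this image?", content_image_url(data_url))
  ```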
- Streaming Ollama results works once again (#117).
- Streaming OpenAI results now capture more results, including `logprobs` (#115).
- New `interpolate()` and `prompt_file()` make it easier to create prompts that are a mix of static text and dynamic values.
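  `interpolate()` splices values from the calling environment into `{{ }}` placeholders, e.g.:

  ```r
  user_name <- "Hadley"
  prompt <- interpolate("The user's name is {{ user_name }}. Greet them by name.")

  chat <- chat_openai()
  chat$chat(prompt)
  ```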
- You can find how many tokens you've used in the current session by calling `token_usage()`.
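  For example, after one or more chats in the current session:

  ```r
  chat <- chat_openai()
  chat$chat("Tell me a joke.")

  token_usage()
  ```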
- `chat_browser()` and `chat_console()` are now `live_browser()` and `live_console()`.
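  Both work on an existing chat object, e.g.:

  ```r
  chat <- chat_openai()
  live_console(chat)    # chat interactively at the R console
  # live_browser(chat)  # or in a simple browser app
  ```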
- The `echo` option can now be one of three values: "none", "text", or "all". If "all", you'll now see both user and assistant turns, and all content types will be printed, not just text. When running in the global environment, `echo` defaults to "text", and when running inside a function it defaults to "none".
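  A sketch, assuming `echo` is set when the chat is created:

  ```r
  chat <- chat_openai(echo = "all")  # show user and assistant turns, all content types
  chat$chat("What's 2 + 2?")
  ```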
- `chat$register_tool()` now takes an object created by `tool()`. This makes it a little easier to reuse tool definitions (#32).
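  Continuing the `tool()` sketch above, registration and use look roughly like:

  ```r
  chat <- chat_openai()
  chat$register_tool(tool_rnorm)
  chat$chat("Draw three numbers from a standard normal distribution.")
  ```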
- `new_chat_openai()` is now `chat_openai()`.
- Claude and Gemini are now supported via `chat_claude()` and `chat_gemini()`.
- The Snowflake Cortex Analyst is now supported via `chat_cortex()` (#56).
- Databricks is now supported via `chat_databricks()` (#152).