support ReqLLM.Response struct in ChatCompletion and preserve tool_results in map fallback #102
base: 1.x
Conversation
…onse
- Add ReqLLM.Response alias for cleaner code
- Consolidate format_response/1 clauses to handle both ReqResponse structs and maps
- Use ReqResponse.text/1 and ReqResponse.tool_calls/1 for struct responses
- Extract tool call formatting logic to handle both response types consistently
- Remove redundant pattern matching clauses
- Ensure tool_results always returns a list, even when empty
Force-pushed from 4c34712 to 3512877
2efaac74 Fix agentjido#65: Add HTTP status validation to StreamServer (agentjido#109)
2cd4206e fix: Use correct fields for ReqLLM.Error.Validation.Error (agentjido#107)
e7078609 Release v1.0.0-rc.7
35d4be19 Enhance Dialyzer configuration and refactor request struct creation for Elixir 1.19
54556fca Formatting for Elixir 1.19
2f568e95 Fixes for Elixir 1.19
e271ce3c Update model metadata and add new providers
73d05073 Refactor: use ReqLLM.ToolCall directly, remove normalization layer (agentjido#105)
9a26f336 Add normalize_model_id callback for Bedrock inference profiles (agentjido#104)
0ee42133 Replace aws_auth GitHub dependency with ex_aws_auth from Hex (agentjido#103)
83899338 Updates for v1.0.0-rc.6
02a609f4 Docs & Templates
6119aafc Normalize model ids with dashes to underscores
5bfe4cfe Add tests for Google files
1afd226c fix: restore file/video attachment support in Google provider (agentjido#82)
c97d9375 fix: correct tool_call structure and resolve compiler warnings in Bedrock tests
384594a2 Add AWS Bedrock provider with streaming support (agentjido#66)
d7e9d6d7 chore: Refine model fixtures and improve OpenRouter/Google coverage (agentjido#102)
52b60c78 Fix Z.AI provider timeout and reasoning token handling (agentjido#101)
8e06ddc5 feat: add Z.AI provider with standard and coding endpoints (agentjido#92)
bafc6eb6 Enhance XAI provider support and update fixtures (agentjido#100)
0da6303e Massive test fixture update (agentjido#99)
59f7c6f6 Refactor context and message handling for tool calls (agentjido#98)
238938f5 Resurrect: Enhance model compatibility task and update provider implementations (agentjido#88)
829a3739 fix(google): implement structured output using native responseSchema (agentjido#89)
85df690b fix: Respect max_tokens from Model.new/3 across all providers (agentjido#95)
e6d5bc14 Fix Anthropic provider tool result encoding for multi-turn conversations (agentjido#94)
1347d3b5 Revert "Enhance model compatibility task and update provider implementations (agentjido#87)"
3e671e83 Enhance model compatibility task and update provider implementations (agentjido#87)
7a3a1b51 Improve metadata provider error handling with structured Splode errors (agentjido#85)
4b94247f Fix get_provider/1 returning {:ok, nil} for metadata-only providers (agentjido#84)
a03d3bc9 fixes warning for duplicate clause (agentjido#80)
89b56437 v1.0.0-rc.5 release
cf6a0836 Update .gitignore, add fixes documentation, and enhance getting started guide (agentjido#79)
238bef11 Fixes agentjido#71
45c8cd05 Add Cerebras provider implementation (agentjido#78)
b7bfd28e Add dev tooling with tidewave to be able to project_eval into ReqLLM in a dev scenario (agentjido#73)
e608a63c chore/refresh coverage tests (agentjido#70)
f5552a8e feat(context): add Context.from_json/1 for JSON deserialization (agentjido#69)
23c92fb8 feat(schema): add `:in` type support to ReqLLM.Schema (agentjido#67)
566adb1c Prep rc.4 release
18970875 Formatting
52fbe8a0 Enhance documentation for provider architecture and streaming requests
1a895f31 Re-sync models, add Claude 4.5
c5ff026c Refactor streaming from Req to Finch for production stability (agentjido#63)
dfd4b177 Quality after merges
86a20b10 Remove Context.Codec and Response.Codec protocols (agentjido#53)
92733e59 Consolidate AI generation tasks into unified command (agentjido#48)
1d67c3d1 Fix: Translate max_tokens to max_completion_tokens for OpenAI reasoning models (agentjido#58)
11c806f2 Fix: Convert 'assistant' role to 'model' for Google Gemini API (agentjido#56)
679763c4 feat: add tool call support to Google Gemini provider (agentjido#54)
d810548b fix(http): Ensure req_http_options are passed to Req (agentjido#49)
b8f750e4 Update model name in the documents (agentjido#52)
046e2904 Update .gitignore to include Conductor and Language Server files; remove obsolete streaming race condition documentation
4e239242 Fix agentjido#31: Add cost calculation to Response.usage() (agentjido#35)
00d5a5bb fix: resolve streaming race condition causing BadMapError (issue agentjido#42) (agentjido#46)
39cd0a34 fix: encode tool_calls field in Context.Codec for OpenAI compatibility (agentjido#45)
16302270 Tag 1.0.0-rc.3
24dea327 Update documentation and rename capability testing guide
3c466da3 Update model configurations and add new providers (agentjido#37)
b84bb329 Enhance CHANGELOG with new features and improvements
3642fbeb Fix mix task documentation: use model_sync instead of models (agentjido#36)
b699102c Refine Stream Return system (agentjido#26)
6c37b770 feat: add file to google content part (agentjido#27)
8fcf7e94 Refactor LLM Fixture System (agentjido#22)
602036ba Update dependencies in mix.exs for improved project structure
ddf3ee6b Release version 1.0.0-rc.2 with significant enhancements and fixes
git-subtree-dir: projects/req_llm
git-subtree-split: 2efaac748d0d8b3d0ad85c2efe8179a44c2a7091
Thanks for tackling this! I hit #101 while working on extended thinking support, and this fix would unblock that work nicely. Happy to help test once this is ready. Let me know if there's anything blocking this from landing.
NOTICE: We've promoted the upcoming We will maintain
This PR updates `Jido.AI.Actions.ReqLlm.ChatCompletion` to properly handle `ReqLLM.Response` structs while maintaining backward compatibility with map-based responses.
About the error
The problem on the main branch: even on the latest main branch, the framework crashes during LLM interactions. This is due to the recent ReqLLM v1.0.0 update, which changed the response format from a plain map to a `%ReqLLM.Response{}` struct.
The code in `Jido.AI.Actions.ReqLlm.ChatCompletion` still uses bracket access (the Access behaviour), which structs do not implement, causing an immediate crash.
Error Message:
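The original error output is not captured above, but the failure mode is easy to reproduce in isolation: structs do not implement the Access behaviour, so bracket access on a struct raises `UndefinedFunctionError`. A minimal sketch, using a hypothetical stand-in struct rather than the real `ReqLLM.Response`:

```elixir
defmodule DemoResponse do
  # Hypothetical stand-in for ReqLLM.Response; the real struct has more fields.
  defstruct [:message, :usage]
end

response = %DemoResponse{message: "hello", usage: %{}}

# Map-style bracket access goes through the Access behaviour, which structs
# do not implement, so this raises UndefinedFunctionError (DemoResponse.fetch/2):
try do
  _ = response[:message]
rescue
  UndefinedFunctionError -> IO.puts("crashed: DemoResponse does not implement Access")
end

# Direct field access (or the struct's own API) works fine:
IO.puts(response.message)
```

This is why swapping bracket access for the `ReqLLM.Response` API (or plain dot access) removes the crash.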
Changes
- Add `alias ReqLLM.Response, as: ReqResponse`
- Consolidate `format_response/1` to use:
  - `ReqResponse.text/1` for content
  - `ReqResponse.tool_calls/1` for tool calls
- Extract `tool_results` when `tool_calls` is present
Motivation
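A rough sketch of what the consolidated `format_response/1` could look like. This is illustrative only: the module name, return shape, and use of `List.wrap/1` are assumptions, not the actual jido_ai implementation; only `ReqLLM.Response.text/1` and `ReqLLM.Response.tool_calls/1` come from the PR itself.

```elixir
defmodule FormatSketch do
  alias ReqLLM.Response, as: ReqResponse

  # Struct responses: go through the ReqLLM.Response API instead of
  # bracket access, which structs do not support.
  def format_response(response) when is_struct(response, ReqResponse) do
    %{
      content: ReqResponse.text(response),
      tool_results: List.wrap(ReqResponse.tool_calls(response))
    }
  end

  # Map fallback for req_llm versions that still return plain maps.
  def format_response(%{} = response) do
    %{
      content: response[:content],
      tool_results: List.wrap(response[:tool_calls])
    }
  end
end
```

`List.wrap/1` turns `nil` into `[]` and leaves lists untouched, which is one way to guarantee that `tool_results` is always a list even when the response carries no tool calls.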
- `req_llm` migrated to structs; `jido_ai` assumed maps and used `[]` access, causing crashes.
- The `ReqLLM.Response` API is safer and future-proof.
- Backward compatibility is preserved for `req_llm` versions that still return maps.
Impact
`ChatCompletion`.