Conversation

@jhonathas

Description

This pull request introduces support for passing provider-specific options directly to ReqLLM via a new provider_options parameter in the ChatCompletion action. This allows for greater flexibility when using different LLM providers (e.g., passing response_format for OpenAI or specific flags for Anthropic) without requiring hardcoded logic in the library.

It also refactors the response handling to correctly support ReqLLM.Response structs and streamlines parameter validation.

Changes

Features

Added provider_options (keyword list) to the ChatCompletion action schema.
Updated run_with_validated_params to whitelist and propagate provider_options to the ReqLLM call.
Removed implicit json_mode logic in favor of allowing the caller to pass explicit configuration via provider_options (e.g., response_format: %{type: "json_object"}).
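
The schema addition and option whitelisting described above might look roughly like the following sketch. The `provider_options` name and its propagation follow the PR description, but the schema fields and the private function body are illustrative, and `ReqLLM.generate_text/3` is assumed to accept a keyword list of options:

```elixir
# Illustrative NimbleOptions-style action schema with the new key:
schema: [
  model: [type: :any, required: true],
  prompt: [type: :string, required: true],
  provider_options: [
    type: :keyword_list,
    default: [],
    doc: "Provider-specific options forwarded verbatim to ReqLLM"
  ]
]

# In run_with_validated_params, default the key so a missing value
# cannot raise, then forward the options to the ReqLLM call:
defp run_with_validated_params(params, _context) do
  provider_options = Map.get(params, :provider_options, [])
  ReqLLM.generate_text(params.model, params.prompt, provider_options)
end
```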

Fixes

Fixed a KeyError and unintended parameter stripping by correctly merging provider_options defaults.
Updated format_response/1 to handle both ReqLLM.Response structs and legacy maps, preventing an UndefinedFunctionError.

Usage Example

ChatCompletion.run(%{
  model: model,
  prompt: prompt,
  # Now you can pass provider-specific options directly
  provider_options: [
    response_format: %{type: "json_object"}
  ]
}, context)
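
With `response_format` set to `json_object`, the returned text should itself be a JSON string. Assuming the action returns `{:ok, result}` with the text under a `:content` key (the exact result shape is not shown in this PR), the caller could decode it like:

```elixir
# Hypothetical caller-side decoding; result shape is an assumption.
with {:ok, result} <- ChatCompletion.run(params, context),
     {:ok, decoded} <- Jason.decode(result.content) do
  {:ok, decoded}
end
```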

…onse

- Add ReqLLM.Response alias for cleaner code
- Consolidate format_response/1 clauses to handle both ReqResponse structs and maps
- Use ReqResponse.text/1 and ReqResponse.tool_calls/1 for struct responses
- Extract tool call formatting logic to handle both response types consistently
- Remove redundant pattern matching clauses
- Ensure tool_results always returns a list, even when empty
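
The consolidated `format_response/1` described in the bullets above might look like this sketch. The accessors `ReqLLM.Response.text/1` and `tool_calls/1` are the ones the commit mentions; the keys assumed on the legacy map (`:content`, `:tool_calls`) and the helper name are illustrative:

```elixir
alias ReqLLM.Response, as: ReqResponse

# Struct responses go through ReqLLM's accessor functions.
defp format_response(%ReqResponse{} = response) do
  %{
    content: ReqResponse.text(response),
    tool_results: format_tool_calls(ReqResponse.tool_calls(response))
  }
end

# Legacy plain maps (assumed keys) are handled by a second clause.
defp format_response(%{} = response) do
  %{
    content: Map.get(response, :content, ""),
    tool_results: format_tool_calls(Map.get(response, :tool_calls))
  }
end

# tool_results is always a list, even when there were no tool calls.
defp format_tool_calls(nil), do: []
defp format_tool_calls(calls) when is_list(calls), do: calls
```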
layeddie pushed a commit to layeddie/jido_ai that referenced this pull request Jan 16, 2026
2efaac74 Fix agentjido#65: Add HTTP status validation to StreamServer (agentjido#109)
2cd4206e fix: Use correct fields for ReqLLM.Error.Validation.Error (agentjido#107)
e7078609 Release v1.0.0-rc.7
35d4be19 Enhance Dialyzer configuration and refactor request struct creation for Elixir 1.19
54556fca Formatting for Elixir 1.19
2f568e95 Fixes for Elixir 1.19
e271ce3c Update model metadata and add new providers
73d05073 Refactor: use ReqLLM.ToolCall directly, remove normalization layer (agentjido#105)
9a26f336 Add normalize_model_id callback for Bedrock inference profiles (agentjido#104)
0ee42133 Replace aws_auth GitHub dependency with ex_aws_auth from Hex (agentjido#103)
83899338 Updates for v1.0.0-rc.6
02a609f4 Docs & Templates
6119aafc Normalize model ids with dashes to underscores
5bfe4cfe Add tests for Google files
1afd226c fix: restore file/video attachment support in Google provider (agentjido#82)
c97d9375 fix: correct tool_call structure and resolve compiler warnings in Bedrock tests
384594a2 Add AWS Bedrock provider with streaming support (agentjido#66)
d7e9d6d7 chore: Refine model fixtures and improve OpenRouter/Google coverage (agentjido#102)
52b60c78 Fix Z.AI provider timeout and reasoning token handling (agentjido#101)
8e06ddc5 feat: add Z.AI provider with standard and coding endpoints (agentjido#92)
bafc6eb6 Enhance XAI provider support and update fixtures (agentjido#100)
0da6303e Massive test fixture update (agentjido#99)
59f7c6f6 Refactor context and message handling for tool calls (agentjido#98)
238938f5 Resurrect: Enhance model compatibility task and update provider implementations (agentjido#88)
829a3739 fix(google): implement structured output using native responseSchema (agentjido#89)
85df690b fix: Respect max_tokens from Model.new/3 across all providers (agentjido#95)
e6d5bc14 Fix Anthropic provider tool result encoding for multi-turn conversations (agentjido#94)
1347d3b5 Revert "Enhance model compatibility task and update provider implementations (agentjido#87)"
3e671e83 Enhance model compatibility task and update provider implementations (agentjido#87)
7a3a1b51 Improve metadata provider error handling with structured Splode errors (agentjido#85)
4b94247f Fix get_provider/1 returning {:ok, nil} for metadata-only providers (agentjido#84)
a03d3bc9 fixes warning for duplicate clause (agentjido#80)
89b56437 v1.0.0-rc.5 release
cf6a0836 Update .gitignore, add fixes documentation, and enhance getting started guide (agentjido#79)
238bef11 Fixes agentjido#71
45c8cd05 Add Cerebras provider implementation (agentjido#78)
b7bfd28e Add dev tooling with tidewave to be able to project_eval into ReqLLM in a dev scenario (agentjido#73)
e608a63c chore/refresh coverage tests (agentjido#70)
f5552a8e feat(context): add Context.from_json/1 for JSON deserialization (agentjido#69)
23c92fb8 feat(schema): add `:in` type support to ReqLLM.Schema (agentjido#67)
566adb1c Prep rc.4 release
18970875 Formatting
52fbe8a0 Enhance documentation for provider architecture and streaming requests
1a895f31 Re-sync models, add Claude 4.5
c5ff026c Refactor streaming from Req to Finch for production stability (agentjido#63)
dfd4b177 Quality after merges
86a20b10 Remove Context.Codec and Response.Codec protocols (agentjido#53)
92733e59 Consolidate AI generation tasks into unified command (agentjido#48)
1d67c3d1 Fix: Translate max_tokens to max_completion_tokens for OpenAI reasoning models (agentjido#58)
11c806f2 Fix: Convert 'assistant' role to 'model' for Google Gemini API (agentjido#56)
679763c4 feat: add tool call support to Google Gemini provider (agentjido#54)
d810548b fix(http): Ensure req_http_options are passed to Req (agentjido#49)
b8f750e4 Update model name in the documents (agentjido#52)
046e2904 Update .gitignore to include Conductor and Language Server files; remove obsolete streaming race condition documentation
4e239242 Fix agentjido#31: Add cost calculation to Response.usage() (agentjido#35)
00d5a5bb fix: resolve streaming race condition causing BadMapError (issue agentjido#42) (agentjido#46)
39cd0a34 fix: encode tool_calls field in Context.Codec for OpenAI compatibility (agentjido#45)
16302270 Tag 1.0.0-rc.3
24dea327 Update documentation and rename capability testing guide
3c466da3 Update model configurations and add new providers (agentjido#37)
b84bb329 Enhance CHANGELOG with new features and improvements
3642fbeb Fix mix task documentation: use model_sync instead of models (agentjido#36)
b699102c Refine Stream Return system (agentjido#26)
6c37b770 feat: add file to google content part (agentjido#27)
8fcf7e94 Refactor LLM Fixture System (agentjido#22)
602036ba Update dependencies in mix.exs for improved project structure
ddf3ee6b Release version 1.0.0-rc.2 with significant enhancements and fixes

git-subtree-dir: projects/req_llm
git-subtree-split: 2efaac748d0d8b3d0ad85c2efe8179a44c2a7091
@mikehostetler changed the base branch from main to 1.x on January 24, 2026 at 12:39
@mikehostetler
Contributor

NOTICE: We've promoted the upcoming jido_ai 2.0 release to the main branch, so I've redirected this PR to point at 1.x.

We will maintain 1.x for a while - thanks!
