Add usage data to ChatCompletion response #109
Open
jlecount wants to merge 1 commit into agentjido:1.x from prokeep:main
Conversation
* include usage from ReqLLM
* update documentation accordingly
* fix test helper so mocks are faithful to actual behavior
Contributor
NOTICE: We've promoted the upcoming We will maintain
nshkrdotcom pushed a commit to nshkrdotcom/jido_ai that referenced this pull request on Jan 29, 2026
2efaac74 Fix agentjido#65: Add HTTP status validation to StreamServer (agentjido#109)
2cd4206e fix: Use correct fields for ReqLLM.Error.Validation.Error (agentjido#107)
e7078609 Release v1.0.0-rc.7
35d4be19 Enhance Dialyzer configuration and refactor request struct creation for Elixir 1.19
54556fca Formatting for Elixir 1.19
2f568e95 Fixes for Elixir 1.19
e271ce3c Update model metadata and add new providers
73d05073 Refactor: use ReqLLM.ToolCall directly, remove normalization layer (agentjido#105)
9a26f336 Add normalize_model_id callback for Bedrock inference profiles (agentjido#104)
0ee42133 Replace aws_auth GitHub dependency with ex_aws_auth from Hex (agentjido#103)
83899338 Updates for v1.0.0-rc.6
02a609f4 Docs & Templates
6119aafc Normalize model ids with dashes to underscores
5bfe4cfe Add tests for Google files
1afd226c fix: restore file/video attachment support in Google provider (agentjido#82)
c97d9375 fix: correct tool_call structure and resolve compiler warnings in Bedrock tests
384594a2 Add AWS Bedrock provider with streaming support (agentjido#66)
d7e9d6d7 chore: Refine model fixtures and improve OpenRouter/Google coverage (agentjido#102)
52b60c78 Fix Z.AI provider timeout and reasoning token handling (agentjido#101)
8e06ddc5 feat: add Z.AI provider with standard and coding endpoints (agentjido#92)
bafc6eb6 Enhance XAI provider support and update fixtures (agentjido#100)
0da6303e Massive test fixture update (agentjido#99)
59f7c6f6 Refactor context and message handling for tool calls (agentjido#98)
238938f5 Resurrect: Enhance model compatibility task and update provider implementations (agentjido#88)
829a3739 fix(google): implement structured output using native responseSchema (agentjido#89)
85df690b fix: Respect max_tokens from Model.new/3 across all providers (agentjido#95)
e6d5bc14 Fix Anthropic provider tool result encoding for multi-turn conversations (agentjido#94)
1347d3b5 Revert "Enhance model compatibility task and update provider implementations (agentjido#87)"
3e671e83 Enhance model compatibility task and update provider implementations (agentjido#87)
7a3a1b51 Improve metadata provider error handling with structured Splode errors (agentjido#85)
4b94247f Fix get_provider/1 returning {:ok, nil} for metadata-only providers (agentjido#84)
a03d3bc9 fixes warning for duplicate clause (agentjido#80)
89b56437 v1.0.0-rc.5 release
cf6a0836 Update .gitignore, add fixes documentation, and enhance getting started guide (agentjido#79)
238bef11 Fixes agentjido#71
45c8cd05 Add Cerebras provider implementation (agentjido#78)
b7bfd28e Add dev tooling with tidewave to be able to project_eval into ReqLLM in a dev scenario (agentjido#73)
e608a63c chore/refresh coverage tests (agentjido#70)
f5552a8e feat(context): add Context.from_json/1 for JSON deserialization (agentjido#69)
23c92fb8 feat(schema): add `:in` type support to ReqLLM.Schema (agentjido#67)
566adb1c Prep rc.4 release
18970875 Formatting
52fbe8a0 Enhance documentation for provider architecture and streaming requests
1a895f31 Re-sync models, add Claude 4.5
c5ff026c Refactor streaming from Req to Finch for production stability (agentjido#63)
dfd4b177 Quality after merges
86a20b10 Remove Context.Codec and Response.Codec protocols (agentjido#53)
92733e59 Consolidate AI generation tasks into unified command (agentjido#48)
1d67c3d1 Fix: Translate max_tokens to max_completion_tokens for OpenAI reasoning models (agentjido#58)
11c806f2 Fix: Convert 'assistant' role to 'model' for Google Gemini API (agentjido#56)
679763c4 feat: add tool call support to Google Gemini provider (agentjido#54)
d810548b fix(http): Ensure req_http_options are passed to Req (agentjido#49)
b8f750e4 Update model name in the documents (agentjido#52)
046e2904 Update .gitignore to include Conductor and Language Server files; remove obsolete streaming race condition documentation
4e239242 Fix agentjido#31: Add cost calculation to Response.usage() (agentjido#35)
00d5a5bb fix: resolve streaming race condition causing BadMapError (issue agentjido#42) (agentjido#46)
39cd0a34 fix: encode tool_calls field in Context.Codec for OpenAI compatibility (agentjido#45)
16302270 Tag 1.0.0-rc.3
24dea327 Update documentation and rename capability testing guide
3c466da3 Update model configurations and add new providers (agentjido#37)
b84bb329 Enhance CHANGELOG with new features and improvements
3642fbeb Fix mix task documentation: use model_sync instead of models (agentjido#36)
b699102c Refine Stream Return system (agentjido#26)
6c37b770 feat: add file to google content part (agentjido#27)
8fcf7e94 Refactor LLM Fixture System (agentjido#22)
602036ba Update dependencies in mix.exs for improved project structure
ddf3ee6b Release version 1.0.0-rc.2 with significant enhancements and fixes
git-subtree-dir: projects/req_llm
git-subtree-split: 2efaac748d0d8b3d0ad85c2efe8179a44c2a7091
Description
This adds usage data, when it exists, from ReqLLM to the ChatCompletion response.
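A minimal sketch of the idea, assuming a plain map shape for both the ReqLLM response and the ChatCompletion result; the function name attach_usage/2 and the :usage key are illustrative assumptions, not the exact code in this PR:

```elixir
# Sketch only: attach_usage/2 and the :usage key are assumptions for
# illustration, not the actual implementation in this PR.
defmodule UsageExample do
  # Copy token usage from a ReqLLM response onto the chat completion result
  # when the provider reported it; otherwise return the result unchanged.
  def attach_usage(chat_completion, %{usage: usage}) when is_map(usage) do
    Map.put(chat_completion, :usage, usage)
  end

  def attach_usage(chat_completion, _response), do: chat_completion
end
```

For example, UsageExample.attach_usage(%{content: "hi"}, %{usage: %{input_tokens: 3, output_tokens: 5}}) yields a completion map carrying the usage counts, while a response without usage leaves the completion untouched.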
Type of Change
Testing
* Tests pass (mix test)
* Quality checks run (mix quality); quality checks are not passing in merge-base but did not worsen from this change
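The commit message also notes the test helper was fixed so mocks are faithful to actual behavior. A hypothetical sketch of that idea, with module, function, and key names that are illustrative rather than the project's real helpers:

```elixir
# Hypothetical test-helper sketch: MockHelpers, mock_completion/1, and the
# usage keys are illustrative, not the helper this PR actually changes.
defmodule MockHelpers do
  # Build a stubbed completion that, like a real ReqLLM response, includes
  # token usage so tests exercise the usage-handling code path.
  def mock_completion(opts \\ []) do
    %{
      content: Keyword.get(opts, :content, "mocked reply"),
      usage:
        Keyword.get(opts, :usage, %{
          input_tokens: 10,
          output_tokens: 20,
          total_tokens: 30
        })
    }
  end
end
```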
Checklist
* CHANGELOG.md (it is auto-generated by git_ops)
Related Issues
Closes #