Releases: llamastack/llama-stack-client-python
v0.2.19-alpha.1
0.2.19-alpha.1 (2025-08-22)
Full Changelog: v0.2.18-alpha.3...v0.2.19-alpha.1
Features
- api: manual updates (119bdb2)
- api: query_metrics, batches, changes (c935c79)
- api: some updates to query metrics (8f0f7a5)
Bug Fixes
- agent: fix wrong module import in ReAct agent (#262) (c17f3d6), closes #261
- build: kill explicit listing of python3.13 for now (5284b4a)
Chores
- update github action (af6b97e)
Build System
- Bump version to 0.2.18 (53d95ba)
v0.2.18-alpha.3
0.2.18-alpha.3 (2025-08-14)
Full Changelog: v0.2.18-alpha.2...v0.2.18-alpha.3
Features
- `llama-stack-client providers inspect PROVIDER_ID` (#181) (6d18aae)
- add client-side utility for getting OAuth tokens simply (#230) (91156dc)
- add client.chat.completions.create() and client.completions.create() (#226) (ee0e65e)
- Add llama-stack-client datasets unregister command (#222) (38cd91c)
- add support for chat sessions (#167) (ce3b30f)
- add type hints to event logger util (#140) (26f3c33)
- add updated batch inference types (#220) (ddb93ca)
- add weighted_average aggregation function support (#208) (b62ac6c)
- agent: support multiple tool calls (#192) (43ea2f6)
- agent: support plain function as client_tool (#187) (2ec8044)
- api: update via SDK Studio (48fd19c)
- async agent wrapper (#169) (fc9907c)
- autogen llama-stack-client CLI reference doc (#190) (e7b19a5)
- client.responses.create() and client.responses.retrieve() (#227) (fba5102)
- datasets api updates (#203) (b664564)
- enable_persist: sync updates from stainless branch: yanxi0830/dev (#145) (59a02f0)
- new Agent API (#178) (c2f73b1)
- support client tool output metadata (#180) (8e4fd56)
- Sync updates from stainless branch: ehhuang/dev (#149) (367da69)
- unify max infer iters with server/client tools (#173) (548f2de)
- update react with new agent api (#189) (ac9d1e2)
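Several of these agent features lend themselves to short sketches. For example, supporting a plain function as a client_tool (#187) suggests deriving a tool definition from the function's signature and docstring; the helper below (`function_to_tool`, a hypothetical name, not the SDK's actual API) is a minimal illustration of that idea.

```python
import inspect

def function_to_tool(fn):
    """Hypothetical sketch: derive a tool definition from a plain
    Python function, in the spirit of client_tool support in #187."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(sig.parameters),
    }

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

tool = function_to_tool(get_weather)
# tool["name"] == "get_weather"; tool["parameters"] == ["city"]
```

The SDK's real conversion is richer (it maps type hints to a JSON schema); this only shows why a bare function carries enough metadata to register as a tool.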
Bug Fixes
- `llama-stack-client provider inspect` should use retrieve (#202) (e33b5bf)
- accept extra_headers in agent.create_turn and pass them faithfully (#228) (e72d9e8)
- added uv.lock (546e0df)
- agent: better error handling (#207) (5746f91)
- agent: initialize toolgroups/client_tools (#186) (458e207)
- broken .retrieve call using `identifier=` (#135) (626805a)
- bump to 0.2.1 (edb6173)
- bump version (b6d45b8)
- bump version in another place (7253433)
- cli: align cli toolgroups register to the new arguments (#231) (a87b6f7)
- correct toolgroups_id parameter name on unregister call (#235) (1be7904)
- fix duplicate model get help text (#188) (4bab07a)
- llama-stack-client providers list (#134) (930138a)
- react agent (#200) (b779979)
- React Agent for non-llama models (#174) (ee5dd2b)
- React agent should be able to work with provided config (#146) (08ab5df)
- react agent with custom tool parser n_iters (#184) (aaff961)
- remove the alpha suffix in run_benchmark.py (#179) (638f7f2)
- update CONTRIBUTING.md to point to uv instead of rye (3fbe0cd)
- update uv lock (cc072c8)
- validate endpoint url (#196) (6fa8095)
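The endpoint-validation fix (#196) amounts to rejecting base URLs that lack a usable scheme or host before any request is made. A minimal sketch of that check, using a hypothetical `validate_endpoint` helper rather than the SDK's internal code:

```python
from urllib.parse import urlparse

def validate_endpoint(url: str) -> str:
    """Hypothetical sketch: require an http(s) scheme and a host,
    and normalize away a trailing slash."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"invalid endpoint URL: {url!r}")
    return url.rstrip("/")

base = validate_endpoint("http://localhost:8321/")
# base == "http://localhost:8321"
```

Catching a malformed base URL at client construction gives a clear error instead of a confusing connection failure on the first call.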
v0.2.8
Release corresponding to Llama Stack v0.2.8
What's Changed
- fix: accept extra_headers in agent.create_turn and pass them faithfully by @ashwinb in #228
- feat: add client-side utility for getting OAuth tokens simply by @ashwinb in #230
- fix(cli): align cli toolgroups register to the new arguments by @dolfim-ibm in #231
New Contributors
- @dolfim-ibm made their first contribution in #231
Full Changelog: v0.2.7...v0.2.8
v0.2.0
v0.1.9
Along with release in https://github.com/meta-llama/llama-stack/releases/tag/v0.1.9
Full Changelog: v0.1.8...v0.1.9
v0.1.8
Published along with https://github.com/meta-llama/llama-stack/releases/tag/v0.1.8
What's Changed
- feat: datasets api updates by @yanxi0830 in #203
- Sync updates from stainless branch: hardikjshah/dev by @hardikjshah in #204
- fix(agent): better error handling by @ehhuang in #207
- feat: add weighted_average aggregation function support by @SLR722 in #208
- fix: fix duplicate model get help text by @reidliu41 in #188
- simplify import paths; Sync updates from stainless branch: ehhuang/dev by @ehhuang in #205
- Sync updates from stainless branch: yanxi0830/dev by @yanxi0830 in #209
New Contributors
- @reidliu41 made their first contribution in #188
Full Changelog: v0.1.7...v0.1.8
v0.1.7
Published along with https://github.com/meta-llama/llama-stack/releases/tag/v0.1.7
What's Changed
- Add `-h` help flag support to all CLI commands by @alinaryan in #185
- feat: update react with new agent api by @yanxi0830 in #189
- feat: autogen llama-stack-client CLI reference doc by @yanxi0830 in #190
- chore: remove litellm type conversion by @ehhuang in #193
- chore: AsyncAgent should use ToolResponse instead of ToolResponseMessage by @ehhuang in #197
- fix: validate endpoint url by @cdoern in #196
- fix: react agent by @ehhuang in #200
- Sync updates from stainless branch: main by @yanxi0830 in #198
- chore: Sync updates from stainless branch: ehhuang/dev by @ehhuang in #199
- feat(agent): support multiple tool calls by @ehhuang in #192
- feat: `llama-stack-client providers inspect PROVIDER_ID` by @cdoern in #181
- chore: Sync updates from stainless branch: main by @yanxi0830 in #201
- fix: `llama-stack-client provider inspect` should use retrieve by @cdoern in #202
New Contributors
- @alinaryan made their first contribution in #185
Full Changelog: v0.1.6...v0.1.7
v0.1.6
Published along with Llama Stack v0.1.6
What's Changed
- feat: unify max infer iters with server/client tools by @yanxi0830 in #173
- chore: api sync, deprecate allow_resume_turn + rename task_config->benchmark_config (Sync updates from stainless branch: yanxi0830/dev) by @yanxi0830 in #176
- fix: remove the alpha suffix in run_benchmark.py by @SLR722 in #179
- chore: Sync updates from stainless branch: ehhuang/dev by @ehhuang in #182
- feat: support client tool output metadata by @ehhuang in #180
- chore: use rich to format logs by @ehhuang in #177
- feat: new Agent API by @ehhuang in #178
- fix: react agent with custom tool parser n_iters by @yanxi0830 in #184
- fix(agent): initialize toolgroups/client_tools by @ehhuang in #186
- feat: async agent wrapper by @yanxi0830 in #169
- feat(agent): support plain function as client_tool by @ehhuang in #187
Full Changelog: v0.1.5...v0.1.6
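The async agent wrapper (#169) follows a common pattern: expose the synchronous agent's blocking calls behind an async facade by running them in a worker thread. A minimal sketch of that pattern (class names here are illustrative; the SDK's wrapper is richer):

```python
import asyncio

class SyncAgent:
    """Stand-in for a blocking, synchronous agent."""
    def create_turn(self, message: str) -> str:
        return f"echo: {message}"

class AsyncAgentWrapper:
    """Hypothetical sketch: run the sync agent's blocking call
    in a worker thread so it can be awaited from async code."""
    def __init__(self, agent: SyncAgent):
        self._agent = agent

    async def create_turn(self, message: str) -> str:
        return await asyncio.to_thread(self._agent.create_turn, message)

result = asyncio.run(AsyncAgentWrapper(SyncAgent()).create_turn("hi"))
# result == "echo: hi"
```

Delegating via `asyncio.to_thread` keeps the event loop responsive without duplicating the agent's logic in an async implementation.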
v0.1.5
Published along with Llama Stack v0.1.5
What's Changed
- feat: add support for chat sessions by @cdoern in #167
- fix react agent by @hardikjshah in #172
- fix: React Agent for non-llama models by @hardikjshah in #174
Full Changelog: v0.1.4...v0.1.5
v0.1.4
What's Changed
- feat: Sync updates from stainless branch: ehhuang/dev by @ehhuang in #149
- Update CODEOWNERS by @SLR722 in #151
- chore: deprecate eval task (Sync updates from stainless branch: main) by @yanxi0830 in #150
- refine the benchmark eval UX by @SLR722 in #156
- fix: React agent should be able to work with provided config by @hardikjshah in #146
- feat (1/n): agents resume turn (Sync updates from stainless branch: yanxi0830/dev) by @yanxi0830 in #157
- v0.1.4 - Sync updates from stainless branch: yanxi0830/dev by @yanxi0830 in #164
New Contributors
- @hardikjshah made their first contribution in #146
Full Changelog: v0.1.3...v0.1.4