Commit 78de486

release: 0.4.0-alpha.1 (#37)
Automated Release PR

---

## 0.4.0-alpha.1 (2025-10-31)

Full Changelog: [v0.2.23-alpha.1...v0.4.0-alpha.1](v0.2.23-alpha.1...v0.4.0-alpha.1)

### ⚠ BREAKING CHANGES

* **api:** /v1/inspect only lists v1 apis by default
* **api:** /v1/inspect only lists v1 apis by default
* **api:** use input_schema instead of parameters for tools
* **api:** fixes to remove deprecated inference resources

### Features

* **api:** Adding prompts API to stainless config ([5ab8d74](5ab8d74))
* **api:** expires_after changes for /files ([a0b0fb7](a0b0fb7))
* **api:** fix file batches SDK to list_files ([25a0f10](25a0f10))
* **api:** fixes to remove deprecated inference resources ([367d775](367d775))
* **api:** fixes to URLs ([e4f7840](e4f7840))
* **api:** manual updates ([7d2e375](7d2e375))
* **api:** manual updates ([0302d54](0302d54))
* **api:** manual updates ([98a596f](98a596f))
* **api:** manual updates ([c6fb0b6](c6fb0b6))
* **api:** manual updates??! ([4dda064](4dda064))
* **api:** move datasets to beta, vector_db -> vector_store ([f32c0be](f32c0be))
* **api:** move post_training and eval under alpha namespace ([aec1d5f](aec1d5f))
* **api:** moving { rerank, agents } to `client.alpha.` ([793e069](793e069))
* **api:** removing openai/v1 ([b5432de](b5432de))
* **api:** SDKs for vector store file batches ([b0676c8](b0676c8))
* **api:** SDKs for vector store file batches apis ([88731bf](88731bf))
* **api:** several updates including Conversations, Responses changes, etc. ([e0728d5](e0728d5))
* **api:** sync ([7d85013](7d85013))
* **api:** tool api (input_schema, etc.) changes ([06f2bca](06f2bca))
* **api:** updates to vector_store, etc. ([19535c2](19535c2))
* **api:** updating post /v1/files to have correct multipart/form-data ([f1cf9d6](f1cf9d6))
* **api:** use input_schema instead of parameters for tools ([8910a12](8910a12))
* **api:** vector_db_id -> vector_store_id ([079d89d](079d89d))

### Bug Fixes

* **api:** another fix to capture correct responses.create() params ([6acae91](6acae91))
* **api:** fix the ToolDefParam updates ([5cee3d6](5cee3d6))
* **client:** incorrect offset pagination check ([257285f](257285f))
* fix stream event model reference ([a71b421](a71b421))

### Chores

* **api:** /v1/inspect only lists v1 apis by default ([ae3dc95](ae3dc95))
* **api:** /v1/inspect only lists v1 apis by default ([e30f51c](e30f51c))
* extract some types in mcp docs ([dcc7bb8](dcc7bb8))
* fix readme example ([402f930](402f930))
* fix readme examples ([4d5517c](4d5517c))
* **internal:** codegen related update ([252e0a2](252e0a2))
* **internal:** codegen related update ([34da720](34da720))
* **internal:** fix incremental formatting in some cases ([c5c8292](c5c8292))
* **internal:** use npm pack for build uploads ([a246793](a246793))

### Documentation

* update examples ([17b9eb3](17b9eb3))

### Build System

* Bump version to 0.2.23 ([16e05ed](16e05ed))

---

This pull request is managed by Stainless's [GitHub App](https://github.com/apps/stainless-app). The [semver version number](https://semver.org/#semantic-versioning-specification-semver) is based on included [commit messages](https://www.conventionalcommits.org/en/v1.0.0/). Alternatively, you can manually set the version number in the title of this pull request.

For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless [website](https://www.stainlessapi.com)
📚 Read the [docs](https://app.stainlessapi.com/docs)
🙋 [Reach out](mailto:[email protected]) for help or questions

---------

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
Co-authored-by: Ashwin Bharambe <[email protected]>
1 parent a294775 commit 78de486
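One of the breaking changes in this release replaces a tool's `parameters` field with a JSON-Schema-shaped `input_schema`. As a rough migration sketch only — the type shapes and field names other than `parameters` and `input_schema` are assumptions for illustration, not the SDK's actual types — an old-style parameter map could be converted like this:

```typescript
// Hypothetical old/new tool-definition shapes, for illustration only.
interface OldToolDef {
  name: string;
  description?: string;
  parameters: Record<string, { type: string; description?: string; required?: boolean }>;
}

interface NewToolDef {
  name: string;
  description?: string;
  input_schema: {
    type: 'object';
    properties: Record<string, { type: string; description?: string }>;
    required: string[];
  };
}

// Fold a flat `parameters` map into a JSON-Schema-style `input_schema`.
function toInputSchema(tool: OldToolDef): NewToolDef {
  const properties: NewToolDef['input_schema']['properties'] = {};
  const required: string[] = [];
  for (const [key, param] of Object.entries(tool.parameters)) {
    properties[key] = { type: param.type, description: param.description };
    if (param.required) required.push(key);
  }
  return {
    name: tool.name,
    description: tool.description,
    input_schema: { type: 'object', properties, required },
  };
}
```

The JSON-Schema form is what OpenAI-compatible tool APIs generally expect; consult the generated SDK types for the authoritative shape.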

98 files changed (+7618 / -3578 lines)


.devcontainer/devcontainer.json

Lines changed: 1 addition & 3 deletions

@@ -9,9 +9,7 @@
   "postCreateCommand": "yarn install",
   "customizations": {
     "vscode": {
-      "extensions": [
-        "esbenp.prettier-vscode"
-      ]
+      "extensions": ["esbenp.prettier-vscode"]
     }
   }
 }

.gitignore

Lines changed: 1 addition & 0 deletions

@@ -7,4 +7,5 @@ dist
 dist-deno
 /*.tgz
 .idea/
+.eslintcache

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 {
-  ".": "0.2.23-alpha.1"
+  ".": "0.4.0-alpha.1"
 }

.stats.yml

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-f252873ea1e1f38fd207331ef2621c511154d5be3f4076e59cc15754fc58eee4.yml
-openapi_spec_hash: 10cbb4337a06a9fdd7d08612dd6044c3
-config_hash: 0358112cc0f3d880b4d55debdbe1cfa3
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-35c6569e5e9fcc85084c9728eb7fc7c5908297fcc77043d621d25de3c850a990.yml
+openapi_spec_hash: 0f95bbeee16f3205d36ec34cfa62c711
+config_hash: ef275cc002a89629459fd73d0cf9cba9

CHANGELOG.md

Lines changed: 68 additions & 0 deletions

@@ -1,5 +1,73 @@
 # Changelog
 
+## 0.4.0-alpha.1 (2025-10-31)
+
+Full Changelog: [v0.2.23-alpha.1...v0.4.0-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.23-alpha.1...v0.4.0-alpha.1)
+
+### ⚠ BREAKING CHANGES
+
+* **api:** /v1/inspect only lists v1 apis by default
+* **api:** /v1/inspect only lists v1 apis by default
+* **api:** use input_schema instead of parameters for tools
+* **api:** fixes to remove deprecated inference resources
+
+### Features
+
+* **api:** Adding prompts API to stainless config ([5ab8d74](https://github.com/llamastack/llama-stack-client-typescript/commit/5ab8d7423f6a9c26453b36c9daee99d343993d4b))
+* **api:** expires_after changes for /files ([a0b0fb7](https://github.com/llamastack/llama-stack-client-typescript/commit/a0b0fb7aa74668f3f6996c178f9654723b8b0f22))
+* **api:** fix file batches SDK to list_files ([25a0f10](https://github.com/llamastack/llama-stack-client-typescript/commit/25a0f10cffa7de7f1457d65c97259911bc70ab0a))
+* **api:** fixes to remove deprecated inference resources ([367d775](https://github.com/llamastack/llama-stack-client-typescript/commit/367d775c3d5a2fd85bf138d2b175e91b7c185913))
+* **api:** fixes to URLs ([e4f7840](https://github.com/llamastack/llama-stack-client-typescript/commit/e4f78407f74f3ba7597de355c314e1932dd94761))
+* **api:** manual updates ([7d2e375](https://github.com/llamastack/llama-stack-client-typescript/commit/7d2e375bde7bd04ae58cc49fcd5ab7b134b25640))
+* **api:** manual updates ([0302d54](https://github.com/llamastack/llama-stack-client-typescript/commit/0302d54398d87127ab0e9221a8a92760123d235b))
+* **api:** manual updates ([98a596f](https://github.com/llamastack/llama-stack-client-typescript/commit/98a596f677fe2790e4b4765362aa19b6cff8b97e))
+* **api:** manual updates ([c6fb0b6](https://github.com/llamastack/llama-stack-client-typescript/commit/c6fb0b67d8f2e641c13836a17400e51df0b029f1))
+* **api:** manual updates??! ([4dda064](https://github.com/llamastack/llama-stack-client-typescript/commit/4dda06489f003860e138f396c253b40de01103b6))
+* **api:** move datasets to beta, vector_db -> vector_store ([f32c0be](https://github.com/llamastack/llama-stack-client-typescript/commit/f32c0becb1ec0d66129b7fcaa06de3323ee703da))
+* **api:** move post_training and eval under alpha namespace ([aec1d5f](https://github.com/llamastack/llama-stack-client-typescript/commit/aec1d5ff198473ba736bf543ad00c6626cab9b81))
+* **api:** moving { rerank, agents } to `client.alpha.` ([793e069](https://github.com/llamastack/llama-stack-client-typescript/commit/793e0694d75c2af4535bf991d5858cd1f21300b4))
+* **api:** removing openai/v1 ([b5432de](https://github.com/llamastack/llama-stack-client-typescript/commit/b5432de2ad56ff0d2fd5a5b8e1755b5237616b60))
+* **api:** SDKs for vector store file batches ([b0676c8](https://github.com/llamastack/llama-stack-client-typescript/commit/b0676c837bbd835276fea3fe12f435afdbb75ef7))
+* **api:** SDKs for vector store file batches apis ([88731bf](https://github.com/llamastack/llama-stack-client-typescript/commit/88731bfecd6f548ae79cbe2a1125620e488c42a3))
+* **api:** several updates including Conversations, Responses changes, etc. ([e0728d5](https://github.com/llamastack/llama-stack-client-typescript/commit/e0728d5dd59be8723d9f967d6164351eb05528d1))
+* **api:** sync ([7d85013](https://github.com/llamastack/llama-stack-client-typescript/commit/7d850139d1327a215312a82c98b3428ebc7e5f68))
+* **api:** tool api (input_schema, etc.) changes ([06f2bca](https://github.com/llamastack/llama-stack-client-typescript/commit/06f2bcaf0df2e5d462cbe2d9ef3704ab0cfe9248))
+* **api:** updates to vector_store, etc. ([19535c2](https://github.com/llamastack/llama-stack-client-typescript/commit/19535c27147bf6f6861b807d9eeee471b5625148))
+* **api:** updating post /v1/files to have correct multipart/form-data ([f1cf9d6](https://github.com/llamastack/llama-stack-client-typescript/commit/f1cf9d68b6b2569dfb5ea3e2d2c33eff1a832e47))
+* **api:** use input_schema instead of parameters for tools ([8910a12](https://github.com/llamastack/llama-stack-client-typescript/commit/8910a121146aeddcb8f400101e6a2232245097e0))
+* **api:** vector_db_id -> vector_store_id ([079d89d](https://github.com/llamastack/llama-stack-client-typescript/commit/079d89d6522cb4f2eed5e5a09962d94ad800e883))
+
+
+### Bug Fixes
+
+* **api:** another fix to capture correct responses.create() params ([6acae91](https://github.com/llamastack/llama-stack-client-typescript/commit/6acae910db289080e8f52864f1bdf6d7951d1c3b))
+* **api:** fix the ToolDefParam updates ([5cee3d6](https://github.com/llamastack/llama-stack-client-typescript/commit/5cee3d69650a4c827e12fc046c1d2ec3b2fa9126))
+* **client:** incorrect offset pagination check ([257285f](https://github.com/llamastack/llama-stack-client-typescript/commit/257285f33bb989c9040580dd24251d05f9657bb0))
+* fix stream event model reference ([a71b421](https://github.com/llamastack/llama-stack-client-typescript/commit/a71b421152a609e49e76d01c6e4dd46eb3dbfae0))
+
+
+### Chores
+
+* **api:** /v1/inspect only lists v1 apis by default ([ae3dc95](https://github.com/llamastack/llama-stack-client-typescript/commit/ae3dc95964c908d219b23d7166780eaab6003ef5))
+* **api:** /v1/inspect only lists v1 apis by default ([e30f51c](https://github.com/llamastack/llama-stack-client-typescript/commit/e30f51c704c39129092255c040bbf5ad90ed0b07))
+* extract some types in mcp docs ([dcc7bb8](https://github.com/llamastack/llama-stack-client-typescript/commit/dcc7bb8b4d940982c2e9c6d1a541636e99fdc5ff))
+* fix readme example ([402f930](https://github.com/llamastack/llama-stack-client-typescript/commit/402f9301d033bb230c9714104fbfa554f3f7cd8f))
+* fix readme examples ([4d5517c](https://github.com/llamastack/llama-stack-client-typescript/commit/4d5517c2b9af2eb6994f5e4b2c033c95d268fb5c))
+* **internal:** codegen related update ([252e0a2](https://github.com/llamastack/llama-stack-client-typescript/commit/252e0a2a38bd8aedab91b401c440a9b10c056cec))
+* **internal:** codegen related update ([34da720](https://github.com/llamastack/llama-stack-client-typescript/commit/34da720c34c35dafb38775243d28dfbdce2497db))
+* **internal:** fix incremental formatting in some cases ([c5c8292](https://github.com/llamastack/llama-stack-client-typescript/commit/c5c8292b631c678efff5498bbab9f5a43bee50b6))
+* **internal:** use npm pack for build uploads ([a246793](https://github.com/llamastack/llama-stack-client-typescript/commit/a24679300cff93fea8ad4bc85e549ecc88198d58))
+
+
+### Documentation
+
+* update examples ([17b9eb3](https://github.com/llamastack/llama-stack-client-typescript/commit/17b9eb3c40957b63d2a71f7fc21944abcc720d80))
+
+
+### Build System
+
+* Bump version to 0.2.23 ([16e05ed](https://github.com/llamastack/llama-stack-client-typescript/commit/16e05ed9798233375e19098992632d223c3f5d8d))
+
 ## 0.2.23-alpha.1 (2025-09-26)
 
 Full Changelog: [v0.2.19-alpha.1...v0.2.23-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.19-alpha.1...v0.2.23-alpha.1)
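Among the bug fixes above is an "incorrect offset pagination check" in the client (257285f). As an illustrative sketch only — these names are hypothetical and not the SDK's actual pagination types — a correct has-next-page check for offset pagination compares the items consumed so far against the reported total, rather than, say, checking whether the current page is non-empty:

```typescript
// Minimal sketch of an offset-paginated page; names are illustrative.
interface OffsetPage<T> {
  data: T[];   // items in this page
  offset: number; // index of the first item in this page
  total: number;  // total items across all pages
}

// More items remain only when items consumed so far fall short of the total.
function hasNextPage<T>(page: OffsetPage<T>): boolean {
  return page.offset + page.data.length < page.total;
}

// The offset to request for the following page.
function nextOffset<T>(page: OffsetPage<T>): number {
  return page.offset + page.data.length;
}
```

A check based only on `data.length > 0` would loop forever on a full final page, which is the class of bug such a fix typically addresses.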

README.md

Lines changed: 18 additions & 18 deletions

@@ -41,13 +41,13 @@ import LlamaStackClient from 'llama-stack-client';
 
 const client = new LlamaStackClient();
 
-const stream = await client.inference.chatCompletion({
+const stream = await client.chat.completions.create({
   messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
   stream: true,
 });
-for await (const chatCompletionResponseStreamChunk of stream) {
-  console.log(chatCompletionResponseStreamChunk.completion_message);
+for await (const chatCompletionChunk of stream) {
+  console.log(chatCompletionChunk);
 }
 ```
@@ -64,11 +64,11 @@ import LlamaStackClient from 'llama-stack-client';
 
 const client = new LlamaStackClient();
 
-const params: LlamaStackClient.InferenceChatCompletionParams = {
+const params: LlamaStackClient.Chat.CompletionCreateParams = {
   messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
 };
-const chatCompletionResponse: LlamaStackClient.ChatCompletionResponse = await client.inference.chatCompletion(
+const completion: LlamaStackClient.Chat.CompletionCreateResponse = await client.chat.completions.create(
   params,
 );
 ```
@@ -113,8 +113,8 @@ a subclass of `APIError` will be thrown:
 
 <!-- prettier-ignore -->
 ```ts
-const chatCompletionResponse = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const completion = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .catch(async (err) => {
     if (err instanceof LlamaStackClient.APIError) {
       console.log(err.status); // 400
@@ -155,7 +155,7 @@ const client = new LlamaStackClient({
 });
 
 // Or, configure per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
   maxRetries: 5,
 });
 ```
@@ -172,7 +172,7 @@ const client = new LlamaStackClient({
 });
 
 // Override per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
   timeout: 5 * 1000,
 });
 ```
@@ -193,17 +193,17 @@ You can also use the `.withResponse()` method to get the raw `Response` along wi
 ```ts
 const client = new LlamaStackClient();
 
-const response = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const response = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
 console.log(response.statusText); // access the underlying Response object
 
-const { data: chatCompletionResponse, response: raw } = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const { data: completion, response: raw } = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
-console.log(chatCompletionResponse.completion_message);
+console.log(completion);
 ```
 
 ### Making custom/undocumented requests
@@ -307,8 +307,8 @@ const client = new LlamaStackClient({
 });
 
 // Override per-request:
-await client.inference.chatCompletion(
-  { messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' },
+await client.chat.completions.create(
+  { messages: [{ content: 'string', role: 'user' }], model: 'model' },
   {
     httpAgent: new http.Agent({ keepAlive: false }),
   },
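The README diff above applies two mechanical renames throughout: `client.inference.chatCompletion` becomes `client.chat.completions.create`, and the `model_id` parameter becomes `model`. For callers migrating in bulk, a tiny helper along these lines (hypothetical, not part of the SDK) captures the parameter rename:

```typescript
// Hypothetical migration helper illustrating the `model_id` -> `model`
// rename shown in the diff; the param shapes are simplified for the sketch.
type OldParams = { model_id: string; messages: unknown[]; stream?: boolean };
type NewParams = { model: string; messages: unknown[]; stream?: boolean };

function migrateParams({ model_id, ...rest }: OldParams): NewParams {
  // Drop `model_id` from the object and reintroduce it under the new key.
  return { model: model_id, ...rest };
}
```

The destructuring-with-rest pattern guarantees the old key cannot leak through into the request sent to the new endpoint.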
