Update inference API specification to include new Llama Service #5020
@@ -0,0 +1,35 @@
{
  "inference.put_llama": {
    "documentation": {
      "url": "https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-llama.html",
      "description": "Configure a Llama inference endpoint"
    },
    "stability": "stable",
    "visibility": "public",
    "headers": {
      "accept": ["application/json"],
      "content_type": ["application/json"]
    },
    "url": {
      "paths": [
        {
          "path": "/_inference/{task_type}/{llama_inference_id}",
          "methods": ["PUT"],
          "parts": {
            "task_type": {
              "type": "string",
              "description": "The task type"
            },
            "llama_inference_id": {
              "type": "string",
              "description": "The inference ID"
            }
          }
        }
      ]
    },
    "body": {
      "description": "The inference endpoint's task and service settings"
    }
  }
}
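For illustration, a hedged sketch of how the path parts above are filled in: with the `text_embedding` task type and a hypothetical endpoint ID of `llama-embeddings`, a request following this spec would be sent as shown below. The settings values are taken from the example later in this PR; the body shape itself is defined by the request type in the next file, and additional service settings may be required depending on the model.

PUT /_inference/text_embedding/llama-embeddings
{
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/openai/v1/embeddings",
    "model_id": "all-MiniLM-L6-v2"
  }
}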
@@ -0,0 +1,85 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { RequestBase } from '@_types/Base'
import { Id } from '@_types/common'
import { Duration } from '@_types/Time'
import {
  LlamaServiceSettings,
  LlamaServiceType,
  LlamaTaskSettings,
  LlamaTaskType
} from '@inference/_types/CommonTypes'
import { InferenceChunkingSettings } from '@inference/_types/Services'

/**
 * Create a Llama inference endpoint.
 *
 * Create an inference endpoint to perform an inference task with the `llama` service.
 * @rest_spec_name inference.put_llama
 * @availability stack since=9.2.0 stability=stable visibility=public
Review comment: @jonathan-buttner could you please check if this 9.2.0 version is correctly set here. I assume it is, but want to be sure.
Reply: Yep this is correct 👍
 * @availability serverless stability=stable visibility=public
 * @cluster_privileges manage_inference
 * @doc_id inference-api-put-llama
 */
export interface Request extends RequestBase {
  urls: [
    {
      path: '/_inference/{task_type}/{llama_inference_id}'
      methods: ['PUT']
    }
  ]
  path_parts: {
    /**
     * The type of the inference task that the model will perform.
     */
    task_type: LlamaTaskType
    /**
     * The unique identifier of the inference endpoint.
     */
    llama_inference_id: Id
  }
  query_parameters: {
    /**
     * Specifies the amount of time to wait for the inference endpoint to be created.
     * @server_default 30s
     */
    timeout?: Duration
  }
  body: {
    /**
     * The chunking configuration object.
     * @ext_doc_id inference-chunking
     */
    chunking_settings?: InferenceChunkingSettings
    /**
     * The type of service supported for the specified task type. In this case, `llama`.
     */
    service: LlamaServiceType
    /**
     * Settings used to install the inference model. These settings are specific to the `llama` service.
     */
    service_settings: LlamaServiceSettings
    /**
     * Settings to configure the inference task.
     * These settings are specific to the task type you specified.
     */
    task_settings?: LlamaTaskSettings
  }
}
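As a sketch of how the fields of this request type combine in practice, the hypothetical request below sets the `timeout` query parameter, the optional `chunking_settings`, and the required `service` and `service_settings`. The chunking field names follow the shared `InferenceChunkingSettings` type used by other inference services, and every concrete value here is a placeholder rather than part of this spec.

PUT /_inference/text_embedding/llama-embeddings?timeout=60s
{
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  },
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/openai/v1/embeddings",
    "dimensions": 384,
    "api_key": "llama-api-key",
    "model_id": "all-MiniLM-L6-v2"
  }
}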
@@ -0,0 +1,25 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { InferenceEndpointInfoLlama } from '@inference/_types/Services'

export class Response {
  /** @codegen_name endpoint_info */
  body: InferenceEndpointInfoLlama
}
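The shape of `InferenceEndpointInfoLlama` is not part of this diff, but judging from how other inference services report a newly created endpoint, the create response can be expected to echo the configuration roughly as sketched below; the field names and values are assumptions, not taken from this spec.

{
  "inference_id": "llama-embeddings",
  "task_type": "text_embedding",
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/openai/v1/embeddings",
    "dimensions": 384,
    "model_id": "all-MiniLM-L6-v2"
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}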
@@ -0,0 +1,14 @@
# summary:
description: Run `PUT _inference/text_embedding/llama-text-embedding` to create a Llama inference endpoint that performs a `text_embedding` task.
method_request: 'PUT _inference/text_embedding/llama-text-embedding'
# type: "request"
value: |-
  {
    "service": "llama",
    "service_settings": {
      "url": "http://localhost:8321/v1/openai/v1/embeddings",
      "dimensions": 384,
      "api_key": "llama-api-key",
      "model_id": "all-MiniLM-L6-v2"
    }
  }
Review comment: I should have mentioned this on the elasticsearch PR. When did you need to supply an API key? In my testing of running the stack locally I didn't need to supply one 🤔
I was running it like this:
Reply: It is a good point to discuss. Glad you're bringing that up.
Llama Stack doesn't have a built-in authorization check by default, so it is possible to use it without providing any tokens, especially when testing with Ollama locally.
However, I doubt that users are going to use Llama Stack without auth 100% of the time, so I added this api_key parameter as an option for clients that want to set up bearer auth. Authentication configuration is described in the Server Configuration section of the Distribution Overview in the official Llama Stack guide:
https://llama-stack.readthedocs.io/en/latest/distributions/configuration.html#authentication-configuration
I haven't investigated it in depth, but I think it is safe to assume that providing the ability to send a bearer token covers most security concerns.
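To make the optional nature of `api_key` concrete: assuming it is indeed optional in `LlamaServiceSettings`, as the discussion above suggests, a request against a local, unauthenticated Llama Stack (for example one backed by Ollama) could simply omit it. The sketch below uses placeholder values and is not an official example.

PUT /_inference/text_embedding/llama-local-embeddings
{
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/openai/v1/embeddings",
    "dimensions": 384,
    "model_id": "all-MiniLM-L6-v2"
  }
}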