
Update inference API specification to include new Llama Service #5020


Open · wants to merge 10 commits into base: main
210 changes: 207 additions & 3 deletions output/openapi/elasticsearch-openapi.json

Large diffs are not rendered by default.

210 changes: 207 additions & 3 deletions output/openapi/elasticsearch-serverless-openapi.json

Large diffs are not rendered by default.

517 changes: 470 additions & 47 deletions output/schema/schema.json

Large diffs are not rendered by default.

41 changes: 41 additions & 0 deletions output/typescript/types.ts

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions package-lock.json

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions specification/_doc_ids/table.csv
@@ -371,6 +371,7 @@ inference-api-put-googleaistudio,https://www.elastic.co/docs/api/doc/elasticsear
inference-api-put-googlevertexai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-googlevertexai,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-google-vertex-ai.html,
inference-api-put-huggingface,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-hugging-face,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-hugging-face.html,
inference-api-put-jinaai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-jinaai,,
inference-api-put-llama,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-llama,,
inference-api-put-mistral,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-mistral,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-mistral.html,
inference-api-put-openai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-openai,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-openai.html,
inference-api-put-voyageai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-voyageai,,
@@ -400,6 +401,7 @@ knn-inner-hits,https://www.elastic.co/docs/solutions/search/vector/knn#nested-kn
license-management,https://www.elastic.co/docs/deploy-manage/license/manage-your-license-in-self-managed-cluster,,
list-analytics-collection,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search-application-get-behavioral-analytics,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/list-analytics-collection.html,
list-synonyms-sets,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-synonyms-get-synonyms-sets,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/list-synonyms-sets.html,
llama-api-models,https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html/,,
logstash-api-delete-pipeline,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-logstash-delete-pipeline,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/logstash-api-delete-pipeline.html,
logstash-api-get-pipeline,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-logstash-get-pipeline,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/logstash-api-get-pipeline.html,
logstash-api-put-pipeline,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-logstash-put-pipeline,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/logstash-api-put-pipeline.html,
35 changes: 35 additions & 0 deletions specification/_json_spec/inference.put_llama.json
@@ -0,0 +1,35 @@
{
"inference.put_llama": {
"documentation": {
"url": "https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-llama.html",
"description": "Configure a Llama inference endpoint"
},
"stability": "stable",
"visibility": "public",
"headers": {
"accept": ["application/json"],
"content_type": ["application/json"]
},
"url": {
"paths": [
{
"path": "/_inference/{task_type}/{llama_inference_id}",
"methods": ["PUT"],
"parts": {
"task_type": {
"type": "string",
"description": "The task type"
},
"llama_inference_id": {
"type": "string",
"description": "The inference ID"
}
}
}
]
},
"body": {
"description": "The inference endpoint's task and service settings"
}
}
}
72 changes: 72 additions & 0 deletions specification/inference/_types/CommonTypes.ts
@@ -1520,6 +1520,78 @@ export enum JinaAITextEmbeddingTask {
search
}

export class LlamaServiceSettings {
/**
* The URL of the Llama stack endpoint.
* The URL must contain:
* * For the `text_embedding` task - `/v1/openai/v1/embeddings`.
* * For the `completion` and `chat_completion` tasks - `/v1/openai/v1/chat/completions`.
*/
url: string
/**
* The name of the model to use for the inference task.
* Refer to the Llama downloading models documentation for different ways of getting a list of available models and downloading them.
* The service has been tested and confirmed to work with the following models:
* * For `text_embedding` task - `all-MiniLM-L6-v2`.
* * For `completion` and `chat_completion` tasks - `llama3.2:3b`.
* @ext_doc_id llama-api-models
*/
model_id: string
/**
* A valid API key for accessing the Llama stack endpoint. It is sent as part of the Bearer authentication header.
* This field is optional because the Llama stack does not provide authentication by default.
*
* IMPORTANT: You need to provide the API key only once, during the inference model creation.
* The get inference endpoint API does not retrieve your API key.
* After creating the inference model, you cannot change the associated API key.
* If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.
*/
api_key?: string
Contributor:

I should have mentioned this on the elasticsearch PR. When did you need to supply an API key? In my testing of running the stack locally I didn't need to supply one 🤔

I was running it like this:

PUT _inference/text_embedding/llama-text-embedding
{
    "service": "llama",
    "service_settings": {
        "url": "http://localhost:8321/v1/inference/embeddings",
        "model_id": "all-MiniLM-L6-v2"
    }
}

Contributor Author:

It is a good point to discuss, and I'm glad you're bringing it up.
Llama Stack doesn't have a built-in authorization check by default, so it is possible to use it without providing any tokens, especially when testing with Ollama locally.
However, it seems doubtful that users will run Llama Stack without auth 100% of the time, so I added this `api_key` parameter as an option for clients that want to set up bearer auth. Authentication configuration is described in the Server Configuration section of the Distribution Overview in the official Llama Stack guide:
https://llama-stack.readthedocs.io/en/latest/distributions/configuration.html#authentication-configuration
I haven't investigated it in depth, but I think it is safe to assume that providing the ability to send a bearer token covers the main security concerns.
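To illustrate, creating the same endpoint with bearer auth enabled would add the optional `api_key` to the service settings (a sketch; the key value is illustrative, and the key is sent as an `Authorization: Bearer …` header):

```console
PUT _inference/text_embedding/llama-text-embedding
{
    "service": "llama",
    "service_settings": {
        "url": "http://localhost:8321/v1/openai/v1/embeddings",
        "model_id": "all-MiniLM-L6-v2",
        "api_key": "llama-stack-api-key"
    }
}
```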

/**
* For a `text_embedding` task, the maximum number of tokens per input before chunking occurs.
*/
max_input_tokens?: integer
/**
* For a `text_embedding` task, the number of dimensions the resulting output embeddings must have.
* It is supported only in `text-embedding-3` and later models. If it is not set by the user, it defaults to the number of dimensions returned by the model.
* If the model returns embeddings with a different number of dimensions, an error is returned.
*/
dimensions?: integer
/**
* For a `text_embedding` task, the similarity measure. One of `cosine`, `dot_product`, or `l2_norm`.
*/
similarity?: LlamaSimilarityType
/**
* This setting helps to minimize the number of rate limit errors returned from the Llama API.
* By default, the `llama` service sets the number of requests allowed per minute to 3000.
*/
rate_limit?: RateLimitSetting
}

export class LlamaTaskSettings {
/**
* For a `completion` or `text_embedding` task, specify the user issuing the request.
* This information can be used for abuse detection.
*/
user?: string
}

export enum LlamaTaskType {
text_embedding,
completion,
chat_completion
}

export enum LlamaServiceType {
llama
}

export enum LlamaSimilarityType {
cosine,
dot_product,
l2_norm
}
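As a sketch of how the task types, task settings, and URL requirements above combine, a `completion` endpoint would point at the chat completions path and may carry the optional `user` task setting (all values illustrative):

```console
PUT _inference/completion/llama-completion
{
    "service": "llama",
    "service_settings": {
        "url": "http://localhost:8321/v1/openai/v1/chat/completions",
        "model_id": "llama3.2:3b"
    },
    "task_settings": {
        "user": "user-123"
    }
}
```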

export class MistralServiceSettings {
/**
* A valid API key of your Mistral account.
13 changes: 13 additions & 0 deletions specification/inference/_types/Services.ts
@@ -36,6 +36,7 @@ import {
TaskTypeGoogleVertexAI,
TaskTypeHuggingFace,
TaskTypeJinaAi,
TaskTypeLlama,
TaskTypeMistral,
TaskTypeOpenAI,
TaskTypeVoyageAI,
@@ -241,6 +242,17 @@ export class InferenceEndpointInfoJinaAi extends InferenceEndpoint {
task_type: TaskTypeJinaAi
}

export class InferenceEndpointInfoLlama extends InferenceEndpoint {
/**
* The inference Id
*/
inference_id: string
/**
* The task type
*/
task_type: TaskTypeLlama
}

export class InferenceEndpointInfoMistral extends InferenceEndpoint {
/**
* The inference Id
@@ -366,6 +378,7 @@ export class RateLimitSetting {
* * `googlevertexai` service: `30000`
* * `hugging_face` service: `3000`
* * `jinaai` service: `2000`
* * `llama` service: `3000`
* * `mistral` service: `240`
* * `openai` service and task type `text_embedding`: `3000`
* * `openai` service and task type `completion`: `500`
6 changes: 6 additions & 0 deletions specification/inference/_types/TaskType.ts
@@ -113,6 +113,12 @@ export enum TaskTypeHuggingFace {
text_embedding
}

export enum TaskTypeLlama {
text_embedding,
chat_completion,
completion
}

export enum TaskTypeMistral {
text_embedding,
chat_completion,
1 change: 1 addition & 0 deletions specification/inference/put/PutRequest.ts
@@ -44,6 +44,7 @@ import { TaskType } from '@inference/_types/TaskType'
* * Google AI Studio (`completion`, `text_embedding`)
* * Google Vertex AI (`rerank`, `text_embedding`)
* * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`)
* * Llama (`chat_completion`, `completion`, `text_embedding`)
* * Mistral (`chat_completion`, `completion`, `text_embedding`)
* * OpenAI (`chat_completion`, `completion`, `text_embedding`)
* * VoyageAI (`text_embedding`, `rerank`)
85 changes: 85 additions & 0 deletions specification/inference/put_llama/PutLlamaRequest.ts
@@ -0,0 +1,85 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { RequestBase } from '@_types/Base'
import { Id } from '@_types/common'
import { Duration } from '@_types/Time'
import {
LlamaServiceSettings,
LlamaServiceType,
LlamaTaskSettings,
LlamaTaskType
} from '@inference/_types/CommonTypes'
import { InferenceChunkingSettings } from '@inference/_types/Services'

/**
* Create a Llama inference endpoint.
*
* Create an inference endpoint to perform an inference task with the `llama` service.
* @rest_spec_name inference.put_llama
* @availability stack since=9.2.0 stability=stable visibility=public
Contributor Author:

@jonathan-buttner could you please check whether this 9.2.0 version is correctly set here? I assume it is, but I want to be sure.

Contributor:

Yep this is correct 👍

* @availability serverless stability=stable visibility=public
* @cluster_privileges manage_inference
* @doc_id inference-api-put-llama
*/
export interface Request extends RequestBase {
urls: [
{
path: '/_inference/{task_type}/{llama_inference_id}'
methods: ['PUT']
}
]
path_parts: {
/**
* The type of the inference task that the model will perform.
*/
task_type: LlamaTaskType
/**
* The unique identifier of the inference endpoint.
*/
llama_inference_id: Id
}
query_parameters: {
/**
* Specifies the amount of time to wait for the inference endpoint to be created.
* @server_default 30s
*/
timeout?: Duration
}
body: {
/**
* The chunking configuration object.
* @ext_doc_id inference-chunking
*/
chunking_settings?: InferenceChunkingSettings
/**
* The type of service supported for the specified task type. In this case, `llama`.
*/
service: LlamaServiceType
/**
* Settings used to install the inference model. These settings are specific to the `llama` service.
*/
service_settings: LlamaServiceSettings
/**
* Settings to configure the inference task.
* These settings are specific to the task type you specified.
*/
task_settings?: LlamaTaskSettings
}
}
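Putting the request shape together, a request exercising the optional `timeout` query parameter and `chunking_settings` body field might look like this (a sketch with illustrative values; the chunking options follow the generic `InferenceChunkingSettings` shape):

```console
PUT _inference/text_embedding/llama-text-embedding?timeout=45s
{
    "service": "llama",
    "service_settings": {
        "url": "http://localhost:8321/v1/openai/v1/embeddings",
        "model_id": "all-MiniLM-L6-v2"
    },
    "chunking_settings": {
        "strategy": "sentence",
        "max_chunk_size": 250,
        "sentence_overlap": 1
    }
}
```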
25 changes: 25 additions & 0 deletions specification/inference/put_llama/PutLlamaResponse.ts
@@ -0,0 +1,25 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { InferenceEndpointInfoLlama } from '@inference/_types/Services'

export class Response {
/** @codegen_name endpoint_info */
body: InferenceEndpointInfoLlama
}
@@ -0,0 +1,14 @@
# summary:
description: Run `PUT _inference/text_embedding/llama-text-embedding` to create a Llama inference endpoint that performs a `text_embedding` task.
method_request: 'PUT _inference/text_embedding/llama-text-embedding'
# type: "request"
value: |-
{
"service": "llama",
"service_settings": {
"url": "http://localhost:8321/v1/openai/v1/embeddings",
"dimensions": 384,
"api_key": "llama-api-key",
"model_id": "all-MiniLM-L6-v2"
}
}