
Commit

Merge branch 'main' into dash0
shahargl authored Feb 4, 2025
2 parents eea2624 + ed70495 commit 56b0f61
Showing 11 changed files with 393 additions and 47 deletions.
1 change: 1 addition & 0 deletions docs/mint.json
@@ -234,6 +234,7 @@
"providers/documentation/twilio-provider",
"providers/documentation/uptimekuma-provider",
"providers/documentation/victoriametrics-provider",
"providers/documentation/vllm-provider",
"providers/documentation/webhook-provider",
"providers/documentation/websocket-provider",
"providers/documentation/zabbix-provider",
1 change: 1 addition & 0 deletions docs/providers/documentation/clickhouse-provider.mdx
@@ -17,6 +17,7 @@ The ClickHouse provider requires the following authentication parameters:
- `Clickhouse Hostname`: The host where ClickHouse is running.
- `Clickhouse Port`: The port where ClickHouse is running. The default port is `9000`.
- `Clickhouse Database`: The database to connect to.
- `Clickhouse Protocol`: The protocol used to connect to ClickHouse: `http` or `https` for HTTP-based connections, or `clickhouse` and `clickhouses` (with SSL) for the native protocol.
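
As a hedged illustration of how these connection details come together, the sketch below shows a workflow step that queries ClickHouse through a provider configured with the parameters above. The provider name `my_clickhouse` and the step-level `query` input are assumptions made for the example; the protocol, host, port, and database come from the provider configuration referenced via `config`.

```yaml
# Minimal sketch with assumed names: `my_clickhouse` is a hypothetical provider
# installed with the authentication parameters listed above, and `query` is
# assumed to be the step-level input for running SQL against ClickHouse.
steps:
  - name: clickhouse-check
    provider:
      type: clickhouse
      config: "{{ providers.my_clickhouse }}"
      with:
        query: "SELECT count() FROM system.query_log WHERE type = 'ExceptionWhileProcessing'"
```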

## Connecting with the ClickHouse provider

66 changes: 66 additions & 0 deletions docs/providers/documentation/vllm-provider.mdx
@@ -0,0 +1,66 @@
---
title: "vLLM Provider"
description: "The vLLM Provider enables integration of vLLM-deployed language models into Keep."
---

<Tip>
The vLLM Provider supports querying language models deployed with vLLM for prompt-based interactions.
</Tip>

## Inputs

The vLLM Provider supports the following parameters:

- `prompt`: The prompt to send to the vLLM-deployed model
- `model`: The model to be used, defaults to `Qwen/Qwen1.5-1.8B-Chat`
- `temperature`: Controls randomness in the response, defaults to 0.7
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024
- `structured_output_format`: Optional JSON schema for structured output formatting
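
A minimal usage sketch tying these inputs together is shown below. The provider name `my_vllm` mirrors the structured output example later on this page; the prompt text and the parameter values are illustrative.

```yaml
# Minimal sketch: a single step sending a prompt to a vLLM deployment.
# `my_vllm` is a hypothetical provider name; the inputs shown are the
# parameters documented above.
steps:
  - name: summarize-alert
    provider:
      type: vllm
      config: "{{ providers.my_vllm }}"
      with:
        prompt: "Summarize this alert in one sentence: {{alert}}"
        model: "Qwen/Qwen1.5-1.8B-Chat"
        temperature: 0.2   # lower than the 0.7 default for more deterministic output
        max_tokens: 256    # cap the response length below the 1024 default
```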

## Outputs

The vLLM Provider returns the model's response to the provided prompt. When a structured output format is supplied, the response is formatted according to the provided JSON schema.

## Authentication Parameters

To use the vLLM Provider, you'll need to configure the following authentication parameters:

- **api_url** (required): The endpoint URL where your vLLM service is deployed
- **api_key** (optional): API key if your vLLM deployment requires authentication

## Connecting with the Provider

To connect to a vLLM deployment:

1. Deploy your vLLM instance or obtain the API endpoint of an existing deployment
2. Configure the API URL in your provider configuration
3. If your deployment requires authentication, configure the API key
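
The sketch below is one possible shape for that configuration, offered as an assumption rather than a prescription: `api_url` and `api_key` are the documented authentication parameters, while the surrounding layout and the provider name `my_vllm` depend on how providers are registered in your Keep deployment (UI, API, or configuration).

```yaml
# Illustrative layout only: the two fields are the documented authentication
# parameters; the exact registration mechanism depends on your Keep setup.
my_vllm:
  type: vllm
  authentication:
    api_url: "http://vllm.internal.example.com:8000"  # hypothetical endpoint; check whether your deployment expects a versioned path
    api_key: "<your-api-key>"                         # only if your deployment enforces authentication
```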

## Structured Output

Structured output for vLLM should follow JSON Schema notation. Example:

```yaml
steps:
  - name: get-enrichments
    provider:
      config: "{{ providers.my_vllm }}"
      type: vllm
      with:
        prompt: "You received the following alert: {{alert}}. Generate the missing fields."
        model: "Qwen/Qwen1.5-1.8B-Chat" # This model supports structured output
        structured_output_format: # Constrain what the model can return
          type: object
          properties:
            environment:
              type: string
              enum:
                - production
                - debug
                - pre-prod
            impacted_customer_name:
              type: string
          required:
            - environment
            - impacted_customer_name
```
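
Later steps in a workflow can then read the structured fields from this step's result, for example `{{ steps.get-enrichments.results.response.environment }}` and `{{ steps.get-enrichments.results.response.impacted_customer_name }}`, as the example workflow added in this commit does.
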
6 changes: 6 additions & 0 deletions docs/providers/overview.mdx
@@ -613,6 +613,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/victoriametrics.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="vLLM"
href="/providers/documentation/vllm-provider"
icon={ <img src="https://img.logo.dev/docs.vllm.ai?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Webhook"
href="/providers/documentation/webhook-provider"
@@ -0,0 +1,41 @@
id: enrich-using-vllm-qwen
description: Enrich alerts using structured output from vLLM & Qwen
triggers:
  - type: alert
    filters:
      - key: source
        value: prometheus

steps:
  - name: get-enrichments
    provider:
      config: "{{ providers.my_vllm }}"
      type: vllm
      with:
        prompt: "You received the following alert: {{alert}}. Generate the missing fields."
        model: "Qwen/Qwen1.5-1.8B-Chat" # This model supports structured output
        structured_output_format: # Constrain what the model can return
          type: object
          properties:
            environment:
              type: string
              enum:
                - production
                - debug
                - pre-prod
            impacted_customer_name:
              type: string
          required:
            - environment
            - impacted_customer_name

actions:
  - name: enrich-alert
    provider:
      type: mock
      with:
        enrich_alert:
          - key: environment
            value: "{{ steps.get-enrichments.results.response.environment }}"
          - key: impacted_customer_name
            value: "{{ steps.get-enrichments.results.response.impacted_customer_name }}"
6 changes: 4 additions & 2 deletions keep-ui/middleware.ts
@@ -50,7 +50,7 @@ export const middleware = auth(async (request) => {
// If not authenticated and not on signin page, redirect to signin
if (!isAuthenticated && !pathname.startsWith("/signin") && !pathname.startsWith("/health")) {
const redirectTo = request.nextUrl.href || "/incidents";
console.log("Redirecting to signin page because user is not authenticated");
console.log(`Redirecting ${pathname} to signin page because user is not authenticated`);
return NextResponse.redirect(
new URL(`/signin?callbackUrl=${redirectTo}`, request.url)
);
@@ -90,14 +90,16 @@ export const config = {
* Match all request paths except for the ones starting with:
* - api (API routes)
* - keep_big.svg (logo)
* - keep.svg (logo)
* - gnip.webp (logo)
* - api/aws-marketplace (aws marketplace)
* - api/auth (auth)
* - monitoring (monitoring)
* - _next/static (static files)
* - _next/image (image optimization files)
* - favicon.ico (favicon file)
* - icons (providers' logos)
*/
"/((?!keep_big\\.svg$|gnip\\.webp|api/aws-marketplace$|api/auth|monitoring|_next/static|_next/image|favicon\\.ico).*)",
"/((?!keep_big\\.svg$|gnip\\.webp|api/aws-marketplace$|api/auth|monitoring|_next/static|_next/image|favicon\\.ico|icons|keep\\.svg).*)",
],
};
Binary file added keep-ui/public/icons/vllm-icon.png
