feat: add LLM Gateway models and providers (#698)
steebchen wants to merge 8 commits into anomalyco:dev
Conversation
Pull request overview
This PR integrates a large set of LLM Gateway-backed models into the catalog, adding several new providers and many new model definitions across existing providers. The main focus is enriching the registry with newly released and future-dated models, along with pricing, limits, and modality metadata, while keeping the format consistent with the existing models.dev TOML schemas.
Changes:
- Add new providers for additional LLM gateways and platforms (RouteWay, NovitaAI, NanoGPT, Moonshot AI, CloudRift, CanopyWave, ByteDance).
- Register many new models (GLM 4.5/4.6/4.7, Llama 3.x/4 variants, Gemma 2/3, DeepSeek V3/R1, Kimi K2, GPT OSS, Claude 4.x, Qwen 2.5/3, CogView, etc.) for both text and image modalities with pricing and context limits.
- Extend existing providers (OpenAI, Amazon Bedrock, Google, Google Vertex, Groq, ZAI, Nebius, TogetherAI, Cerebras, Inference, xAI, RouteWay) with additional models, including search-specific, reasoning, and image-capable variants.
Reviewed changes
Copilot reviewed 120 out of 120 changed files in this pull request and generated 11 comments.
Summary per file:
| File | Description |
|---|---|
| providers/zai/models/glm-image.toml | Adds ZAI GLM Image text-to-image generation model metadata. |
| providers/zai/models/glm-4.7-flashx.toml | Adds ZAI GLM-4.7 FlashX high-context reasoning model. |
| providers/zai/models/glm-4.7-flash.toml | Adds ZAI GLM-4.7 Flash variant with long context. |
| providers/zai/models/glm-4.6v-flashx.toml | Adds ZAI GLM-4.6V FlashX multimodal reasoning model. |
| providers/zai/models/glm-4.6v-flash.toml | Adds ZAI GLM-4.6V Flash multimodal model. |
| providers/zai/models/glm-4.5-x.toml | Adds ZAI GLM-4.5 X premium reasoning model with pricing and limits. |
| providers/zai/models/glm-4.5-airx.toml | Adds ZAI GLM-4.5 AirX mid-tier reasoning model. |
| providers/zai/models/glm-4-32b-0414-128k.toml | Adds ZAI GLM-4 32B 128k-context variant. |
| providers/zai/models/cogview-4-250304.toml | Adds ZAI CogView-4 text-to-image model. |
| providers/xai/models/grok-4-fast-reasoning.toml | Adds Grok 4 Fast Reasoning model with large context. |
| providers/xai/models/grok-4-1-fast-reasoning.toml | Adds Grok 4.1 Fast Reasoning upgraded model. |
| providers/xai/models/grok-4-0709.toml | Adds Grok 4 (0709) base text model. |
| providers/togetherai/models/mistralai-mixtral-8x7b-instruct-v0.1.toml | Adds TogetherAI Mixtral 8x7B Instruct model definition. |
| providers/togetherai/models/meta-llama-meta-llama-3.1-8b-instruct-turbo.toml | Adds TogetherAI Llama 3.1 8B Instruct Turbo. |
| providers/togetherai/models/meta-llama-meta-llama-3.1-405b-instruct-turbo.toml | Adds TogetherAI Llama 3.1 405B Instruct Turbo model. |
| providers/togetherai/models/google-gemma-2-27b-it.toml | Adds TogetherAI Gemma 2 27B IT model (family currently set to gemini). |
| providers/routeway/provider.toml | Introduces RouteWay provider configuration (OpenAI-compatible endpoint). |
| providers/routeway/models/nemotron-nano-9b-v2-free.toml | Adds RouteWay free-tier Nemotron Nano 9B V2 model. |
| providers/routeway/models/llama-4-scout-free.toml | Adds RouteWay free-tier Meta Llama 4 Scout model. |
| providers/routeway/models/llama-4-maverick-free.toml | Adds RouteWay free-tier Meta Llama 4 Maverick model. |
| providers/routeway/models/llama-3.3-70b-instruct-free.toml | Adds RouteWay free-tier Llama 3.3 70B Instruct. |
| providers/routeway/models/kimi-k2-0905-free.toml | Adds RouteWay free-tier Kimi K2 0905 model. |
| providers/routeway/models/gpt-oss-20b-free.toml | Adds RouteWay free GPT OSS 20B model. |
| providers/routeway/models/glm-4.5-air-free.toml | Adds RouteWay free GLM-4.5 Air model. |
| providers/routeway/models/deepseek-r1t2-chimera-free.toml | Adds RouteWay free DeepSeek R1T2 Chimera model. |
| providers/openai/models/gpt-4o-search-preview.toml | Adds OpenAI GPT-4o Search Preview model with search-tuned pricing/context. |
| providers/openai/models/gpt-4o-mini-search-preview.toml | Adds OpenAI GPT-4o Mini Search Preview model. |
| providers/novita/provider.toml | Introduces NovitaAI provider configuration (OpenAI-compatible). |
| providers/novita/models/zai-org-glm-4.7.toml | Adds NovitaAI access to ZAI GLM-4.7 model. |
| providers/novita/models/zai-org-glm-4.6v.toml | Adds NovitaAI ZAI GLM-4.6V multimodal model. |
| providers/novita/models/zai-org-glm-4.6.toml | Adds NovitaAI ZAI GLM-4.6 text model. |
| providers/novita/models/zai-org-glm-4.5v.toml | Adds NovitaAI ZAI GLM-4.5V multimodal model. |
| providers/novita/models/moonshotai-kimi-k2-instruct.toml | Adds NovitaAI Moonshot Kimi K2 Instruct model. |
| providers/novita/models/minimax-minimax-m2.1.toml | Adds NovitaAI MiniMax M2.1 model. |
| providers/nebius/models/qwen-qwen3-coder-480b-a35b-instruct.toml | Adds Nebius Qwen3 Coder 480B A35B Instruct model. |
| providers/nebius/models/qwen-qwen3-coder-30b-a3b-instruct.toml | Adds Nebius Qwen3 Coder 30B A3B Instruct model. |
| providers/nebius/models/qwen-qwen3-32b.toml | Adds Nebius Qwen3 32B base model. |
| providers/nebius/models/qwen-qwen3-30b-a3b-thinking-2507.toml | Adds Nebius Qwen3 30B A3B Thinking model. |
| providers/nebius/models/qwen-qwen3-30b-a3b-instruct-2507.toml | Adds Nebius Qwen3 30B A3B Instruct (2507) model. |
| providers/nebius/models/qwen-qwen3-235b-a22b-thinking-2507.toml | Adds Nebius Qwen3 235B A22B Thinking model. |
| providers/nebius/models/qwen-qwen3-235b-a22b-instruct-2507.toml | Adds Nebius Qwen3 235B A22B Instruct model. |
| providers/nebius/models/qwen-qwen2.5-vl-72b-instruct.toml | Adds Nebius Qwen2.5 VL 72B multimodal instruct model. |
| providers/nebius/models/qwen-qwen2.5-coder-7b-fast.toml | Adds Nebius Qwen2.5 Coder 7B fast coding model. |
| providers/nebius/models/nvidia-llama-3_1-nemotron-ultra-253b-v1.toml | Adds Nebius Llama 3.1 Nemotron Ultra 253B model. |
| providers/nebius/models/moonshotai-kimi-k2-instruct.toml | Adds Nebius Moonshot Kimi K2 Instruct model. |
| providers/nebius/models/meta-llama-llama-3.3-70b-instruct.toml | Adds Nebius Llama 3.3 70B Instruct text model. |
| providers/nebius/models/google-gemma-3-27b-it.toml | Adds Nebius Gemma 3 27B IT model (family currently set to gemini). |
| providers/nebius/models/deepseek-ai-deepseek-r1-0528.toml | Adds Nebius DeepSeek R1 (0528) model. |
| providers/nanogpt/provider.toml | Introduces NanoGPT provider configuration (OpenAI-compatible). |
| providers/nanogpt/models/openai-gpt-oss-20b.toml | Adds NanoGPT GPT OSS 20B model (family currently set to gpt). |
| providers/nanogpt/models/openai-gpt-oss-120b.toml | Adds NanoGPT GPT OSS 120B model (family currently set to gpt). |
| providers/moonshot/provider.toml | Introduces Moonshot AI provider configuration. |
| providers/moonshot/models/kimi-k2-thinking.toml | Adds Moonshot Kimi K2 Thinking reasoning model. |
| providers/moonshot/models/kimi-k2-thinking-turbo.toml | Adds Moonshot Kimi K2 Thinking Turbo higher-cost model. |
| providers/moonshot/models/kimi-k2-0711-preview.toml | Adds Moonshot Kimi K2 preview model. |
| providers/inference/models/meta-llama-llama-3.2-11b-instruct-fp-16.toml | Adds Inference Llama 3.2 11B Instruct FP16 model. |
| providers/inference/models/meta-llama-llama-3.1-8b-instruct-fp-8.toml | Adds Inference Llama 3.1 8B Instruct FP8 model. |
| providers/groq/models/openai-gpt-oss-20b.toml | Adds Groq GPT OSS 20B model (family currently set to gpt). |
| providers/groq/models/openai-gpt-oss-120b.toml | Adds Groq GPT OSS 120B model (family currently set to gpt). |
| providers/groq/models/moonshotai-kimi-k2-instruct.toml | Adds Groq-hosted Moonshot Kimi K2 Instruct model. |
| providers/groq/models/meta-llama-llama-guard-4-12b.toml | Adds Groq Llama Guard 4 12B safety model. |
| providers/google/models/gemma-3n-e4b-it.toml | Adds Google Gemma 3n E4B IT model (family currently set to gemini). |
| providers/google/models/gemma-3n-e2b-it.toml | Adds Google Gemma 3n E2B IT model (family currently set to gemini). |
| providers/google/models/gemma-3-4b-it.toml | Adds Google Gemma 3 4B IT model (family currently set to gemini). |
| providers/google/models/gemma-3-1b-it.toml | Adds Google Gemma 3 1B IT model (family currently set to gemini). |
| providers/google/models/gemma-3-12b-it.toml | Adds Google Gemma 3 12B IT model (family currently set to gemini). |
| providers/google/models/gemini-3-pro-image-preview.toml | Adds Google Gemini 3 Pro Image (Preview) multimodal model. |
| providers/google-vertex/models/gemini-3-pro-image-preview.toml | Adds Vertex Gemini 3 Pro Image (Preview) model. |
| providers/google-vertex/models/gemini-2.5-flash-image.toml | Adds Vertex Gemini 2.5 Flash Image model. |
| providers/google-vertex/models/gemini-2.5-flash-image-preview.toml | Adds Vertex Gemini 2.5 Flash Image (Preview) model. |
| providers/google-vertex/models/[email protected] | Adds Vertex Claude Opus 4.5 model metadata. |
| providers/cloudrift/provider.toml | Introduces CloudRift provider configuration. |
| providers/cloudrift/models/moonshotai-kimi-k2-instruct.toml | Adds CloudRift Moonshot Kimi K2 Instruct model. |
| providers/cloudrift/models/deepseek-ai-deepseek-v3.toml | Adds CloudRift DeepSeek V3 text model. |
| providers/cloudrift/models/deepseek-ai-deepseek-r1-0528.toml | Adds CloudRift DeepSeek R1 (0528) model. |
| providers/cerebras/models/zai-glm-4.6.toml | Adds Cerebras-hosted ZAI GLM-4.6 model. |
| providers/cerebras/models/qwen-3-32b.toml | Adds Cerebras Qwen3 32B model. |
| providers/cerebras/models/llama3.1-8b.toml | Adds Cerebras Llama 3.1 8B Instruct model. |
| providers/cerebras/models/llama-3.3-70b.toml | Adds Cerebras Llama 3.3 70B Instruct model. |
| providers/canopywave/provider.toml | Introduces CanopyWave provider configuration. |
| providers/canopywave/models/zai-glm-4.7.toml | Adds CanopyWave ZAI GLM-4.7 model. |
| providers/canopywave/models/qwen-qwen3-coder.toml | Adds CanopyWave Qwen3 Coder model. |
| providers/canopywave/models/moonshotai-kimi-k2-thinking.toml | Adds CanopyWave Kimi K2 Thinking model. |
| providers/canopywave/models/minimax-minimax-m2.1.toml | Adds CanopyWave MiniMax M2.1 model. |
| providers/canopywave/models/deepseek-deepseek-chat-v3.2.toml | Adds CanopyWave DeepSeek V3.2 chat model. |
| providers/bytedance/provider.toml | Introduces ByteDance provider configuration for Volcano Engine. |
| providers/bytedance/models/seedream-4-5-251128.toml | Adds ByteDance Seedream 4.5 image model. |
| providers/bytedance/models/seedream-4-0-250828.toml | Adds ByteDance Seedream 4.0 image model. |
| providers/bytedance/models/seed-1-8-251228.toml | Adds ByteDance Seed 1.8 text/multimodal model. |
| providers/bytedance/models/seed-1-6-flash-250715.toml | Adds ByteDance Seed 1.6 Flash model. |
| providers/bytedance/models/seed-1-6-250915.toml | Adds ByteDance Seed 1.6 (250915) model. |
| providers/bytedance/models/seed-1-6-250615.toml | Adds ByteDance Seed 1.6 (250615) model. |
| providers/bytedance/models/kimi-k2-thinking-251104.toml | Adds ByteDance Kimi K2 Thinking model. |
| providers/bytedance/models/kimi-k2-250905.toml | Adds ByteDance Kimi K2 text model. |
| providers/bytedance/models/gpt-oss-120b-250805.toml | Adds ByteDance GPT OSS 120B model. |
| providers/bytedance/models/glm-4-7-251222.toml | Adds ByteDance GLM-4.7 text model. |
| providers/bytedance/models/deepseek-v3-2-251201.toml | Adds ByteDance DeepSeek V3.2 model. |
| providers/bytedance/models/deepseek-v3-1-250821.toml | Adds ByteDance DeepSeek V3.1 model. |
| providers/azure/models/gpt-5.2-pro.toml | Adds Azure GPT-5.2 Pro model definition. |
| providers/azure/models/gpt-5.2-chat-latest.toml | Adds Azure GPT-5.2 Chat latest model. |
| providers/azure/models/gpt-35-turbo.toml | Adds Azure GPT-3.5 Turbo model metadata. |
| providers/anthropic/models/claude-3-5-sonnet-latest.toml | Adds Anthropic Claude 3.5 Sonnet latest model entry. |
| providers/amazon-bedrock/models/meta.llama4-scout-17b-instruct-v1-0.toml | Adds Bedrock Llama 4 Scout 17B Instruct model. |
| providers/amazon-bedrock/models/meta.llama4-maverick-17b-instruct-v1-0.toml | Adds Bedrock Llama 4 Maverick 17B Instruct model. |
| providers/amazon-bedrock/models/meta.llama3-1-8b-instruct-v1-0.toml | Adds Bedrock Llama 3.1 8B Instruct model. |
| providers/amazon-bedrock/models/meta.llama3-1-70b-instruct-v1-0.toml | Adds Bedrock Llama 3.1 70B Instruct model. |
| providers/amazon-bedrock/models/anthropic.claude-sonnet-4-5-20250929-v1-0.toml | Adds Bedrock Claude Sonnet 4.5 2025-09-29 variant. |
| providers/amazon-bedrock/models/anthropic.claude-sonnet-4-20250514-v1-0.toml | Adds Bedrock Claude Sonnet 4 (2025-05-14) variant. |
| providers/amazon-bedrock/models/anthropic.claude-opus-4-5-20251101-v1-0.toml | Adds Bedrock Claude Opus 4.5 model. |
| providers/amazon-bedrock/models/anthropic.claude-opus-4-20250514-v1-0.toml | Adds Bedrock Claude Opus 4 (2025-05-14) variant. |
| providers/amazon-bedrock/models/anthropic.claude-opus-4-1-20250805-v1-0.toml | Adds Bedrock Claude Opus 4.1 model. |
| providers/amazon-bedrock/models/anthropic.claude-3-7-sonnet-20250219-v1-0.toml | Adds Bedrock Claude 3.7 Sonnet model. |
| providers/amazon-bedrock/models/anthropic.claude-3-5-haiku-20241022-v1-0.toml | Adds Bedrock Claude 3.5 Haiku 2024-10-22 model (deprecated). |
| providers/alibaba/models/qwen3-max-preview.toml | Adds Alibaba Qwen3 Max preview model. |
| providers/alibaba/models/qwen-plus-latest.toml | Adds Alibaba Qwen Plus Latest model. |
| providers/alibaba/models/qwen-max-latest.toml | Adds Alibaba Qwen Max Latest model. |
| providers/alibaba/models/qwen-image.toml | Adds Alibaba Qwen Image base image model. |
| providers/alibaba/models/qwen-image-plus.toml | Adds Alibaba Qwen Image Plus model. |
| providers/alibaba/models/qwen-image-max.toml | Adds Alibaba Qwen Image Max model. |
| providers/alibaba/models/qwen-image-max-2025-12-30.toml | Adds Alibaba versioned Qwen Image Max 2025-12-30 model. |
```toml
name = "Gemma 2 27B IT"
family = "gemini"
```
This Gemma 2 model is tagged with family = "gemini", but elsewhere in the repo Gemma-family models consistently use family = "gemma" (for example, providers/groq/models/gemma2-9b-it.toml:2 and providers/openrouter/models/google/gemma-2-9b-it.toml:2). Please update the family to gemma to keep family-based filtering and grouping consistent.
```toml
name = "Gemma 3n E4B IT"
family = "gemini"
```
Gemma 3n E4B models elsewhere in the repo are classified with family = "gemma" (see providers/openrouter/models/google/gemma-3n-e4b-it.toml:3), but this entry uses family = "gemini". To maintain a consistent notion of model family across providers, this should also use the gemma family.
```toml
name = "Gemma 3n E2B IT"
family = "gemini"
```
This Gemma 3n E2B model is tagged with family = "gemini", while comparable Gemma 3n models use family = "gemma" (e.g. providers/openrouter/models/google/gemma-3n-e4b-it.toml:3). Please align the family value to gemma so clients relying on family can treat Gemma models consistently.
```toml
name = "Gemma 3 4B IT"
family = "gemini"
```
For Gemma 3 4B, the rest of the codebase uses family = "gemma" for Gemma models (examples: providers/amazon-bedrock/models/google.gemma-3-4b-it.toml:2, providers/cloudflare-workers-ai/models/gemma-3-12b-it.toml:4). Using family = "gemini" here is inconsistent and may cause this model to be grouped with Gemini instead of Gemma; consider changing the family to gemma.
```toml
name = "Gemma 3 12B IT"
family = "gemini"
```
The Gemma 3 12B model here is assigned family = "gemini", while the rest of the repo classifies Gemma 3 models under the gemma family (for example, providers/amazon-bedrock/models/google.gemma-3-12b-it.toml:2, providers/inference/models/google/gemma-3.toml:2). To avoid confusing Gemma with Gemini in downstream tooling, the family should be gemma.
```toml
name = "GPT OSS 20B"
family = "gpt"
```
This NanoGPT GPT OSS 20B model is given family = "gpt", but GPT OSS 20B is modeled elsewhere with family = "gpt-oss" (for example, providers/deepinfra/models/openai/gpt-oss-20b.toml:4 and providers/vercel/models/openai/gpt-oss-20b.toml:2). To align with the existing convention and avoid misclassifying GPT OSS as generic GPT, consider switching the family to gpt-oss.
```toml
name = "Gemma 3 27B"
family = "gemini"
```
This Gemma 3 27B model is using family = "gemini", but other Gemma 3 27B entries in the repo use family = "gemma" (e.g. providers/nvidia/models/google/gemma-3-27b-it.toml:2 and providers/openrouter/models/google/gemma-3-27b-it.toml:3). Consider switching the family to gemma for consistency across providers and to avoid misclassification in family-based tooling.
```toml
name = "Gemma 3 1B IT"
family = "gemini"
```
This Gemma 3 1B model is marked with family = "gemini", but Gemma models elsewhere (e.g. providers/amazon-bedrock/models/google.gemma-3-12b-it.toml:2 and providers/cloudflare-workers-ai/models/gemma-3-12b-it.toml:4) use family = "gemma". Please update the family to gemma to be consistent with how Gemma is modeled across providers.
```toml
name = "GPT OSS 20B"
family = "gpt"
```
This GPT OSS 20B model uses family = "gpt", but GPT OSS models elsewhere consistently use family = "gpt-oss" (for example, providers/deepinfra/models/openai/gpt-oss-20b.toml:4 and providers/openrouter/models/openai/gpt-oss-20b.toml:2). For consistent family grouping and to distinguish GPT OSS from other GPT-family models, this should be updated to use the gpt-oss family.
```toml
name = "GPT OSS 120B"
family = "gpt"
```
For GPT OSS 120B under NanoGPT, family = "gpt" is inconsistent with other GPT OSS 120B entries that use family = "gpt-oss" (e.g. providers/google-vertex/models/openai/gpt-oss-120b-maas.toml:2 and providers/openrouter/models/openai/gpt-oss-120b.toml:2). Please change the family to gpt-oss to match the established convention for this model family.
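Across all of these comments the requested fix is the same one-line change to the `family` key. For a Gemma entry, the corrected metadata would look roughly like this (an illustrative excerpt, not the full model file; the remaining fields follow the models.dev model schema used elsewhere in this PR):

```toml
# Corrected family tag for a Gemma model entry.
# Gemma and Gemini are distinct model families in this catalog.
name = "Gemma 3 27B"
family = "gemma"   # was "gemini"
```

The GPT OSS entries get the analogous change, `family = "gpt-oss"` instead of `family = "gpt"`.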
Add LLM Gateway (llmgateway.io) as a new provider with all supported models
organized by upstream provider subdirectory.
LLM Gateway is an OpenAI-compatible API gateway that provides unified
access to 40+ LLM providers through a single API endpoint.
Directory structure:
providers/llmgateway/
├── provider.toml
├── README.md
├── scripts/
│ └── generate.ts
└── models/
├── anthropic/ (16 models)
├── openai/ (28 models)
├── google/ (19 models)
├── zai/ (17 models - GLM, CogView)
├── alibaba/ (27 models - Qwen)
├── meta/ (12 models - Llama)
├── xai/ (9 models - Grok)
├── deepseek/ (5 models)
├── bytedance/ (6 models - Seed)
├── moonshot/ (4 models - Kimi)
├── mistral/ (3 models)
├── perplexity/ (3 models - Sonar)
├── minimax/ (1 model)
├── nvidia/ (1 model)
└── llmgateway/ (2 models - auto, custom)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
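Since LLM Gateway is OpenAI-compatible, its `provider.toml` is plausibly a thin configuration pointing at the gateway's endpoint. A hedged sketch follows; the field names mirror other `provider.toml` files in this repo, but the env var name, endpoint, and doc URL here are assumptions, not values taken from this PR:

```toml
# Hypothetical sketch of providers/llmgateway/provider.toml.
# Endpoint, env var, and doc URL are assumptions for illustration.
name = "LLM Gateway"
env = ["LLM_GATEWAY_API_KEY"]
npm = "@ai-sdk/openai-compatible"
api = "https://api.llmgateway.io/v1"
doc = "https://docs.llmgateway.io"
```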
Removes provider subdirectories and exports all models directly to the models/ folder for a simpler structure.
Co-Authored-By: Claude Opus 4.5 <[email protected]>
Removes empty generate.ts file and scripts/ directory. README now links to llmgateway repo for regeneration. Co-Authored-By: Claude Opus 4.5 <[email protected]>
The models.dev schema requires the limit.output field; the exporter defaults it to 16384 when not specified.
Co-Authored-By: Claude Opus 4.5 <[email protected]>
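For a model whose upstream catalog entry omits an output cap, the resulting limit table would look something like this (an illustrative excerpt; the context value here is made up):

```toml
# When the upstream catalog omits an output cap, the exporter writes
# the 16384-token default required by the models.dev limit.output field.
[limit]
context = 131072
output = 16384
```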
Maps internal family names to valid models.dev families:
- moonshot → kimi
- bytedance → seed
- zai → glm
- nvidia → nemotron
Co-Authored-By: Claude Opus 4.5 <[email protected]>
Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Gemma models now use "gemma" family instead of "gemini"
- GPT OSS models now use "gpt-oss" family instead of "gpt"
Co-Authored-By: Claude Opus 4.5 <[email protected]>
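The family remapping described in the last two commit messages can be sketched as a small helper. This is a hypothetical reconstruction, not the PR's actual code; in particular the model-name heuristics for the Gemma and GPT OSS fixes are assumptions:

```typescript
// Hypothetical sketch of the family normalization from the commits above.
// Static map handles the internal-to-models.dev renames; the two
// conditional fixes only fire for mislabelled Gemma / GPT OSS models.
const FAMILY_MAP: Record<string, string> = {
  moonshot: "kimi",
  bytedance: "seed",
  zai: "glm",
  nvidia: "nemotron",
};

function normalizeFamily(modelName: string, family: string): string {
  if (family in FAMILY_MAP) return FAMILY_MAP[family];
  // Review feedback: Gemma models were tagged "gemini", GPT OSS as "gpt".
  if (family === "gemini" && /gemma/i.test(modelName)) return "gemma";
  if (family === "gpt" && /gpt[- ]oss/i.test(modelName)) return "gpt-oss";
  return family;
}
```

Keeping the conditionals name-gated means genuine Gemini and GPT models pass through unchanged.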
## Summary

Add a script to export all models and providers from `@llmgateway/models` to TOML format compatible with [models.dev](https://github.com/anomalyco/models.dev).

## Usage

```bash
npx tsx scripts/export-models-dev.ts
```

## Output

```
exports/providers/
├── openai/
│   ├── provider.toml
│   └── models/
│       ├── gpt-4o.toml
│       └── ...
├── anthropic/
│   ├── provider.toml
│   └── models/
│       └── ...
└── ... (24 providers)
```

## Features

- Exports 222 models across 24 providers
- Generates `provider.toml` with:
  - npm package (`@ai-sdk/openai`, etc.)
  - Environment variables
  - Documentation URL
  - API endpoint (for OpenAI-compatible providers)
- Generates model TOMLs with:
  - Pricing (input/output/cache per million tokens)
  - Context and output limits
  - Capabilities (vision, tools, reasoning, structured output)
  - Modalities (text, image)
  - Status (alpha/beta/deprecated)
- Auto-detects open weights models (Llama, Qwen, Gemma, etc.)
- Maps provider IDs and family names to models.dev conventions

## Test plan

- [x] Script runs successfully
- [x] Output matches models.dev schema
- [x] PR submitted to models.dev: anomalyco/models.dev#698

🤖 Generated with [Claude Code](https://claude.com/claude-code)

## Summary by CodeRabbit

* **New Features**
  * Added a capability to export provider and model metadata into a standardized TOML format with per-model files and a summary of outputs.
* **Chores**
  * Updated version control ignore rules to exclude generated export outputs.
* **Refactor**
  * Cleaned up component imports to remove a duplicate import and streamline module organization.

Co-authored-by: Claude Opus 4.5 <[email protected]>
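The per-model TOML generation described above might, in minimal form, look like the following. The `ModelInfo` shape and `toToml` helper are hypothetical (the real exporter lives in the llmgateway repo); the emitted field names mirror the model TOMLs in this PR:

```typescript
// Hypothetical minimal sketch of serializing one catalog entry into a
// models.dev-style model TOML. The ModelInfo type is an assumption,
// not the actual exporter's types.
interface ModelInfo {
  name: string;
  family: string;
  cost: { input: number; output: number }; // USD per million tokens
  limit: { context: number; output: number }; // tokens
}

function toToml(m: ModelInfo): string {
  return [
    `name = "${m.name}"`,
    `family = "${m.family}"`,
    ``,
    `[cost]`,
    `input = ${m.cost.input}`,
    `output = ${m.cost.output}`,
    ``,
    `[limit]`,
    `context = ${m.limit.context}`,
    `output = ${m.limit.output}`,
    ``,
  ].join("\n");
}
```

A real exporter would additionally need to escape quotes in names and omit optional tables when the upstream catalog has no data for them.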
Summary
This PR adds model data sourced from LLM Gateway, an OpenAI-compatible API gateway supporting 40+ LLM providers.
New Providers (7)
New Models (72 across 17 providers)
Test plan
🤖 Generated with Claude Code