📝 Walkthrough

Adds ChutesAI provider support.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/node/create_nodes/default_providers/chutesai_deepseek-aiDeepSeek-V3.1-TEE.json (1)
1-10: Consider generating these Chutes presets and schema enums from one manifest.

The current model catalog now has to stay synchronized across `backend/node/create_nodes/default_providers/*.json`, `backend/node/create_nodes/providers_schema.json`, and the frontend schema. A single source of truth would reduce drift; today a mismatch becomes a startup-time validation failure in `backend/node/create_nodes/providers.py`.

backend/node/create_nodes/providers_schema.json (1)
109-134: Please avoid adding another duplicated provider/model matrix here.

This block now has to stay in lockstep with `frontend/src/assets/schemas/providers_schema.json`, and the two schema files have already drifted for other providers in the current snapshot (`openai` and `anthropic` differ). The next Chutes model refresh will eventually make the UI offer values the backend rejects, or the reverse. Consider generating the frontend schema from the backend source, or adding a sync test that compares both provider/model definitions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/node/create_nodes/providers_schema.json` around lines 109 - 134, The new JSON schema block introducing provider "chutesai" with plugin "openai-compatible" and the model enum duplicates backend/provider definitions and will drift from the frontend's providers_schema; instead of adding a separate hardcoded list here, remove the duplicated model matrix (the block where "provider": "chutesai", "plugin": "openai-compatible", "model": { "enum": [...] }) and either (a) centralize the canonical provider->models mapping so both backend and frontend read from a single source of truth or (b) add an automated sync test that compares the backend provider/model definitions against the frontend's providers_schema and fails the build if they diverge; update references to "chutesai" and the "model" enum accordingly so only the canonical source controls allowed models.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@backend/node/create_nodes/default_providers/chutesai_zai-orgGLM-4.7-TEE.json`:
- Around line 6-8: The preset's plugin_config currently sets api_url to the bare
domain which returns 404; update the plugin_config.api_url value in the
chutesai_zai-orgGLM-4.7-TEE.json preset from "https://llm.chutes.ai" to
"https://llm.chutes.ai/v1" so the openai-compatible adapter targets the correct
base path (e.g., /v1/chat/completions).
---
Nitpick comments:
In `@backend/node/create_nodes/providers_schema.json`:
- Around line 109-134: The new JSON schema block introducing provider "chutesai"
with plugin "openai-compatible" and the model enum duplicates backend/provider
definitions and will drift from the frontend's providers_schema; instead of
adding a separate hardcoded list here, remove the duplicated model matrix (the
block where "provider": "chutesai", "plugin": "openai-compatible", "model": {
"enum": [...] }) and either (a) centralize the canonical provider->models
mapping so both backend and frontend read from a single source of truth or (b)
add an automated sync test that compares the backend provider/model definitions
against the frontend's providers_schema and fails the build if they diverge;
update references to "chutesai" and the "model" enum accordingly so only the
canonical source controls allowed models.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6e8577cc-14e4-445a-9a2d-adc4e11f218a
📒 Files selected for processing (11)
- .env.example
- backend/node/create_nodes/default_providers/chutesai_MiniMaxAIMiniMax-M2.5-TEE.json
- backend/node/create_nodes/default_providers/chutesai_QwenQwen2.5-72B-Instruct.json
- backend/node/create_nodes/default_providers/chutesai_QwenQwen3-32B.json
- backend/node/create_nodes/default_providers/chutesai_QwenQwen3.5-397B-A17B-TEE.json
- backend/node/create_nodes/default_providers/chutesai_deepseek-aiDeepSeek-V3.1-TEE.json
- backend/node/create_nodes/default_providers/chutesai_moonshotaiKimi-K2.5-TEE.json
- backend/node/create_nodes/default_providers/chutesai_zai-orgGLM-4.7-TEE.json
- backend/node/create_nodes/default_providers/chutesai_zai-orgGLM-5-Turbo.json
- backend/node/create_nodes/providers_schema.json
- frontend/src/assets/schemas/providers_schema.json
```json
"plugin_config": {
  "api_key_env_var": "CHUTESAIAPIKEY",
  "api_url": "https://llm.chutes.ai"
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== Search where openai-compatible api_url is consumed =="
rg -n -C3 'api_url|try_llms|chat/completions|/v1/models|/v1/chat/completions' .
echo
echo "== Check the configured Chutes endpoints =="
curl -sS -o /tmp/chutes_root.out -w 'root_status=%{http_code}\n' https://llm.chutes.ai/
curl -sS -o /tmp/chutes_models.out -w 'models_status=%{http_code}\n' https://llm.chutes.ai/v1/models
echo
echo "== Preview /v1/models response =="
head -c 300 /tmp/chutes_models.out || true
echo
```

Repository: genlayerlabs/genlayer-studio
Length of output: 50386
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== Check llm.lua around api_url usage (lines 100-120) =="
sed -n '100,120p' backend/node/llm.lua
echo
echo "== Search for /v1 path appending in backend code =="
rg -n 'v1/chat|chat/completions|/v1' backend/ --type lua --type py | head -20
echo
echo "== Check how openai-compatible handles the base URL =="
rg -n -A5 -B5 'openai.*compatible|plugin.*openai' backend/node/llm.lua | head -50
```

Repository: genlayerlabs/genlayer-studio
Length of output: 858
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== Show full llm.lua context (first 150 lines to see function scope) =="
head -150 backend/node/llm.lua | tail -80
echo
echo "== Search for 'chat/completions' or endpoint path construction =="
rg -n 'chat/completions|/v1|endpoint|path' backend/node/llm.lua
echo
echo "== Check the full custom_plugin_data handler for openai-compatible =="
grep -n -B20 'custom_plugin_data.plugin_config.api_url' backend/node/llm.lua | head -40
```

Repository: genlayerlabs/genlayer-studio
Length of output: 2467
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== Test Chutes endpoint responses =="
echo "1. Bare root:"
curl -sS -w 'Status: %{http_code}\n' https://llm.chutes.ai/ 2>&1 | head -5
echo
echo "2. With /v1 path:"
curl -sS -w 'Status: %{http_code}\n' https://llm.chutes.ai/v1/ 2>&1 | head -5
echo
echo "3. /v1/chat/completions endpoint (standard openai compat):"
curl -sS -w 'Status: %{http_code}\n' https://llm.chutes.ai/v1/chat/completions -X POST -H "Content-Type: application/json" -H "Authorization: Bearer test" -d '{"model":"test","messages":[]}' 2>&1 | head -10
```

Repository: genlayerlabs/genlayer-studio
Length of output: 532
Fix the preset's api_url — it requires the /v1 base path.
The preset configures api_url as the bare domain https://llm.chutes.ai, but this endpoint returns 404. The openai-compatible adapter uses api_url directly as the full request endpoint (no path appending), so requests to https://llm.chutes.ai/chat/completions fail. The correct api_url should be https://llm.chutes.ai/v1, which is the base path where the API responds (e.g., https://llm.chutes.ai/v1/chat/completions returns 401 with an invalid token, confirming the endpoint exists). Change line 8 from "api_url": "https://llm.chutes.ai" to "api_url": "https://llm.chutes.ai/v1".
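With that change, the preset's plugin_config fragment would read as follows (a sketch only; the closing brace and the preset's other fields are assumed from the rest of the file):

```json
"plugin_config": {
  "api_key_env_var": "CHUTESAIAPIKEY",
  "api_url": "https://llm.chutes.ai/v1"
}
```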
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/node/create_nodes/default_providers/chutesai_zai-orgGLM-4.7-TEE.json`
around lines 6 - 8, The preset's plugin_config currently sets api_url to the
bare domain which returns 404; update the plugin_config.api_url value in the
chutesai_zai-orgGLM-4.7-TEE.json preset from "https://llm.chutes.ai" to
"https://llm.chutes.ai/v1" so the openai-compatible adapter targets the correct
base path (e.g., /v1/chat/completions).
@sirouk could you please address the comments and provide a temporary API key to test the models? You can send it through DM.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/Simulator/ProviderModal.vue`:
- Around line 208-214: findMatchingPreset's exact-match lookup using
newProviderData.provider/newProviderData.model/newProviderData.plugin is
unreachable because newProviderData.model is overwritten earlier; capture the
originally requested model (e.g. store initialModel before the model-rewrite) or
change the lookup to use a different key (e.g. provider+plugin or
provider+plugin+initialModel) so cloning can bootstrap from an existing
template; update both findMatchingPreset and the analogous lookup at the later
block (lines referencing the same provider+model+plugin match) to use the
preserved initialModel or the provider+plugin lookup instead of the rewritten
newProviderData.model.
In `@frontend/test/unit/providersSchemaSync.test.ts`:
- Around line 17-25: The test currently uses optional chaining so it won't fail
if the provider rule is missing; update the test to assert the existence of the
provider rules before comparing nested properties: call getProviderRule for
'chutesai' (backendRule and frontendRule) and add explicit
expect(backendRule).toBeDefined() and expect(frontendRule).toBeDefined() (or
equivalent) to ensure the allOf/provider rule is present (and optionally assert
provider.examples contains 'chutesai') before comparing
backendRule.then.properties.plugin.const and
backendRule.then.properties.model.enum with their frontend counterparts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 938df92d-c215-408e-b5d0-94b03112a86a
📒 Files selected for processing (2)
- frontend/src/components/Simulator/ProviderModal.vue
- frontend/test/unit/providersSchemaSync.test.ts
```typescript
const findMatchingPreset = () =>
  nodeStore.nodeProviders.find(
    (preset) =>
      preset.provider === newProviderData.provider &&
      preset.model === newProviderData.model &&
      preset.plugin === newProviderData.plugin,
  );
```
Exact-match preset cloning is unreachable in the valid create path.
Lines 237-249 already rewrite newProviderData.model to the first model for which checkExistingPreset(...) is false. That means this exact provider + model + plugin lookup can only succeed when every model is already taken, and presetAlreadyExists has already disabled Create. In practice, the new clone path never seeds config for a creatable Chutes preset. If the intent is to bootstrap from an existing template, this needs a different lookup key/source than the uniqueness check.
Also applies to: 279-285
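One way to realize the suggested fix, sketched here with hypothetical names (Preset and findTemplatePreset are illustrative, not the component's actual API): capture the originally requested model before the availability rewrite, match a template on provider + plugin + that preserved model, and fall back to any same-provider template.

```typescript
// Sketch only: Preset and findTemplatePreset are illustrative names,
// not the actual ProviderModal.vue API.
interface Preset {
  provider: string;
  plugin: string;
  model: string;
  config?: Record<string, unknown>;
}

// Pass the model captured BEFORE newProviderData.model is rewritten,
// so the exact match stays reachable on the create path.
function findTemplatePreset(
  presets: Preset[],
  provider: string,
  plugin: string,
  initialModel: string,
): Preset | undefined {
  return (
    // Prefer the originally requested model...
    presets.find(
      (p) =>
        p.provider === provider &&
        p.plugin === plugin &&
        p.model === initialModel,
    ) ??
    // ...otherwise bootstrap from any template for the same provider + plugin.
    presets.find((p) => p.provider === provider && p.plugin === plugin)
  );
}
```

The provider + plugin fallback keeps the clone path useful even after the model field has been rewritten to the first free model, which is exactly the case the uniqueness check forces.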
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/Simulator/ProviderModal.vue` around lines 208 - 214,
findMatchingPreset's exact-match lookup using
newProviderData.provider/newProviderData.model/newProviderData.plugin is
unreachable because newProviderData.model is overwritten earlier; capture the
originally requested model (e.g. store initialModel before the model-rewrite) or
change the lookup to use a different key (e.g. provider+plugin or
provider+plugin+initialModel) so cloning can bootstrap from an existing
template; update both findMatchingPreset and the analogous lookup at the later
block (lines referencing the same provider+model+plugin match) to use the
preserved initialModel or the provider+plugin lookup instead of the rewritten
newProviderData.model.
```typescript
const backendRule = getProviderRule(backendSchema, 'chutesai');
const frontendRule = getProviderRule(frontendSchema, 'chutesai');

expect(backendRule?.then?.properties?.plugin?.const).toBe(
  frontendRule?.then?.properties?.plugin?.const,
);
expect(backendRule?.then?.properties?.model?.enum).toEqual(
  frontendRule?.then?.properties?.model?.enum,
);
```
Fail fast when the provider rule is missing.
These expectations are optional-chained, so the test still passes if both schemas keep chutesai in provider.examples but drop the corresponding allOf rule; both sides collapse to undefined. Add explicit existence assertions before comparing nested fields.
Suggested tightening:

```diff
 const backendRule = getProviderRule(backendSchema, 'chutesai');
 const frontendRule = getProviderRule(frontendSchema, 'chutesai');
+expect(backendRule).toBeDefined();
+expect(frontendRule).toBeDefined();
 expect(backendRule?.then?.properties?.plugin?.const).toBe(
   frontendRule?.then?.properties?.plugin?.const,
 );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const backendRule = getProviderRule(backendSchema, 'chutesai');
const frontendRule = getProviderRule(frontendSchema, 'chutesai');
expect(backendRule).toBeDefined();
expect(frontendRule).toBeDefined();
expect(backendRule?.then?.properties?.plugin?.const).toBe(
  frontendRule?.then?.properties?.plugin?.const,
);
expect(backendRule?.then?.properties?.model?.enum).toEqual(
  frontendRule?.then?.properties?.model?.enum,
);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/test/unit/providersSchemaSync.test.ts` around lines 17 - 25, The
test currently uses optional chaining so it won't fail if the provider rule is
missing; update the test to assert the existence of the provider rules before
comparing nested properties: call getProviderRule for 'chutesai' (backendRule
and frontendRule) and add explicit expect(backendRule).toBeDefined() and
expect(frontendRule).toBeDefined() (or equivalent) to ensure the allOf/provider
rule is present (and optionally assert provider.examples contains 'chutesai')
before comparing backendRule.then.properties.plugin.const and
backendRule.then.properties.model.enum with their frontend counterparts.
What

- Adds Chutes.ai as a default `openai-compatible` provider
- New preset files under `backend/node/create_nodes/default_providers/` (`chutesai_*.json`)
- Adds `CHUTESAI_API_KEY` to `.env.example`

Why

Testing done

- Verified the model list at `https://llm.chutes.ai/v1/models`

Decisions made

- Chose the `openai-compatible` plugin for the Chutes provider integration

Checks
Summary by CodeRabbit
New Features
Configuration
Tests