feat: LLMs in admin #863
base: master
Conversation
📝 Walkthrough

This pull request refactors the LLM model management system from enum-based definitions (`LargeLanguageModels`, `LLMApis`) to a database-backed approach using `AIModelSpec`. Changes include: adding new fields to `AIModelSpec` for provider, deprecation status, redirect behavior, and LLM capabilities (context window, output tokens, vision/audio/thinking support, JSON/temperature support); creating a corresponding database migration; expanding the admin interface with new fieldsets and filters; refactoring `language_model.py` and related files to query `AIModelSpec` instead of using hardcoded enums; updating recipes and routers to use `AIModelSpec.objects.get()` lookups; and introducing a script to seed the database with model specifications. The refactor maintains existing functionality while shifting model metadata from code to database.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
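The enum-to-database shift the walkthrough describes can be sketched with a small in-memory stand-in. The `ModelSpec` field subset and the `seed`/`get` helpers below are hypothetical illustrations of the pattern, not the actual Django `AIModelSpec` model or its manager:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelSpec:
    # hypothetical subset of the AIModelSpec fields described above
    name: str
    label: str
    model_id: str
    provider: str
    llm_context_window: int = 0
    llm_max_output_tokens: int = 0
    llm_supports_json: bool = False
    is_deprecated: bool = False


# in-memory stand-in for the AIModelSpec table
_REGISTRY: dict[str, ModelSpec] = {}


def seed(spec: ModelSpec) -> None:
    # idempotent upsert keyed on name, like update_or_create(name=...)
    _REGISTRY[spec.name] = spec


def get(name: str) -> ModelSpec:
    # mirrors AIModelSpec.objects.get(name=...), with a friendlier error
    try:
        return _REGISTRY[name]
    except KeyError:
        raise LookupError(f"Unknown model: {name}") from None


seed(ModelSpec(name="gpt_4_o", label="GPT-4o (openai)", model_id="gpt-4o",
               provider="openai", llm_context_window=128_000,
               llm_max_output_tokens=16_384, llm_supports_json=True))
assert get("gpt_4_o").model_id == "gpt-4o"
```

With metadata in rows rather than enum members, adding or deprecating a model becomes a data change (admin edit or seed script) instead of a code change, which is the point of the refactor.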
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings, 1 inconclusive)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
daras_ai_v2/language_model_openai_realtime.py (1)
265-282: Guard against `None` model before accessing `model_id`.

If `self.model` is `None` (as allowed by the type annotation `AIModelSpec | None` on line 31), accessing `self.model.model_id` on lines 272 and 279 will raise `AttributeError`.

🔎 Proposed fix

```diff
 def record_llm_cost(self):
     from usage_costs.cost_utils import record_cost_auto
     from usage_costs.models import ModelSku

+    if not self.model:
+        return
+
     # record llm usage costs
     if self.total_input_tokens > 0:
         record_cost_auto(
             model=self.model.model_id,
```

tests/test_token_counter.py (1)
9-19: Critical: Missing import and incomplete migration breaks test.

This test file has multiple issues:

- `ModelProvider` is used on line 11 but never imported (confirmed by static analysis)
- The loop iterates over `LargeLanguageModels` (line 10) which should be `AIModelSpec.objects.all()` or similar
- `llm.llm_api` should be `llm.provider` in the AIModelSpec model
- `llm.is_deprecated` is correct for AIModelSpec but the iteration source is wrong

🔎 Proposed fix

```diff
+from ai_models.models import AIModelSpec, ModelProvider
 from daras_ai_v2.language_model import (
     AZURE_OPENAI_MODEL_PREFIX,
 )
 from daras_ai_v2.text_splitter import default_length_function

 models = []
-for llm in LargeLanguageModels:
-    if llm.llm_api not in [ModelProvider.openai] or llm.is_deprecated:
+for llm in AIModelSpec.objects.filter(category=AIModelSpec.Categories.llm):
+    if llm.provider not in [ModelProvider.openai] or llm.is_deprecated:
         continue
     if isinstance(llm.model_id, str):
         models.append(llm.model_id)
         continue
     for model_id in llm.model_id:
         if model_id.startswith(AZURE_OPENAI_MODEL_PREFIX):
             continue
         models.append(model_id)
```
🧹 Nitpick comments (6)
scripts/init_llm_models.py (2)
1-2: Remove unused imports.

`typing` and `Enum` are imported but never used in this file.

🔎 Proposed fix

```diff
-import typing
-from enum import Enum
 from django.db import transaction

 from ai_models.models import AIModelSpec, ModelProvider
```
7-18: Consider using `update_or_create` for idempotency.

Using `.create()` means the script will fail on re-runs due to unique constraint violations on the `name` field. Consider using `update_or_create()` or `get_or_create()` for idempotent seeding.

🔎 Proposed fix example

```diff
-    agrillm_qwen3_30b = AIModelSpec.objects.create(
+    agrillm_qwen3_30b, _ = AIModelSpec.objects.update_or_create(
+        name="agrillm_qwen3_30b",
+        defaults=dict(
             category=AIModelSpec.Categories.llm,
-            name="agrillm_qwen3_30b",
             label="AgriLLM Qwen-3 30B • ai71",
             model_id="AI71ai/agrillm-Qwen3-30B-A3B",
             provider=ModelProvider.openai,
             llm_context_window=32_768,
             llm_max_output_tokens=4_096,
             llm_supports_json=True,
+        ),
     )
```

recipes/SEOSummary.py (1)
275-275: Consider caching the AIModelSpec lookup.

The model is looked up twice on lines 275 and 366. Consider storing the result in a variable at the start of `run()` to avoid duplicate database queries and potential inconsistency.

🔎 Proposed fix

```diff
 def run(self, state: dict) -> typing.Iterator[str | None]:
     request: SEOSummaryPage.RequestModel = self.RequestModel.model_validate(state)
+    model = AIModelSpec.objects.get(name=request.selected_model)

     yield "Googling..."
     # ... existing code ...

     yield from _gen_final_prompt(request, state)

-    yield f"Generating content using {AIModelSpec.objects.get(name=request.selected_model).label}..."
+    yield f"Generating content using {model.label}..."
```

Then pass `model` to `_gen_final_prompt` or store `llm_context_window` at the start.

Also applies to: 365-369
recipes/VideoBots.py (1)
369-374: Good error handling; consider using exception chaining.

The error handling is well-implemented. Per Ruff B904, you could use `raise UserError(...) from None` to suppress the exception chain, making the error message cleaner.

🔎 Proposed refinement

```diff
         except AIModelSpec.DoesNotExist:
             raise UserError(
                 f"Model {request.selected_model} not found. Should be one of: {AIModelSpec.objects.filter(category=AIModelSpec.Categories.llm).values_list('name', flat=True)}"
-            )
+            ) from None
```

livekit_agent.py (1)
268-337: Provider routing relies on `model_id` string matching for some providers.

The routing for Gemini and Claude uses string matching on `model_id` (lines 269, 278) before falling through to `ModelProvider`-based routing. This could lead to unexpected behavior if a model has the correct provider set but its `model_id` doesn't contain the expected substring. Consider using `ModelProvider.google` and `ModelProvider.anthropic` for consistency.

🔎 Suggested approach

```diff
-        case _ if "gemini" in llm_model.model_id:
+        case ModelProvider.google:
             from livekit.plugins import google
             # ...
-        case _ if "claude" in llm_model.model_id:
+        case ModelProvider.anthropic:
             from livekit.plugins import anthropic
             # ...
```

daras_ai_v2/language_model.py (1)
150-150: Consider handling `DoesNotExist` for user-friendly errors.

If an invalid model name is passed, `AIModelSpec.objects.get(name=model)` will raise `AIModelSpec.DoesNotExist`, resulting in a 500 error rather than a user-friendly message.

🔎 Proposed fix

```diff
-    model: AIModelSpec = AIModelSpec.objects.get(name=model)
+    try:
+        model: AIModelSpec = AIModelSpec.objects.get(name=model)
+    except AIModelSpec.DoesNotExist:
+        raise UserError(f"Unknown model: {model}")
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- ai_models/admin.py
- ai_models/migrations/0004_aimodelspec_is_deprecated_and_more.py
- ai_models/models.py
- daras_ai_v2/language_model.py
- daras_ai_v2/language_model_openai_audio.py
- daras_ai_v2/language_model_openai_realtime.py
- daras_ai_v2/language_model_settings_widgets.py
- livekit_agent.py
- recipes/BulkEval.py
- recipes/CompareLLM.py
- recipes/DocExtract.py
- recipes/DocSearch.py
- recipes/DocSummary.py
- recipes/GoogleGPT.py
- recipes/RelatedQnA.py
- recipes/RelatedQnADoc.py
- recipes/SEOSummary.py
- recipes/SmartGPT.py
- recipes/SocialLookupEmail.py
- recipes/VideoBots.py
- routers/twilio_ws_api.py
- scripts/init_llm_models.py
- tests/test_llm.py
- tests/test_token_counter.py
- usage_costs/models.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{py,js,ts,tsx,java,cs,cpp,c,go,rb,php}
📄 CodeRabbit inference engine (.cursor/rules/devs-rules.mdc)
Format code in reverse topological order: place the main() function at the top and dependencies below it
Files:
- daras_ai_v2/language_model_openai_realtime.py
- recipes/RelatedQnA.py
- recipes/GoogleGPT.py
- livekit_agent.py
- recipes/SEOSummary.py
- daras_ai_v2/language_model_openai_audio.py
- routers/twilio_ws_api.py
- recipes/VideoBots.py
- ai_models/migrations/0004_aimodelspec_is_deprecated_and_more.py
- recipes/RelatedQnADoc.py
- daras_ai_v2/language_model.py
- ai_models/models.py
- scripts/init_llm_models.py
- recipes/SmartGPT.py
- recipes/DocExtract.py
- daras_ai_v2/language_model_settings_widgets.py
- recipes/DocSearch.py
- recipes/SocialLookupEmail.py
- ai_models/admin.py
- usage_costs/models.py
- tests/test_llm.py
- recipes/CompareLLM.py
- tests/test_token_counter.py
- recipes/BulkEval.py
- recipes/DocSummary.py
🧠 Learnings (5)
📚 Learning: 2025-08-12T08:22:19.003Z
Learnt from: nikochiko
Repo: GooeyAI/gooey-server PR: 768
File: daras_ai_v2/language_model.py:124-126
Timestamp: 2025-08-12T08:22:19.003Z
Learning: GPT-5 Chat (gpt-5-chat-latest) has different token limits than other GPT-5 models: 128,000 context window and 16,384 max output tokens, as confirmed by Azure documentation and actual API behavior, despite general GPT-5 documentation suggesting higher limits.
Applied to files:
recipes/SEOSummary.py, recipes/VideoBots.py, daras_ai_v2/language_model.py, recipes/DocSummary.py
📚 Learning: 2025-08-12T08:22:19.003Z
Learnt from: nikochiko
Repo: GooeyAI/gooey-server PR: 768
File: daras_ai_v2/language_model.py:124-126
Timestamp: 2025-08-12T08:22:19.003Z
Learning: GPT-5 Chat (gpt-5-chat-latest) has specific limits of 128,000 context window and 16,384 max output tokens according to Azure documentation and confirmed by API testing, which differ from the general GPT-5 documentation that mentions larger limits for other GPT-5 models.
Applied to files:
recipes/SEOSummary.py, recipes/VideoBots.py, recipes/DocSummary.py
📚 Learning: 2025-08-12T08:22:19.003Z
Learnt from: nikochiko
Repo: GooeyAI/gooey-server PR: 768
File: daras_ai_v2/language_model.py:124-126
Timestamp: 2025-08-12T08:22:19.003Z
Learning: When determining language model token limits in daras_ai_v2/language_model.py, prioritize actual API behavior and direct testing over general documentation, as different model variants within the same family may have different practical limits.
Applied to files:
recipes/SEOSummary.py, daras_ai_v2/language_model.py, scripts/init_llm_models.py, tests/test_token_counter.py, recipes/DocSummary.py
📚 Learning: 2025-08-12T08:22:19.003Z
Learnt from: nikochiko
Repo: GooeyAI/gooey-server PR: 768
File: daras_ai_v2/language_model.py:124-126
Timestamp: 2025-08-12T08:22:19.003Z
Learning: When setting token limits for language models in daras_ai_v2/language_model.py, prioritize actual API behavior and platform-specific documentation over general model documentation to ensure consistent UX and avoid API errors.
Applied to files:
recipes/SEOSummary.py, recipes/VideoBots.py, daras_ai_v2/language_model.py, recipes/DocSearch.py, tests/test_token_counter.py
📚 Learning: 2025-08-13T10:52:15.904Z
Learnt from: nikochiko
Repo: GooeyAI/gooey-server PR: 773
File: daras_ai_v2/language_model.py:1543-1547
Timestamp: 2025-08-13T10:52:15.904Z
Learning: Claude's OpenAI-compatible API ignores the `reasoning_effort` parameter, requiring separate handling from native OpenAI thinking models in the token allocation logic.
Applied to files:
daras_ai_v2/language_model.py
🧬 Code graph analysis (19)
daras_ai_v2/language_model_openai_realtime.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
recipes/RelatedQnA.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
recipes/GoogleGPT.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
livekit_agent.py (1)
ai_models/models.py (2)
AIModelSpec(28-113), ModelProvider(4-20)
recipes/SEOSummary.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
daras_ai_v2/language_model_openai_audio.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
routers/twilio_ws_api.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
recipes/VideoBots.py (2)
ai_models/models.py (1)
AIModelSpec(28-113)
daras_ai_v2/language_model.py (1)
run_language_model(124-256)
recipes/RelatedQnADoc.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
daras_ai_v2/language_model.py (1)
ai_models/models.py (2)
AIModelSpec(28-113), ModelProvider(4-20)
daras_ai_v2/language_model_settings_widgets.py (1)
ai_models/models.py (3)
AIModelSpec(28-113), ModelProvider(4-20), Categories(29-32)
recipes/DocSearch.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
recipes/SocialLookupEmail.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
ai_models/admin.py (1)
usage_costs/admin.py (1)
ModelPricingAdmin(62-80)
usage_costs/models.py (2)
ai_models/models.py (1)
ModelProvider(4-20)
daras_ai_v2/stable_diffusion.py (2)
Text2ImgModels(45-83), Img2ImgModels(100-138)
recipes/CompareLLM.py (2)
ai_models/models.py (1)
AIModelSpec(28-113)
daras_ai_v2/language_model.py (1)
run_language_model(124-256)
tests/test_token_counter.py (1)
ai_models/models.py (1)
ModelProvider(4-20)
recipes/BulkEval.py (2)
ai_models/models.py (1)
AIModelSpec(28-113)
daras_ai_v2/language_model.py (1)
run_language_model(124-256)
recipes/DocSummary.py (1)
ai_models/models.py (1)
AIModelSpec(28-113)
🪛 Ruff (0.14.10)
livekit_agent.py
337-337: Avoid specifying long messages outside the exception class
(TRY003)
recipes/VideoBots.py
372-374: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
372-374: Avoid specifying long messages outside the exception class
(TRY003)
ai_models/migrations/0004_aimodelspec_is_deprecated_and_more.py
9-11: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
13-117: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
daras_ai_v2/language_model.py
432-432: Avoid specifying long messages outside the exception class
(TRY003)
scripts/init_llm_models.py
9-9: Local variable agrillm_qwen3_30b is assigned to but never used
Remove assignment to unused variable agrillm_qwen3_30b
(F841)
21-21: Local variable apertus_70b_instruct is assigned to but never used
Remove assignment to unused variable apertus_70b_instruct
(F841)
33-33: Local variable sea_lion_v4_gemma_3_27b_it is assigned to but never used
Remove assignment to unused variable sea_lion_v4_gemma_3_27b_it
(F841)
46-46: Local variable gpt_5_2 is assigned to but never used
Remove assignment to unused variable gpt_5_2
(F841)
61-61: Local variable gpt_5_1 is assigned to but never used
Remove assignment to unused variable gpt_5_1
(F841)
76-76: Local variable gpt_5 is assigned to but never used
Remove assignment to unused variable gpt_5
(F841)
91-91: Local variable gpt_5_mini is assigned to but never used
Remove assignment to unused variable gpt_5_mini
(F841)
105-105: Local variable gpt_5_nano is assigned to but never used
Remove assignment to unused variable gpt_5_nano
(F841)
119-119: Local variable gpt_5_chat is assigned to but never used
Remove assignment to unused variable gpt_5_chat
(F841)
144-144: Local variable gpt_4_1_mini is assigned to but never used
Remove assignment to unused variable gpt_4_1_mini
(F841)
155-155: Local variable gpt_4_1_nano is assigned to but never used
Remove assignment to unused variable gpt_4_1_nano
(F841)
168-168: Local variable gpt_4_5 is assigned to but never used
Remove assignment to unused variable gpt_4_5
(F841)
182-182: Local variable o4_mini is assigned to but never used
Remove assignment to unused variable o4_mini
(F841)
227-227: Local variable o1 is assigned to but never used
Remove assignment to unused variable o1
(F841)
244-244: Local variable o1_preview is assigned to but never used
Remove assignment to unused variable o1_preview
(F841)
260-260: Local variable o1_mini is assigned to but never used
Remove assignment to unused variable o1_mini
(F841)
302-302: Local variable gpt_4_o_audio is assigned to but never used
Remove assignment to unused variable gpt_4_o_audio
(F841)
314-314: Local variable gpt_realtime is assigned to but never used
Remove assignment to unused variable gpt_realtime
(F841)
327-327: Local variable gpt_4_o_mini_audio is assigned to but never used
Remove assignment to unused variable gpt_4_o_mini_audio
(F841)
339-339: Local variable chatgpt_4_o is assigned to but never used
Remove assignment to unused variable chatgpt_4_o
(F841)
352-352: Local variable gpt_4_turbo_vision is assigned to but never used
Remove assignment to unused variable gpt_4_turbo_vision
(F841)
368-368: Local variable gpt_4_vision is assigned to but never used
Remove assignment to unused variable gpt_4_vision
(F841)
382-382: Local variable gpt_4_turbo is assigned to but never used
Remove assignment to unused variable gpt_4_turbo
(F841)
396-396: Local variable gpt_4 is assigned to but never used
Remove assignment to unused variable gpt_4
(F841)
407-407: Local variable gpt_4_32k is assigned to but never used
Remove assignment to unused variable gpt_4_32k
(F841)
420-420: Local variable gpt_3_5_turbo is assigned to but never used
Remove assignment to unused variable gpt_3_5_turbo
(F841)
431-431: Local variable gpt_3_5_turbo_16k is assigned to but never used
Remove assignment to unused variable gpt_3_5_turbo_16k
(F841)
442-442: Local variable gpt_3_5_turbo_instruct is assigned to but never used
Remove assignment to unused variable gpt_3_5_turbo_instruct
(F841)
466-466: Local variable deepseek_r1 is assigned to but never used
Remove assignment to unused variable deepseek_r1
(F841)
479-479: Local variable deepseek_v3p1 is assigned to but never used
Remove assignment to unused variable deepseek_v3p1
(F841)
504-504: Local variable llama4_scout_17b_16e is assigned to but never used
Remove assignment to unused variable llama4_scout_17b_16e
(F841)
515-515: Local variable llama3_3_70b is assigned to but never used
Remove assignment to unused variable llama3_3_70b
(F841)
527-527: Local variable llama3_2_90b_vision is assigned to but never used
Remove assignment to unused variable llama3_2_90b_vision
(F841)
540-540: Local variable llama3_2_11b_vision is assigned to but never used
Remove assignment to unused variable llama3_2_11b_vision
(F841)
554-554: Local variable llama3_2_3b is assigned to but never used
Remove assignment to unused variable llama3_2_3b
(F841)
566-566: Local variable llama3_2_1b is assigned to but never used
Remove assignment to unused variable llama3_2_1b
(F841)
579-579: Local variable llama3_1_405b is assigned to but never used
Remove assignment to unused variable llama3_1_405b
(F841)
591-591: Local variable llama3_1_70b is assigned to but never used
Remove assignment to unused variable llama3_1_70b
(F841)
603-603: Local variable llama3_1_8b is assigned to but never used
Remove assignment to unused variable llama3_1_8b
(F841)
616-616: Local variable llama3_70b is assigned to but never used
Remove assignment to unused variable llama3_70b
(F841)
627-627: Local variable llama3_8b is assigned to but never used
Remove assignment to unused variable llama3_8b
(F841)
639-639: Local variable pixtral_large is assigned to but never used
Remove assignment to unused variable pixtral_large
(F841)
650-650: Local variable mistral_large is assigned to but never used
Remove assignment to unused variable mistral_large
(F841)
670-670: Local variable mixtral_8x7b_instruct_0_1 is assigned to but never used
Remove assignment to unused variable mixtral_8x7b_instruct_0_1
(F841)
692-692: Local variable gemma_7b_it is assigned to but never used
Remove assignment to unused variable gemma_7b_it
(F841)
706-706: Local variable gemini_3_pro is assigned to but never used
Remove assignment to unused variable gemini_3_pro
(F841)
749-749: Local variable gemini_2_5_flash_lite is assigned to but never used
Remove assignment to unused variable gemini_2_5_flash_lite
(F841)
761-761: Local variable gemini_2_5_pro_preview is assigned to but never used
Remove assignment to unused variable gemini_2_5_pro_preview
(F841)
775-775: Local variable gemini_2_5_flash_preview is assigned to but never used
Remove assignment to unused variable gemini_2_5_flash_preview
(F841)
789-789: Local variable gemini_2_flash_lite is assigned to but never used
Remove assignment to unused variable gemini_2_flash_lite
(F841)
813-813: Local variable gemini_1_5_flash is assigned to but never used
Remove assignment to unused variable gemini_1_5_flash
(F841)
826-826: Local variable gemini_1_5_pro is assigned to but never used
Remove assignment to unused variable gemini_1_5_pro
(F841)
839-839: Local variable gemini_1_pro_vision is assigned to but never used
Remove assignment to unused variable gemini_1_pro_vision
(F841)
851-851: Local variable gemini_1_pro is assigned to but never used
Remove assignment to unused variable gemini_1_pro
(F841)
861-861: Local variable palm2_chat is assigned to but never used
Remove assignment to unused variable palm2_chat
(F841)
872-872: Local variable gemini_live is assigned to but never used
Remove assignment to unused variable gemini_live
(F841)
882-882: Local variable palm2_text is assigned to but never used
Remove assignment to unused variable palm2_text
(F841)
920-920: Local variable claude_4_sonnet is assigned to but never used
Remove assignment to unused variable claude_4_sonnet
(F841)
934-934: Local variable claude_4_opus is assigned to but never used
Remove assignment to unused variable claude_4_opus
(F841)
948-948: Local variable claude_3_7_sonnet is assigned to but never used
Remove assignment to unused variable claude_3_7_sonnet
(F841)
961-961: Local variable claude_3_5_sonnet is assigned to but never used
Remove assignment to unused variable claude_3_5_sonnet
(F841)
974-974: Local variable claude_3_opus is assigned to but never used
Remove assignment to unused variable claude_3_opus
(F841)
987-987: Local variable claude_3_sonnet is assigned to but never used
Remove assignment to unused variable claude_3_sonnet
(F841)
1000-1000: Local variable claude_3_haiku is assigned to but never used
Remove assignment to unused variable claude_3_haiku
(F841)
1014-1014: Local variable afrollama_v1 is assigned to but never used
Remove assignment to unused variable afrollama_v1
(F841)
1024-1024: Local variable llama3_8b_cpt_sea_lion_v2_1_instruct is assigned to but never used
Remove assignment to unused variable llama3_8b_cpt_sea_lion_v2_1_instruct
(F841)
1032-1032: Local variable sarvam_2b is assigned to but never used
Remove assignment to unused variable sarvam_2b
(F841)
1041-1041: Local variable sarvam_m is assigned to but never used
Remove assignment to unused variable sarvam_m
(F841)
1050-1050: Local variable llama_3_groq_70b_tool_use is assigned to but never used
Remove assignment to unused variable llama_3_groq_70b_tool_use
(F841)
1061-1061: Local variable llama_3_groq_8b_tool_use is assigned to but never used
Remove assignment to unused variable llama_3_groq_8b_tool_use
(F841)
1072-1072: Local variable llama2_70b_chat is assigned to but never used
Remove assignment to unused variable llama2_70b_chat
(F841)
1082-1082: Local variable sea_lion_7b_instruct is assigned to but never used
Remove assignment to unused variable sea_lion_7b_instruct
(F841)
1091-1091: Local variable llama3_8b_cpt_sea_lion_v2_instruct is assigned to but never used
Remove assignment to unused variable llama3_8b_cpt_sea_lion_v2_instruct
(F841)
1102-1102: Local variable text_davinci_003 is assigned to but never used
Remove assignment to unused variable text_davinci_003
(F841)
1111-1111: Local variable text_davinci_002 is assigned to but never used
Remove assignment to unused variable text_davinci_002
(F841)
1120-1120: Local variable code_davinci_002 is assigned to but never used
Remove assignment to unused variable code_davinci_002
(F841)
1129-1129: Local variable text_curie_001 is assigned to but never used
Remove assignment to unused variable text_curie_001
(F841)
1138-1138: Local variable text_babbage_001 is assigned to but never used
Remove assignment to unused variable text_babbage_001
(F841)
1147-1147: Local variable text_ada_001 is assigned to but never used
Remove assignment to unused variable text_ada_001
(F841)
ai_models/admin.py
34-34: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
tests/test_token_counter.py
11-11: Undefined name ModelProvider
(F821)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test (3.10.12, 1.8.3)
🔇 Additional comments (34)
ai_models/models.py (1)
56-90: LGTM on the new LLM capability fields.

The new fields for provider, deprecation handling, redirect logic, and LLM capabilities (context window, output tokens, vision/audio/thinking support, etc.) are well-structured with sensible defaults and verbose names for admin display.
daras_ai_v2/language_model_openai_realtime.py (1)
31-31: Type annotation update aligns with the refactor.

The change from `LargeLanguageModels | None` to `AIModelSpec | None` is consistent with the broader migration to database-backed model specs.

recipes/SmartGPT.py (1)
38-38: Type relaxation is consistent with the refactor.

Changing from `typing.Literal[...]` to `str | None` is appropriate for the migration to database-backed model specs. Runtime validation now depends on `AIModelSpec.objects.get()` lookups elsewhere in the codebase.

recipes/DocExtract.py (1)
109-109: Type change aligns with the AIModelSpec migration.

The relaxation from `typing.Literal[...]` to `str | None` is consistent with the broader refactor. The `selected_model` is passed to `run_language_model`, which handles the model lookup.

daras_ai_v2/language_model_openai_audio.py (1)
12-12: LGTM! Clean migration to AIModelSpec.

The import and type annotations are correctly updated. The `model.model_id` access on lines 117 and 243 aligns with the AIModelSpec model definition.

Also applies to: 26-38, 191-198
recipes/DocSummary.py (1)
216-223: LGTM! Correct usage of AIModelSpec fields.

The migration correctly uses `model.llm_context_window` for token calculations and `model.label` for user-facing messages.

usage_costs/models.py (2)
6-6: LGTM! ModelProvider import centralized.

The import is correctly updated to use the centralized `ModelProvider` from `ai_models.models`.
55-69: The attribute usage is correct.

`AnimationModels` and `IVRPlatformMedium` inherit from `TextChoices`, which provides a `.label` attribute. `Text2ImgModels`, `Img2ImgModels`, and `InpaintingModels` inherit from `Enum`, which only provides `.value`. The code correctly uses `.label` for TextChoices enums and `.value` for regular Enums.
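The distinction can be checked with a plain `enum.Enum` stand-in (the member below is hypothetical; Django's `TextChoices` additionally derives a `.label` from each member name, which plain enums lack):

```python
from enum import Enum


class Img2ImgModelsStub(Enum):
    # plain Enum stand-in: members expose only .name and .value
    sd_2 = "Stable Diffusion v2.1"


member = Img2ImgModelsStub.sd_2
assert member.value == "Stable Diffusion v2.1"
assert not hasattr(member, "label")  # .label exists only on TextChoices
```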
8-8: LGTM! Consistent migration to AIModelSpec.

The changes follow the same pattern as other recipe files, correctly using `AIModelSpec.objects.get(name=...)` for lookup and `model.label` for display. Same consideration applies regarding `None` handling for `selected_model`.

Also applies to: 55-55, 161-162, 179-179
recipes/DocSearch.py (1)
5-5: LGTM! Core changes correctly migrated.

The import, type annotation, and main lookup logic are correct.
Also applies to: 71-71, 153-153, 192-192
recipes/SEOSummary.py (1)
12-12: LGTM! Import and type annotation correctly updated.

Also applies to: 94-94
daras_ai_v2/language_model_settings_widgets.py (3)
78-78: LGTM!

The AIModelSpec-based model lookup is correctly implemented.
104-109: Potential issue when both `llm_max_output_tokens` and `llm_context_window` are 0.

If a model has both fields at their default value of 0, the `min()` calculation could return 0, causing issues with the slider that has `min_value=10`. The fallback to 4096 only applies when `llms` is empty.

Consider adding a safeguard or ensuring models in the database always have valid values for these fields.
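One possible safeguard, sketched with a hypothetical `Spec` stand-in for `AIModelSpec`: skip unset (zero) fields when computing the slider ceiling, and clamp the result to the slider's minimum so a misconfigured row can never produce an invalid widget range:

```python
from dataclasses import dataclass

DEFAULT_MAX_TOKENS = 4096  # fallback used when no usable specs exist
SLIDER_MIN = 10            # the slider's min_value from the review comment


@dataclass
class Spec:  # hypothetical stand-in for AIModelSpec
    llm_max_output_tokens: int = 0
    llm_context_window: int = 0


def slider_max(llms: list[Spec]) -> int:
    # ignore specs with unset (0) fields so min() can't return 0
    limits = [
        min(m.llm_max_output_tokens, m.llm_context_window)
        for m in llms
        if m.llm_max_output_tokens > 0 and m.llm_context_window > 0
    ]
    return max(min(limits, default=DEFAULT_MAX_TOKENS), SLIDER_MIN)


assert slider_max([]) == 4096            # no models -> fallback
assert slider_max([Spec(0, 0)]) == 4096  # all-zero specs are ignored
```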
142-154: LGTM!

The provider-based check correctly replaces the previous LLMApis-based approach.
recipes/GoogleGPT.py (2)
91-91: LGTM!

The type change from enum-based Literal to `str | None` aligns with the AIModelSpec-based architecture.
315-315: LGTM!

Correctly uses `model.label` for user-facing display instead of the internal model name.

ai_models/admin.py (2)
55-81: LGTM!

The new fieldsets are well-organized, grouping provider-related settings and LLM-specific capabilities separately for better admin UX.
34-34: LGTM!

The `redirect_to` autocomplete field is correctly added for the self-referential ForeignKey. The Ruff warning about `ClassVar` annotation is a false positive for Django admin classes where this pattern is standard.

recipes/BulkEval.py (2)
381-391: LGTM!

The function signature and implementation correctly pass the model identifier through to `run_language_model`.
318-323: LGTM!

The simplified model-agnostic pricing calculation is appropriate for this bulk evaluation workflow.
ai_models/migrations/0004_aimodelspec_is_deprecated_and_more.py (1)
102-106: Version field default discrepancy.

The migration uses `default=-1` for the `version` field, but the model definition in `ai_models/models.py` uses `default=1`. This inconsistency could be intentional (to mark existing records) or an oversight.

Please verify this is intentional. If existing records should have version 1, update the migration default to match the model.
recipes/CompareLLM.py (3)
94-110: LGTM!

The model iteration logic correctly uses `model.name` for API calls and `model.label` for display. The issue is in the options definition above.
139-141: Potential `DoesNotExist` error in `additional_notes`.

If `grouped_costs` contains `model_id` values instead of `name` values (due to the UI options issue), `AIModelSpec.objects.get(name=model_name)` will fail. This method should be consistent with the fix applied to line 65.

Ensure this method works correctly after fixing the options definition to use `name` instead of `model_id`.
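The invariant can be sketched with a hypothetical `Spec` stand-in: key the UI options by `name` (never `model_id`) so that every downstream `get(name=...)`-style lookup resolves:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Spec:  # hypothetical stand-in for AIModelSpec rows
    name: str       # internal unique key, used for lookups
    label: str      # human-readable, used for display only
    model_id: str   # provider-side identifier, used for API calls only


specs = [Spec("gpt_4_o", "GPT-4o", "gpt-4o-2024-08-06")]
by_name = {s.name: s for s in specs}

# key UI options by name so later lookups by name always succeed
options = {s.name: s.label for s in specs}
selected = next(iter(options))  # a user picks "gpt_4_o"
assert by_name[selected].model_id == "gpt-4o-2024-08-06"
```

Keeping one canonical key for selection state avoids the mismatch the comment describes, where display options were keyed by `model_id` but cost grouping looked rows up by `name`.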
158-169: `_render_outputs` depends on correct model keys.

This function uses `AIModelSpec.objects.get(name=key)` where `key` comes from `selected_models`. After fixing the options to use `name` instead of `model_id`, this will work correctly.

recipes/VideoBots.py (3)
541-558: LGTM!

The context window calculations correctly use `model.llm_context_window` from the AIModelSpec.
647-673: LGTM!

The `llm_loop` method correctly uses AIModelSpec fields: `model.label` for display, `model.name` for API calls, and `model.llm_is_audio_model` for capability checks.
886-888: LGTM!

The audio model check correctly filters by `name` and `llm_is_audio_model`.

livekit_agent.py (2)
215-253: LGTM!

The function correctly uses `llm_model.model_id` for API calls (the actual provider model identifier) and `llm_model.llm_supports_temperature` for capability checks.
336-337: LGTM!

The fallback error handling provides a clear message when an unsupported provider is encountered. The Ruff TRY003 hint about long exception messages is a style preference and not critical here.
daras_ai_v2/language_model.py (5)
25-25: LGTM!

Import correctly brings in `AIModelSpec` and `ModelProvider` from the new models module.
162-196: LGTM!

Capability flags (`llm_max_output_tokens`, `llm_is_chat_model`, `llm_is_audio_model`, `llm_is_vision_model`, `llm_supports_temperature`, `llm_supports_json`) are correctly accessed from the `AIModelSpec` instance.
270-317: LGTM!

The `_run_text_model` function correctly uses the `ModelProvider` enum for routing and maintains the expected behavior for each provider branch.
320-432: LGTM!

The `_run_chat_model` function correctly routes based on `model.provider` using `ModelProvider` enum cases and passes `model.model_id` to provider-specific implementations.
602-714: LGTM!

The `run_openai_chat` function correctly uses `AIModelSpec` properties for capability checks (`llm_is_thinking_model`, `llm_max_output_tokens`) and model identification (`model.name`, `model.label`, `model.version`, `model.model_id`).
```python
class AIModelSpecQuerySet(models.QuerySet):
    def get_queryset(self):
        return super().get_queryset().filter(is_deprecated=False)
```
`get_queryset()` on a QuerySet class has no effect.

`AIModelSpecQuerySet` extends `models.QuerySet`, but `get_queryset()` is a Manager method, not a QuerySet method. When you call `QuerySet.as_manager()`, it creates a manager that proxies to the queryset methods, but it doesn't call `get_queryset()` from the queryset class.

To automatically exclude deprecated models, override `get_queryset()` on a custom Manager instead:
🔎 Proposed fix

```diff
-class AIModelSpecQuerySet(models.QuerySet):
-    def get_queryset(self):
-        return super().get_queryset().filter(is_deprecated=False)
+class AIModelSpecManager(models.Manager):
+    def get_queryset(self):
+        return super().get_queryset().filter(is_deprecated=False)
```

Then update the model:

```diff
-    objects = AIModelSpecQuerySet.as_manager()
+    objects = AIModelSpecManager()
```

Alternatively, if you want chainable queryset methods, combine both:
```python
class AIModelSpecQuerySet(models.QuerySet):
    pass  # Add custom queryset methods here if needed


class AIModelSpecManager(models.Manager):
    def get_queryset(self):
        return AIModelSpecQuerySet(self.model, using=self._db).filter(
            is_deprecated=False
        )
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In ai_models/models.py around lines 23 to 25, the QuerySet subclass defines
get_queryset(), but get_queryset() is a Manager method so this has no effect;
replace this by creating a custom Manager that overrides get_queryset() to
return an instance of your custom QuerySet filtered by is_deprecated=False (or
use as_manager on the QuerySet and implement filtering in a Manager that
constructs AIModelSpecQuerySet(self.model, using=self._db).filter(...)); update
the model to use that custom Manager so deprecated models are excluded
automatically.
```python
return gui.selectbox(
    options=options,
    format_func=options.__getitem__,
    label=label,
    label_visibility=label_visibility,
    key=key,
    use_selectbox=True,
)
```
Missing label_visibility parameter in selectbox call.
The label_visibility parameter is defined in the function signature but not passed to gui.selectbox.
🔎 Proposed fix

```diff
     return gui.selectbox(
         options=options,
         format_func=options.__getitem__,
         label=label,
         key=key,
+        label_visibility=label_visibility,
     )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-return gui.selectbox(
-    options=options,
-    format_func=options.__getitem__,
-    label=label,
-    label_visibility=label_visibility,
-    key=key,
-    use_selectbox=True,
-)
+return gui.selectbox(
+    options=options,
+    format_func=options.__getitem__,
+    label=label,
+    key=key,
+    label_visibility=label_visibility,
+)
```
🤖 Prompt for AI Agents
In daras_ai_v2/language_model_settings_widgets.py around lines 64 to 69, the
function accepts a label_visibility parameter but does not forward it to
gui.selectbox; update the selectbox call to include
label_visibility=label_visibility so the visibility setting is applied (ensure
the parameter name matches the function signature).
```diff
-        not isinstance(text_inputs, str)
-        and model == LargeLanguageModels.sea_lion_7b_instruct.model_id
-    ):
+    if not isinstance(text_inputs, str) and model.name == "sea_lion_7b_instruct":
```
Bug: model is a str, but .name attribute is accessed.
The parameter model is typed as str (line 437), but this line attempts to access model.name. This will raise AttributeError: 'str' object has no attribute 'name' at runtime.
🔎 Proposed fix

```diff
-    if not isinstance(text_inputs, str) and model.name == "sea_lion_7b_instruct":
+    if not isinstance(text_inputs, str) and model == "sea_lion_7b_instruct":
```

🤖 Prompt for AI Agents
In daras_ai_v2/language_model.py around line 448, the code accesses model.name
but model is a str; update the condition to compare the string directly (e.g.,
use model == "sea_lion_7b_instruct") or, if model can sometimes be an object,
use a safe check like getattr(model, "name", None) == "sea_lion_7b_instruct";
replace the current expression with one of these safe options so it no longer
raises AttributeError.
```python
llm_model = await AIModelSpec.objects.aget(model_id=request.selected_model)
if llm_model.llm_is_audio_model:
```
Incorrect lookup: using model_id but selected_model contains name.
The async lookup uses model_id=request.selected_model, but selected_model contains the model's name (as used elsewhere in the codebase). This will cause DoesNotExist errors.
🔎 Proposed fix

```diff
-llm_model = await AIModelSpec.objects.aget(model_id=request.selected_model)
+llm_model = await AIModelSpec.objects.aget(name=request.selected_model)
```

📝 Committable suggestion
```diff
-llm_model = await AIModelSpec.objects.aget(model_id=request.selected_model)
-if llm_model.llm_is_audio_model:
+llm_model = await AIModelSpec.objects.aget(name=request.selected_model)
+if llm_model.llm_is_audio_model:
```
🤖 Prompt for AI Agents
In livekit_agent.py around lines 185-186, the code is incorrectly querying
AIModelSpec using model_id=request.selected_model even though
request.selected_model contains the model's name; change the lookup to await
AIModelSpec.objects.aget(name=request.selected_model) so it queries by the name
field instead of model_id, and optionally wrap the await in a try/except to
catch DoesNotExist and handle/log the error appropriately.
```diff
+options = dict(
+    AIModelSpec.objects.filter(llm_supports_json=True).values_list(
+        "model_id", "label"
+    )
+)
 gui.selectbox(
     "##### Language Model",
-    options=[e.name for e in LargeLanguageModels if e.supports_json],
+    options=options,
     key="selected_model",
-    format_func=lambda e: LargeLanguageModels[e].value,
+    format_func=options.__getitem__,
```
Inconsistency: using model_id in UI but name expected by run_language_model.
The selectbox uses model_id as the key (line 233), but run_language_model expects the model's name field (per AIModelSpec.objects.get(name=model) in language_model.py). This will cause DoesNotExist errors when the selected value is passed to run_language_model.
🔎 Proposed fix

```diff
 options = dict(
-    AIModelSpec.objects.filter(llm_supports_json=True).values_list(
-        "model_id", "label"
+    AIModelSpec.objects.filter(
+        category=AIModelSpec.Categories.llm, llm_supports_json=True
+    ).values_list(
+        "name", "label"
     )
 )
```

📝 Committable suggestion
```diff
 options = dict(
-    AIModelSpec.objects.filter(llm_supports_json=True).values_list(
-        "model_id", "label"
+    AIModelSpec.objects.filter(
+        category=AIModelSpec.Categories.llm, llm_supports_json=True
+    ).values_list(
+        "name", "label"
     )
 )
 gui.selectbox(
     "##### Language Model",
-    options=[e.name for e in LargeLanguageModels if e.supports_json],
     options=options,
     key="selected_model",
-    format_func=lambda e: LargeLanguageModels[e].value,
     format_func=options.__getitem__,
```
🤖 Prompt for AI Agents
In recipes/BulkEval.py around lines 232 to 241, the selectbox is populating
options with AIModelSpec.model_id but run_language_model expects the model's
name, so the selected value will not match AIModelSpec.objects.get(name=...),
causing DoesNotExist; update the options to use the model name as the option key
(e.g., use values_list("name", "label") or otherwise map model_id → name) so the
selectbox returns the model name that run_language_model expects, and keep the
display label unchanged.
```python
or not AIModelSpec.objects.get(
    model_id=request.selected_model
).llm_is_vision_model
```
Incorrect lookup: using model_id but selected_model contains name.
The lookup uses model_id=request.selected_model, but throughout the codebase (including line 370), selected_model contains the model's name, not model_id. This will cause DoesNotExist errors.
🔎 Proposed fix

```diff
-    or not AIModelSpec.objects.get(
-        model_id=request.selected_model
-    ).llm_is_vision_model
+    or not AIModelSpec.objects.get(
+        name=request.selected_model
+    ).llm_is_vision_model
```

🤖 Prompt for AI Agents
In recipes/VideoBots.py around lines 425 to 427, the code incorrectly does
AIModelSpec.objects.get(model_id=request.selected_model) even though
request.selected_model holds the model's name (see around line 370); change the
lookup to use the name field (e.g., get(name=request.selected_model)) and/or use
a safe lookup pattern (filter(...).first() or try/except for DoesNotExist) to
avoid raising DoesNotExist when the name isn't found.
```diff
     model = AIModelSpec.objects.get(
         model_id=gui.session_state.get("selected_model")
     ).label
 except AIModelSpec.DoesNotExist:
     model = "LLM"

 llm_cost = get_non_ivr_price_credits(self.current_sr)
-if model == LargeLanguageModels.agrillm_qwen3_30b.value:
+if model == "agrillm_qwen3_30b":
```
Multiple issues with model lookup and comparison.

- Line 1585 uses `model_id=` but `selected_model` contains `name`
- Line 1591 compares `model` (which is now the `label`) to `"agrillm_qwen3_30b"` (which is a `name`)
🔎 Proposed fix

```diff
 try:
-    model = AIModelSpec.objects.get(
-        model_id=gui.session_state.get("selected_model")
-    ).label
+    model_spec = AIModelSpec.objects.get(
+        name=gui.session_state.get("selected_model")
+    )
+    model = model_spec.label
 except AIModelSpec.DoesNotExist:
     model = "LLM"
+    model_spec = None

 llm_cost = get_non_ivr_price_credits(self.current_sr)
-if model == "agrillm_qwen3_30b":
+if model_spec and model_spec.name == "agrillm_qwen3_30b":
     llm_cost += 100
```

📝 Committable suggestion
```diff
-    model = AIModelSpec.objects.get(
-        model_id=gui.session_state.get("selected_model")
-    ).label
+    model_spec = AIModelSpec.objects.get(
+        name=gui.session_state.get("selected_model")
+    )
+    model = model_spec.label
 except AIModelSpec.DoesNotExist:
     model = "LLM"
+    model_spec = None

 llm_cost = get_non_ivr_price_credits(self.current_sr)
-if model == LargeLanguageModels.agrillm_qwen3_30b.value:
-if model == "agrillm_qwen3_30b":
+if model_spec and model_spec.name == "agrillm_qwen3_30b":
```
🤖 Prompt for AI Agents
In recipes/VideoBots.py around lines 1584 to 1591, the code looks up AIModelSpec
with model_id=gui.session_state.get("selected_model") but selected_model
contains a model name, and then compares the retrieved .label to the literal
name "agrillm_qwen3_30b". Change the lookup to use the correct field (e.g.,
AIModelSpec.objects.get(name=gui.session_state.get("selected_model"))) or else
keep the original model_id lookup and compare the same field type (compare .name
instead of .label); ensure the get call is guarded for None and catch
DoesNotExist as before so that the comparison checks the same identifier type
(name vs label) consistently.
```diff
 llm_model = AIModelSpec.objects.get(
     name=self.saved_run.state.get("selected_model")
 )
 if not llm_model.llm_is_audio_model:
     self.request_overrides = self.request_overrides or {}
-    self.request_overrides["selected_model"] = (
-        LargeLanguageModels.gpt_4_o_audio.name
-    )
+    self.request_overrides["selected_model"] = "gpt_4_o_audio"
```
Unhandled DoesNotExist and hardcoded model name.

Two concerns:

- `AIModelSpec.objects.get(name=...)` will raise `DoesNotExist` if `selected_model` is `None` or not found in the database. This occurs in `__init__`, which could cause the Twilio webhook to fail unexpectedly.
- The hardcoded fallback `"gpt_4_o_audio"` should be validated to ensure this model name exists in the database.
🔎 Proposed fix

```diff
 # force gpt-4o-audio for non-audio models
 if self.saved_run and self.saved_run.state:
-    llm_model = AIModelSpec.objects.get(
-        name=self.saved_run.state.get("selected_model")
-    )
-    if not llm_model.llm_is_audio_model:
-        self.request_overrides = self.request_overrides or {}
-        self.request_overrides["selected_model"] = "gpt_4_o_audio"
+    selected_model_name = self.saved_run.state.get("selected_model")
+    if selected_model_name:
+        try:
+            llm_model = AIModelSpec.objects.get(name=selected_model_name)
+            if not llm_model.llm_is_audio_model:
+                self.request_overrides = self.request_overrides or {}
+                self.request_overrides["selected_model"] = "gpt_4_o_audio"
+        except AIModelSpec.DoesNotExist:
+            pass  # or log warning and continue with default
```

🤖 Prompt for AI Agents
In routers/twilio_ws_api.py around lines 51 to 56, calling
AIModelSpec.objects.get(name=...) can raise DoesNotExist and the fallback
"gpt_4_o_audio" is hardcoded without validation; modify the code to handle
missing or invalid selected_model by catching DoesNotExist (or using
filter().first()) and avoid letting the exception escape __init__; when the
resolved model is not an audio model, look up the fallback model name from
configuration (or validate "gpt_4_o_audio" exists first) and only set
self.request_overrides["selected_model"] if that fallback lookup returns a valid
AIModelSpec; if no valid fallback is available, log a warning and do not set the
override (or set a safe default from settings).
```python
# https://platform.openai.com/docs/models/o3-mini
o3_mini = AIModelSpec.objects.create(
    category=AIModelSpec.Categories.llm,
    name="o3_mini",
    label="o3-mini • openai",
    model_id=("openai-o3-mini-prod-eastus2-1", "o3-mini-2025-01-31"),
    provider=ModelProvider.openai,
    llm_context_window=200_000,
    llm_max_output_tokens=100_000,
    llm_is_vision_model=False,
    llm_is_thinking_model=True,
    llm_supports_json=True,
    llm_supports_temperature=False,
)
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, find the AIModelSpec model definition
fd -t py -n "models.py" | head -20
```

Repository: GooeyAI/gooey-server
Length of output: 294

🏁 Script executed:

```shell
# Search for AIModelSpec class definition
rg -n "class AIModelSpec" --type py -A 30
```

Repository: GooeyAI/gooey-server
Length of output: 3429

🏁 Script executed:

```shell
# Check how model_id field is defined
rg -n "model_id.*=" --type py | grep -i "field\|models\." | head -20
```

Repository: GooeyAI/gooey-server
Length of output: 1719

🏁 Script executed:

```shell
# Check usage of model.model_id in the codebase
rg -n "\.model_id" --type py -B 2 -A 2 | head -50
```

Repository: GooeyAI/gooey-server
Length of output: 2515

🏁 Script executed:

```shell
# Search for all isinstance(model_id, tuple) patterns or tuple handling
rg -n "isinstance.*model_id.*tuple|tuple.*model_id" --type py -B 2 -A 2
```

Repository: GooeyAI/gooey-server
Length of output: 296

🏁 Script executed:

```shell
# Check if there are any JSON field alternatives or conversions
rg -n "json\.loads.*model_id|ast\.literal_eval.*model_id" --type py
```

Repository: GooeyAI/gooey-server
Length of output: 46

🏁 Script executed:

```shell
# Check all tuple assignments to model_id in the init script
rg -n "model_id=\(" scripts/init_llm_models.py -A 1 -B 1
```

Repository: GooeyAI/gooey-server
Length of output: 1810
Fix tuple handling for model_id: TextField stores tuples as strings, breaking isinstance checks.
AIModelSpec.model_id is a TextField (not JSON). When tuples like ("openai-o3-mini-prod-eastus2-1", "o3-mini-2025-01-31") are assigned, Django converts them to string representations: "('openai-o3-mini-prod-eastus2-1', 'o3-mini-2025-01-31')". The conditional in livekit_agent.py:290-292 that checks isinstance(model_id, tuple) will never be true, making the tuple extraction logic dead code. This affects at least 9 models (o3-mini, o1, o1-mini, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo, and others). Use a JSONField for model_id instead, or store tuples differently.
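The failure mode can be reproduced outside Django; the sketch below simulates the `TextField` round-trip with `str()` (which is effectively what gets persisted) and shows why a JSON round-trip preserves the structure — the model IDs are just the ones from the init script:

```python
import json

# A TextField persists str(value), so a tuple comes back as its repr string.
raw = ("openai-o3-mini-prod-eastus2-1", "o3-mini-2025-01-31")
stored = str(raw)  # what the database column effectively holds
print(isinstance(stored, tuple))  # → False: the isinstance(model_id, tuple) branch is dead

# A JSONField (or an explicit json.dumps/loads round-trip) keeps the structure.
stored_json = json.dumps(list(raw))
restored = json.loads(stored_json)
print(restored[1])  # → o3-mini-2025-01-31
```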
🤖 Prompt for AI Agents
In scripts/init_llm_models.py around lines 211 to 224, model_id is being
assigned a Python tuple which Django will coerce to a string for a TextField,
breaking isinstance checks later; change the persistence to a JSON-capable field
or stop storing tuples: update the AIModelSpec.model_id field from TextField to
a JSONField (or CharField with a defined delimiter) in the models module, create
and run a Django migration to alter the column type, add a one-time data
migration to convert existing stringified tuples into proper JSON arrays (or the
chosen delimiter format), and update scripts/init_llm_models.py to write
model_id as a JSON list (or delimited string) consistently so livekit_agent.py's
tuple/array detection can work correctly.
```diff
+# from daras_ai_v2.language_model import LargeLanguageModels


-def test_deprecated_model_redirects():
-    for model in LargeLanguageModels:
-        if not model.is_deprecated:
-            continue
-        # 1) There is a redirect_to on all deprecated models
-        assert model.redirect_to, f"{model.name} is deprecated but has no redirect_to"
+# def test_deprecated_model_redirects():
+#     for model in LargeLanguageModels:
+#         if not model.is_deprecated:
+#             continue
+#         # 1) There is a redirect_to on all deprecated models
+#         assert model.redirect_to, f"{model.name} is deprecated but has no redirect_to"

-        # 2) The redirect_to points to a valid model
-        assert model.redirect_to in LargeLanguageModels.__members__, (
-            f"{model.name} redirects to invalid model {model.redirect_to}"
-        )
+#         # 2) The redirect_to points to a valid model
+#         assert model.redirect_to in LargeLanguageModels.__members__, (
+#             f"{model.name} redirects to invalid model {model.redirect_to}"
+#         )

-        redirected_model = LargeLanguageModels[model.redirect_to]
-        # 3) The redirected model is not deprecated
-        assert not redirected_model.is_deprecated, (
-            f"{model.name} redirects to deprecated model {model.redirect_to}"
-        )
+#         redirected_model = LargeLanguageModels[model.redirect_to]
+#         # 3) The redirected model is not deprecated
+#         assert not redirected_model.is_deprecated, (
+#             f"{model.name} redirects to deprecated model {model.redirect_to}"
+#         )
```
🛠️ Refactor suggestion | 🟠 Major
Commented-out test should be migrated, not disabled.
Disabling test_deprecated_model_redirects creates a gap in test coverage for the deprecation/redirect logic. Since the feature still exists in AIModelSpec (with is_deprecated and redirect_to fields), this test should be migrated to query AIModelSpec instead of the old enum.
🔎 Proposed migration

```python
from ai_models.models import AIModelSpec


def test_deprecated_model_redirects():
    # Use all_objects or unfiltered queryset to include deprecated models
    for model in AIModelSpec.objects.all():
        if not model.is_deprecated:
            continue
        # 1) There is a redirect_to on all deprecated models
        assert model.redirect_to, f"{model.name} is deprecated but has no redirect_to"
        # 2) The redirect_to points to a valid, non-deprecated model
        assert not model.redirect_to.is_deprecated, (
            f"{model.name} redirects to deprecated model {model.redirect_to.name}"
        )
```

Note: You may need to bypass the custom manager that filters out deprecated models, or use a raw queryset.
🤖 Prompt for AI Agents
In tests/test_llm.py around lines 1-20, the legacy
test_deprecated_model_redirects is commented out and must be migrated to use the
new AIModelSpec model: replace the disabled enum-based test with a test that
iterates over AIModelSpec records (using an unfiltered queryset or bypassing any
manager that hides deprecated entries) and for each model where is_deprecated is
True assert that redirect_to is set and that redirect_to points to a valid
AIModelSpec that is not deprecated; ensure you query all records (or use the
model's base manager/raw queryset) so deprecated models are included and update
assertion messages to reference model.name and redirect_to.name for clarity.
Q/A checklist
How to check import time?
You can visualize this using tuna:
To measure import time for a specific library:
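A minimal in-process way to time a single library's cold import (the `time_import` helper below is illustrative, not a project utility; the CLI route is `python -X importtime -c "import somelib"`, whose stderr log tuna can visualize):

```python
import importlib
import sys
import time


def time_import(module_name: str) -> float:
    # Evict any cached copies so we measure a cold import, not a dict lookup.
    for name in list(sys.modules):
        if name == module_name or name.startswith(module_name + "."):
            del sys.modules[name]
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start


print(f"json: {time_import('json') * 1000:.2f} ms")
```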
To reduce import times, import libraries that take a long time inside the functions that use them instead of at the top of the file:
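The deferred-import pattern looks like this in a minimal form (`render_stats` and the `statistics` stand-in are illustrative; in practice the deferred module would be something slow like `pandas` or `plotly`):

```python
def render_stats(values):
    # Imported on first call rather than at module import time, so code that
    # merely imports this file never pays for the heavy dependency.
    import statistics  # stand-in for a slow third-party import

    return statistics.mean(values)


print(render_stats([1, 2, 3]))
```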
Legal Boilerplate
Look, I get it. The entity doing business as “Gooey.AI” and/or “Dara.network” was incorporated in the State of Delaware in 2020 as Dara Network Inc. and is gonna need some rights from me in order to utilize my contributions in this PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Dara Network Inc can use, modify, copy, and redistribute my contributions, under its choice of terms.