Chatbot ui suggests messages that the user can send #1652

HeljaeRaeisaenen wants to merge 50 commits into master from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a message-suggestion feature and task-scoped LLM specs: DB migrations/models for suggested messages and application task defaults, structured JSON-schema LLM output support, LLM request plumbing updates, server endpoints/controllers and mock routes, frontend UI and form changes for suggestions, seeds and tests.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client
    participant Server as Server
    participant DB as Database
    participant LLM as "Azure OpenAI"
    Client->>Server: GET /current_conversation_info
    activate Server
    Server->>DB: Fetch conversation messages & chatbot configuration
    Server->>DB: Check for existing suggested_messages
    alt suggestions enabled and none exist
        Server->>DB: Get last conversation_message
        alt first message
            Server->>Server: Select up to 3 random initial_suggested_messages
        else subsequent message
            Server->>LLM: make_blocking_llm_request(generate_suggested_messages request with JSON schema)
            activate LLM
            LLM-->>Server: Structured JSON { "suggestions": [...] }
            deactivate LLM
            Server->>Server: Parse suggestions
        end
        Server->>DB: Insert batch suggested messages
        Server->>DB: Re-fetch updated conversation info (with suggestions)
    end
    Server-->>Client: Return ChatbotConversationInfo (includes suggested_messages)
    deactivate Server
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
services/headless-lms/server/src/controllers/course_material/chatbot.rs
```rust
    }
} else {
    // for other messages, generate suggested messages
    headless_lms_chatbot::message_suggestion::generate_suggested_messages(
```
…r hard coded but instead in db
Actionable comments posted: 5
🧹 Nitpick comments (6)
services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx (1)
384-439: Suggestion chips UI implementation looks solid. Good use of `isDisabled` to prevent interaction during the pending state, themed colors from `baseTheme`, and the loading animation provides clear visual feedback. The `AriaButton` usage is appropriate for accessible interactive elements.

Minor nit: the hardcoded colors `#ffffff` and `#f2f2f2` in the gradient (lines 399-400) could potentially use theme values like `baseTheme.colors.clear[50]` and `baseTheme.colors.clear[100]` for consistency, but this is cosmetic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx` around lines 384 - 439, Replace the two hardcoded gradient colors inside the AriaButton rendering (the gradient in the suggested_messages map in ChatbotChatBody.tsx) with theme values for consistency: swap `#ffffff` for `baseTheme.colors.clear[50]` and `#f2f2f2` for `baseTheme.colors.clear[100]` inside the background linear-gradient string used on the AriaButton; ensure `baseTheme` is in scope where the `suggested_messages?.map` renders so the theme values resolve correctly.

services/headless-lms/models/src/application_task_default_language_models.rs (1)
73-119: Consider extracting a shared helper for the duplicated query logic.
`get_for_content_cleaning` and `get_for_message_suggestion` are identical except for the task filter string. While sqlx macros make generic parameterization of enum literals tricky, you could extract a private `get_for_task` function that takes `ApplicationTask` as a parameter and use `query_as!` with a bind parameter instead of a literal:

Sketch

```diff
+async fn get_for_task(conn: &mut PgConnection, task: ApplicationTask) -> ModelResult<TaskLMSpec> {
+    let res = sqlx::query_as!(
+        TaskLMSpec,
+        r#"
+SELECT
+  a.id,
+  a.task as "task: ApplicationTask",
+  a.context_utilization,
+  model.model,
+  model.thinking,
+  model.deployment_name,
+  model.context_size
+FROM application_task_default_language_models AS a
+JOIN chatbot_configurations_models AS model ON model.id = a.model_id
+WHERE a.task = $1
+AND a.deleted_at IS NULL
+AND model.deleted_at IS NULL
+        "#,
+        task as ApplicationTask,
+    )
+    .fetch_one(conn)
+    .await?;
+    Ok(res)
+}
+
 pub async fn get_for_content_cleaning(conn: &mut PgConnection) -> ModelResult<TaskLMSpec> {
-    // ... full query ...
+    get_for_task(conn, ApplicationTask::ContentCleaning).await
 }

 pub async fn get_for_message_suggestion(conn: &mut PgConnection) -> ModelResult<TaskLMSpec> {
-    // ... full query ...
+    get_for_task(conn, ApplicationTask::MessageSuggestion).await
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/models/src/application_task_default_language_models.rs` around lines 73 - 119, The two functions get_for_content_cleaning and get_for_message_suggestion duplicate the same SQL; extract a private helper like get_for_task(conn: &mut PgConnection, task: ApplicationTask) -> ModelResult<TaskLMSpec> that runs the shared query and uses a bind parameter for the task (e.g. WHERE a.task = $1) with sqlx::query_as! and passes the task value as the bind argument; update get_for_content_cleaning and get_for_message_suggestion to call get_for_task(ApplicationTask::ContentCleaning) and get_for_task(ApplicationTask::MessageSuggestion) respectively (ensure ApplicationTask is converted to the expected DB representation when binding).

services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql (1)
10-10: Consider adding a CHECK constraint for `context_utilization`.

`context_utilization` is a fraction of the LLM context size (per the column comment). Without a constraint, values outside the valid range (e.g., negative or > 1.0) can be inserted. A simple check would prevent misconfiguration.

Proposed addition

```diff
 context_utilization real NOT NULL,
+CHECK (context_utilization > 0 AND context_utilization <= 1),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql` at line 10, Add a CHECK constraint on the context_utilization column to enforce it is a valid fraction (e.g., between 0.0 and 1.0); update the CREATE TABLE statement in the migration (where context_utilization real NOT NULL is declared) to include a named constraint such as CONSTRAINT chk_tasks_context_utilization_range CHECK (context_utilization >= 0 AND context_utilization <= 1) or, if preferred, add an ALTER TABLE ... ADD CONSTRAINT statement in the same migration to enforce the range and prevent invalid inserts.

services/headless-lms/migrations/20260121125703_chatbot_conversation_suggested_messages.up.sql (1)
1-8: Consider adding an index on `conversation_message_id` for query performance.

The `conversation_message_id` column will be used for lookups (fetching suggestions for the current message), but there's no index on it. Additionally, as noted in the controller review, there's no protection against duplicate suggestion sets for the same message. A unique or indexed constraint would help on both fronts.

Proposed addition after the trigger

```diff
 CREATE TRIGGER set_timestamp BEFORE UPDATE ON chatbot_conversation_suggested_messages FOR EACH ROW EXECUTE PROCEDURE trigger_set_timestamp();
+
+CREATE INDEX idx_chatbot_conv_suggested_msgs_conv_msg_id
+  ON chatbot_conversation_suggested_messages(conversation_message_id)
+  WHERE deleted_at IS NULL;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/migrations/20260121125703_chatbot_conversation_suggested_messages.up.sql` around lines 1 - 8, Add an index (and optionally a uniqueness constraint if you want one suggestion set per conversation message) on the conversation_message_id column of the chatbot_conversation_suggested_messages table to improve lookup performance and prevent duplicates; modify the migration that creates chatbot_conversation_suggested_messages to create either a non-unique index on conversation_message_id (e.g., CREATE INDEX ON chatbot_conversation_suggested_messages(conversation_message_id)) or a unique index/constraint (e.g., ALTER TABLE chatbot_conversation_suggested_messages ADD CONSTRAINT unique_conversation_message_suggestion UNIQUE(conversation_message_id)) depending on whether multiple suggestions per message are allowed.

services/headless-lms/chatbot/src/message_suggestion.rs (1)
109-109: Prefer `is_empty()` over `.len() == 0`.

`conversation.trim().len() == 0` is more idiomatically written as `conversation.trim().is_empty()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` at line 109, Replace the length check on the trimmed conversation with the idiomatic empty check: locate the conditional using conversation.trim().len() == 0 (in message_suggestion.rs) and change it to use conversation.trim().is_empty() so the code uses the standard is_empty() method on the trimmed string.services/headless-lms/server/src/controllers/course_material/chatbot.rs (1)
services/headless-lms/server/src/controllers/course_material/chatbot.rs (1)

200-208: LLM call on a GET request path adds unpredictable latency.

`generate_suggested_messages` performs a blocking LLM HTTP call during the `current_conversation_info` GET handler. If the LLM is slow or unavailable, this will block the user from seeing their conversation. Consider generating suggestions asynchronously (e.g., fire-and-forget with `tokio::spawn` after returning the response, or generate them at the end of `send_message`) so the GET path remains fast. At minimum, a timeout on this call would prevent indefinite hangs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/server/src/controllers/course_material/chatbot.rs` around lines 200 - 208, The GET handler current_conversation_info should not perform a blocking LLM HTTP call via headless_lms_chatbot::message_suggestion::generate_suggested_messages; instead either (A) move invocation out of the GET path and trigger suggestion generation from the send_message flow (e.g., call generate_suggested_messages at the end of send_message) or (B) make the GET handler fire-and-forget by spawning a background task with tokio::spawn that calls generate_suggested_messages after the response is returned; if you must call it inline keep a hard deadline using tokio::time::timeout around generate_suggested_messages and handle timeout errors gracefully. Ensure you reference and update the current_conversation_info handler and any send_message function to reflect the chosen approach and handle errors/logging from the spawned task.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 70-75: The prompt construction in message_suggestion.rs currently
uses Debug formatting "{:?}" for course_desc which adds quotes and escapes;
change this to Display "{}" to avoid quotes, and simplify the dense
concatenation by building the prompt in clearer steps (e.g., start with
SYSTEM_PROMPT.to_owned(), then push_str or append format! segments for
course_name, up to 5 initial_suggested_messages if Some, and the course
description using "{}" for course_desc) — update the code around the variables
SYSTEM_PROMPT, course_name, initial_suggested_messages, and course_desc in the
function that constructs prompt to implement these changes for readability and
correct formatting.
In
`@services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql`:
- Line 17: The table comment for application_task_default_language_models is
empty; replace the empty string in the COMMENT ON TABLE
application_task_default_language_models statement with a concise, non-empty
description that explains the table's purpose (for example: it maps application
tasks to their default language models or stores default LLM model assignments
per task), so the migration includes a meaningful table comment per guidelines.
In `@services/headless-lms/server/src/controllers/course_material/chatbot.rs`:
- Around line 170-227: The GET handler branch that checks
chatbot_configuration.suggest_next_messages and res.suggested_messages then
calls headless_lms_models::chatbot_conversation_suggested_messages::insert_batch
can produce duplicate rows under concurrent requests; fix by adding a DB-level
uniqueness constraint on conversation_message_id (and deleted_at NULLS NOT
DISTINCT) so insert_batch becomes conflict-safe, or change the logic to
serialize/win the race (move suggestion creation into POST handlers like
send_message/new_conversation or wrap the read+insert in a transaction with
SELECT ... FOR UPDATE on the conversation_message row) and ensure insert_batch
uses an upsert/ON CONFLICT DO NOTHING path to avoid duplicates.
In `@services/headless-lms/server/src/programs/chatbot_syncer.rs`:
- Around line 348-352: The syncer currently hard-depends on a row in
application_task_default_language_models for task='content-cleaning' (queried
via
headless_lms_models::application_task_default_language_models::get_for_content_cleaning
which uses fetch_one), which will cause RowNotFound and abort the batch if that
row isn't present; fix by either seeding that row in the migration
20260216103451_default_llm_models_tasks.up.sql (add an INSERT for
task='content-cleaning') or modify the code path: change
get_for_content_cleaning to use fetch_optional and update chatbot_syncer.rs to
handle a missing row gracefully (skip content-cleaning steps or use a safe
default) so the whole sync batch doesn't fail.
In
`@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx`:
- Around line 186-191: The assignment to initial_suggested_messages may yield
undefined when data.suggested_messages is missing, but NewChatbotConf expects
Array<string> | null; modify the initializer in ChatbotConfigurationForm so
initial_suggested_messages is explicitly null when suggested_messages is absent
(e.g., replace the current data.suggested_messages?.map(...) expression with a
construct that returns the mapped array or null). Ensure you update the
expression near initial_suggested_messages in the form state initialization so
the value sent to the backend is null instead of undefined.
---
Duplicate comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 83-108: The code uses DB field el.order_number as a slice index
(order_n) which can be out-of-range or not correspond to the vector index;
change the rfold over conversation_messages to use
.iter().enumerate().rfold(...) (or reverse enumerate) so you capture the actual
array index (e.g., idx) into the accumulator instead of el.order_number, update
the accumulator binds (used_tokens, idx) and set conversation =
&conversation_messages[idx..] (with a bounds check clamp to
0..conversation_messages.len() if necessary) so slicing uses the real vector
index; keep create_msg_string and the token budget logic the same but replace
uses of order_n with the collected array index.
In
`@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx`:
- Around line 272-278: The Add button currently always calls append({ message:
"" }) allowing unlimited suggested messages; update ChatbotConfigurationForm to
enforce the 0–10 limit by checking the current array length (e.g.,
values.suggestedMessages or the field array state used with append) before
calling append and/or disable the Button when length >= 10; ensure the onClick
handler and the Button rendering (where append is invoked) prevent adding more
than 10 entries so UX matches the documented limit.
In
`@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx`:
- Around line 157-170: The suggested message flow fails to re-trigger when the
same suggestion is clicked because suggestedNewMessage doesn't change after a
failed mutation; update the newMessageMutation's onError handler (where
newMessageMutation is defined) to call setSuggestedNewMessage("") (or otherwise
clear/reset suggestedNewMessage) so subsequent clicks set it again and the
useEffect/useEffectEvent sendSuggestedNewMessage will run; ensure you reference
suggestedNewMessage, sendSuggestedNewMessage (useEffectEvent) and
newMessageMutation when making this change.
In `@system-tests/src/tests/chatbot-conversation.spec.ts`:
- Around line 300-306: After clicking the "Set as the default chatbot" button in
the test step, add a call to waitForSuccessNotification to wait for the mutation
to complete and UI confirmation before proceeding; import
waitForSuccessNotification from the test utils (e.g., '@/utils/notifications')
at the top of the spec and call await waitForSuccessNotification() immediately
after page.getByRole(...).first().click() in the teacher changes the default
chatbot step so the test does not race ahead before the backend persists the
change.
---
Nitpick comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Line 109: Replace the length check on the trimmed conversation with the
idiomatic empty check: locate the conditional using conversation.trim().len() ==
0 (in message_suggestion.rs) and change it to use conversation.trim().is_empty()
so the code uses the standard is_empty() method on the trimmed string.
In
`@services/headless-lms/migrations/20260121125703_chatbot_conversation_suggested_messages.up.sql`:
- Around line 1-8: Add an index (and optionally a uniqueness constraint if you
want one suggestion set per conversation message) on the conversation_message_id
column of the chatbot_conversation_suggested_messages table to improve lookup
performance and prevent duplicates; modify the migration that creates
chatbot_conversation_suggested_messages to create either a non-unique index on
conversation_message_id (e.g., CREATE INDEX ON
chatbot_conversation_suggested_messages(conversation_message_id)) or a unique
index/constraint (e.g., ALTER TABLE chatbot_conversation_suggested_messages ADD
CONSTRAINT unique_conversation_message_suggestion
UNIQUE(conversation_message_id)) depending on whether multiple suggestions per
message are allowed.
In
`@services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql`:
- Line 10: Add a CHECK constraint on the context_utilization column to enforce
it is a valid fraction (e.g., between 0.0 and 1.0); update the CREATE TABLE
statement in the migration (where context_utilization real NOT NULL is declared)
to include a named constraint such as CONSTRAINT
chk_tasks_context_utilization_range CHECK (context_utilization >= 0 AND
context_utilization <= 1) or, if preferred, add an ALTER TABLE ... ADD
CONSTRAINT statement in the same migration to enforce the range and prevent
invalid inserts.
In
`@services/headless-lms/models/src/application_task_default_language_models.rs`:
- Around line 73-119: The two functions get_for_content_cleaning and
get_for_message_suggestion duplicate the same SQL; extract a private helper like
get_for_task(conn: &mut PgConnection, task: ApplicationTask) ->
ModelResult<TaskLMSpec> that runs the shared query and uses a bind parameter for
the task (e.g. WHERE a.task = $1) with sqlx::query_as! and passes the task value
as the bind argument; update get_for_content_cleaning and
get_for_message_suggestion to call
get_for_task(ApplicationTask::ContentCleaning) and
get_for_task(ApplicationTask::MessageSuggestion) respectively (ensure
ApplicationTask is converted to the expected DB representation when binding).
In `@services/headless-lms/server/src/controllers/course_material/chatbot.rs`:
- Around line 200-208: The GET handler current_conversation_info should not
perform a blocking LLM HTTP call via
headless_lms_chatbot::message_suggestion::generate_suggested_messages; instead
either (A) move invocation out of the GET path and trigger suggestion generation
from the send_message flow (e.g., call generate_suggested_messages at the end of
send_message) or (B) make the GET handler fire-and-forget by spawning a
background task with tokio::spawn that calls generate_suggested_messages after
the response is returned; if you must call it inline keep a hard deadline using
tokio::time::timeout around generate_suggested_messages and handle timeout
errors gracefully. Ensure you reference and update the current_conversation_info
handler and any send_message function to reflect the chosen approach and handle
errors/logging from the spawned task.
In
`@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx`:
- Around line 384-439: Replace the two hardcoded gradient colors inside the
AriaButton rendering (the gradient in the suggested_messages map in
ChatbotChatBody.tsx) with theme values for consistency: swap `#ffffff` for
`baseTheme.colors.clear[50]` and `#f2f2f2` for `baseTheme.colors.clear[100]`
inside the background linear-gradient string used on the AriaButton; ensure
`baseTheme` is in scope where the `suggested_messages?.map` renders so the theme
values resolve correctly.
services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql
Outdated
Show resolved
Hide resolved
```rust
let task_lm =
    headless_lms_models::application_task_default_language_models::get_for_content_cleaning(
        conn,
    )
    .await?;
```
🧩 Analysis chain
🏁 Script executed:

```shell
rg -n "content.cleaning\|content-cleaning\|content_cleaning" --type sql
```

Repository: rage/secret-project-331
Length of output: 49

🏁 Script executed:

```shell
rg -n "content.cleaning\|content-cleaning\|content_cleaning" -g '*.rs' | grep -E "(migration|seed)" | head -20
```

Repository: rage/secret-project-331
Length of output: 49

🏁 Script executed:

```shell
# Check the actual function implementation
fd -e rs | xargs rg -l "get_for_content_cleaning" | head -5
```

Repository: rage/secret-project-331
Length of output: 203

🏁 Script executed:

```shell
# Search for content-cleaning references in the entire codebase
rg "content.cleaning\|content-cleaning\|content_cleaning" --type rs --type sql | head -50
```

Repository: rage/secret-project-331
Length of output: 93

🏁 Script executed:

```shell
# Find and read the get_for_content_cleaning implementation
fd -e rs -path "*application_task_default_language_models*" | head -1 | xargs cat
```

Repository: rage/secret-project-331
Length of output: 235
🏁 Script executed:

```shell
# Look for migrations directory
fd -type d -name migrations | head -5
```

Repository: rage/secret-project-331
Length of output: 237

🏁 Script executed:

```shell
# Check for seed scripts or data initialization
fd -type f -name "*seed*" | head -10
```

Repository: rage/secret-project-331
Length of output: 237

🏁 Script executed:

```shell
# Get more context from chatbot_syncer.rs around the usage
sed -n '340,380p' services/headless-lms/server/src/programs/chatbot_syncer.rs
```

Repository: rage/secret-project-331
Length of output: 1588

🏁 Script executed:

```shell
find . -name "application_task_default_language_models.rs" -type f
```

Repository: rage/secret-project-331
Length of output: 145

🏁 Script executed:

```shell
find . -path "*/migrations*" -type d
```

Repository: rage/secret-project-331
Length of output: 101

🏁 Script executed:

```shell
cat ./services/headless-lms/models/src/application_task_default_language_models.rs
```

Repository: rage/secret-project-331
Length of output: 3004
🏁 Script executed:

```shell
find ./services/headless-lms/migrations -type f -name "*.sql" | sort | tail -20
```

Repository: rage/secret-project-331
Length of output: 1970

🏁 Script executed:

```shell
# Search for content-cleaning or task_name references in migrations
grep -r "content.cleaning\|content-cleaning\|content_cleaning" ./services/headless-lms/migrations/ 2>/dev/null | head -20
```

Repository: rage/secret-project-331
Length of output: 229

🏁 Script executed:

```shell
cat ./services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql
```

Repository: rage/secret-project-331
Length of output: 2171

🏁 Script executed:

```shell
cat ./services/headless-lms/migrations/20260216103451_default_llm_models_tasks.down.sql
```

Repository: rage/secret-project-331
Length of output: 216

🏁 Script executed:

```shell
# Search for any seed scripts or initialization code that might insert content-cleaning task
find . -name "*.sql" -o -name "*seed*" | xargs grep -l "application_task_default_language_models" 2>/dev/null | head -10
```

Repository: rage/secret-project-331
Length of output: 232

🏁 Script executed:

```shell
# Check if there are any insert statements for this table in migrations
grep -r "INSERT INTO application_task_default_language_models" ./services/headless-lms/migrations/
```

Repository: rage/secret-project-331
Length of output: 49

🏁 Script executed:

```shell
# Search for seed or setup Rust code that might initialize task LMs
find . -name "*.rs" | xargs grep -l "application_task_default_language_models::insert\|ContentCleaning" 2>/dev/null | head -10
```

Repository: rage/secret-project-331
Length of output: 145
🏁 Script executed:

```shell
# Check if there are seed scripts in the repo
find . -path "*/seed*" -o -name "*seed*" -o -name "*fixture*" | head -20
```

Repository: rage/secret-project-331
Length of output: 1345

🏁 Script executed:

```shell
# Look for any initialization code or setup docs
find . -name "README*" -o -name "SETUP*" -o -name "*.md" | xargs grep -l "seed\|initialize\|setup" 2>/dev/null | head -5
```

Repository: rage/secret-project-331
Length of output: 161

🏁 Script executed:

```shell
# Check the migrations directory structure more broadly
ls -la ./services/headless-lms/migrations/ | head -30
```

Repository: rage/secret-project-331
Length of output: 2918

🏁 Script executed:

```shell
# Search for any references to ContentCleaning enum being used to insert data
grep -r "ContentCleaning\|content-cleaning" ./services/headless-lms --include="*.rs" | grep -E "(insert|new|create)" | head -20
```

Repository: rage/secret-project-331
Length of output: 49

🏁 Script executed:

```shell
# Check the seed program for any setup of application_task_default_language_models
find ./services/headless-lms/server/src/programs/seed -name "*.rs" | xargs grep -n "application_task\|ContentCleaning\|content.cleaning" 2>/dev/null
```

Repository: rage/secret-project-331
Length of output: 51

🏁 Script executed:

```shell
# Check the main seed file
cat ./services/headless-lms/server/src/programs/seed/mod.rs | head -100
```

Repository: rage/secret-project-331
Length of output: 3456
🏁 Script executed:

```shell
# Look for any initialization in the main server code
grep -r "application_task_default_language_models" ./services/headless-lms/server/src --include="*.rs" -B 3 -A 3 | head -50
```

Repository: rage/secret-project-331
Length of output: 1571

🏁 Script executed:

```shell
# Check if there are any migrations after 20260216103451 that might insert data
ls ./services/headless-lms/migrations/2026021* -la
```

Repository: rage/secret-project-331
Length of output: 330

🏁 Script executed:

```shell
# Check for any SQL that inserts into application_task_default_language_models after the creation
grep -n "INSERT INTO application_task_default_language_models" ./services/headless-lms/migrations/*.sql
```

Repository: rage/secret-project-331
Length of output: 49

🏁 Script executed:

```shell
# Double check - look at all migrations in chronological order after the table creation
ls -1 ./services/headless-lms/migrations/ | grep "^202602" | sort
```

Repository: rage/secret-project-331
Length of output: 362
Hard dependency on application_task_default_language_models row for task = 'content-cleaning'.
The migration 20260216103451_default_llm_models_tasks.up.sql creates the table but contains no INSERT statement to seed the required row. The `get_for_content_cleaning()` function uses `fetch_one()`, which returns `RowNotFound` if no matching row exists, causing the entire sync batch to fail. Either insert a seed row in the migration, add initialization to the seed program, or provide migration documentation explaining that a row must be manually inserted into application_task_default_language_models with task = 'content-cleaning' before running the syncer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/headless-lms/server/src/programs/chatbot_syncer.rs` around lines 348
- 352, The syncer currently hard-depends on a row in
application_task_default_language_models for task='content-cleaning' (queried
via
headless_lms_models::application_task_default_language_models::get_for_content_cleaning
which uses fetch_one), which will cause RowNotFound and abort the batch if that
row isn't present; fix by either seeding that row in the migration
20260216103451_default_llm_models_tasks.up.sql (add an INSERT for
task='content-cleaning') or modify the code path: change
get_for_content_cleaning to use fetch_optional and update chatbot_syncer.rs to
handle a missing row gracefully (skip content-cleaning steps or use a safe
default) so the whole sync batch doesn't fail.
services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx
Actionable comments posted: 4
🧹 Nitpick comments (1)
services/headless-lms/chatbot/src/message_suggestion.rs (1)
150-151: Remove comments that restate what the code already says.

Lines 150–151, 156, 166, and 186–187 each narrate the immediately following code without adding information that isn't already clear from the code itself. Per coding guidelines, comments should communicate intent, constraints, or non-obvious rationale, not echo the control flow.
♻️ Proposed removals
```diff
 } else {
-    // if it's not test mode, the default, actual endpoint is used
     None
 };

-// parse chat completion
 let completion_content: &String = &completion

-// parse structured output
 let suggestions: ChatbotNextMessageSuggestionResponse =

 let conv_len = conversation_messages.len();
-// calculate how many messages to include in the conversation context
 let order_n = conversation_messages
```

As per coding guidelines: "Do not add comments that simply restate what the code already says, narrate control flow, or explain obvious syntax."
Also applies to: 156-156, 166-166, 186-187
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` around lines 150 - 151, Remove the redundant explanatory comments that merely restate obvious control flow around the endpoint selection and return branches (e.g., the comment immediately above the match arm that returns None and the other narrating comments noted in the review); delete those comments so the code (the match/return branches that yield None/Some) stands alone and only keep comments that convey intent, constraints, or non-obvious rationale within the same function in message_suggestion.rs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Line 193: Remove the temporary debug println! statements that print the emoji
markers and token info (the lines printing "🐈🐈🐈🐈🐈{:?}" for used_tokens and
the "🐈🐈🐈" print) in message_suggestion.rs; locate the prints near where
used_tokens is computed (and any other println! calls emitting those emojis) and
either delete them or replace them with a proper debug log (e.g.,
tracing::debug! or the project's logger) so they no longer write to stdout on
each LLM suggestion call.
In
`@services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql`:
- Line 27: The COMMENT for column
application_task_default_language_models.context_utilization contains a backtick
in "model`s" instead of an escaped apostrophe; update the single-quoted string
to use a proper apostrophe (escape it as '' inside the SQL literal) so the
stored comment reads "model's performance" (i.e., replace the backtick with an
escaped single quote in the COMMENT ON COLUMN statement).
In
`@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx`:
- Around line 244-257: The CSS blocks inside ChatbotConfigurationForm that wrap
the list and each item are missing display: flex, so the outer container around
fields.map and the inner per-item container (the div with key={item.id}) should
include display: flex; add display: flex to both css template literals (the
outer container using flex-flow: column nowrap; gap: 0 30px; and the inner
per-item container using flex-flow: row nowrap;) so the gap takes effect and the
TextField and Remove button render side-by-side.
- Line 261: The prop passed to TextField is using
errors.suggested_messages?.[idx]?.message which yields a FieldError object;
change it to pass the string by dereferencing the FieldError.message property
(i.e., use errors.suggested_messages?.[idx]?.message?.message) in the
ChatbotConfigurationForm where TextField's error is set so the component
receives a string | undefined like other fields (e.g.,
errors.chatbot_name?.message).
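The type mismatch described above can be illustrated with a minimal sketch; the `FieldError` interface and helper names below are trimmed, hypothetical stand-ins for react-hook-form's real types, not the project's code:

```typescript
// Trimmed stand-in for react-hook-form's FieldError; the real type has more fields.
interface FieldError {
  type: string
  message?: string
}

type SuggestedMessageErrors = { message?: FieldError }[] | undefined

// What the review flags: this yields the FieldError object itself.
function errorPropObject(errors: SuggestedMessageErrors, idx: number): FieldError | undefined {
  return errors?.[idx]?.message
}

// The fix: one more `.message` dereference yields the display string.
function errorPropString(errors: SuggestedMessageErrors, idx: number): string | undefined {
  return errors?.[idx]?.message?.message
}

const errors: SuggestedMessageErrors = [{ message: { type: "required", message: "Required" } }]

console.log(typeof errorPropObject(errors, 0)) // "object"
console.log(errorPropString(errors, 0)) // "Required"
```

Passing the object form into a prop typed `string | undefined` is what produces `[object Object]` in the rendered output.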
---
Nitpick comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 150-151: Remove the redundant explanatory comments that merely
restate obvious control flow around the endpoint selection and return branches
(e.g., the comment immediately above the match arm that returns None and the
other narrating comments noted in the review); delete those comments so the code
(the match/return branches that yield None/Some) stands alone and only keep
comments that convey intent, constraints, or non-obvious rationale within the
same function in message_suggestion.rs.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
services/headless-lms/migrations/20260219120000_add_locked_reviewing_stage.down.sql (1)
1-3: ⚠️ Potential issue | 🔴 Critical

Missing data migration for `user_exercise_states_copy` will cause the ALTER to fail.

Lines 23–25 (new) cast `user_exercise_states_copy.reviewing_stage` to the new enum via `::text::reviewing_stage`. However, the new enum definition (lines 8–15) omits `'locked'`, and the data-migration UPDATE on lines 1–3 only covers `user_exercise_states`. If any row in `user_exercise_states_copy` has `reviewing_stage = 'locked'`, PostgreSQL will raise `invalid input value for enum reviewing_stage: "locked"` and abort the migration.

🐛 Proposed fix

```diff
 UPDATE user_exercise_states
 SET reviewing_stage = 'reviewed_and_locked'
 WHERE reviewing_stage = 'locked';
+
+UPDATE user_exercise_states_copy
+SET reviewing_stage = 'reviewed_and_locked'
+WHERE reviewing_stage = 'locked';
```

Also applies to: 23-27
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/migrations/20260219120000_add_locked_reviewing_stage.down.sql` around lines 1 - 3, The migration fails because only user_exercise_states is updated from 'locked' to 'reviewed_and_locked' while user_exercise_states_copy still may contain 'locked' values that will be cast into the new reviewing_stage enum (via ::text::reviewing_stage) and cause an invalid input error; update the data-migration to perform the same UPDATE against user_exercise_states_copy (or otherwise ensure no 'locked' values remain before casting) so that all rows in user_exercise_states_copy have valid reviewing_stage values prior to the enum cast.
♻️ Duplicate comments (3)
services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx (1)
258-268: ⚠️ Potential issue | 🟡 Minor

`error` prop receives a `FieldError` object instead of a string.

Line 263: `errors.suggested_messages?.[idx]?.message` resolves to a `FieldError | undefined` (an object with `{ type, message?, … }`), not the `string | undefined` that `TextField.error` expects. This will render as `[object Object]` or be silently ignored. Needs one more `.message` dereference.

🐛 Proposed fix

```diff
- error={errors.suggested_messages?.[idx]?.message}
+ error={errors.suggested_messages?.[idx]?.message?.message}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx` around lines 258 - 268, The TextField's error prop is being passed a FieldError object from errors.suggested_messages?.[idx]?.message; update the prop to pass the string message instead (e.g. use errors.suggested_messages?.[idx]?.message?.message or otherwise extract the .message string and/or cast to string | undefined) so TextField receives string | undefined; apply this change where TextField is rendered in ChatbotConfigurationForm and verify register(`suggested_messages.${idx}.message`) remains unchanged.

shared-module/packages/common/src/bindings.guard.ts (1)
655-662: Duplicate: same required-field compatibility concern.

Covered in the previous comment — ensure request payloads (including older clients or tests) always include `suggest_next_messages` and `initial_suggested_messages`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@shared-module/packages/common/src/bindings.guard.ts` around lines 655 - 662, The validator currently allows older payloads to omit suggest_next_messages and initial_suggested_messages; update the runtime handling to enforce those fields exist by normalizing missing values before validation: in the code that processes typedObj (the object with properties like "chatbotconf_id", "suggest_next_messages", "initial_suggested_messages"), ensure you set typedObj["suggest_next_messages"] = false if undefined and typedObj["initial_suggested_messages"] = [] if undefined (or otherwise supply the appropriate defaults) so that later checks against typedObj["suggest_next_messages"] and typedObj["initial_suggested_messages"].every(...) always operate on defined fields.

services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx (1)
157-170: Re-send concern from prior review is now addressed.

`setSuggestedNewMessage("")` is called synchronously on Line 167 before the mutation completes, so after any send attempt `suggestedNewMessage` resets to `""`. Combined with the `isDisabled` guard on the chip (which prevents clicks while `newMessageMutation.isPending`), clicking the same chip again after success or failure will always toggle `suggestedNewMessage` from `""` → chip text, reliably re-triggering the effect.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx` around lines 157 - 170, The problem is that setSuggestedNewMessage("") is invoked synchronously inside sendSuggestedNewMessage before the mutation completes, letting the chip be re-selected immediately; move clearing of suggestedNewMessage into the mutation completion handler so it only resets after the request finishes (e.g., call setSuggestedNewMessage("") from newMessageMutation's onSuccess/onSettled callback), keep the current guards (suggestedNewMessage.trim(), suggestedNewMessage === newMessage, canSubmit) and the useEffectEvent wrapper around sendSuggestedNewMessage, and reference the symbols suggestedNewMessage, newMessage, newMessageMutation, setSuggestedNewMessage, and sendSuggestedNewMessage when making the change.
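The settle-then-clear pattern the prompt describes can be sketched without React or react-query; `mutate` and `pendingSettle` below are hypothetical stand-ins for `newMessageMutation.mutate` and its settlement callback, not the project's actual API:

```typescript
// Hypothetical stand-ins for the mutation machinery.
let suggestedNewMessage = ""
let pendingSettle: (() => void) | null = null

function mutate(_message: string, callbacks: { onSettled: () => void }): void {
  // the request is now in flight; remember how to settle it later
  pendingSettle = callbacks.onSettled
}

function sendSuggestedNewMessage(): void {
  if (!suggestedNewMessage.trim()) {
    return
  }
  mutate(suggestedNewMessage, {
    // reset only after the request finishes (success or failure),
    // not synchronously at send time
    onSettled: () => {
      suggestedNewMessage = ""
    },
  })
}

suggestedNewMessage = "What is covered in week 1?"
sendSuggestedNewMessage()
console.log(suggestedNewMessage !== "") // still set while the request is pending
pendingSettle?.() // the request settles
console.log(suggestedNewMessage === "") // cleared only once it has settled
```

Keeping the chip text until settlement means the `isDisabled` guard and the text state agree for the whole lifetime of the request.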
🧹 Nitpick comments (3)
services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx (1)
187-188: Consider sending `null` when suggestions are disabled.

When `suggest_next_messages` is `false`, `initial_suggested_messages` still sends whatever array entries remain in the form state. This preserves the teacher's messages for when they re-enable the feature, which may be intentional. However, if the backend distinguishes between `[]` and `null` (the `NewChatbotConf` type allows both), you might want to be explicit:

♻️ Optional: send null when feature is off

```diff
   suggest_next_messages: data.suggest_next_messages,
-  initial_suggested_messages: data.suggested_messages.map((v) => v.message),
+  initial_suggested_messages: data.suggest_next_messages
+    ? data.suggested_messages.map((v) => v.message)
+    : null,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx` around lines 187 - 188, The form currently always sends initial_suggested_messages as data.suggested_messages.map(...) even when suggest_next_messages is false; update the submit payload in ChatbotConfigurationForm so that initial_suggested_messages is null when suggest_next_messages is false and only map data.suggested_messages to an array of messages when suggest_next_messages is true (i.e., use a conditional expression tied to suggest_next_messages in the code that builds the payload for submission).

services/headless-lms/chatbot/src/message_suggestion.rs (2)
177-234: Clean token-budgeted windowing logic; minor nit on redundant `.to_owned()`.

The `map_while` + `rev` + `min` approach for selecting the recent conversation window within the token budget is well-implemented. The sentinel `Some(conv_len)` for content-less messages is a nice touch and well-commented.

Line 222: `.to_owned()` is redundant — `join("")` already returns an owned `String`.

Tiny cleanup

```diff
 let conversation = conversation_messages[cutoff..]
     .iter()
     .map(create_msg_string)
     .collect::<Vec<String>>()
-    .join("")
-    .to_owned();
+    .join("");
```
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` around lines 177 - 234, In create_conversation_from_msgs: remove the redundant .to_owned() after .join("") since join("") already returns an owned String; change the conversation assignment from conversation_messages[cutoff..].iter().map(create_msg_string).collect::<Vec<String>>().join("").to_owned() to just ...join("") so conversation stays a String without extra cloning.
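The windowing idea praised above can be sketched in a simplified, framework-free form: walk the conversation from newest to oldest, keep messages while the running token estimate fits the budget, then join the surviving suffix. `estimateTokens`, `windowedConversation`, and the `Msg` shape are hypothetical stand-ins, not the project's actual types:

```typescript
interface Msg {
  isFromChatbot: boolean
  content: string | null
}

// Crude heuristic stand-in for a real tokenizer: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

function windowedConversation(messages: Msg[], tokenBudget: number): string {
  let used = 0
  let cutoff = messages.length
  // newest to oldest: stop as soon as the budget would be exceeded
  for (let i = messages.length - 1; i >= 0; i--) {
    const content = messages[i].content
    if (content === null) {
      cutoff = i // content-less messages cost nothing
      continue
    }
    used += estimateTokens(content)
    if (used > tokenBudget) {
      break
    }
    cutoff = i
  }
  return messages
    .slice(cutoff)
    .map((m) => `${m.isFromChatbot ? "Chatbot" : "User"}: ${m.content ?? ""}\n`)
    .join("")
}

const msgs: Msg[] = [
  { isFromChatbot: false, content: "a".repeat(400) }, // ~100 tokens, dropped
  { isFromChatbot: true, content: "Hello!" },
  { isFromChatbot: false, content: "What is a limit?" },
]
console.log(windowedConversation(msgs, 10))
// keeps only the last two messages
```

The cutoff index plays the same role as the slice start in the Rust code: everything older than it is excluded from the prompt context.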
62-77: Improve readability of prompt construction and add a doc comment to this public function.

Lines 73–76 pack complex logic (conditional formatting with iterator chains) into single-line expressions, making them hard to scan during review. Breaking them out into separate `if let` blocks would improve clarity without changing behavior. Also, as a public function, a brief doc comment would help callers understand its purpose.

Suggested refactor

```diff
+/// Generates three suggested next messages for the user by sending the recent
+/// conversation context to an LLM with structured output.
 pub async fn generate_suggested_messages(
     app_config: &ApplicationConfiguration,
     task_lm: TaskLMSpec,
     conversation_messages: &[ChatbotConversationMessage],
     initial_suggested_messages: Option<Vec<String>>,
     course_name: &str,
     course_desc: Option<String>,
 ) -> ChatbotResult<Vec<String>> {
-    let prompt = SYSTEM_PROMPT.to_owned()
-        + &format!("The course is: {}\n\n", course_name)
-        // if there are initial suggested messages, then include <=5 of them as examples
-        + &(if let Some(ism) = initial_suggested_messages {
-            let examples = ism.into_iter().take(5).collect::<Vec<String>>().join(" ");
-            format!("Example suggested messages: {}\n\n", examples)} else {"".to_string()})
-        + &(if let Some(c_d) = course_desc {format!("Description for course: {}\n\n", c_d)} else {"".to_string()})
-        + "The conversation so far:\n";
+    let mut prompt = format!("{}The course is: {}\n\n", SYSTEM_PROMPT, course_name);
+    if let Some(ism) = initial_suggested_messages {
+        let examples = ism.into_iter().take(5).collect::<Vec<String>>().join(" ");
+        prompt += &format!("Example suggested messages: {}\n\n", examples);
+    }
+    if let Some(c_d) = course_desc {
+        prompt += &format!("Description for course: {}\n\n", c_d);
+    }
+    prompt += "The conversation so far:\n";
```

As per coding guidelines, "it is generally preferable to include a short, plain-language description of what the function is for, to clarify intent and complement (not repeat) a descriptive function name."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` around lines 62 - 77, The function generate_suggested_messages builds a complex prompt string using nested one-line conditional expressions that hurt readability; add a short doc comment above generate_suggested_messages describing its purpose and return value, then refactor the prompt construction by extracting the initial_suggested_messages handling and course_desc handling into separate, clearly named let bindings (e.g., let example_section = if let Some(ism) = initial_suggested_messages { ... } else { String::new() } and let desc_section = if let Some(d) = course_desc { ... } else { String::new() }), keep the same behavior (take up to 5 examples, join with spaces) and then concatenate SYSTEM_PROMPT + example_section + desc_section + the conversation header to form prompt.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 155-174: The current code concatenates all choices into
completion_content which can produce invalid JSON; instead extract only the
first choice's text (e.g., use completion.choices.get(0) or
completion.choices.first()), build a String (not &String) from that single
choice's APIMessageKind::Text content, return a clear error via
ChatbotError/ChatbotErrorType if no choices or the first choice isn't text, then
pass that String into serde_json::from_str to produce
ChatbotNextMessageSuggestionResponse and return suggestions.suggestions; update
symbols: completion_content, ChatbotNextMessageSuggestionResponse, ChatbotError,
ChatbotErrorType, and suggestions.suggestions.
In
`@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx`:
- Around line 396-413: The conditional CSS string ends with an incomplete
`height:` declaration causing invalid CSS; update the conditional block that
checks newMessageMutation.isPending || currentConversationInfo.isLoading ||
currentConversationInfo.isRefetching (where loadAnimation is applied) to include
a valid height (or min-height) value and trailing semicolon (e.g., `height:
28px;` or `min-height: 28px;`) so the skeleton-loading animation doesn’t
collapse the chip and the &:hover block parses correctly.
- Around line 396-413: The gradient in ChatbotChatBody.tsx currently uses
hardcoded hex values (`#ffffff` and `#f2f2f2`); replace those with theme tokens from
baseTheme (e.g., baseTheme.colors.white and a suitable gray like
baseTheme.colors.gray[100] or the project’s neutral palette) so the background
line in the styled block that references loadAnimation, newMessageMutation, and
currentConversationInfo uses only baseTheme color variables rather than hex
literals; ensure the interpolation syntax matches the existing template string
usage.
- Around line 426-435: Icon opacity check for the Idea component is missing
currentConversationInfo.isRefetching: update the opacity condition inside the
Idea's className so it also turns to 0% when
currentConversationInfo.isRefetching is true (alongside
newMessageMutation.isPending and currentConversationInfo.isLoading) to match the
isDisabled and load animation behavior; locate the Idea element in
ChatbotChatBody.tsx and modify the conditional expression that sets opacity to
include currentConversationInfo.isRefetching.
---
Outside diff comments:
In
`@services/headless-lms/migrations/20260219120000_add_locked_reviewing_stage.down.sql`:
- Around line 1-3: The migration fails because only user_exercise_states is
updated from 'locked' to 'reviewed_and_locked' while user_exercise_states_copy
still may contain 'locked' values that will be cast into the new reviewing_stage
enum (via ::text::reviewing_stage) and cause an invalid input error; update the
data-migration to perform the same UPDATE against user_exercise_states_copy (or
otherwise ensure no 'locked' values remain before casting) so that all rows in
user_exercise_states_copy have valid reviewing_stage values prior to the enum
cast.
---
Duplicate comments:
In
`@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx`:
- Around line 258-268: The TextField's error prop is being passed a FieldError
object from errors.suggested_messages?.[idx]?.message; update the prop to pass
the string message instead (e.g. use
errors.suggested_messages?.[idx]?.message?.message or otherwise extract the
.message string and/or cast to string | undefined) so TextField receives string
| undefined; apply this change where TextField is rendered in
ChatbotConfigurationForm and verify
register(`suggested_messages.${idx}.message`) remains unchanged.
In
`@services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx`:
- Around line 157-170: The problem is that setSuggestedNewMessage("") is invoked
synchronously inside sendSuggestedNewMessage before the mutation completes,
letting the chip be re-selected immediately; move clearing of
suggestedNewMessage into the mutation completion handler so it only resets after
the request finishes (e.g., call setSuggestedNewMessage("") from
newMessageMutation's onSuccess/onSettled callback), keep the current guards
(suggestedNewMessage.trim(), suggestedNewMessage === newMessage, canSubmit) and
the useEffectEvent wrapper around sendSuggestedNewMessage, and reference the
symbols suggestedNewMessage, newMessage, newMessageMutation,
setSuggestedNewMessage, and sendSuggestedNewMessage when making the change.
In `@shared-module/packages/common/src/bindings.guard.ts`:
- Around line 655-662: The validator currently allows older payloads to omit
suggest_next_messages and initial_suggested_messages; update the runtime
handling to enforce those fields exist by normalizing missing values before
validation: in the code that processes typedObj (the object with properties like
"chatbotconf_id", "suggest_next_messages", "initial_suggested_messages"), ensure
you set typedObj["suggest_next_messages"] = false if undefined and
typedObj["initial_suggested_messages"] = [] if undefined (or otherwise supply
the appropriate defaults) so that later checks against
typedObj["suggest_next_messages"] and
typedObj["initial_suggested_messages"].every(...) always operate on defined
fields.
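A minimal sketch of the normalize-then-validate idea described above, assuming a trimmed-down guard (the real `bindings.guard.ts` is generated and checks many more fields, so treat the names below as illustrative only):

```typescript
// Trimmed, hypothetical stand-in for the generated NewChatbotConf binding.
interface NewChatbotConf {
  chatbot_name: string
  suggest_next_messages: boolean
  initial_suggested_messages: string[]
}

// Supply defaults for fields older payloads may omit, before validation runs.
function normalizeNewChatbotConf(raw: Record<string, unknown>): Record<string, unknown> {
  return {
    ...raw,
    suggest_next_messages: raw["suggest_next_messages"] ?? false,
    initial_suggested_messages: raw["initial_suggested_messages"] ?? [],
  }
}

// Simplified structural guard in the style of the generated validators.
function isNewChatbotConf(obj: unknown): obj is NewChatbotConf {
  const o = obj as Record<string, unknown>
  return (
    typeof o["chatbot_name"] === "string" &&
    typeof o["suggest_next_messages"] === "boolean" &&
    Array.isArray(o["initial_suggested_messages"]) &&
    (o["initial_suggested_messages"] as unknown[]).every((m) => typeof m === "string")
  )
}

// An old client payload without the new fields passes only after normalization.
const legacy = { chatbot_name: "Genetic Lifeform" }
console.log(isNewChatbotConf(legacy)) // false
console.log(isNewChatbotConf(normalizeNewChatbotConf(legacy))) // true
```

Normalizing before the guard keeps the guard itself strict while remaining tolerant of older request shapes.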
---
Nitpick comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 177-234: In create_conversation_from_msgs: remove the redundant
.to_owned() after .join("") since join("") already returns an owned String;
change the conversation assignment from
conversation_messages[cutoff..].iter().map(create_msg_string).collect::<Vec<String>>().join("").to_owned()
to just ...join("") so conversation stays a String without extra cloning.
- Around line 62-77: The function generate_suggested_messages builds a complex
prompt string using nested one-line conditional expressions that hurt
readability; add a short doc comment above generate_suggested_messages
describing its purpose and return value, then refactor the prompt construction
by extracting the initial_suggested_messages handling and course_desc handling
into separate, clearly named let bindings (e.g., let example_section = if let
Some(ism) = initial_suggested_messages { ... } else { String::new() } and let
desc_section = if let Some(d) = course_desc { ... } else { String::new() }),
keep the same behavior (take up to 5 examples, join with spaces) and then
concatenate SYSTEM_PROMPT + example_section + desc_section + the conversation
header to form prompt.
In
`@services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx`:
- Around line 187-188: The form currently always sends
initial_suggested_messages as data.suggested_messages.map(...) even when
suggest_next_messages is false; update the submit payload in
ChatbotConfigurationForm so that initial_suggested_messages is null when
suggest_next_messages is false and only map data.suggested_messages to an array
of messages when suggest_next_messages is true (i.e., use a conditional
expression tied to suggest_next_messages in the code that builds the payload for
submission).
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- services/headless-lms/chatbot/src/message_suggestion.rs
- services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql
- services/headless-lms/migrations/20260219120000_add_locked_reviewing_stage.down.sql
- services/headless-lms/server/src/ts_binding_generator.rs
- services/main-frontend/src/app/manage/courses/[id]/other/chatbot/ChatbotConfigurationForm.tsx
- services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx
- shared-module/packages/common/src/bindings.guard.ts
- shared-module/packages/common/src/bindings.ts
- shared-module/packages/common/src/locales/en/main-frontend.json
- shared-module/packages/common/src/locales/fi/main-frontend.json
🚧 Files skipped from review as they are similar to previous changes (4)
- shared-module/packages/common/src/locales/en/main-frontend.json
- shared-module/packages/common/src/bindings.ts
- services/headless-lms/migrations/20260216103451_default_llm_models_tasks.up.sql
- shared-module/packages/common/src/locales/fi/main-frontend.json
♻️ Duplicate comments (1)
services/headless-lms/chatbot/src/message_suggestion.rs (1)
156-175: ⚠️ Potential issue | 🟠 Major

Joining all LLM choices produces invalid JSON when `n > 1` — still unaddressed from prior review.

Lines 156–164 collect and join all choices into a single `&String`. If the API ever returns more than one choice, the result is `{"suggestions":[…]}{"suggestions":[…]}`, which fails `serde_json::from_str` with a misleading "incorrectly formatted response" error. The `&String` annotation is also non-idiomatic; prefer `String` directly.

🐛 Proposed fix

```diff
-    let completion_content: &String = &completion
+    let completion_content = completion
         .choices
         .into_iter()
-        .map(|x| match x.message.fields {
-            APIMessageKind::Text(message) => message.content,
-            _ => "".to_string(),
-        })
-        .collect::<Vec<String>>()
-        .join("");
+        .next()
+        .and_then(|x| match x.message.fields {
+            APIMessageKind::Text(message) => Some(message.content),
+            _ => None,
+        })
+        .unwrap_or_default();

     let suggestions: ChatbotNextMessageSuggestionResponse =
-        serde_json::from_str(completion_content).map_err(|_| {
+        serde_json::from_str(&completion_content).map_err(|_| {
```
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` around lines 156 - 175, The code incorrectly joins all LLM choices into one string and attempts to deserialize it as a single ChatbotNextMessageSuggestionResponse, which breaks when n>1 and uses a non-idiomatic &String; change the logic that builds completion_content from completion.choices so you treat each choice independently: extract each APIMessageKind::Text message.content as a String (not &String), deserialize each content into ChatbotNextMessageSuggestionResponse (using serde_json::from_str) and aggregate their .suggestions into a single Vec to return; keep the existing ChatbotError/ChatbotErrorType handling for parse failures but apply it per-choice or return a clear error if none succeed.
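Why the joined string fails to parse can be shown with plain `JSON.parse`: two concatenated JSON documents are not one valid document, even though each is fine on its own (`serde_json::from_str` behaves analogously). Everything below is an illustrative sketch, not the project's code:

```typescript
interface SuggestionResponse {
  suggestions: string[]
}

const choiceA = `{"suggestions":["Ask about limits"]}`
const choiceB = `{"suggestions":["Ask about derivatives"]}`

// The buggy approach: join every choice, then parse the concatenation.
function parseJoined(choices: string[]): SuggestionResponse | null {
  try {
    return JSON.parse(choices.join("")) as SuggestionResponse
  } catch {
    return null // "{…}{…}" is a syntax error
  }
}

// The fix sketched in the review: parse only the first choice.
function parseFirst(choices: string[]): SuggestionResponse | null {
  const first = choices[0]
  if (first === undefined) {
    return null
  }
  try {
    return JSON.parse(first) as SuggestionResponse
  } catch {
    return null
  }
}

console.log(parseJoined([choiceA, choiceB])) // null — concatenation is invalid JSON
console.log(parseFirst([choiceA, choiceB])?.suggestions) // [ 'Ask about limits' ]
```

With a single choice both paths succeed, which is why the bug stays latent until the API returns `n > 1` choices.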
🧹 Nitpick comments (4)
services/headless-lms/models/src/application_task_default_language_models.rs (2)
35-43: `TaskLMSpec` has no derive macros.

`query_as!` requires no trait implementations, so this compiles fine. However, without `#[derive(Debug, Clone)]`, consumers can neither log this struct nor move it through code paths that require cloning — the latter becomes visible when threading `TaskLMSpec` across `async` call sites that need to reuse the spec.

♻️ Suggested addition

```diff
+#[derive(Debug, Clone)]
 pub struct TaskLMSpec {
```
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/models/src/application_task_default_language_models.rs` around lines 35 - 43, The TaskLMSpec struct is missing derive macros, preventing easy logging and cloning; add derive macros like #[derive(Debug, Clone)] above the TaskLMSpec definition so consumers can debug-print and clone instances (e.g., when threading across async boundaries) while leaving query_as! untouched; locate the struct named TaskLMSpec in application_task_default_language_models.rs and add the derive attributes there.
45-69: `insert` accepts the full `ApplicationTaskDefaultLanguageModel` struct but silently ignores `id`, `created_at`, `updated_at`, and `deleted_at`.

The SQL only binds `model_id`, `task`, and `context_utilization`. A caller who sets `id` expecting to control the primary key (or who relies on the auto-generated `id` in the returned value) will be confused. A dedicated input type makes the contract explicit.

♻️ Suggested refactor

```diff
+pub struct NewApplicationTaskDefaultLanguageModel {
+    pub model_id: Uuid,
+    pub task: ApplicationTask,
+    pub context_utilization: f32,
+}
 pub async fn insert(
     conn: &mut PgConnection,
-    input: ApplicationTaskDefaultLanguageModel,
+    input: NewApplicationTaskDefaultLanguageModel,
 ) -> ModelResult<ApplicationTaskDefaultLanguageModel> {
```
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/models/src/application_task_default_language_models.rs` around lines 45 - 69, The insert function currently takes ApplicationTaskDefaultLanguageModel but ignores id/created_at/updated_at/deleted_at; change the API to accept a dedicated input DTO (e.g. NewApplicationTaskDefaultLanguageModel) that contains only model_id, task, and context_utilization, update the insert signature (pub async fn insert(conn: &mut PgConnection, input: NewApplicationTaskDefaultLanguageModel) -> ModelResult<ApplicationTaskDefaultLanguageModel>), update the SQL binding to use input.model_id, input.task and input.context_utilization as already written, and replace callers to construct the new DTO instead of passing a full ApplicationTaskDefaultLanguageModel so callers can't accidentally expect id/timestamps to be applied.services/headless-lms/chatbot/src/message_suggestion.rs (1)
28-29: Doc comments restate the constant names verbatim.

`/// System prompt instructions for generating suggested next messages` and `/// User prompt instructions for generating suggested next messages` add no information beyond what `SYSTEM_PROMPT` and `USER_PROMPT` already communicate. As per coding guidelines, comments should not simply restate what the code already says.

♻️ Suggested fix

```diff
-/// System prompt instructions for generating suggested next messages
 const SYSTEM_PROMPT: &str = r#"..."#;

-/// User prompt instructions for generating suggested next messages
 const USER_PROMPT: &str = r#"..."#;
```

Also applies to: 59-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/chatbot/src/message_suggestion.rs` around lines 28 - 29, Remove or replace the redundant doc comments that merely repeat the constant names: update the doc comments for SYSTEM_PROMPT and USER_PROMPT in message_suggestion.rs so they either provide additional, useful context (e.g., the intent, format expectations, or examples of input/output for the prompt) or delete them entirely if no extra information is needed; ensure the identifiers SYSTEM_PROMPT and USER_PROMPT remain unchanged so references elsewhere still work.

services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs (1)
369-389: Use `llm.id` instead of the repeated hardcoded UUID.

The model_id UUID is repeated verbatim in both inserts; `llm` already holds the freshly-inserted model and its `id` is identical. Using `llm.id` keeps the inserts DRY and eliminates silent drift if the fixed UUID ever changes.

♻️ Suggested fix

```diff
 application_task_default_language_models::insert(
     &mut conn,
     ApplicationTaskDefaultLanguageModel {
-        model_id: Uuid::parse_str("f14d70bd-c228-4447-bddd-4f6f66705356")?,
+        model_id: llm.id,
         task: ApplicationTask::ContentCleaning,
         context_utilization: 0.75,
         ..Default::default()
     },
 )
 .await?;

 application_task_default_language_models::insert(
     &mut conn,
     ApplicationTaskDefaultLanguageModel {
-        model_id: Uuid::parse_str("f14d70bd-c228-4447-bddd-4f6f66705356")?,
+        model_id: llm.id,
         task: ApplicationTask::MessageSuggestion,
         context_utilization: 0.75,
         ..Default::default()
     },
 )
 .await?;
```
Verify each finding against the current code and only fix it if needed. In `@services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs` around lines 369 - 389, The two inserts into application_task_default_language_models use a hardcoded UUID for model_id; replace the repeated Uuid::parse_str(...) occurrences with the actual inserted model's id (llm.id) so both ApplicationTaskDefaultLanguageModel structs use model_id: llm.id; update the two ApplicationTaskDefaultLanguageModel constructions in the calls to application_task_default_language_models::insert to reference llm.id instead of the literal UUID.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 156-175: The code incorrectly joins all LLM choices into one
string and attempts to deserialize it as a single
ChatbotNextMessageSuggestionResponse, which breaks when n>1 and uses a
non-idiomatic &String; change the logic that builds completion_content from
completion.choices so you treat each choice independently: extract each
APIMessageKind::Text message.content as a String (not &String), deserialize each
content into ChatbotNextMessageSuggestionResponse (using serde_json::from_str)
and aggregate their .suggestions into a single Vec to return; keep the existing
ChatbotError/ChatbotErrorType handling for parse failures but apply it
per-choice or return a clear error if none succeed.
---
Nitpick comments:
In `@services/headless-lms/chatbot/src/message_suggestion.rs`:
- Around line 28-29: Remove or replace the redundant doc comments that merely
repeat the constant names: update the doc comments for SYSTEM_PROMPT and
USER_PROMPT in message_suggestion.rs so they either provide additional, useful
context (e.g., the intent, format expectations, or examples of input/output for
the prompt) or delete them entirely if no extra information is needed; ensure
the identifiers SYSTEM_PROMPT and USER_PROMPT remain unchanged so references
elsewhere still work.
In
`@services/headless-lms/models/src/application_task_default_language_models.rs`:
- Around line 35-43: The TaskLMSpec struct is missing derive macros, preventing
easy logging and cloning; add derive macros like #[derive(Debug, Clone)] above
the TaskLMSpec definition so consumers can debug-print and clone instances
(e.g., when threading across async boundaries) while leaving query_as!
untouched; locate the struct named TaskLMSpec in
application_task_default_language_models.rs and add the derive attributes there.
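A minimal sketch of the suggested derives — the field set here is illustrative, not the actual `TaskLMSpec` definition:

```rust
// Hypothetical fields; the real TaskLMSpec lives in
// application_task_default_language_models.rs.
#[derive(Debug, Clone)]
pub struct TaskLMSpec {
    pub model_name: String,
    pub thinking: bool,
}

// Debug makes the spec loggable; Clone lets it be handed across
// async boundaries without re-querying the database.
pub fn describe(spec: &TaskLMSpec) -> String {
    format!("{:?}", spec.clone())
}
```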
- Around line 45-69: The insert function currently takes
ApplicationTaskDefaultLanguageModel but ignores
id/created_at/updated_at/deleted_at; change the API to accept a dedicated input
DTO (e.g. NewApplicationTaskDefaultLanguageModel) that contains only model_id,
task, and context_utilization, update the insert signature (pub async fn
insert(conn: &mut PgConnection, input: NewApplicationTaskDefaultLanguageModel)
-> ModelResult<ApplicationTaskDefaultLanguageModel>), update the SQL binding to
use input.model_id, input.task and input.context_utilization as already written,
and replace callers to construct the new DTO instead of passing a full
ApplicationTaskDefaultLanguageModel so callers can't accidentally expect
id/timestamps to be applied.
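The DTO split above can be sketched like this. Types are simplified for the example: the real code uses `uuid::Uuid`, timestamp columns, and an sqlx `PgConnection`, all replaced with plain stand-ins here.

```rust
// Input DTO: only the fields callers may supply.
#[derive(Debug, Clone)]
pub struct NewApplicationTaskDefaultLanguageModel {
    pub model_id: String, // uuid::Uuid in the real model
    pub task: String,
    pub context_utilization: f32,
}

// Full row as it comes back from the database.
#[derive(Debug, Clone)]
pub struct ApplicationTaskDefaultLanguageModel {
    pub id: String, // generated by the database, never supplied by callers
    pub model_id: String,
    pub task: String,
    pub context_utilization: f32,
}

// Stand-in for the sqlx insert: because the input type has no id or
// timestamp fields, callers cannot accidentally expect them to be applied.
pub fn insert(input: NewApplicationTaskDefaultLanguageModel) -> ApplicationTaskDefaultLanguageModel {
    ApplicationTaskDefaultLanguageModel {
        id: "db-generated".to_string(),
        model_id: input.model_id,
        task: input.task,
        context_utilization: input.context_utilization,
    }
}
```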
In
`@services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs`:
- Around line 369-389: The two inserts into
application_task_default_language_models use a hardcoded UUID for model_id;
replace the repeated Uuid::parse_str(...) occurrences with the actual inserted
model's id (llm.id) so both ApplicationTaskDefaultLanguageModel structs use
model_id: llm.id; update the two ApplicationTaskDefaultLanguageModel
constructions in the calls to application_task_default_language_models::insert
to reference llm.id instead of the literal UUID.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- services/headless-lms/chatbot/src/message_suggestion.rs
- services/headless-lms/migrations/20260219120000_add_locked_reviewing_stage.down.sql
- services/headless-lms/models/src/application_task_default_language_models.rs
- services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs
- services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx
- system-tests/src/tests/exams/taking-exam.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- services/main-frontend/src/components/course-material/chatbot/shared/ChatbotChatBody.tsx
```rust
let params = if task_lm.thinking {
    LLMRequestParams::Thinking(ThinkingParams {
        max_completion_tokens: None,
```
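The fragment above is the thinking-vs-standard parameter branch. A self-contained sketch of what such a branch looks like — only the `Thinking` variant name and the `max_completion_tokens` field appear in the diff; the non-thinking arm and all other fields are assumptions for illustration:

```rust
// Hypothetical shapes inferred from the diff fragment; not the repo's
// actual definitions.
pub struct ThinkingParams {
    pub max_completion_tokens: Option<u32>,
}

pub struct StandardParams {
    pub max_tokens: Option<u32>,
    pub temperature: Option<f32>,
}

pub enum LLMRequestParams {
    Thinking(ThinkingParams),
    Standard(StandardParams),
}

pub fn params_for(thinking: bool) -> LLMRequestParams {
    if thinking {
        // Thinking/reasoning models typically cap completion tokens
        // rather than tune temperature.
        LLMRequestParams::Thinking(ThinkingParams {
            max_completion_tokens: None,
        })
    } else {
        LLMRequestParams::Standard(StandardParams {
            max_tokens: None,
            temperature: Some(0.7),
        })
    }
}
```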
Messages (prompts) are suggested to the user, with three suggestions shown at all times. The teacher can write any number of suggested messages (zero or more) to show in a new conversation; at most three of them are shown, picked at random. After a message is sent, new suggested messages are generated with an LLM: it is given a prompt and the conversation as context, and it outputs structured data containing three new suggested messages.
The teacher can turn this feature on and off in the chatbot configuration settings.
The user can easily send the suggested message by clicking on it (it's a button).
The chatbot box can now grow taller than before.
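The initial-suggestion selection described above (up to three teacher-written prompts, picked at random) can be sketched like this; a tiny LCG stands in for a real RNG so the example stays dependency-free:

```rust
// Fisher-Yates shuffle driven by a simple LCG (a stand-in for the rand
// crate), then keep at most three suggestions.
pub fn pick_initial_suggestions(mut pool: Vec<String>, mut seed: u64) -> Vec<String> {
    let n = pool.len();
    for i in (1..n).rev() {
        // LCG step (Knuth's MMIX constants); real code would use rand.
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let j = ((seed >> 33) as usize) % (i + 1);
        pool.swap(i, j);
    }
    pool.truncate(3);
    pool
}
```

If the teacher wrote fewer than three suggestions, all of them are returned.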
Summary by CodeRabbit
- New Features
- UI
- Migrations / Backend
- Tests
- Documentation