* feat: v1 of AI-generated comments
* feat: added logging of inputs and outputs
* Update generate_ai_comments.ex
* feat: function to save outputs to database
* Format answers json before sending to LLM
* Add LLM Prompt to question params when submitting assessment xml file
* Add LLM Prompt to api response when grading view is open
* feat: added llm_prompt from qn to raw_prompt
* feat: enabling/disabling of LLM feature by course level
* feat: added llm_grading boolean field to course creation API
* feat: added api key storage in courses & edit api key/enable llm grading
* feat: encryption for llm_api_key
* feat: added final comment editing route
* feat: added logging of chosen comments
* fix: bugs when certain fields were missing
* feat: updated tests
* formatting
* fix: error handling when calling openai API
* fix: credo issues
* formatting
* Address some comments
* Fix formatting
* rm IO.inspect
* a
* Use case instead of if
* Streamlines generate_ai_comments to only send the selected question and its relevant info + use the correct llm_prompt
* Remove unnecessary field
* default: false for llm_grading
* Add proper linking between ai_comments table and submissions. Return it to submission retrieval as well
* Resolve some migration comments
* Add llm_model and llm_api_url to the DB + schema
* Moves api key, api url, llm model and course prompt to course level
* Add encryption_key to env
* Do not hardcode formatting instructions
* Add Assessment level prompts to the XML
* Return some additional info for composing of prompts
* Remove unused 'save comments'
* Fix existing assessment tests
* Fix generate_ai_comments test cases
* Fix bug preventing avengers from generating ai comments
* Fix up tests + error msgs
* Formatting
* some mix credo suggestions
* format
* Fix credo issue
* bug fix + credo fixes
* Fix tests
* format
* Modify test.exs
* Update lib/cadet_web/controllers/generate_ai_comments.ex
Co-authored-by: Copilot <[email protected]>
* Copilot feedback
* format
* Work on sentry comments
* Fix type
* Redate migrations to maintain total order
* Add newline at EOF
* Fix indent
* Fix capitalization
* Remove llmApiKey from any kind of storage on FE
* Remove indexes
* rm todo
* rm todo
* Re-format ai_comments to reference answer_id instead
* Abstract out + remove unused field
* Add delimiter + bug fixes
* Mix format
* Switch to openAI module
* rm unused
* Merge all prompts to :prompts to preserve abstraction
* rm
* Fix formatting
* Fix test
* Revert some dependency changes
* Update actions versions
* Improve encrypt + decrypt robustness
* Fix dialyzer
* Re-factor schema
---------
Co-authored-by: Eugene Oh Yun Zheng <[email protected]>
Co-authored-by: tkaixiang <[email protected]>
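
The "encryption for llm_api_key" and "Add encryption_key to env" commits above suggest the per-course LLM API key is encrypted at rest with a symmetric AES-256 key taken from the environment. A minimal sketch of what such encryption could look like in Elixir, using OTP's `:crypto` module — the module name, AAD, and `iv <> tag <> ciphertext` wire format here are hypothetical illustrations, not the PR's actual code:

```elixir
defmodule Cadet.Courses.LLMKeyCrypto do
  @moduledoc """
  Hypothetical sketch: AES-256-GCM encryption for a stored llm_api_key.
  """

  # Associated data binds the ciphertext to its purpose (illustrative choice).
  @aad "llm_api_key"

  # AES-256-GCM requires a 32-byte key; the dev.secrets comment also permits
  # 16 or 24 bytes, which would select AES-128/192 instead.
  def encrypt(plaintext, key) when byte_size(key) == 32 do
    iv = :crypto.strong_rand_bytes(12)

    {ciphertext, tag} =
      :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, plaintext, @aad, true)

    # Store IV and tag alongside the ciphertext so decryption is self-contained.
    Base.encode64(iv <> tag <> ciphertext)
  end

  def decrypt(encoded, key) when byte_size(key) == 32 do
    with {:ok, <<iv::binary-size(12), tag::binary-size(16), ciphertext::binary>>} <-
           Base.decode64(encoded),
         plaintext when is_binary(plaintext) <-
           :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, ciphertext, @aad, tag, false) do
      {:ok, plaintext}
    else
      _ -> {:error, :decrypt_failed}
    end
  end
end
```

Returning `{:error, :decrypt_failed}` on any tag mismatch matches the "Improve encrypt + decrypt robustness" commit's intent: a corrupted or re-keyed value fails closed instead of raising.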
config/dev.secrets.exs.example: 3 additions & 7 deletions
@@ -98,13 +98,9 @@ config :cadet,
   # ws_endpoint_address: "ws://hostname:port"
 ]

-config :openai,
-  # find it at https://platform.openai.com/account/api-keys
-  api_key: "the actual api key",
-  # For source academy deployment, leave this as empty string. In general could find it at https://platform.openai.com/account/org-settings under "Organization ID".
-  organization_key: "",
-  # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
-  http_options: [recv_timeout: 170_0000]
+config :openai,
+  # Input your own AES-256 encryption key here for encrypting LLM API keys of 16, 24 or 32 bytes
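
The diff above is truncated after the new comment line, but together with the "Add encryption_key to env" commit it suggests the updated `dev.secrets.exs.example` stanza ends up shaped roughly like the following config fragment — the `encryption_key` field name and placeholder value are assumptions inferred from the commit messages, not confirmed by the diff:

```elixir
config :openai,
  # Input your own AES-256 encryption key here for encrypting LLM API keys
  # of 16, 24 or 32 bytes (field name inferred from the commits; hypothetical)
  encryption_key: "replace-with-a-random-32-byte-string"
```

Per-course values (`llm_api_key`, `llm_model`, `llm_api_url`, course prompt) live in the database per the "Moves api key, api url, llm model and course prompt to course level" commit, so only the encryption key remains in the secrets file.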