GET /
: The root serves as a quick status check.
GET /test
: Responds with a successful JSON response confirming the service is working.
GET /openai/models
: Reports information about the OpenAI models available to the service.
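As a quick illustration, this endpoint can be hit with any HTTP client. The sketch below uses Python's `requests` and assumes a hypothetical `BASE_URL` for your deployment:

```python
import requests

# Hypothetical deployment URL; adjust for your environment.
BASE_URL = "http://localhost:8080"

# List the OpenAI models the service reports as available.
response = requests.get(f"{BASE_URL}/openai/models")
response.raise_for_status()
print(response.json())
```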
GET /test/openai
: Issues a small test prompt to OpenAI's ChatGPT. A request sketch follows the parameter list.
- `model`: The model to use. Default: `gpt-3.5-turbo`
- `api-key`: The API key associated with the model.
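A minimal sketch of calling this endpoint with `requests`, passing both parameters as query-string values (the parameter transport and `BASE_URL` are assumptions; adjust for your deployment):

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical deployment URL

# Send a small test prompt to ChatGPT via the service.
response = requests.get(
    f"{BASE_URL}/test/openai",
    params={
        "model": "gpt-3.5-turbo",  # optional; matches the default
        "api-key": "sk-...",       # your OpenAI API key
    },
)
response.raise_for_status()
print(response.json())
```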
POST /assessment
: Issue a rubric assessment to the AI agent and wait for a response. A request sketch follows the response example below.
- `model`: The model to use. Default: see `DEFAULT_MODEL`
- `api-key`: The API key associated with the model. Default: the configured key
- `code`: The code to assess. Required.
- `prompt`: The system prompt. Required.
- `rubric`: The rubric, as a CSV. Required.
- `examples`: Array of pairs of code (js) and openai response (tsv).
- `remove-comments`: When `1`, attempts to strip comments out of the code before assessment. Default: `0`
- `num-responses`: The number of times to ask the AI model; the final answer is decided by a vote across the responses. Default: `1`
- `temperature`: The 'temperature' value for ChatGPT LLMs.
- Response: `application/json`: Data and metadata related to the response. The `data` is the list of key concepts, assessment values, and reasons. The `metadata` is the input to the AI along with some usage information; `n` is the number of responses asked for in the input. The `metadata`'s `agent` parameter tells you what performed the assessment. Currently this is either `static`, for a simple static check, or `openai`, for ChatGPT. Depending on the agent, different metadata may be available; for instance, the `static` agent does not report `usage` info. Example below.
```json
{
  "metadata": {
    "agent": "openai",
    "time": 39.43,
    "student_id": 1553633,
    "usage": {
      "prompt_tokens": 454,
      "completion_tokens": 1886,
      "total_tokens": 2340
    },
    "request": {
      "model": "gpt-4",
      "temperature": 0.2,
      "messages": [ ... ],
      "n": 3
    }
  },
  "data": [
    {
      "Key Concept": "Program Development 2",
      "Observations": "The program uses whitespace good nami [... snipped for brevity ...]. The code is easily readable.",
      "Label": "Extensive Evidence",
      "Reason": "The program code effectively uses whitespace, good naming conventions, indentation and comments to make the code easily readable."
    }, {
      "Key Concept": "Algorithms and Control Structures",
      "Observations": "Sprite interactions occur at lines 48-50 (player touches burger), 52 (sw[... snipped for brevity ...]",
      "Label": "Extensive Evidence",
      "Reason": "The game includes multiple different interactions between sprites, responds to multiple types of user input (e.g. different arrow keys)."
    }
  ]
}
```
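Putting it together, here is a sketch of issuing an assessment with `requests`, assuming the parameters are accepted as form fields and a hypothetical `BASE_URL`; the code, prompt, and rubric values are illustrative only:

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical deployment URL

# Illustrative inputs; a real call would read these from files.
code = "var score = 0;\nfunction draw() { /* ... */ }"
rubric = "Key Concept,Extensive Evidence,Convincing Evidence,...\n..."

# Submit the assessment and block until the agent responds.
response = requests.post(
    f"{BASE_URL}/assessment",
    data={
        "code": code,                                     # required
        "prompt": "Assess the code against the rubric.",  # required
        "rubric": rubric,                                 # required, CSV text
        "remove-comments": "1",   # strip comments before assessment
        "num-responses": "3",     # vote across three responses
        "temperature": "0.2",
    },
)
response.raise_for_status()
result = response.json()
print(result["metadata"]["agent"])
for row in result["data"]:
    print(row["Key Concept"], "->", row["Label"])
```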
(GET|POST) /test/assessment
: Issue a test rubric assessment to the AI agent and wait for a response.
- `model`: The model to use. Default: see `DEFAULT_MODEL`
- `api-key`: The API key associated with the model. Default: the configured key
- `remove-comments`: When `1`, attempts to strip comments out of the code before assessment. Default: `0`
- `num-responses`: The number of times to ask the AI model; the final answer is decided by a vote across the responses. Default: `1`
- `temperature`: The 'temperature' value for ChatGPT LLMs.
- Response: `application/json`: A set of data and metadata where `data` is a list of key concepts, assessment values, and reasons. See above.