In query.py we have:

```python
# perform temperature sampling if list provided
# set temperature to 1.0 for reasoning models
if kwargs_dict["model_name"] in (
    REASONING_OAI_MODELS
    + REASONING_CLAUDE_MODELS
    + REASONING_DEEPSEEK_MODELS
    + REASONING_GEMINI_MODELS
    + REASONING_AZURE_MODELS
    + REASONING_BEDROCK_MODELS
):
    kwargs_dict["temperature"] = 1.0
else:
    kwargs_dict["temperature"] = random.choice(temperatures)
```

What is the rationale behind hardcoding the temperature to 1.0 for reasoning models? DeepSeek's documentation recommends a temperature of 0 for coding, and I experienced some instruction-following issues that were resolved by lowering the temperature to 0.7.
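If the 1.0 value is meant as a default rather than a hard requirement, one option would be to make it overridable. A minimal sketch of that idea follows; the `pick_temperature` helper and the `reasoning_temperature` parameter are hypothetical and not part of the current query.py:

```python
import random

# Placeholder for the concatenated reasoning-model lists used in query.py.
REASONING_MODELS = []


def pick_temperature(kwargs_dict, temperatures, reasoning_temperature=1.0):
    """Return the sampling temperature for a request.

    `reasoning_temperature` is an assumed override; the current code
    always uses 1.0 for reasoning models.
    """
    if kwargs_dict["model_name"] in REASONING_MODELS:
        return reasoning_temperature
    # Fall back to sampling from the user-provided temperature list.
    return random.choice(temperatures)
```

That would keep the current behavior by default while letting users pass, say, 0.7 for models where a lower temperature works better.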