2 changes: 1 addition & 1 deletion docs/features/experimental/concurrent-file-edits.md
@@ -91,7 +91,7 @@ This feature leverages the [`apply_diff`](/advanced-usage/available-tools/apply-
## Best Practices

### When to Enable
-- Using capable AI models (Claude 3.5 Sonnet, GPT-4, etc.)
+- Using capable AI models (Claude Sonnet 4 or Claude 3.7 Sonnet, GPT-4.1/GPT-4o, GPT-5 family)
- Comfortable reviewing multiple changes at once

### When to Keep Disabled
10 changes: 7 additions & 3 deletions docs/providers/claude-code.md
@@ -99,11 +99,15 @@ export CLAUDE_CODE_MAX_OUTPUT_TOKENS=32768 # Set to 32k tokens
The Claude Code provider supports these Claude models:

- **Claude Opus 4.1** (Most capable)
- **Claude Opus 4**
- **Claude Sonnet 4** (Latest, recommended)
- **Claude 3.7 Sonnet**
-- **Claude 3.5 Sonnet**
-- **Claude 3.5 Haiku** (Fast responses)

+:::note Legacy models
+These older models may still be available via the Claude CLI depending on your subscription, but are no longer recommended for new setups:
+- Claude 3.5 Sonnet
+- Claude 3.5 Haiku
+:::

The specific models available depend on your Claude CLI subscription and plan.

2 changes: 1 addition & 1 deletion docs/providers/litellm.md
@@ -133,7 +133,7 @@ When you configure the LiteLLM provider, Roo Code interacts with your LiteLLM se
* `supportsImages`: Determined from `model_info.supports_vision` provided by LiteLLM.
* `supportsPromptCache`: Determined from `model_info.supports_prompt_caching` provided by LiteLLM.
* `inputPrice` / `outputPrice`: Calculated from `model_info.input_cost_per_token` and `model_info.output_cost_per_token` from LiteLLM.
-* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.5-sonnet`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
+* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.7-sonnet-20250219`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).

Roo Code uses default values for some of these properties if they are not explicitly provided by your LiteLLM server's `/model/info` endpoint for a given model. The defaults are:
* `maxTokens`: 8192
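
As a rough illustration of this mapping, the sketch below shows how a `/model/info` entry might be translated into the properties above, with the 8192-token default applied when LiteLLM omits a value. The `LiteLLMModelEntry` and `RooModelInfo` types, the `toModelInfo` helper, the fallback defaults, and the per-million-token price conversion are illustrative assumptions, not Roo Code's actual implementation.

```typescript
// Illustrative sketch only — not Roo Code's real code. LiteLLM-side field
// names follow the /model/info response described above.
interface LiteLLMModelInfo {
  max_tokens?: number;
  supports_vision?: boolean;
  supports_prompt_caching?: boolean;
  input_cost_per_token?: number;
  output_cost_per_token?: number;
}

interface LiteLLMModelEntry {
  litellm_params: { model: string }; // e.g. "openrouter/anthropic/claude-3.7-sonnet-20250219"
  model_info?: LiteLLMModelInfo;
}

// Hypothetical shape for the per-model properties Roo Code derives.
interface RooModelInfo {
  maxTokens: number;
  supportsImages: boolean;
  supportsPromptCache: boolean;
  inputPrice?: number; // assumed: USD per million input tokens
  outputPrice?: number; // assumed: USD per million output tokens
  supportsComputerUse: boolean;
}

// Illustrative stand-in for the COMPUTER_USE_MODELS list mentioned above.
const COMPUTER_USE_MODELS = ["claude-3.7-sonnet", "claude-sonnet-4"];

function toModelInfo(entry: LiteLLMModelEntry): RooModelInfo {
  const info: LiteLLMModelInfo = entry.model_info ?? {};
  return {
    maxTokens: info.max_tokens ?? 8192, // default when /model/info omits it
    supportsImages: info.supports_vision ?? false,
    supportsPromptCache: info.supports_prompt_caching ?? false,
    inputPrice:
      info.input_cost_per_token !== undefined ? info.input_cost_per_token * 1_000_000 : undefined,
    outputPrice:
      info.output_cost_per_token !== undefined ? info.output_cost_per_token * 1_000_000 : undefined,
    supportsComputerUse: COMPUTER_USE_MODELS.some((id) => entry.litellm_params.model.includes(id)),
  };
}
```

In practice, inspect your LiteLLM server's `/model/info` output to confirm which of these fields it actually populates for each model.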
1 change: 0 additions & 1 deletion docs/providers/openai-compatible.md
@@ -59,7 +59,6 @@ While this provider type allows connecting to various endpoints, if you are conn
* `o1`
* `o1-preview`
* `o1-mini`
-* `gpt-4.5-preview`
* `gpt-4o`
* `gpt-4o-mini`

1 change: 0 additions & 1 deletion docs/providers/openai.md
@@ -84,7 +84,6 @@ Original reasoning models:
### GPT-4o Family
Optimized GPT-4 models:

-* `gpt-4.5-preview`
* `gpt-4o` - Optimized GPT-4
* `gpt-4o-mini` - Smaller optimized variant

2 changes: 1 addition & 1 deletion docs/providers/vscode-lm.md
@@ -37,7 +37,7 @@ Roo Code includes *experimental* support for the [VS Code Language Model API](ht
1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
2. **Select Provider:** Choose "VS Code LM API" from the "API Provider" dropdown.
3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like:
-* `copilot - claude-3.5-sonnet`
+* `copilot - claude-3.7-sonnet`
* `copilot - o3-mini`
* `copilot - o1-ga`
* `copilot - gemini-2.0-flash`
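
For context on where the `vendor - family` pairing comes from, the sketch below uses the public VS Code Language Model API to enumerate the models a vendor exposes; Roo Code's own provider wiring is more involved, so treat this purely as an illustration, and note that the `listCopilotModels` helper is a hypothetical name.

```typescript
import * as vscode from "vscode";

// Sketch: list chat models exposed through the VS Code Language Model API.
// Each result carries the vendor and family that make up dropdown entries
// such as "copilot - claude-3.7-sonnet".
export async function listCopilotModels(): Promise<void> {
  const models = await vscode.lm.selectChatModels({ vendor: "copilot" });
  for (const model of models) {
    console.log(`${model.vendor} - ${model.family}`);
  }
}
```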