It reuses the Codex CLI that already works on your machine.
The right mental model is:

- make `codex` work first
- confirm the same config works in a terminal
- run `ds doctor`
- only then run `ds` or `ds --codex-profile <name>`
Codex CLI reads its local state from `~/.codex/`.
The most important files are:

- `~/.codex/config.toml` - your provider, model, profile, and feature configuration
- `~/.codex/auth.json` - created by `codex login` when the provider uses the normal OpenAI login flow
- `~/.codex/history.jsonl` - local session history; not required for setup
Useful inspection commands:

```shell
ls -la ~/.codex
sed -n '1,220p' ~/.codex/config.toml
codex --version
codex --help
codex exec --help
```

Always follow this order:
- install Codex CLI and confirm the binary is the one you expect
- prepare `~/.codex/config.toml`
- validate `codex` or `codex --profile <name>` directly
- validate DeepScientist with `ds doctor`
- launch DeepScientist with the same Codex profile
Check which Codex is actually being used:

```shell
which codex
codex --version
```

If you need a specific binary, keep its absolute path and pass it to DeepScientist with `--codex`.
Example:

```shell
ds doctor --codex /absolute/path/to/codex --codex-profile glm
ds --codex /absolute/path/to/codex --codex-profile glm
```

Use this when Codex works through normal OpenAI authentication.
Typical flow:

```shell
codex login
codex
```

In this case, `~/.codex/auth.json` is usually present, and `config.toml` may stay minimal.
Minimal example:
model = "gpt-5.4"
model_reasoning_effort = "high"
[projects."/absolute/path/to/your/project"]
trust_level = "trusted"Use this when you are pointing Codex at a non-default provider or gateway.
A common pattern is:

```toml
model_provider = "myprovider"
model = "gpt-5.4"
model_reasoning_effort = "xhigh"

[model_providers.myprovider]
name = "My Provider"
base_url = "https://example.com/codex"
wire_api = "responses"
experimental_bearer_token = "YOUR_TOKEN_HERE"
requires_openai_auth = true
```

Another common pattern uses an environment variable instead of embedding a bearer token:
```toml
[model_providers.myprovider]
name = "My Provider"
base_url = "https://example.com/codex"
wire_api = "chat"
env_key = "MYPROVIDER_API_KEY"
requires_openai_auth = false
```

Then export the key in the shell before starting Codex or DeepScientist:

```shell
export MYPROVIDER_API_KEY="..."
```

These are the fields you usually need to touch.
- `model_provider` - which provider block to use by default
- `model` - the model id to send by default
- `model_reasoning_effort` - for example `medium`, `high`, or `xhigh`
- `service_tier` - optional provider-specific runtime preference
Inside `[model_providers.<name>]`:

- `name` - human-readable label
- `base_url` - the exact provider endpoint Codex should call
- `wire_api` - usually `responses` or `chat`; use the provider's documented format
- `env_key` - name of the shell environment variable containing the API key
- `experimental_bearer_token` - fixed bearer token if your provider setup uses one directly
- `requires_openai_auth` - whether Codex should still expect the standard OpenAI auth shape
- `request_max_retries` - optional request retry count
- `stream_max_retries` - optional stream retry count
- `stream_idle_timeout_ms` - optional stream idle timeout
Profiles live under `[profiles.<alias>]`.
Example:

```toml
[profiles.glm]
model = "GLM-4.7"
model_provider = "glm"
```

Then use it with:

```shell
codex --profile glm
```

Codex also cares about project trust.
Example:

```toml
[projects."/ssdwork/deepscientist/DeepScientist"]
trust_level = "trusted"
```

If a project is not trusted, Codex may ask again before running.
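If you are unsure whether a path already has a trust entry, a plain-text search is enough. A minimal sketch, using a throwaway copy of the file so nothing real is modified:

```shell
# Sketch: check whether a project path already has a [projects] trust entry.
# Uses a temporary file instead of the real ~/.codex/config.toml.
config=$(mktemp)
cat > "$config" <<'EOF'
[projects."/ssdwork/deepscientist/DeepScientist"]
trust_level = "trusted"
EOF

project='/ssdwork/deepscientist/DeepScientist'
if grep -qF "[projects.\"$project\"]" "$config"; then
  echo "trust entry present"
else
  echo "trust entry missing"
fi
rm -f "$config"
```

In practice you would run the same `grep` against `~/.codex/config.toml` directly.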
This is the safest general workflow.
Start from your existing file:

```shell
cp ~/.codex/config.toml ~/.codex/config.toml.bak
${EDITOR:-vim} ~/.codex/config.toml
```

Example skeleton:
```toml
[model_providers.provider_name]
name = "Provider Name"
base_url = "https://provider.example/v1"
wire_api = "chat"
env_key = "PROVIDER_API_KEY"
requires_openai_auth = false
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000

[profiles.provider_alias]
model = "provider-model-id"
model_provider = "provider_name"
```

Interactive check:

```shell
codex --profile provider_alias
```

Non-interactive smoke check:

```shell
codex exec --profile provider_alias "Reply with exactly OK."
```

If this fails, stop there and fix Codex first.
There are three different places people often confuse: the shell environment, `~/.codex/config.toml`, and `runners.yaml`.
A shell-level export is enough when you are only validating Codex directly in the current terminal.
Example:

```shell
export MINIMAX_API_KEY="..."
codex --profile m25
codex exec --profile m25 "Reply with exactly OK."
```

This file usually tells Codex which environment variable name or which bearer token field it should use. It does not guarantee that DeepScientist will magically receive that key in every runtime context.
Examples:

```toml
env_key = "MINIMAX_API_KEY"
```

or:

```toml
experimental_bearer_token = "YOUR_TOKEN_HERE"
```

Use `env_key` when the provider key comes from the shell or another process-level environment source.
Use `experimental_bearer_token` only when your Codex-side provider setup truly expects a fixed bearer token directly inside `config.toml`.
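The distinction is easiest to see as indirection: `env_key` stores the *name* of a variable, and the key itself lives in the process environment. A shell sketch of the lookup idea (illustration only, not Codex's actual implementation):

```shell
# config.toml holds the variable *name*, not the secret:
env_key="MYPROVIDER_API_KEY"

# The secret lives in the environment:
export MYPROVIDER_API_KEY="sk-example"

# At startup, the runner resolves the name to the value (indirect expansion):
eval "api_key=\$$env_key"
echo "$api_key"   # sk-example
```

With `experimental_bearer_token`, by contrast, the secret itself sits inside `config.toml`, so there is nothing to resolve at startup.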
This is the most important place when `codex` works in your shell, but `ds doctor`, `ds`, or `ds docker` still fails with a missing provider environment variable.
In that case, put the required key under `runners.codex.env`.
Example:

```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: m25
  model: inherit
  model_reasoning_effort: high
  env:
    MINIMAX_API_KEY: "YOUR_REAL_KEY"
```

This is the most reliable DeepScientist-side fix when the provider works in plain `codex --profile ...` but fails inside DeepScientist runner execution.
- If you are only testing Codex manually in one shell: a shell `export` is enough.
- If you want Codex to know which variable name to read: set `env_key` in `~/.codex/config.toml`.
- If DeepScientist or `ds docker` still reports a missing provider env var: also set the key in `~/DeepScientist/config/runners.yaml` under `runners.codex.env`.
This is where most confusion comes from.
A shell-level `export MINIMAX_API_KEY=...` only affects the current shell and the processes spawned from it.
If DeepScientist is launched by another daemon, service, container, or supervisor process, that runtime may not inherit the same shell environment.
So for Docker or long-running daemon setups, `runners.yaml` -> `runners.codex.env` is usually the safer place.
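You can see this inheritance boundary directly in a shell. A child process of your shell sees the exported variable, while a process started with a scrubbed environment (as many daemons and supervisors are) does not:

```shell
export MINIMAX_API_KEY="test-value"

# A child process of this shell inherits the exported variable:
sh -c 'echo "child: ${MINIMAX_API_KEY:-unset}"'          # child: test-value

# A process launched with a clean environment does not:
env -i sh -c 'echo "clean: ${MINIMAX_API_KEY:-unset}"'   # clean: unset
```

This is why setting the key in `runners.codex.env` is the safer route for those setups: it travels with the runner configuration rather than with any one shell session.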
There are three supported DeepScientist usage patterns.
Default OpenAI login path:

```shell
codex login
ds doctor
ds
```

Provider-backed profile path:

```shell
codex --profile glm
codex exec --profile glm "Reply with exactly OK."
ds doctor --codex-profile glm
ds --codex-profile glm
```

If you want DeepScientist to keep using the same Codex profile by default, set it in `runners.yaml`.
```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: glm
  model: inherit
  model_reasoning_effort: high
  approval_policy: on-request
  sandbox_mode: workspace-write
```

Important:

- `profile` should usually be your local Codex profile alias, such as `glm`, `ark`, `bailian`, `m25`, or `m27-local`
- for provider-backed Codex profiles, prefer `model: inherit`
- only hard-code `model:` in DeepScientist if you are sure the provider accepts that exact explicit model id
- DeepScientist launches Codex from an isolated runtime home under `.ds/codex-home`, but copies your configured `~/.codex` auth, config, skills, agents, and prompts into that runtime copy first
Codex itself supports `-c key=value` overrides.
Examples:

```shell
codex -c model="gpt-5.4"
codex -c model_provider="yunyi" -c model="gpt-5.4"
codex exec -c model_reasoning_effort="high" "Reply with exactly OK."
```

This is useful for quick checks, but for repeatable DeepScientist runs, profiles in `~/.codex/config.toml` are cleaner.
For the default OpenAI path, you need:

- a working Codex install
- a successful `codex login`
- a direct `codex` or `codex exec "Reply with exactly OK."` check

Then run:

```shell
ds doctor
ds
```

Matching `runners.yaml` entry:

```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: ""
  model: gpt-5.4
```

Official doc:
MiniMax is the clearest profile-based case.
MiniMax's official Coding Plan model MiniMax-M2.7 does not currently work reliably with Codex CLI on the supported Codex path used by this repo.
For the official Codex-compatible path, use:

- model `MiniMax-M2.5`
- a profile alias such as `m25`
- Codex CLI `0.57.0` if you want the current highest-compatibility MiniMax Coding Plan path

If you specifically want MiniMax-M2.7, the recommended route is:

- do not treat it as the default official Codex Coding Plan path
- instead expose your own local OpenAI-compatible `vllm` endpoint for M2.7
- then point Codex at that local endpoint through a custom provider block in `~/.codex/config.toml`
Use the official MiniMax Coding Plan endpoint.
For key placement on the MiniMax path:

- `~/.codex/config.toml` should usually contain `env_key = "MINIMAX_API_KEY"`
- for plain terminal validation, export `MINIMAX_API_KEY` in that same shell
- if `codex --profile m25` works but `ds doctor` or `ds docker` still says a provider env var is missing, also place the real key in `~/DeepScientist/config/runners.yaml` under `runners.codex.env.MINIMAX_API_KEY`
Use the official MiniMax Coding Plan endpoint:

- Base URL: `https://api.minimaxi.com/v1`
- API key env: `MINIMAX_API_KEY`
- Model: `MiniMax-M2.5`
Recommended config shape:

```toml
[model_providers.minimax]
name = "MiniMax Chat Completions API"
base_url = "https://api.minimaxi.com/v1"
env_key = "MINIMAX_API_KEY"
wire_api = "chat"
requires_openai_auth = false
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000

[profiles.m25]
model = "MiniMax-M2.5"
model_provider = "minimax"
```

Validation order:
```shell
unset OPENAI_API_KEY
unset OPENAI_BASE_URL
export MINIMAX_API_KEY="..."
codex --version
codex --profile m25
codex exec --profile m25 "Reply with exactly OK."
ds doctor --codex-profile m25
ds --codex-profile m25
```

Recommended route: run M2.7 behind your own local OpenAI-compatible vllm service.
Example shape:

```toml
[model_providers.minimax_local_vllm]
name = "MiniMax M2.7 via local vLLM"
base_url = "http://127.0.0.1:8000/v1"
wire_api = "chat"
requires_openai_auth = false
env_key = "OPENAI_API_KEY"

[profiles.m27-local]
model = "MiniMax-M2.7"
model_provider = "minimax_local_vllm"
```

Then validate it exactly the same way:
```shell
export OPENAI_API_KEY="dummy-or-local-token-if-needed"
codex --profile m27-local
codex exec --profile m27-local "Reply with exactly OK."
ds doctor --codex-profile m27-local
ds --codex-profile m27-local
```

Official Coding Plan path:
```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: m25
  model: inherit
  model_reasoning_effort: high
```

Local vLLM M2.7 path:
```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: m27-local
  model: inherit
  model_reasoning_effort: high
```

Official docs:
Official values from current public guidance:

- Base URL: `https://open.bigmodel.cn/api/coding/paas/v4`
- Model: `GLM-4.7` or another currently documented Coding Plan model
Recommended workflow:

- add a GLM provider block in `~/.codex/config.toml`
- add a profile such as `[profiles.glm]`
- run `codex --profile glm`
- run `codex exec --profile glm "Reply with exactly OK."`
- run `ds doctor --codex-profile glm`
- run `ds --codex-profile glm`
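The first two workflow steps might look like the following in `~/.codex/config.toml`, mirroring the provider shape used earlier in this guide. Only the Base URL and model id come from the values above; the block names and the env var name `GLM_API_KEY` are illustrative assumptions:

```toml
# Sketch only - the env var name GLM_API_KEY is an assumption, not an official value.
[model_providers.glm]
name = "GLM Coding Plan"
base_url = "https://open.bigmodel.cn/api/coding/paas/v4"
env_key = "GLM_API_KEY"
wire_api = "chat"
requires_openai_auth = false

[profiles.glm]
model = "GLM-4.7"
model_provider = "glm"
```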
```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: glm
  model: inherit
```

Official doc:
Official values from current public guidance:

- Base URL: `https://ark.cn-beijing.volces.com/api/coding/v3`
- Models: `doubao-seed-code-preview-latest`, `ark-code-latest`
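An Ark provider block could follow the same shape as the other providers in this guide. Only the Base URL and model ids come from the values above; the block names and the env var name `ARK_API_KEY` are illustrative assumptions:

```toml
# Sketch only - the env var name ARK_API_KEY is an assumption, not an official value.
[model_providers.ark]
name = "Ark Coding Plan"
base_url = "https://ark.cn-beijing.volces.com/api/coding/v3"
env_key = "ARK_API_KEY"
wire_api = "chat"
requires_openai_auth = false

[profiles.ark]
model = "doubao-seed-code-preview-latest"
model_provider = "ark"
```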
Recommended workflow:

```shell
codex --profile ark
codex exec --profile ark "Reply with exactly OK."
ds doctor --codex-profile ark
ds --codex-profile ark
```

```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: ark
  model: inherit
```

Official docs:
- https://help.aliyun.com/zh/model-studio/other-tools-coding-plan
- https://help.aliyun.com/zh/model-studio/coding-plan-faq
Important:
- supported: Qwen through the Bailian Coding Plan endpoint
- not supported here: the generic Bailian / DashScope Qwen platform API
Official values from current public guidance:
- Base URL: `https://coding.dashscope.aliyuncs.com/v1`
- key shape: Coding Plan-specific key, usually `sk-sp-...`
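A Bailian provider block could follow the same shape as the other providers in this guide. Only the Base URL comes from the values above; the block names, the env var name `BAILIAN_API_KEY`, and the model id placeholder are illustrative assumptions - substitute the Coding Plan model your plan actually documents:

```toml
# Sketch only - BAILIAN_API_KEY and the model id placeholder are assumptions.
[model_providers.bailian]
name = "Bailian Coding Plan"
base_url = "https://coding.dashscope.aliyuncs.com/v1"
env_key = "BAILIAN_API_KEY"
wire_api = "chat"
requires_openai_auth = false

[profiles.bailian]
model = "your-coding-plan-qwen-model-id"
model_provider = "bailian"
```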
Recommended workflow:

```shell
codex --profile bailian
codex exec --profile bailian "Reply with exactly OK."
ds doctor --codex-profile bailian
ds --codex-profile bailian
```

```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: bailian
  model: inherit
```

If a provider-backed profile still fails:
- check `which codex` and `codex --version`
- inspect `~/.codex/config.toml`
- verify the provider block exists and the profile points to it
- verify the API key or bearer token is actually available
- verify the Base URL is the Coding Plan or Codex-compatible endpoint, not a generic platform endpoint
- run `codex --profile <name>` first
- run `codex exec --profile <name> "Reply with exactly OK."`
- run `ds doctor --codex-profile <name>`
- only then run `ds --codex-profile <name>`
If `codex --profile <name>` fails but you believe the provider config is correct, fix Codex first. DeepScientist should not be the first place you debug provider auth.