Problem Description
The current `opencode models github-copilot --verbose` output does not expose a `github-copilot/gpt-5-nano` model, but the repository still contains stale references to `github-copilot/gpt-5-nano` in fallback expectations and snapshots.
This creates confusion when reviewing or debugging GitHub Copilot model selection because snapshot-based or fallback-based expectations do not match the current catalog output.
Proposed Solution
Clean up stale `github-copilot/gpt-5-nano` references and align GitHub Copilot-specific expectations with the current `opencode models github-copilot --verbose` output.
Suggested scope:
- update stale GitHub Copilot nano references in snapshots and tests
- review fallback expectations that still assume `github-copilot/gpt-5-nano`
- update any user-facing docs or examples if they imply GitHub Copilot still exposes that model
- keep provider-specific cleanup scoped so unrelated `openai/gpt-5-nano` or `opencode/gpt-5-nano` references are only changed if they are also stale
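To illustrate the fallback cleanup, here is a minimal sketch of what a catalog-aware pruning step could look like. The function name and the fallback list are illustrative, not taken from the repo; the two model ids kept are ones the catalog output below actually shows.

```typescript
// Hypothetical Copilot fallback chain; the real list lives in the
// repo's model-selection code. The retired nano entry is dropped.
const copilotFallbacks: string[] = [
  "github-copilot/gpt-5-mini",
  "github-copilot/gpt-5.1",
  // "github-copilot/gpt-5-nano", // stale: absent from --verbose output
];

// Keep only fallback ids that still appear in the live catalog,
// so expectations cannot drift silently when models are retired.
function pruneStale(fallbacks: string[], catalog: Set<string>): string[] {
  return fallbacks.filter((id) => catalog.has(id));
}
```

A check like this could also run inside the fallback tests themselves, so a retired model id fails loudly instead of lingering in a snapshot.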
Feature Type
Other
Alternatives Considered
- leave the stale references in place and treat them as historical expectations
- fix them opportunistically in unrelated PRs
Both options make future debugging harder and increase the chance of catalog drift going unnoticed.
Additional Context
Evidence from current catalog output:
- `opencode models github-copilot --verbose` shows GPT-5 entries such as `gpt-5-mini`, `gpt-5.1`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.3-codex`, `gpt-5.4`, and `gpt-5.4-mini`
- no `github-copilot/gpt-5-nano` entry appears in that output
Examples of stale references observed during the Copilot variant alignment work:
- `src/cli/__snapshots__/model-fallback.test.ts.snap`
- related fallback expectations or docs that still mention `github-copilot/gpt-5-nano`
Contribution