nes: support /models on proxy and model picker #2266
base: main
Conversation
I want to factor `InlineEditFeature`'s code into reusable pieces so that a `SingularCompletionsProvider` can reuse them.
Force-pushed from `98e979e` to `be6925a`
Pull request overview
This PR adds support for fetching available models from a /models endpoint on the proxy server and implements a model picker UI for inline edits. The implementation includes:
- A new `InlineEditsModelService` that fetches and manages the list of available models from the proxy endpoint
- Integration of the model service into the inline completion provider to expose the model selection UI
- Refactoring of `XtabProvider` to use the centralized model service instead of duplicated config-parsing logic
- A new joint completions provider that unifies the completions and inline edits providers
- Configuration updates to support model preferences and enable the feature via experiment flags
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| src/platform/inlineEdits/node/inlineEditsModelService.ts | New service that fetches models from /models endpoint and manages model selection |
| src/platform/inlineEdits/common/inlineEditsModelService.ts | Service interface defining the model service contract |
| src/platform/inlineEdits/common/dataTypes/inlineEditsModelsTypes.ts | Wire type definitions for the /models API response |
| src/platform/inlineEdits/common/dataTypes/xtabPromptOptions.ts | Added type guard function for validating prompting strategy values |
| src/platform/configuration/common/configurationService.ts | Added new config keys for model picker and preferred model settings |
| src/extension/xtab/node/xtabProvider.ts | Refactored to use the new model service instead of duplicating config parsing logic |
| src/extension/test/vscode-node/services.ts | Registered model service in test service collection |
| src/extension/test/node/services.ts | Registered model service in unit test service collection |
| src/extension/inlineEdits/vscode-node/jointInlineCompletionProvider.ts | New unified provider that combines completions and inline edits with fallback logic |
| src/extension/inlineEdits/vscode-node/inlineEditProviderFeature.ts | Refactored to extract reusable logic for joint provider |
| src/extension/inlineEdits/vscode-node/inlineCompletionProvider.ts | Extended to support model picker UI via VS Code API |
| src/extension/extension/vscode-node/services.ts | Registered model service in main service collection |
| src/extension/extension/vscode-node/contributions.ts | Replaced separate contributions with joint provider contribution |
| src/extension/completions-core/vscode-node/extension/src/inlineCompletion.ts | Added undefined handlers for compatibility with joint provider |
| package.nls.json | Added localization string for preferred model setting |
| package.json | Added configuration for preferred model setting |
Comments suppressed due to low confidence (1)
src/platform/inlineEdits/node/inlineEditsModelService.ts:1
- The `IN_TESTING` flag is hardcoded to `false`. This should be removed or controlled via a proper configuration/environment variable to avoid accidentally leaving test code paths active in production. Consider using an experiment flag or environment variable instead.
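A hedged sketch of what the reviewer suggests: drive the flag from an environment variable instead of hardcoding it. The variable name `COPILOT_NES_TESTING` and the staging path are illustrative assumptions, not names from the codebase.

```typescript
// Illustrative only: COPILOT_NES_TESTING is an assumed variable name,
// not an existing flag in the codebase.
const IN_TESTING: boolean = process.env.COPILOT_NES_TESTING === '1';

function modelsEndpointPath(): string {
	// A test deployment could point at a hypothetical staging route;
	// production keeps using /models.
	return IN_TESTING ? '/models-staging' : '/models';
}
```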
```ts
try {
	r = await this._fetchService.fetch(url, {
		headers: {
			'Authorization': copilotToken.token,
```
Copilot AI · Nov 28, 2025
The Authorization header is missing the `Bearer ` prefix. Based on the codebase pattern (see `src/platform/networking/common/networking.ts:314` and other examples), the header should be ``'Authorization': `Bearer ${copilotToken.token}` `` to follow the OAuth 2.0 Bearer token standard.
```diff
- 'Authorization': copilotToken.token,
+ 'Authorization': `Bearer ${copilotToken.token}`,
```
```ts
let parsedConfig: ModelConfiguration | undefined;
try {
	parsedConfig = JSON.parse(configString);
	// FIXME@ulugbekna: validate parsedConfig structure
```
Copilot AI · Nov 28, 2025
The parsed model configuration from JSON is not validated against the expected structure. Use the existing `MODEL_CONFIGURATION_VALIDATOR` from `xtabPromptOptions.ts` to validate the parsed config before using it, similar to how the non-string version is validated.
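To illustrate the point, here is a minimal sketch of validating a JSON-parsed config before trusting it. The `ModelConfiguration` shape and the guard are assumptions for illustration, not the repo's actual `MODEL_CONFIGURATION_VALIDATOR`.

```typescript
// Assumed, simplified config shape for illustration.
interface ModelConfiguration {
	modelName: string;
	promptingStrategy?: string;
}

// Minimal structural guard standing in for a real validator.
function isModelConfiguration(value: unknown): value is ModelConfiguration {
	return typeof value === 'object' && value !== null
		&& typeof (value as Record<string, unknown>).modelName === 'string';
}

function parseModelConfiguration(configString: string): ModelConfiguration | undefined {
	let parsed: unknown;
	try {
		parsed = JSON.parse(configString);
	} catch {
		return undefined; // malformed JSON falls back to undefined
	}
	// Validate the shape before trusting the parsed value.
	return isModelConfiguration(parsed) ? parsed : undefined;
}
```

The key detail is typing the `JSON.parse` result as `unknown` so the compiler forces the validation step before any property access.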
| } | ||
|
|
||
| export function isPromptingStrategy(value: string): value is PromptingStrategy { | ||
| return Object.values(PromptingStrategy).includes(value as PromptingStrategy); |
Copilot AI · Nov 28, 2025
The type narrowing uses a cast (`value as PromptingStrategy`) before checking whether it's included. While this works, it's safer to check without the cast: `return (Object.values(PromptingStrategy) as string[]).includes(value);` avoids potential type confusion.
```diff
- return Object.values(PromptingStrategy).includes(value as PromptingStrategy);
+ return (Object.values(PromptingStrategy) as string[]).includes(value);
```
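A self-contained sketch of the suggested cast-free guard. The enum members here are illustrative, not the actual `PromptingStrategy` values from the repo.

```typescript
// Illustrative enum; the real PromptingStrategy members differ.
enum PromptingStrategy {
	Xtab = 'xtab',
	Simplified = 'simplified',
}

function isPromptingStrategy(value: string): value is PromptingStrategy {
	// Widening the enum values to string[] lets us test membership
	// without casting the untrusted input to the enum type.
	return (Object.values(PromptingStrategy) as string[]).includes(value);
}
```

The cast moves from the untrusted input to the known-good enum values, so a future refactor of the enum cannot silently hide a mismatched input type.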
```ts
export const InlineEditsNextCursorPredictionDisplayLine = defineAndMigrateExpSetting<boolean>('chat.advanced.inlineEdits.nextCursorPrediction.displayLine', 'chat.inlineEdits.nextCursorPrediction.displayLine', true);
export const InlineEditsNextCursorPredictionCurrentFileMaxTokens = defineAndMigrateExpSetting<number>('chat.advanced.inlineEdits.nextCursorPrediction.currentFileMaxTokens', 'chat.inlineEdits.nextCursorPrediction.currentFileMaxTokens', xtabPromptOptions.DEFAULT_OPTIONS.currentFile.maxTokens);
export const InlineEditsRenameSymbolSuggestions = defineSetting<boolean>('chat.inlineEdits.renameSymbolSuggestions', ConfigType.ExperimentBased, { defaultValue: false, teamDefaultValue: true });
export const InlineEditsPreferredModel = defineSetting<string | "none">('nextEditSuggestions.preferredModel', ConfigType.ExperimentBased, "none");
```
Copilot AI · Nov 28, 2025
The configuration key `nextEditSuggestions.preferredModel` is inconsistent with the namespace convention. Other inline edits settings use `chat.inlineEdits.*` or `chat.advanced.inlineEdits.*` prefixes. This should be `chat.inlineEdits.preferredModel` or similar to match the existing pattern.
```diff
- export const InlineEditsPreferredModel = defineSetting<string | "none">('nextEditSuggestions.preferredModel', ConfigType.ExperimentBased, "none");
+ export const InlineEditsPreferredModel = defineSetting<string | "none">('chat.inlineEdits.preferredModel', ConfigType.ExperimentBased, "none");
```
```jsonc
			"onExp"
		]
	},
	"github.copilot.nextEditSuggestions.preferredModel": {
```
Copilot AI · Nov 28, 2025
The package.json configuration key `github.copilot.nextEditSuggestions.preferredModel` is inconsistent with other inline edits settings. Other settings use the `github.copilot.chat.inlineEdits.*` prefix. This should match the pattern for consistency.
```diff
- "github.copilot.nextEditSuggestions.preferredModel": {
+ "github.copilot.chat.inlineEdits.nextEditSuggestions.preferredModel": {
```
```ts
try {
	const jsonData: unknown = await r.json();
	if (!WireTypes.ModelList.is(jsonData)) {
		throw new Error('Invalid model list response'); // TODO@ulugbekna: add telemetry
```
Copilot AI · Nov 28, 2025
Missing telemetry for invalid model list responses. This error case should send telemetry similar to the `incorrectNesModelConfig` event to track API format issues or potential attacks.
```diff
- throw new Error('Invalid model list response'); // TODO@ulugbekna: add telemetry
+ /* __GDPR__
+ 	"invalidModelListResponse" : {
+ 		"owner": "ulugbekna",
+ 		"comment": "Capture if /models API returns an invalid response format.",
+ 		"errorMessage": { "classification": "SystemMetaData", "purpose": "FeatureInsight", "comment": "Error message for invalid model list response." },
+ 		"responseValue": { "classification": "SystemMetaData", "purpose": "FeatureInsight", "comment": "The invalid response received from the /models endpoint, stringified." }
+ 	}
+ */
+ this._telemetryService.sendMSFTTelemetryEvent('invalidModelListResponse', {
+ 	errorMessage: 'Invalid model list response',
+ 	responseValue: JSON.stringify(jsonData)
+ });
+ throw new Error('Invalid model list response');
```
| } | ||
| }); | ||
|
|
||
| // FIXME@ulugbekna: should I manually run it once here? |
Copilot AI · Nov 28, 2025
This FIXME comment suggests uncertainty about the implementation. Either the manual run at lines 139-145 is necessary and the comment should explain why, or it's not necessary and should be removed. The comment should be resolved before merging.
```diff
- // FIXME@ulugbekna: should I manually run it once here?
+ // Manually run once to ensure model state is initialized immediately,
+ // as autorun will only trigger on observable changes after this point.
```
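A generic sketch (not the repo's observable API) of why a manual initial run can be needed: if a subscription only fires on future changes, it never fires for the current value, so state stays uninitialized until the first change. All names below are illustrative.

```typescript
type Listener = () => void;

// Minimal change-only emitter standing in for a real observable.
class ConfigWatcher {
	private readonly listeners: Listener[] = [];
	onDidChange(listener: Listener): void { this.listeners.push(listener); }
	fire(): void { for (const l of this.listeners) { l(); } }
}

let currentModel: string | undefined;
const watcher = new ConfigWatcher();
const refresh = () => { currentModel = 'model-a'; };

// Subscribe for future changes...
watcher.onDidChange(refresh);
// ...and run once manually: a change-only subscription never fires for
// the current value, so without this call currentModel stays undefined.
refresh();
```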
```ts
if (preferredModelName !== 'none' && models.some(m => m.modelName === preferredModelName)) {
	this._currentModelId = preferredModelName;
} else if (models.length > 0) {
	this._currentModelId = models[0].modelName; // FIXME@ulugbekna: I think the first entry should be modelConfigurationString maybe
```
Copilot AI · Nov 28, 2025
This FIXME suggests the model selection priority may be incorrect. The comment indicates that `modelConfigurationString` should take precedence over the first model in the list. This should be clarified and implemented correctly, or the FIXME resolved.
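The precedence the comment gestures at could be sketched as a pure function: an explicit configuration string (if present) wins, then the user's preferred model when the server actually lists it, then the first fetched model. Names and the selection order are illustrative, not the PR's confirmed behavior.

```typescript
interface ModelInfo { modelName: string }

function pickModel(
	models: ModelInfo[],
	preferredModelName: string,
	configuredModelName?: string,
): string | undefined {
	if (configuredModelName !== undefined) {
		return configuredModelName; // an explicit config string wins
	}
	if (preferredModelName !== 'none' && models.some(m => m.modelName === preferredModelName)) {
		return preferredModelName; // user's preferred model, if the server lists it
	}
	return models.length > 0 ? models[0].modelName : undefined; // fall back to first listed model
}
```

Factoring the priority into one function also makes the FIXME testable instead of implicit in assignment order.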
```ts
return {
	modelName: m.name,
	promptingStrategy: m.capabilities.promptStrategy,
	includeTagsInCurrentFile: false, // FIXME@ulugbekna: determine this based on model capabilities and config
```
Copilot AI · Nov 28, 2025
The `includeTagsInCurrentFile` property is hardcoded to `false` for all fetched models. The FIXME comment indicates this should be determined from model capabilities in the response. Consider adding this capability field to `WireTypes.Capabilities.t` or documenting why the hardcoded value is acceptable.
```diff
- includeTagsInCurrentFile: false, // FIXME@ulugbekna: determine this based on model capabilities and config
+ includeTagsInCurrentFile: !!m.capabilities.includeTagsInCurrentFile,
```
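A sketch of the suggested mapping, under the assumption that the wire format gains an optional `includeTagsInCurrentFile` capability (the field is the reviewer's proposal, not a confirmed part of the /models response):

```typescript
// Simplified stand-in for the wire capabilities type; the optional field
// is the reviewer's proposed addition, absent on endpoints that predate it.
interface WireCapabilities {
	promptStrategy: string;
	includeTagsInCurrentFile?: boolean;
}

function toModelOptions(name: string, caps: WireCapabilities) {
	return {
		modelName: name,
		promptingStrategy: caps.promptStrategy,
		// Coerce the optional wire field; an omitted value means false,
		// which preserves today's hardcoded behavior for older servers.
		includeTagsInCurrentFile: !!caps.includeTagsInCurrentFile,
	};
}
```

Defaulting an absent field to `false` keeps the change backward compatible while letting newer servers opt models in.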
No description provided.