Update Max response setting description with current token limits for Azure OpenAI models #1452


Open · wants to merge 2 commits into base: main

Conversation

@Copilot Copilot AI commented Jun 20, 2025

Summary

Updates the Settings table in the IntelliJ ChatGPT integration documentation to replace outdated token limit information with current, accurate details for Azure OpenAI models.

Changes Made

  • Fixed outdated information: Replaced the reference to "maximum of 4096 tokens" with current token limits
  • Added model-specific limits: Included specific token limits for different GPT model variants:
    • gpt-35-turbo-1106 / 0125 – up to 16,385 tokens
    • gpt-35-turbo-16k-0613 – up to 16,385 tokens
    • gpt-4-turbo-2024-04-09 – up to 128,000 tokens
    • gpt-4o-2024-05-13 – up to 128,000 tokens
  • Added reference to authoritative documentation: Points users to "Model summary table and region availability" for the most up-to-date limits
  • Enhanced user guidance: Added advice to ensure prompt and completion fit within the model's context window
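To make the new guidance concrete, here is a minimal, hypothetical sketch in Java (the toolkit's language) of checking that a prompt plus the Max response setting fits within a model's context window. The map keys and limits mirror the table above; the prompt token count is assumed to come from a real tokenizer, which this sketch does not include.

```java
import java.util.Map;

public class ContextWindowCheck {
    // Context window sizes (in tokens) from the updated table; verify against
    // "Model summary table and region availability" for current values.
    static final Map<String, Integer> CONTEXT_WINDOWS = Map.of(
        "gpt-35-turbo-0125", 16_385,
        "gpt-35-turbo-16k-0613", 16_385,
        "gpt-4-turbo-2024-04-09", 128_000,
        "gpt-4o-2024-05-13", 128_000
    );

    // Returns true if the prompt plus the requested Max response fits in the
    // model's context window. promptTokens must come from a real tokenizer.
    static boolean fits(String model, int promptTokens, int maxResponse) {
        Integer window = CONTEXT_WINDOWS.get(model);
        if (window == null) {
            throw new IllegalArgumentException("Unknown model: " + model);
        }
        return promptTokens + maxResponse <= window;
    }

    public static void main(String[] args) {
        System.out.println(fits("gpt-4o-2024-05-13", 100_000, 20_000)); // true
        System.out.println(fits("gpt-35-turbo-0125", 15_000, 2_000));   // false
    }
}
```

This illustrates why the old "maximum of 4096 tokens" wording was misleading: a configured Max response that is valid for a 128K-context model would far exceed the limit the previous text implied.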

Impact

This update ensures developers using the Azure Toolkit for IntelliJ have accurate, current information about token limits when configuring their Azure OpenAI Service integrations. The previous 4096 token limit was significantly outdated and could mislead users about the capabilities of newer models.

Files Changed

  • articles/java/toolkit-for-intellij/chatgpt-intellij.md - Updated Settings table Max response description

The change is minimal and surgical, affecting only the specific row that contained outdated information while preserving all other content and table formatting.




Learn Build status updates of commit 77afa28:

✅ Validation status: passed

For more details, please refer to the build report.

@Copilot Copilot AI changed the title [WIP] Correct "Max response" token limit info in Settings table to reflect current model limits Update Max response setting description with current token limits for Azure OpenAI models Jun 20, 2025
@Copilot Copilot AI requested a review from guygregory June 20, 2025 08:58
Copilot finished work on behalf of guygregory June 20, 2025 08:58

Learn Build status updates of commit 9885aa2:

✅ Validation status: passed

File status:
  • articles/java/toolkit-for-intellij/chatgpt-intellij.md – ✅ Succeeded

For more details, please refer to the build report.

@v-regandowner v-regandowner marked this pull request as ready for review June 20, 2025 14:21
@v-regandowner
Contributor

@KarlErickson - Can you review the proposed changes?

IMPORTANT: When the changes are ready for publication, adding a #sign-off comment is the best way to signal that the PR is ready for the review team to merge.

#label:"aq-pr-triaged"
@MicrosoftDocs/public-repo-pr-review-team

@prmerger-automator prmerger-automator bot added the aq-pr-triaged tracking label for the PR review team label Jun 20, 2025
@KarlErickson
Contributor

@silenceJialuo can you please review? Thanks!

@v-dirichards
Contributor

@silenceJialuo - Could you review this proposed update to your article and enter #sign-off in a comment if it's ready to merge?

Thanks.

@KarlErickson
Contributor

@silenceJialuo ping

Labels
aq-pr-triaged (tracking label for the PR review team) · do-not-merge · review-team-triage
6 participants