🤖 Copilot PR Prompt Pattern Analysis - 2026-01-04
Summary
Analysis Period: Last 30 days
Total PRs Analyzed: 998 (completed) | Merged: 754 (75.6%) | Closed: 244 (24.4%) | Still Open: 2
Prompt Categories and Success Rates
Note: PRs can belong to multiple categories, so totals exceed 998.
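Categorization appears to be keyword-driven (see the Recommendations below), which is why a single PR can count toward more than one category. The sketch below illustrates such multi-label matching; the category names and keyword lists are assumptions for demonstration, not the report's actual definitions.

```python
# Illustrative multi-label categorization by keyword matching.
# Category names and keywords here are assumptions for demonstration,
# not the report's actual category definitions.
CATEGORY_KEYWORDS = {
    "test": ("test", "coverage", "flaky"),
    "security": ("security", "vulnerability", "cve"),
    "bug fix": ("fix", "bug", "regression"),
}

def categorize(prompt: str) -> list[str]:
    """Return every category whose keywords appear in the prompt text."""
    text = prompt.lower()
    return [name for name, words in CATEGORY_KEYWORDS.items()
            if any(word in text for word in words)]

# One prompt can match several categories, which is why the
# per-category totals add up to more than the 998 PRs analyzed.
print(categorize("Add a test for the security check in pkg/workflow/validation.go"))
# -> ['test', 'security']
```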
Prompt Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
Standout characteristics:
Example successful prompts:
PR #8100 - "Sync Sentry MCP server version reference with package.json" → Merged ✅
Why it worked: Specific file references, clear problem statement, focused scope
Preview: "Dependabot bumped @sentry/mcp-server from 0.24.0 to 0.26.0 in package.json, but the shared MCP configuration still referenced the old version..."
PR #7981 - "chore: fix formatting issues in devcontainer and test files" → Merged ✅
Preview: "...make lint. Changes: Consolidated VSCode extension array in .devcontainer/devcontainer.json..."
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
Example unsuccessful prompts:
PR #6926 - "[wip] test sample workflow" → Closed ❌
PR #7441 - "[WIP] Investigate failure in action run" → Closed ❌
Why it failed: No code references, no file references, investigative rather than action-oriented
Preview: "Thanks for asking me to work on this. I will get started on it and keep this PR's description up to date..."
Key Insights
Based on analysis of 998 completed PRs:
🎯 Code References Matter Most: PRs with code blocks or inline code references have a 13.7 percentage point higher success rate (99.7% vs 86.1%). Including actual code examples or referencing specific code patterns dramatically improves merge likelihood (the sketch after this list shows one way to compute this comparison).
📁 Specific File References Help: PRs mentioning specific files or paths have a 4.9 percentage point advantage (83.6% vs 78.7%). Being explicit about which files need changes provides clear scope and direction.
🎪 Test & Security Prompts Excel: Test-related prompts achieve 77.4% success rate, and security-focused prompts reach 76.6%. These categories likely benefit from clear, measurable objectives and well-defined acceptance criteria.
📏 Brevity vs Context Balance: Merged PRs average 453 words while closed PRs average 481 words. This suggests that more words don't equal better outcomes—concise, focused prompts with relevant technical details outperform verbose, unfocused descriptions.
🔤 Generic Prompts Underperform: Uncategorized prompts (those lacking clear technical keywords) have the lowest success rate at 66.7%, suggesting that explicit framing of the task type helps Copilot generate more appropriate solutions.
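These comparisons are easy to reproduce once each PR's prompt text and outcome are in hand. The sketch below is a minimal illustration, assuming each PR is represented as a dict with a "body" string and a boolean "merged" flag, and using backtick spans as a rough proxy for code references; the report's exact detection rules are not shown here.

```python
import re
from statistics import mean

def has_code_reference(body: str) -> bool:
    # Heuristic: any backtick-delimited span (inline code or a fenced block).
    return re.search(r"`[^`]+`", body) is not None

def summarize(prs: list[dict]) -> None:
    """Print merge rates by code-reference presence and average word counts."""
    with_refs = [pr for pr in prs if has_code_reference(pr["body"])]
    without_refs = [pr for pr in prs if not has_code_reference(pr["body"])]
    for label, group in (("with code refs", with_refs), ("without code refs", without_refs)):
        if group:
            rate = 100 * sum(pr["merged"] for pr in group) / len(group)
            print(f"{label}: {rate:.1f}% merged across {len(group)} PRs")
    merged_words = [len(pr["body"].split()) for pr in prs if pr["merged"]]
    closed_words = [len(pr["body"].split()) for pr in prs if not pr["merged"]]
    if merged_words and closed_words:
        print(f"average words: merged {mean(merged_words):.0f}, closed {mean(closed_words):.0f}")
```

The same grouping approach extends to the file-reference comparison by swapping in a path-matching heuristic for the code-reference check.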
Recommendations
Based on today's analysis:
✅ DO:
Include code examples or inline code references - Use backticks for functions, variables, or code snippets; PRs whose prompts include code references merged at a 99.7% rate (see the preflight sketch after the AVOID list).
Mention specific files and paths - Reference exact filenames like pkg/workflow/validation.go or .github/workflows/test.yml to provide clear scope.
Frame tasks in clear categories - Use keywords like "fix bug in...", "add test for...", or "implement security check..." to help categorize the work.
Keep prompts focused and actionable - Aim for 300-500 words with clear objectives rather than lengthy, exploratory descriptions.
Include technical context - Reference error messages, test failures, or specific behaviors to provide concrete anchors for the implementation.
❌ AVOID:
Minimal descriptions - Three-word prompts like "Testing only" provide insufficient context for successful implementation.
Vague investigative requests - Prompts like "Investigate failure" without specific details, code references, or file mentions struggle to produce mergeable changes.
Generic improvement requests - Broad asks like "improve performance" without specific targets, metrics, or file references lead to unfocused changes.
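One way to act on the DO and AVOID lists is a quick preflight check of a draft prompt before assigning it to Copilot. This is a minimal sketch under stated assumptions: backtick spans stand in for code references, a simple path regex for file mentions, and the keyword and length checks mirror the guidance above; the draft prompt itself is hypothetical, not taken from the analyzed PRs.

```python
import re

# Hypothetical draft prompt, used only to illustrate the checks.
DRAFT_PROMPT = """Fix the timeout validation bug in `pkg/workflow/validation.go`:
the check rejects values over 60 even though `.github/workflows/test.yml` sets a
90-minute timeout. Add a regression test covering the 90-minute case and keep the
existing error message format."""

def preflight(prompt: str) -> dict[str, bool]:
    """Check a draft prompt against the DO recommendations above."""
    word_count = len(prompt.split())
    return {
        "has a code reference (backticks)": bool(re.search(r"`[^`]+`", prompt)),
        "mentions a specific file path": bool(
            re.search(r"[\w./-]+\.(go|yml|yaml|ts|js|py|json)\b", prompt)),
        "names a task category": any(
            kw in prompt.lower() for kw in ("fix", "add test", "test for", "implement", "security")),
        # Not a three-word stub, not an essay; the report's sweet spot is roughly 300-500 words.
        "focused length": 20 <= word_count <= 500,
    }

for check, ok in preflight(DRAFT_PROMPT).items():
    print(f"{'PASS' if ok else 'FAIL'}: {check}")
```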
Historical Trends
Date | PRs | Success Rate | Top Category
2026-01-04 | 998 | 75.6% | Test (77.4%)
Trend: First day of tracking—establishing baseline. Future reports will show week-over-week trends.
References: