Copilot PR Prompt Analysis - 2025-12-26 #7728
/q update agentic workflow to maintain an issue template to create issues that will work well with copilot. Use safe output create-pull-request title prefix "[ca] ", draft: true. Only update template if needed.
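For reference, the requested safe-output settings would sit in the agentic workflow's frontmatter along these lines. This is a minimal sketch based on the options named in the command above and gh-aw's `safe-outputs` conventions; the exact key names (e.g. `title-prefix`) are assumptions and should be checked against the gh-aw documentation.

```yaml
# Sketch of the relevant frontmatter section of the agentic workflow.
# Key names are assumed from the /q command above; verify before relying on them.
safe-outputs:
  create-pull-request:
    title-prefix: "[ca] "   # prefix requested in the command
    draft: true             # open the PR as a draft
```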
🤖 Copilot PR Prompt Pattern Analysis - 2025-12-26
Summary
Analysis Period: Last 30 days
Total PRs: 1000 | Merged: 734 (73.4%) | Closed: 260 (26.0%) | Open: 6
Overall Success Rate: 73.8% (merged / completed)
Prompt Categories and Success Rates
*Success Rate = Merged / (Merged + Closed)
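The overall figure in the summary follows directly from this definition; a quick check using only the counts reported above (plain arithmetic, no other assumptions):

```python
merged, closed, open_prs = 734, 260, 6   # counts from the summary above
completed = merged + closed              # open PRs are excluded
success_rate = merged / completed        # 734 / 994
print(f"{success_rate:.1%}")             # -> 73.8%
```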
Prompt Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
Most common keywords in merged PRs:
includes_url (335 occurrences), go (304), includes_code (296), add (260), update (259), test (244), resolve (209), fix (205)
Example successful prompts:
#7700: Merged main branch into the security fix PR while preserving the backslash escaping fix for CodeQL Alert #83. - **Merge ... → ✅ Merged
#7696: Fix is tests → ✅ Merged
#7695: Added comprehensive test coverage for the incomplete string escaping vulnerability (Alert #83, CWE-116) in `formatBashCo... → ✅ Merged
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
Most common keywords in closed PRs:
includes_url (142 occurrences), go (135), add (123), resolve (120), includes_code (114), update (92), md (91), test (89)
Example unsuccessful prompts:
#7702: Fix tests → ❌ Closed
#7701: Thanks for assigning this issue to me. I'm starting to work on it and will keep this PR's description up to date as I fo... → ❌ Closed
#7677: > ---- → ❌ Closed
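The keyword frequencies listed above can be reproduced with a simple token count over the prompt texts. A rough sketch follows; the tokenizer, and the synthetic `includes_url` / `includes_code` flags counted once per prompt, are assumptions, since the report does not show its exact extraction rules:

```python
import re
from collections import Counter

def keyword_counts(prompts: list[str]) -> Counter:
    """Count keyword occurrences across a set of PR prompt texts."""
    counts = Counter()
    for text in prompts:
        # Synthetic flags, counted once per prompt (assumed convention).
        if re.search(r"https?://", text):
            counts["includes_url"] += 1
        if "```" in text:
            counts["includes_code"] += 1
        # Plain word tokens, lower-cased.
        counts.update(re.findall(r"[a-z_]+", text.lower()))
    return counts

# Usage: compare merged vs. closed prompt sets.
# merged_counts = keyword_counts(merged_prompts)
# closed_counts = keyword_counts(closed_prompts)
# print(merged_counts.most_common(8))
```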
Key Insights
📊 Pattern Analysis
Prompt Length Paradox: Closed PRs have LONGER prompts (avg 165 words) than merged PRs (avg 125 words), suggesting conciseness correlates with success.
Category Success Hierarchy:
URL References: 54.6% of closed PRs include URLs vs 45.6% of merged PRs - URLs are associated with MORE complex/problematic tasks.
Specificity Score: Closed PRs have HIGHER specificity (4.1) vs merged (3.3), indicating over-specification or attempting overly complex changes.
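The length and specificity figures above are simple per-prompt metrics. A sketch of how they might be computed is below; the specificity heuristic is hypothetical, since the report does not define its scoring rules:

```python
import re
from statistics import mean

def word_count(prompt: str) -> int:
    return len(prompt.split())

def specificity_score(prompt: str) -> int:
    """Hypothetical heuristic: one point per concrete reference in the prompt."""
    score = 0
    score += len(re.findall(r"\S+\.(?:go|md|yml|yaml|ts|js)\b", prompt))  # file names
    score += len(re.findall(r"#\d+", prompt))                             # issue/PR numbers
    score += len(re.findall(r"`[^`]+`", prompt))                          # inline code spans
    score += len(re.findall(r"https?://\S+", prompt))                     # URLs
    return score

# Usage: average over each outcome group.
# print(mean(word_count(p) for p in merged_prompts))        # ~125 in this report
# print(mean(specificity_score(p) for p in closed_prompts)) # ~4.1 in this report
```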
💡 Success Factors
Testing prompts perform best (72.3% success) - clear, focused scope makes them easier to implement correctly.
Bug fixes are moderately successful (71.4% success) - benefit from error context but may involve complex debugging.
Generic "Other" prompts struggle (76.3% success) - lack of clear categorization suggests ambiguous requirements.
Recommendations
Based on today's analysis:
✅ DO:
Keep prompts concise - Aim for ~125 words or less. Merged PRs average 125 words vs 165 words for closed PRs.
Focus on testing and bug fixes - These categories have the highest success rates (71.7% and 71.4% respectively).
Be specific about scope - Reference specific files when appropriate, but avoid over-specifying implementation details.
Include error context for bug fixes - 20% of merged PRs include error messages, helping provide clear reproduction steps.
❌ DON'T:
Overly long explanations - More words don't equal better results. Closed PRs are 31% longer on average.
Vague or uncategorizable requests - "Other" category has lower success rate. Be explicit about the type of change.
Over-specification - Higher specificity scores in closed PRs suggest micromanaging implementation reduces success.
Complex multi-part changes - Break large changes into smaller, focused PRs.
Historical Trends
Historical trend tracking will be available after multiple daily runs.
Methodology: Analyzed 1000 Copilot-generated PRs from the last 30 days in the githubnext/gh-aw repository. Prompts were extracted from PR bodies, categorized by intent, and correlated with merge outcomes.
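The pipeline described above could be reproduced roughly as follows, pulling recent PRs from the GitHub REST API and feeding the prompt texts into the helpers sketched earlier. This is an illustrative outline, not the workflow's actual implementation: the endpoint and field names are standard GitHub API, but authentication, pagination limits, and the filter for Copilot-authored PRs are simplified or omitted.

```python
import requests
from datetime import datetime, timedelta, timezone

REPO = "githubnext/gh-aw"
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

merged_prompts, closed_prompts = [], []
url = f"https://api.github.com/repos/{REPO}/pulls"
params = {"state": "all", "per_page": 100, "sort": "created", "direction": "desc"}

for page in range(1, 11):                      # up to ~1000 PRs
    resp = requests.get(url, params={**params, "page": page})
    resp.raise_for_status()
    for pr in resp.json():
        created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        if created < cutoff:
            continue
        # (Filtering to Copilot-authored PRs is omitted here for brevity.)
        prompt = pr.get("body") or ""           # prompt text is taken from the PR body
        if pr.get("merged_at"):
            merged_prompts.append(prompt)
        elif pr["state"] == "closed":
            closed_prompts.append(prompt)

print(len(merged_prompts), "merged;", len(closed_prompts), "closed")
```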