[prompt-analysis] Copilot PR Prompt Analysis - 2025-11-21 #4463
🤖 Copilot PR Prompt Pattern Analysis - 2025-11-21
Summary
Over the past 30 days, Copilot has created 1,000 pull requests in this repository. Of the 997 completed PRs, 767 were merged (76.9%) and 230 were closed without merging (23.1%). This analysis examines prompt patterns to identify what leads to successful merges.
The data shows that removal and documentation prompts have the highest success rates (88.9% and 86.5% respectively), while feature additions have the lowest (73.5%). Prompts that include specific file references and detailed context show marginally better outcomes.
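For reference, the headline percentages follow directly from the completed-PR counts reported above; a quick check in Python:

```python
# Counts taken from the summary above.
merged, closed_unmerged = 767, 230
completed = merged + closed_unmerged                  # 997 completed PRs

print(f"merged: {merged / completed:.1%}")            # merged: 76.9%
print(f"closed: {closed_unmerged / completed:.1%}")   # closed: 23.1%
```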
Full Analysis Report
Prompt Categories and Success Rates
Analysis of 997 completed PRs reveals distinct patterns across the different prompt types (removal, documentation, bug fix, and feature addition).
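The report does not spell out how prompts were assigned to these categories; a minimal keyword-based pass over the prompt text, as sketched below, could produce a similar breakdown (the keyword lists and function are illustrative assumptions, not the workflow's actual classifier):

```python
import re

# Illustrative keyword buckets; the real analysis may categorize differently.
CATEGORY_PATTERNS = {
    "removal": r"\b(remove|delete|drop|deprecate)\b",
    "documentation": r"\b(document|docs?|readme|changelog)\b",
    "bug fix": r"\b(fix|bug|broken|error|regression)\b",
    "feature addition": r"\b(add|implement|support|introduce)\b",
}

def categorize(prompt: str) -> str:
    """Return the first matching category, or 'other' if nothing matches."""
    text = prompt.lower()
    for category, pattern in CATEGORY_PATTERNS.items():
        if re.search(pattern, text):
            return category
    return "other"

print(categorize("Add a --progress flag to the run command"))   # feature addition
print(categorize("Improve the readme documentation"))           # documentation
```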
Prompt Analysis
✅ Successful Prompt Patterns (Merged PRs)
Common characteristics in the 767 merged PRs:
Example successful prompts:
Generate a "custom agent" prompt (.github/agents/debug-agentic-workflow.md) file that will help a user refine and debug ...→ ✓ MergedAdd a --progress flag to the "run" command that emits regular points progress messages so that an agent cli running the ...→ ✓ MergedReview and update GitHub agentic workflows instruction files with respect to the current state of documentation and the ...→ ✓ Merged❌ Unsuccessful Prompt Patterns (Closed PRs)
Common characteristics in the 230 closed PRs:
Observations: Closed PRs show similar prompt lengths but have fewer file references and specific location mentions compared to merged PRs.
Example unsuccessful prompts:
- Improve the readme documentation and make it is up to date with the latest features.... → ✗ Closed

Key Insights
Based on analysis of nearly 1,000 Copilot-generated PRs, several clear patterns emerge:
Simpler tasks succeed more: Removal (88.9%) and documentation (86.5%) tasks have the highest merge rates, likely because they have clear, well-defined scopes and lower risk of unintended side effects.
Bug fixes perform well: With a 77.6% success rate across 588 PRs, bug fix prompts are consistently successful. This suggests Copilot excels at targeted fixes when given clear problem descriptions.
Feature additions are more challenging: At 73.5% success rate, new feature implementations have the lowest merge rate. This likely reflects the complexity and subjective nature of feature development, where requirements may evolve or need human refinement.
Specificity correlation: Merged PRs showed 49% file reference rate vs 40.6% for closed PRs, suggesting that prompts with specific file mentions have a slight advantage (though the difference is modest at ~8 percentage points).
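As an illustration of how a "file reference rate" like the one above could be measured, one simple heuristic is to look for path-like tokens in the prompt text; the regex below is an assumption for illustration, and the actual analysis may define file references differently:

```python
import re

# Matches tokens that look like filenames or paths with a common extension,
# e.g. ".github/agents/debug-agentic-workflow.md" or "main.go".
FILE_REF = re.compile(r"[\w./-]+\.(?:md|go|ts|js|py|ya?ml|json|toml)\b")

def has_file_reference(prompt: str) -> bool:
    return bool(FILE_REF.search(prompt))

prompts = [
    'Generate a "custom agent" prompt (.github/agents/debug-agentic-workflow.md) file ...',
    "Improve the readme documentation and make it is up to date with the latest features.",
]
rate = sum(has_file_reference(p) for p in prompts) / len(prompts)
print(f"file reference rate: {rate:.0%}")  # file reference rate: 50%
```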
Recommendations
Based on patterns in successful PRs, consider these best practices when writing Copilot prompts:
✅ DO:
📝 Prompt Template Suggestions
For bug fixes (77.6% success rate):
For feature additions (73.5% success rate):
For documentation (86.5% success rate):
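Putting the template idea into practice, a small helper that assembles a bug-fix prompt with an explicit file reference and a clearly stated expected outcome might look like the sketch below; the function, field names, and example values are hypothetical and not taken from the report:

```python
def build_bugfix_prompt(file_path: str, symptom: str, expected: str) -> str:
    """Assemble a bug-fix prompt with the specifics the analysis favors:
    a concrete file reference, the observed problem, and the expected behavior."""
    return (
        f"Fix a bug in {file_path}: {symptom}. "
        f"Expected behavior: {expected}. "
        f"Keep the change scoped to {file_path} and add a regression test."
    )

# Hypothetical example values, for illustration only.
print(build_bugfix_prompt(
    file_path="pkg/cli/run.go",
    symptom="the command exits before all progress messages are flushed",
    expected="all progress messages are written before the process exits",
))
```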
Statistical Summary
Analysis period: Last 30 days | Data source: githubnext/gh-aw Copilot PRs
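A rough sketch of how the headline counts could be re-derived from the GitHub Search API follows; the author qualifier and the date window are assumptions, and in practice an authenticated token is needed to avoid tight rate limits:

```python
import requests

# Assumed query: Copilot-authored PRs in githubnext/gh-aw over the last 30 days.
# The app slug used in the author qualifier is an assumption.
BASE_QUERY = "repo:githubnext/gh-aw is:pr author:app/copilot-swe-agent created:>=2025-10-22"

def count_prs(extra_qualifiers: str = "") -> int:
    """Return the number of search hits for the base query plus extra qualifiers."""
    response = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f"{BASE_QUERY} {extra_qualifiers}".strip(), "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["total_count"]

total = count_prs()
merged = count_prs("is:merged")
closed_without_merge = count_prs("is:closed is:unmerged")
print(total, merged, closed_without_merge)
```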