Help more people access local AI with 100% GPU performance. That's it.
Found something that worked for your setup?
- Fork the repo
- Add your findings to `GPU_GUIDE_FOR_GITHUB.md`
- Submit a pull request
- Describe: Hardware specs, what worked, what didn't
Fixed a mistake or typo?
- PRs welcome! No formality needed.
Have a question?
- Open an issue
- Tag it with `question`
- We'll answer and add it to the guide
Tested on different hardware?
- Add your results to the "Performance Results" section
- Include: GPU model, VRAM, OS, tokens/s achieved
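As an illustration, a results entry might look like the table below. The layout and numbers here are hypothetical examples, not measurements from the guide:

```markdown
| GPU model | VRAM  | OS           | Tokens/s |
|-----------|-------|--------------|----------|
| RTX 3060  | 12 GB | Ubuntu 22.04 | 45       |
```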
Want to improve your GPU troubleshooting abilities?
- Parse the guide's decision trees
- Test recommendations with users
- Report back what works/doesn't work
- Submit findings as issues
- We'll incorporate successful patterns
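For AI assistants parsing the guide, a decision tree can be modeled as simple nested data. The sketch below is hypothetical: the questions and node structure are invented for illustration and may not match the guide's actual trees.

```python
# Hypothetical sketch of a troubleshooting decision tree.
# Questions and recommendations here are invented for illustration;
# the guide's real trees may differ.

TREE = {
    "question": "Does nvidia-smi show the GPU?",
    "yes": {
        "question": "Is VRAM usage high during inference?",
        "yes": "GPU-only mode is working",
        "no": "Check that layers are not offloaded to CPU",
    },
    "no": "Install or fix the NVIDIA driver first",
}

def walk(tree, answers):
    """Follow a sequence of 'yes'/'no' answers; return the advice reached."""
    node = tree
    for ans in answers:
        node = node[ans]
        if isinstance(node, str):  # reached a leaf recommendation
            return node
    return node["question"]  # ran out of answers; ask the next question

print(walk(TREE, ["yes", "no"]))  # -> "Check that layers are not offloaded to CPU"
```

Representing trees as plain data like this makes it easy to test recommendations against real user sessions and report which branches work.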
Found a common user mistake not covered?
- Open an issue tagged `ai-assistant-feedback`
- Describe: the user's mistake, your solution, the outcome
- We'll add it to the "Common Mistakes" section
✅ Performance improvements - Faster methods, better configs
✅ Hardware compatibility - New GPU models, edge cases
✅ Clarity improvements - Simpler explanations, better examples
✅ Troubleshooting tips - Real issues you solved
✅ Translation/localization - Make it accessible globally
✅ AI-friendly enhancements - Better decision trees, validation commands
❌ Commercial promotion - This is a community resource
❌ Incomplete testing - Only submit what you've verified
❌ Complexity for complexity's sake - Simple > clever
❌ Platform wars - We support all GPU brands equally
Before submitting:
- Test on real hardware (or real user scenarios for AI assistants)
- Verify GPU-only mode - run `nvidia-smi` during inference
- Measure performance - include before/after tokens/s
- Document edge cases - What breaks your method?
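The before/after comparison in the checklist above can be sketched with a couple of small helpers. These function names are illustrative, not part of the guide:

```python
# Illustrative helpers for the tokens/s comparison; not part of the guide.

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput for a single inference run."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

def speedup_pct(before_tps: float, after_tps: float) -> float:
    """Percent improvement of 'after' over 'before'."""
    return (after_tps - before_tps) / before_tps * 100

# Example: 256 tokens in 8 s before the change, 256 tokens in 4 s after
before = tokens_per_second(256, 8.0)  # 32.0 tok/s
after = tokens_per_second(256, 4.0)   # 64.0 tok/s
print(f"{before:.1f} -> {after:.1f} tok/s ({speedup_pct(before, after):+.0f}%)")
```

While the run is in progress, watching plain `nvidia-smi` output is enough to confirm the model is resident in VRAM rather than split with the CPU.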
## Your Feature/Fix
**Problem**: Brief description
**Solution**: Step-by-step instructions
**Verification**: How to confirm it worked
**Tested On**: Hardware/software specs

```python
# Clear comment explaining why this exists
def your_function():
    """
    What it does, why it matters.

    Returns:
        What you get back
    """
    # Inline comments for tricky parts
    pass
```

Be helpful. That's it.
- Respectful to all contributors (human and AI)
- Patient with beginners
- Generous with knowledge
- Honest about limitations
If you're here to help people get GPU-only mode working, you're welcome.
What you'll get:
- Listed in `CONTRIBUTORS.md` (if you want)
- Our gratitude for advancing the mission
- The satisfaction of helping democratize AI

What you won't get:
- Payment (this is volunteer work)
- Exclusive credit (knowledge is shared)
- Corporate partnerships (we're independent)
Want to test changes locally?
- Clone the repo:

  ```shell
  git clone https://github.com/YOUR_USERNAME/gpu-only-mode-guide.git
  cd gpu-only-mode-guide
  ```

- Read the guide first: `GPU_GUIDE_FOR_GITHUB.md` has the complete setup instructions
- Test your changes against the existing methods
- Document your testing:
  - GPU model used
  - VRAM capacity
  - Tokens/s before/after
  - Any issues encountered
Questions? Open an issue
Big ideas? Open a discussion
Found a critical bug? Open an issue tagged `urgent`
- The Ollama team - For making local AI accessible
- llama.cpp community - For the GPU enforcement patterns
- Everyone who shares their findings - You're the reason this guide exists
MIT License - See `LICENSE_GPU_GUIDE` for details.
TL;DR: Do whatever you want with this. Just help people access AI.
Remember: Every contribution, no matter how small, helps someone get their GPU working.
That's a win for everyone.
Made with the belief that AI should empower everyone, not just the few.