A Python-based tool that leverages a local LLM (via Ollama) to generate clean, context-aware commit messages for both `git` and `yadm`.
This tool inspects staged changes and sends either:
- A summary of file names (for large commits), or
- A diff or full-content summary (for small changes)

to a locally running LLM via Ollama, and generates a meaningful, concise commit message.
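Roughly, choosing between filenames and the full diff looks like this (an illustrative sketch only; `collect_staged_changes` is a made-up name, not a function from this repo):

```python
import subprocess

def collect_staged_changes(repo_path=".", small_change=False):
    """Return context for the LLM: full diff for small commits, filenames otherwise."""
    base = ["git", "-C", repo_path]
    files = subprocess.run(
        base + ["diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    if small_change or len(files) <= 2:
        # Small commit: the full staged diff fits comfortably in the prompt
        return subprocess.run(
            base + ["diff", "--cached"],
            capture_output=True, text=True, check=True,
        ).stdout
    # Large commit: file names alone give the model enough context
    return "Changed files:\n" + "\n".join(files)
```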
Structured output is parsed using `pydantic` models to keep the LLM's "thinking" out of the final result.
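The schema itself lives in `helper/commit_schema.py`; a plausible shape looks something like this (field names here are illustrative, not necessarily the real ones):

```python
from pydantic import BaseModel, Field

class CommitMessage(BaseModel):
    """Structured LLM output: reasoning is captured but kept out of the commit."""
    thinking: str = Field(description="Scratch reasoning from the model; shown only with --debug")
    message: str = Field(description="The concise commit message to use verbatim")

# Validating the raw JSON response discards anything outside the schema:
# commit = CommitMessage.model_validate_json(raw_llm_json)
# print(commit.message)
```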
Conventional commit message generators are either:
- Dumb keyword scanners, or
- Cloud-based LLMs (a privacy concern).

This project offers:
- Full local execution with Ollama
- Intelligent message structuring
- Support for both `git` and `yadm`
- Experimental file-wise summarization for complex commits (sketched below)
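The experimental mode summarizes each changed file on its own and then asks the model to merge the per-file summaries into one message. A rough sketch of that flow (helper names are assumed for illustration; `ask_llm` stands in for whichever Ollama client is active):

```python
def summarize_commit(per_file_diffs, ask_llm):
    """Summarize each file's diff separately, then merge the summaries into one message."""
    summaries = []
    for path, diff in per_file_diffs.items():
        one_liner = ask_llm(f"Summarize this diff in one line:\n{diff}")
        summaries.append(f"{path}: {one_liner}")
    merged = "\n".join(summaries)
    return ask_llm(f"Write one concise commit message covering these changes:\n{merged}")
```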
```
├── main.py                     # CLI entrypoint
└── helper/
    ├── ollama_client_direct.py
    ├── ollama_client_api.py
    ├── commit_schema.py
    └── __init__.py
```
```bash
python main.py [options]
```
| Flag | Description |
|------|-------------|
| `--git` | Use Git repository (default if `.git` is found) |
| `--yadm` | Use YADM (assumes global config) |
| `--path /repo/path` | Path to Git repo (optional, used with `--git`) |
| `--smallchange` | Use full diff/content for 1-2 changed files |
| `--bigchange` | Use filenames only for large commits |
| `--experimentalSum` | Generate summaries of each file, then merge them via the LLM |
| `--use-api` | Use `requests` instead of the `ollama` SDK (see the sketch below) |
| `--help` | Show help and usage |
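Both clients send the same prompt and differ only in transport. Roughly (a sketch of the idea, not the exact code in `helper/`):

```python
import ollama
import requests

MODEL = "gemma:latest"

def ask_sdk(prompt):
    # Default path: the ollama Python SDK talks to the local server for us
    resp = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

def ask_api(prompt):
    # --use-api path: plain HTTP against the local Ollama REST endpoint
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": MODEL,
              "messages": [{"role": "user", "content": prompt}],
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```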
Ensure you have Ollama installed and the desired model pulled. Follow the instructions at https://ollama.com, then:

```bash
ollama pull gemma:latest
```
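To confirm the server is running and the model was pulled, you can query Ollama's local tag listing (a quick check, assuming the default port 11434):

```python
import requests

# GET /api/tags lists the models Ollama has pulled locally
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
names = [m["name"] for m in tags.get("models", [])]
print("gemma pulled:", any(name.startswith("gemma") for name in names))
```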
Example staged changes:

```
Modified: config.yaml
+ added GPU acceleration toggle
+ changed log level to DEBUG

Modified: server.py
+ added async support to request handler
```

Generated commit message:

```
Enable GPU toggle and add async request support
```

Explanation (optional, shown if `--debug` is enabled):

> This commit enables GPU support via `config.yaml` and introduces async request handling in `server.py`.
This tool prints a commit message to stdout:

```bash
msg=$(python main.py --git --smallchange)
git commit -m "$msg"

msg=$(python main.py --yadm --bigchange)
yadm commit -m "$msg"

msg=$(python main.py --git --experimentalSum)
git commit -m "$msg"
```

You can wrap this into an alias or shell script for convenience.
- ✅ Add debug/verbose flags to print LLM reasoning
- ✅ Add model selection via flag (e.g. `--model llama3`)
- ✅ Add fallback logic between SDK and API
- 🚧 Create a VS Code extension that:
  - Hooks into the Source Control panel
  - Adds a "Generate Commit Message with LLM" button
  - Uses the local Ollama API for instant commit suggestions
- 🚧 Allow a config file (`.llmcommitrc`) to define default flags, model, and temperature
- 🚧 Add support for `pre-commit` hooks integration
Built by an engineer tired of writing garbage commit messages. Make machines write them better.