Thanks for contributing to llm-gateway-bench.
This guide focuses on:
- setting up a development environment
- running tests and linters
- contributing docs
If you are looking for the repository-level contributing doc, also see
`CONTRIBUTING.md` at the repo root.

Prerequisites:

- Python 3.9+
- Git

Optional but recommended:

- `uv` (fast Python env management)
Clone the repository:

```
git clone https://github.com/mnbplus/llm-gateway-bench
cd llm-gateway-bench
```

Using venv:

```
python -m venv .venv
# macOS/Linux
source .venv/bin/activate
# Windows PowerShell
.\.venv\Scripts\Activate.ps1
```

Install editable + dev deps:

```
python -m pip install -U pip
python -m pip install -e ".[dev]"
```

Run the test suite:

```
pytest
```

With coverage:

```
pytest --cov
```

Code style tooling:

- Format: `black`
- Lint: `ruff`
- Types: `mypy`
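New behavior should land with tests. A minimal pytest-style test might look like the following sketch; note that `normalize_base_url` is an illustrative helper invented for this example, not part of llm-gateway-bench's actual API:

```python
# Hypothetical pytest-style test; `normalize_base_url` is an illustrative
# helper, not part of llm-gateway-bench's actual API.
def normalize_base_url(url: str) -> str:
    """Strip a trailing slash so provider base URLs compare consistently."""
    return url.rstrip("/")


def test_strips_trailing_slash():
    assert normalize_base_url("https://api.example.com/v1/") == "https://api.example.com/v1"


def test_leaves_clean_url_unchanged():
    assert normalize_base_url("https://api.example.com/v1") == "https://api.example.com/v1"
```

Plain `assert`-based functions like these are picked up automatically by pytest's test discovery.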
Suggested commands:
```
ruff check .
black .
mypy src
```

To preview the docs locally, install dev deps and run mkdocs:

```
python -m pip install -e ".[dev]"
mkdocs serve
```

Open http://127.0.0.1:8000.
Docs guidelines:

- Prefer short sections and runnable snippets
- Use stable relative links between pages
- Avoid provider marketing claims; document steps and gotchas
Provider defaults live in `src/llm_gateway_bench/providers.py` (`PROVIDER_DEFAULTS`).
Before adding a provider:
- Verify the endpoint is OpenAI-compatible
- Confirm the correct base URL
- Identify the environment variable name for the API key
- Add notes to the Providers docs page
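The checklist above ends up encoded in the defaults table. As a sketch of what one entry might look like, the field names (`base_url`, `api_key_env`) and the provider itself are assumptions for illustration, not the real schema in `providers.py`:

```python
# Illustrative sketch only: the provider and field names here are
# assumptions, not the real schema in src/llm_gateway_bench/providers.py.
PROVIDER_DEFAULTS = {
    "exampleai": {
        # Verified OpenAI-compatible base URL (no trailing slash)
        "base_url": "https://api.example.ai/v1",
        # Environment variable that holds the API key
        "api_key_env": "EXAMPLEAI_API_KEY",
    },
}
```

Keeping all three checklist items (endpoint compatibility, base URL, key variable) in one place makes a new provider a small, reviewable diff.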
Before opening a PR, check:

- Tests pass (`pytest`)
- Lint passes (`ruff check .`)
- Formatting applied (`black .`)
- Docs updated for user-facing changes
- Changelog entry if behavior changes
Use conventional, readable commit messages. Examples:

- `docs: improve provider setup examples`
- `feat: add report json output`
- `fix: handle missing streaming usage`
- Never commit real API keys
- Use `.env` locally and GitHub Secrets in CI
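In practice this means code and tests read keys from the environment rather than hard-coding them. A minimal sketch (the `get_api_key` helper and `DEMO_KEY` name are hypothetical, not part of the project's API):

```python
import os


def get_api_key(env_var: str) -> str:
    """Read an API key from the environment; fail loudly if it is unset."""
    key = os.environ.get(env_var, "").strip()
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it or add it to your local .env"
        )
    return key
```

Failing loudly on a missing variable beats sending an empty key and debugging a confusing 401 from the provider.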
To report a provider bug, open an issue with:
- provider name + base URL
- model id
- a minimal command that reproduces the bug
- sanitized error output
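Before pasting error output into an issue, redact anything that looks like a credential. A small helper sketch; the patterns assume `sk-`-style keys and bearer tokens, which may not cover every provider's key format:

```python
import re

# Hypothetical redaction helper; the patterns below are assumptions and
# may not match every provider's key format.
_SECRET = re.compile(r"sk-[A-Za-z0-9]{8,}|Bearer\s+\S+")


def sanitize(text: str) -> str:
    """Mask anything that looks like an API key or bearer token."""
    return _SECRET.sub("[REDACTED]", text)
```

For example, `sanitize("error: key sk-abcdefgh1234 rejected")` yields `"error: key [REDACTED] rejected"`.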