This is a minimal FastAPI project skeleton designed for rapid development of tiny APIs, especially when serving ML models. It includes:
- A single `POST /bmi` endpoint to calculate Body Mass Index (BMI).
- Docker support with a simple `Dockerfile`.
- A `Makefile` to manage the Docker lifecycle (build, run, stop, logs, restart).
- Instructions for local development and testing.
## Project Structure

```
fast-api-skeleton/
├── app/
│   ├── main.py          # FastAPI application
│   └── schemas.py       # Pydantic models
├── Dockerfile           # Container build instructions
├── Makefile             # Docker lifecycle commands
├── requirements.txt     # Python dependencies
└── README.md            # Project documentation
```
## Requirements

- Python 3.9+
- Docker (for containerized deployment)
- (Optional) `curl` or another HTTP client for testing
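For reference, a minimal `requirements.txt` for this skeleton might contain just the web stack below; the pinned versions are illustrative assumptions, not the repo's actual pins:

```
fastapi>=0.100
uvicorn[standard]>=0.23
```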
## Local Development

- **Install dependencies**

  ```bash
  pip install --no-cache-dir -r requirements.txt
  ```

- **Start the FastAPI server**

  ```bash
  uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
  ```

- **Access the API docs**

  Open http://localhost:8000/docs to explore and test the `/bmi` endpoint.
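With the server running, you can also test the endpoint from the command line. The JSON payload below follows the request schema documented in the API section further down:

```bash
curl -X POST http://localhost:8000/bmi \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "weight": 70.0, "height": 1.75}'
```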
## Docker

- **Build the Docker image**

  ```bash
  docker build -t fast-api-skeleton-app .
  ```

- **Run the container**

  ```bash
  docker run -d --name fast-api-skeleton -p 8000:80 fast-api-skeleton-app
  ```

- **View logs**

  ```bash
  docker logs -f fast-api-skeleton
  ```

- **Stop & remove the container**

  ```bash
  make stop
  make rm
  ```
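For orientation, a minimal `Dockerfile` consistent with the commands above (the app listens on port 80 inside the container, hence the `-p 8000:80` mapping) might look like the following sketch; check the actual file in the repo for the authoritative version:

```dockerfile
# Sketch of a minimal Dockerfile for this skeleton (assumed, not verbatim from the repo)
FROM python:3.9-slim

WORKDIR /code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app/ ./app

# Serve on port 80 inside the container; mapped to host port 8000 by `docker run -p 8000:80`
EXPOSE 80
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```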
## Makefile Commands

The included `Makefile` provides convenient commands:

- `make build`: Build the Docker image.
- `make run`: Run the container (detached).
- `make stop`: Stop the running container.
- `make rm`: Remove the stopped container.
- `make logs`: Follow container logs.
- `make restart`: Rebuild and restart the container.
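A `Makefile` implementing these targets might look roughly like the sketch below; the image and container names are taken from the Docker section, and the repo's actual recipes may differ:

```make
# Sketch of the Makefile targets described above (assumed, not verbatim from the repo)
# Note: recipe lines must be indented with tabs.
IMAGE := fast-api-skeleton-app
CONTAINER := fast-api-skeleton

.PHONY: build run stop rm logs restart

build:
	docker build -t $(IMAGE) .

run:
	docker run -d --name $(CONTAINER) -p 8000:80 $(IMAGE)

stop:
	docker stop $(CONTAINER)

rm:
	docker rm $(CONTAINER)

logs:
	docker logs -f $(CONTAINER)

restart: stop rm build run
```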
## API

### `POST /bmi`

- Request body (`weight` in kilograms, `height` in meters):

  ```json
  { "name": "Alice", "weight": 70.0, "height": 1.75 }
  ```

- Response:

  ```json
  { "name": "Alice", "bmi": 22.9, "category": "Normal weight" }
  ```

This endpoint serves as a test and template for adding additional ML model inference routes.
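As a sketch of how `app/schemas.py` and `app/main.py` might implement this endpoint (field names and example values match the payloads above; the exact category thresholds are an assumption based on the standard WHO cut-offs):

```python
# app/schemas.py (sketch)
from pydantic import BaseModel

class BMIRequest(BaseModel):
    name: str
    weight: float  # kilograms
    height: float  # meters

class BMIResponse(BaseModel):
    name: str
    bmi: float
    category: str
```

```python
# app/main.py (sketch)
from fastapi import FastAPI
from app.schemas import BMIRequest, BMIResponse

app = FastAPI()

@app.post("/bmi", response_model=BMIResponse)
def calculate_bmi(payload: BMIRequest) -> BMIResponse:
    bmi = payload.weight / payload.height**2  # BMI = kg / m^2
    # Category thresholds follow the standard WHO cut-offs (an assumption here)
    if bmi < 18.5:
        category = "Underweight"
    elif bmi < 25:
        category = "Normal weight"
    elif bmi < 30:
        category = "Overweight"
    else:
        category = "Obese"
    return BMIResponse(name=payload.name, bmi=round(bmi, 1), category=category)
```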
## Extending the Skeleton

- Add more endpoints in `app/main.py`.
- Define new data models in `app/schemas.py`.
- Integrate ML inference in place of the BMI calculation (see the sketch below).
- Update dependencies in `requirements.txt` as needed.
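For instance, swapping the BMI calculation for model inference might look like this sketch; the `joblib` model file, route name, and feature layout are all hypothetical, not part of this repo:

```python
# Hypothetical inference route (sketch): assumes a scikit-learn model saved
# with joblib as model.pkl; adapt the schema and route to your own model.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # load once at startup, reuse across requests

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(payload: PredictRequest) -> PredictResponse:
    # scikit-learn estimators expect a 2D array: one row per sample
    y = model.predict([payload.features])
    return PredictResponse(prediction=float(y[0]))
```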
This repository is intended as a starting point for small FastAPI projects, particularly those that will serve machine learning models. Feel free to customize!