A real-time object detection application for Raspberry Pi using YOLOv8 and MQTT, built with Gradio for an interactive web interface.
- Real-time object detection using the Raspberry Pi Camera
- YOLOv8 neural network for accurate object detection
- MQTT integration for IoT connectivity
- Web-based user interface using Gradio
- Adjustable confidence threshold during runtime
- Live video feed with detection visualization
- Multi-tab interface for settings and detection
- Raspberry Pi (tested on Raspberry Pi 4 and 5; minimum 4 GB RAM)
- Raspberry Pi Camera Module
- Internet connection for MQTT (if using remote broker)
- Raspberry Pi OS (Bullseye or newer)
- Python 3.9+
- MQTT Broker (e.g., Mosquitto)
- Required Python packages (installed automatically)
- Clone the repository:
git clone https://github.com/phillipfoxsmaflex/object_detection_rpi.git
cd object_detection_rpi/raspberry-detection
- Run the installation script:
sudo chmod +x install.sh
sudo ./install.sh
The installation script will:
- Install required system packages
- Set up Python virtual environment
- Install Python dependencies
- Download YOLOv8 model
- Configure MQTT broker
- Set up camera permissions
- Create necessary directories and configuration files
- Set up systemd service (optional)
Edit config/settings.json to configure the MQTT connection:
{
  "mqtt_broker": "localhost",
  "mqtt_port": 1883,
  "mqtt_topic": "detections",
  "model_path": "models/yolov8n.pt",
  "conf_threshold": 0.25
}
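As a rough illustration, these fields can be read at startup as sketched below; the field names come from the example above, but how the project's own code consumes them is not shown here.

```python
# Minimal sketch of loading config/settings.json; keys mirror the example above.
import json
from pathlib import Path

settings = json.loads(Path("config/settings.json").read_text())

broker = settings["mqtt_broker"]             # e.g. "localhost"
port = settings["mqtt_port"]                 # e.g. 1883
topic = settings["mqtt_topic"]               # e.g. "detections"
model_path = settings["model_path"]          # e.g. "models/yolov8n.pt"
conf_threshold = settings["conf_threshold"]  # e.g. 0.25
```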
The camera is configured to use a resolution of 640x480 by default. You can modify this in src/camera.py.
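For reference, a minimal picamera2 setup at the default 640x480 resolution looks roughly like the sketch below; the actual src/camera.py may differ in format and configuration details.

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# 640x480 RGB frames, matching the default resolution mentioned above
config = picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"})
picam2.configure(config)
picam2.start()

frame = picam2.capture_array()  # numpy array of shape (480, 640, 3)
```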
- Default model: YOLOv8n (nano version)
- Location: models/yolov8n.pt
- You can replace it with other YOLOv8 models for different performance/accuracy trade-offs (see the sketch below)
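Swapping models only requires pointing the Ultralytics API at a different weights file; heavier variants such as yolov8s.pt or yolov8m.pt trade frame rate for accuracy on a Raspberry Pi. The snippet below is illustrative, and the alternative weights file is an example rather than something shipped with the project.

```python
from ultralytics import YOLO
import numpy as np

model = YOLO("models/yolov8s.pt")                # hypothetical alternative to yolov8n.pt
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
results = model(frame, conf=0.25)                # same call for every YOLOv8 variant

for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```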
- Normal start:
sudo chmod +x start.sh
./start.sh
- Debug mode (with logging):
./start_debug.sh
- Camera test:
./test_camera.sh
- As a system service:
sudo systemctl start object-detection
- Open a web browser and navigate to:
http://[raspberry-pi-ip]:7860
- The interface has two tabs:
- Settings: Configure MQTT and model parameters
- Object Detection: Live detection with adjustable confidence threshold
- Click "Start" to begin object detection
- Adjust confidence threshold using the slider
- View detections in real-time
- Click "Stop" to end detection
Detection results are published to the configured MQTT topic in JSON format:
{
  "person": {
    "count": 2,
    "confidences": [0.92, 0.87]
  },
  "car": {
    "count": 1,
    "confidences": [0.95]
  }
}
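Publishing such a payload with paho-mqtt looks roughly as follows; the broker, port, and topic mirror the settings.json example, and the payload is the one shown above.

```python
import json
import paho.mqtt.client as mqtt

payload = {
    "person": {"count": 2, "confidences": [0.92, 0.87]},
    "car": {"count": 1, "confidences": [0.95]},
}

client = mqtt.Client()
client.connect("localhost", 1883)
client.publish("detections", json.dumps(payload))
client.disconnect()
```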
- Check camera connection
- Verify camera is enabled:
vcgencmd get_camera
- Check permissions:
ls -l /dev/video*
- Check MQTT broker status:
sudo systemctl status mosquitto
- Test MQTT connection:
mosquitto_sub -t "detections"
- Check application logs:
tail -f logs/debug.log
- Check system logs:
sudo journalctl -u object-detection
raspberry-detection/
├── config/
│   └── settings.json
├── models/
│   └── yolov8n.pt
├── src/
│   ├── app.py
│   ├── camera.py
│   ├── mqtt_client.py
│   └── object_detection.py
├── logs/
├── requirements.txt
└── README.md
- app.py: Main application with the Gradio interface
- camera.py: Camera handling using picamera2
- mqtt_client.py: MQTT client implementation
- object_detection.py: YOLOv8 object detection (see the aggregation sketch below)
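As a rough idea of how per-frame YOLOv8 results can be folded into the class/count/confidences structure published over MQTT, consider the hypothetical helper below; the real object_detection.py may organize this differently.

```python
from collections import defaultdict

def summarize(results, names):
    """Group YOLOv8 boxes by class name with a count and individual confidences."""
    summary = defaultdict(lambda: {"count": 0, "confidences": []})
    for box in results[0].boxes:
        label = names[int(box.cls)]
        summary[label]["count"] += 1
        summary[label]["confidences"].append(round(float(box.conf), 2))
    return dict(summary)  # e.g. {"person": {"count": 2, "confidences": [0.92, 0.87]}}
```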
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- YOLOv8 by Ultralytics
- Gradio team for the web interface framework
- Raspberry Pi Foundation
- Eclipse Mosquitto project