Autonomous Driving Pipeline for the Bosch Future Mobility Challenge
Official Challenge Website
Aftermovie 2025
This repository contains the Autonomous Driving (AD) pipeline developed for the Bosch Future Mobility Challenge. Designed for a 1:10 scale autonomous vehicle, the project includes core modules for:
- Perception: Object detection and lane recognition using deep learning.
- Localization: Sensor fusion with Extended Kalman Filtering (EKF).
- Planning: Path optimization and decision-making.
- Control: Model Predictive Control (MPC) for smooth trajectory execution.
Built on ROS (Robot Operating System), the pipeline supports both simulation (Gazebo) and real-world deployment, leveraging an STM32-based embedded platform for low-level control.
- Embedded_Platform/ - Interfaces with STM32 for motor, servo, and sensor control.
- container/ - Nix container configuration files.
- src/ - Contains the main modules:
- control/ - Implements Model Predictive Control (MPC) and state machine logic.
- localization/ - Fuses sensor data using an Extended Kalman Filter (EKF).
- perception/ - Uses deep learning models for object detection and lane recognition.
- planning/ - Generates optimized paths from global waypoints.
- gui/ - Provides visualization and manual control using PyQt5.
- utils/ - Defines custom ROS messages and services used by other packages.
Each of these modules is described in detail below, with figures where applicable.
This package contains firmware modified from Bosch’s provided code to interface with the STM32 microcontroller, controlling the vehicle’s motor, servo, and sensors (e.g., IMU, camera, encoders).
This package contains the Nix flake for the development environment. To install, run the install script on any Linux distribution. On Windows, install WSL 2 and run the install script inside it.
- Uses Model Predictive Control (MPC) for smooth trajectory tracking.
- Implements a finite state machine for handling autonomous behaviors.
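The behavior logic can be pictured as a small table-driven finite state machine. The sketch below is illustrative only: the state names (`LANE_FOLLOW`, `STOP_WAIT`, `OVERTAKE`) and transition events are assumptions, not the states actually used by the `control` package.

```python
# Minimal illustrative finite state machine for autonomous behaviors.
# State and event names are hypothetical; the repository's real states differ.
from enum import Enum, auto

class State(Enum):
    LANE_FOLLOW = auto()
    STOP_WAIT = auto()
    OVERTAKE = auto()

# Transition table: (current state, event) -> next state
TRANSITIONS = {
    (State.LANE_FOLLOW, "stop_sign"): State.STOP_WAIT,
    (State.LANE_FOLLOW, "slow_vehicle_ahead"): State.OVERTAKE,
    (State.STOP_WAIT, "wait_elapsed"): State.LANE_FOLLOW,
    (State.OVERTAKE, "lane_clear"): State.LANE_FOLLOW,
}

def step(state: State, event: str) -> State:
    """Return the next state; stay in the current state on unknown events."""
    return TRANSITIONS.get((state, event), state)
```

Keeping transitions in a single table makes the reachable behaviors easy to audit, which matters when adding competition scenarios such as intersections or parking.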

- Employs the `robot_localization` package to integrate GPS, IMU, and odometry data using an Extended Kalman Filter (EKF).
- Simulated GPS delay and noise are added for realism in Gazebo.
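`robot_localization` runs a full 15-state 3D EKF; the toy 1D constant-velocity filter below only illustrates the predict/update cycle it performs each time a measurement (e.g. a GPS position fix) arrives. All matrices and noise values here are made up for demonstration.

```python
import numpy as np

# Toy 1D constant-velocity Kalman filter illustrating the predict/update
# cycle robot_localization's EKF performs. State x = [position, velocity];
# the measurement z is a position fix (like GPS). Values are illustrative.
def kf_step(x, P, z, dt=0.1, q=1e-3, r=0.5):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition model
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The simulated GPS delay and noise mentioned above stress exactly this update step: a delayed, noisy `z` produces a larger innovation, which the gain `K` weighs against the filter's own prediction confidence.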
- Lane Detection: Uses a histogram-based approach.
- Sign Detection:
- Runs YOLO-FastestV2 via NCNN (CPU) or YOLOv8 via TensorRT (GPU) for real-time inference.
- Detects competition-relevant traffic signs, traffic lights (with color classification), vehicles, and pedestrian dolls.
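The core of a histogram-based lane detector is summing binary-mask pixels per column over the lower half of the image and taking the peak on each side as the lane base. The sketch below shows only that histogram step; the repository's detector presumably adds preprocessing and line fitting on top, and the function name here is hypothetical.

```python
import numpy as np

# Illustrative histogram step of a histogram-based lane detector:
# column-wise pixel counts over the lower image half, peak per side.
def lane_bases(binary_mask: np.ndarray):
    h, w = binary_mask.shape
    hist = binary_mask[h // 2:, :].sum(axis=0)   # column histogram, lower half
    mid = w // 2
    left = int(np.argmax(hist[:mid]))            # left-lane base column
    right = mid + int(np.argmax(hist[mid:]))     # right-lane base column
    return left, right
```

These base columns typically seed a sliding-window search upward through the image to trace each lane line.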
- Loads global waypoints from a GraphML file.
- Plans an optimal path through all desired destination points.
- Uses spline interpolation to generate a smooth, drivable path.
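A common way to smooth sparse waypoints into a drivable path is to parameterize them by cumulative arc length and resample with cubic splines. The sketch below assumes that approach; the repository loads its waypoints from a GraphML map, and the function name and sample points here are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative waypoint smoothing: parameterize sparse (x, y) waypoints by
# cumulative arc length, then resample densely with cubic splines.
def smooth_path(waypoints: np.ndarray, n_samples: int = 100) -> np.ndarray:
    d = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(d)])    # cumulative arc length
    sx = CubicSpline(s, waypoints[:, 0])
    sy = CubicSpline(s, waypoints[:, 1])
    ss = np.linspace(0.0, s[-1], n_samples)
    return np.column_stack([sx(ss), sy(ss)])
```

Arc-length parameterization keeps the resampled points roughly evenly spaced along the path, which helps a downstream tracker like MPC.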
- Built with PyQt5 for real-time visualization of:
- Camera feed
- Vehicle state estimation
- Planned paths and detected objects
- Provides manual override controls for the vehicle.
- Recommended OS; ROS Noetic may not be compatible with other OS versions.
- Follow the ROS Installation Guide.
- Install the Gazebo-based simulator.
- Install using this guide with `opencv_contrib`.
- Build `cv_bridge`:
```bash
catkin_make -DOpenCV_DIR=/path/to/opencv-4.9.0/build
```
- Install dependencies:
```bash
pip install -r ~/AD/requirements.txt
```
- Install:
```bash
sudo apt update && sudo apt install ros-noetic-robot-localization
```
- Follow this NCNN build guide.
- Follow the `Cuda&TrtInstall.md` instructions for installation.
- Add the paths to `CMakeLists.txt`:
```cmake
include_directories(/home/{user}/TensorRT-8.6.1.6/include)
link_directories(/home/{user}/TensorRT-8.6.1.6/lib)
```
- Install using this guide.
```bash
sudo apt-get update && sudo apt-get install autoconf libudev-dev
```
- Follow Acados installation steps to install dependencies.
- Configure:
```bash
echo 'export ACADOS_SOURCE_DIR="/home/{user}/acados"' >> ~/.bashrc
```
- Build the packages:
```bash
catkin_make --pkg utils
catkin_make
```
- `cd` to where the simulator workspace is located, then:
```bash
source devel/setup.bash
roslaunch sim_pkg run132.launch
```
- The rest is the same as a real run.
- Run the GUI on another computer to monitor what the car is doing:
```bash
rosrun gui main.py
```
- SSH into the Jetson Orin Nano and start `roscore`:
```bash
ssh scandy@{ip_address_of_jetson}
roscore
```
- In another terminal, SSH into the Jetson Orin Nano and run the controller:
```bash
ssh scandy@{ip_address_of_jetson}
roslaunch control controller.launch camera:=true ip:=192.168.50.216 use_gps:=true real:=true realsense:=true use_traffic_server:=true gps_id:=10
```
- Replace `192.168.50.216` with the IP address of the computer running the GUI.
- Change `gps_id` according to the ID of the GPS module mounted on the car.
- If GPS is used, the car autonomously plans a path from the starting point through all destination points.
- Once the path is visible in the GUI, press the Start button to begin the run.
- If GPS is not used, click any destination point on the map in the GUI, press the Plan button to plan a path, then press the Start button to begin the run.
