This is my undergraduate final project. It uses two Logitech C270 cameras as the stereo sensor and a Jetson Nano 4GB as the depth processor.
Laptop: Huawei Matebook X Pro (2018) i7-8550u 16GB RAM
SBC: Nvidia Jetson Nano Development Kit 4GB
Flight (Robot) Controller: Pixhawk 2.1 Cube
GPS: Here3 (u-blox M8P)
Stereo Camera: 2 pcs Logitech C270 HD Webcam
Robot Chassis: TP100 Tracked Differential Steering Robot
Motor: 2 pcs Brushed 6-12V Motors
Motor Driver: L298N H-Bridge Motor Driver
Battery: 2S LiPo battery
BEC: UBEC 5V/3A (for powering Jetson)
To use the CUDA version of OpenCV in this project, we have to build OpenCV ourselves: the OpenCV build shipped with the Jetson Nano JetPack image does not include CUDA support. Follow the tutorial here. You also need to install pymavlink and StereoVision for calibration.
pip3 install pymavlink
pip3 install StereoVision
You can download the ZIP of this repo or use git clone; choose whichever you prefer.
If running for the first time, prepare the chessboard pattern, then run the calibration:
python3 capture_calib.py
then
python3 calibrate.py
How to run this program
Open a separate terminal or SSH connection for each of these:
for broadcasting the MAVLink connection over UDP, to control and monitor the Pixhawk over Wi-Fi:
sudo ./mav.sh
but you need to install MAVProxy first from here.
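For reference, a minimal mav.sh could look something like the sketch below. This is not the repo's actual script; the serial device, baud rate, and destination IP are placeholders you must adapt to your own setup:

```shell
#!/bin/bash
# Sketch: bridge the Pixhawk's serial link to UDP with MAVProxy.
# /dev/ttyTHS1, 921600, and 192.168.1.10 are placeholders.
mavproxy.py --master=/dev/ttyTHS1 --baudrate 921600 \
            --out=udp:192.168.1.10:14550
```

Running it with sudo (as above) is typically needed for serial-port access unless your user is in the dialout group.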
to run the main program:
python3 main.py
then press Ctrl+C when the terminal displays:
>>> disabling camera auto setting
You can choose which algorithm is used via the -a command-line switch:
python3 main.py -a <algorithm>
options:
-a bm
-a sgbm
-a cudabm
-a cudasgm
bm: CPU block matching algorithm
sgbm: CPU semi-global block matching algorithm
cudabm: CUDA/GPU block matching algorithm, similar to bm but computed on the GPU (for my use case this performs best, so I set it as the default)
cudasgm: CUDA/GPU semi-global matching; according to the OpenCV docs it is similar to sgbm but computed on the GPU. (I'm still curious why it is not named sgbm even though the documentation is so similar.)
cudabm is the default if this option is omitted.
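The switch handling above can be sketched with Python's argparse. This is a minimal sketch of the CLI, not main.py's actual code; parse_args is a hypothetical helper, and the default follows the note that cudabm worked best for my use case:

```python
import argparse

# The four options listed above.
ALGORITHMS = ("bm", "sgbm", "cudabm", "cudasgm")

def parse_args(argv=None):
    """Parse the -a/--algorithm switch (sketch of main.py's CLI)."""
    parser = argparse.ArgumentParser(description="Stereo depth rover")
    parser.add_argument("-a", "--algorithm", choices=ALGORITHMS,
                        default="cudabm",
                        help="disparity algorithm to use")
    return parser.parse_args(argv)

print(parse_args(["-a", "sgbm"]).algorithm)  # → sgbm
```

Using choices= means an unknown algorithm name fails fast with a usage message instead of crashing later in the pipeline.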
All configuration is done via a config file named settings.conf. The default configuration file contains comments explaining what each parameter does. The file must live next to main.py and is loaded automatically.
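If settings.conf is INI-style, it can be loaded with Python's standard configparser. The [stereo] section and keys below are made-up examples for illustration; the real parameter names are documented in the default file's comments:

```python
import configparser

# Hypothetical sample; the real settings.conf defines its own keys.
SAMPLE = """
[stereo]
num_disparities = 64
block_size = 15
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)  # in main.py this would be cfg.read("settings.conf")
num_disparities = cfg.getint("stereo", "num_disparities")
block_size = cfg.getint("stereo", "block_size")
print(num_disparities, block_size)  # → 64 15
```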
The output is the average disparity over the map's center region, which is sent to the Pixhawk.
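The center-region averaging can be sketched like this, assuming the disparity map is a NumPy array; center_mean and the 25% window size are my assumptions, not the repo's actual code:

```python
import numpy as np

def center_mean(disparity: np.ndarray, frac: float = 0.25) -> float:
    """Mean disparity over the central frac-sized window of the map."""
    h, w = disparity.shape
    dh, dw = int(h * frac), int(w * frac)
    top, left = (h - dh) // 2, (w - dw) // 2
    center = disparity[top:top + dh, left:left + dw]
    return float(center.mean())

# A uniform map averages to its own value, regardless of window size.
print(center_mean(np.full((100, 100), 5.0)))  # → 5.0
```

Averaging only the center keeps the obstacle signal focused on what is directly ahead of the rover and ignores edge pixels, where stereo matching is least reliable.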
This is a sample result from the disparity map; the top-left corner shows the measured FPS (when not streamed over SSH, the FPS rises to around 20).
Because of USB camera limitations, there is a slight delay between the capture of the left and right images. This affects the depth mapping, especially while the rover is moving. Here is the delay proof