This repository contains all the code and tools needed to build the SynDrone dataset using the CARLA simulator.
Everything in this repository refers to the work presented in the paper "SynDrone - Multi-modal UAV Dataset for Urban Scenarios".
The repository is organized into four branches:
- main: dataset generation code
- analyses: code used to compute the benchmark semantic segmentation numerical results
- synth2real: code used to compute the benchmark domain adaptation numerical results
- detection: code used to compute the benchmark object detection numerical results
You can either download the full dataset here or download the data for each sensor from the table below.
- Color ZIPs contain the RGB images, Semantic Segmentation labels, Camera Extrinsics, and Bounding Box ground truth.
- Depth ZIPs contain the depth frames (a decoding sketch is shown after the table).
- LiDAR ZIPs contain the LiDAR frames.
- Split ZIP contains the lists of samples to use for the training and test sets.
Town | Color | Depth | LiDAR |
---|---|---|---|
Town01 | link | link | link |
Town02 | link | link | link |
Town03 | link | link | link |
Town04 | link | link | link |
Town05 | link | link | link |
Town06 | link | link | link |
Town07 | link | link | link |
Town10HD | link | link | link |
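Assuming the depth frames keep CARLA's standard encoding (metric depth packed into the R, G, B channels of a 24-bit image and scaled by the 1000 m far plane), a minimal decoding sketch looks like the one below; the file format (PNG) and the path are hypothetical and may differ from the actual dataset layout:

```python
import numpy as np
from PIL import Image

def decode_carla_depth(path):
    """Decode a CARLA depth frame into metric depth (meters).

    Assumes the standard CARLA encoding: depth normalized over the
    R, G, B channels and scaled by the 1000 m far plane.
    """
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    normalized = (r + g * 256.0 + b * 256.0 ** 2) / (256.0 ** 3 - 1)
    return normalized * 1000.0

# hypothetical path, for illustration only
depth_m = decode_carla_depth("Town01/depth/00000.png")
print(depth_m.shape, depth_m.min(), depth_m.max())
```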
- Install miniconda (or anaconda) and create a new environment with the following command:

  ```bash
  conda env create --name syndrone --file "<project_folder>/extra/carla_env.yml"
  ```
- Activate the environment with the following command:

  ```bash
  conda activate syndrone
  ```
- Install the CARLA Python API provided in the carla zip file with the following command (on Windows, install the corresponding Windows wheel file instead):

  ```bash
  pip install "<CARLA_installation_folder>/PythonAPI/carla/dist/carla-0.9.12-cp39-cp39-linux_x86_64.whl"
  ```
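  To quickly check that the API was installed correctly, you can try importing it (a simple sanity check, not part of the repository's scripts):

  ```bash
  python -c "import carla; print('CARLA Python API imported successfully')"
  ```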
- Activate the environment with the following command (if not already activated):

  ```bash
  conda activate syndrone
  ```
- Run the CARLA simulator with the following commands:

  ```bash
  cd <CARLA_installation_folder>
  ./CarlaUE4.sh
  ```
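  To verify that the simulator is reachable before logging trajectories, a minimal connection check with the CARLA Python API looks like the sketch below (host, port, and timeout are the CARLA defaults; adjust them if you changed the server settings):

  ```python
  import carla

  # connect to the CARLA server started above (default host/port)
  client = carla.Client("localhost", 2000)
  client.set_timeout(10.0)

  # fetch the currently loaded world and print its map name
  world = client.get_world()
  print("Connected, current map:", world.get_map().name)
  ```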
- Log the trajectories for each town (for the setup, follow the prompts shown by the script):

  ```bash
  python <project_folder>/log_spectator.py
  ```
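  For reference, the spectator pose that a trajectory-logging script records can be read through the CARLA API roughly as follows (a minimal sketch of the idea, not the actual content of log_spectator.py):

  ```python
  import carla

  client = carla.Client("localhost", 2000)
  client.set_timeout(10.0)
  world = client.get_world()

  # the spectator is the free-flying camera you move around in the CARLA window
  spectator = world.get_spectator()
  transform = spectator.get_transform()
  print("location:", transform.location, "rotation:", transform.rotation)
  ```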
- Once all the required trajectories are logged, run the following command to generate a representative video of each trajectory:

  ```bash
  python <project_folder>/validate_trajectories.py
  ```
- Generate the dataset with the following command:

  ```bash
  python <project_folder>/run_simulation.py --slen 120 --lidar True --fps 25
  ```

  Arguments:
  - `--pov`: list containing the heights and angles of the camera, format: `[(H1,A1),(H2,A2),...]` (see the example invocation below);
  - `--slen`: length of the acquired sequence in seconds;
  - `--lidar`: whether to enable LiDAR logging; only heights in [50, 100) m will be used;
  - `--fps`: FPS of the generated data;
  - `--dry`: whether to start in dry mode; no walkers or vehicles will be spawned.
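  For example, to specify three camera poses explicitly (the height/angle values and the quoting of the list are illustrative; check the script's argument parsing for the exact syntax it expects):

  ```bash
  python <project_folder>/run_simulation.py --pov "[(20,30),(50,60),(80,90)]" --slen 120 --lidar True --fps 25
  ```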
- If needed, it is possible to render a single sample of the dataset by specifying the town and the weather-daytime pair with the following command:

  ```bash
  python <project_folder>/save_frames.py --town <town_number> --weather <weather_daytime_pair> --slen 120 --lidar True --fps 25
  ```

  Arguments:
  - `--town`: number of the Town world to be loaded;
  - `--weather`: weather and daytime setting to be used for the simulation;
  - `--pov`: list containing the heights and angles of the camera, format: `[(H1,A1),(H2,A2),...]`;
  - `--slen`: length of the acquired sequence in seconds;
  - `--lidar`: whether to enable LiDAR logging; only heights in [50, 100) m will be used;
  - `--fps`: FPS of the generated data;
  - `--dry`: whether to start in dry mode; no walkers or vehicles will be spawned.
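  For example, a single clear-noon sample of Town01 could be rendered with a command along these lines (the town-number format and the weather identifier, here the CARLA preset name ClearNoon, are illustrative; the script lists the values it accepts):

  ```bash
  python <project_folder>/save_frames.py --town 1 --weather ClearNoon --slen 120 --lidar True --fps 25
  ```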
- Optionally, it is possible to generate the videos for each rendered sample with the following command:

  ```bash
  python <project_folder>/parse_to_videos.py --fps 25
  ```

  Arguments:
  - `--fps`: FPS of the generated data.
This project was created by:
Giulia Rizzoli, Francesco Barbato, Matteo Caligiuri, Pietro Zanuttigh (Dipartimento di Ingegneria dell'Informazione (DEI) - UniPD)
If you use this work, please cite it using:
```bibtex
@article{rizzoli2023syndrone,
  title={SynDrone--Multi-modal UAV Dataset for Urban Scenarios},
  author={Rizzoli, Giulia and Barbato, Francesco and Caligiuri, Matteo and Zanuttigh, Pietro},
  journal={arXiv preprint arXiv:2308.10491},
  year={2023}
}
```