
**** (provided by ADEPTLab) is a brand-new 4D-radar dataset for research on deep-learning-based object detection and tracking in autonomous driving. The ego-vehicle system includes a high-resolution camera, an 80-beam LiDAR, and two up-to-date 4D radars of different models operating in different modes (Arbe and ARS548). The dataset comprises raw data collected from the ego vehicle in scenarios such as tunnels and urban roads, under rainy, cloudy, and sunny weather, and across different time periods, including dusk, nighttime, and daytime. Our collected raw data amounts to 12.5 hours in total, encompassing a driving distance of over 600 kilometers; the dataset itself covers a route of approximately 50 kilometers. It consists of 151 continuous time sequences, most of them 20 seconds long, resulting in a total of 10,007 carefully time-synchronized frames.

a) First-person perspective observation

b) Third-person perspective observation

Figure 1. Up to a visual range of 80 meters in urban scenes

Radar Dataset

  • Notice: On the left is a color RGB image; on the right, cyan represents the Arbe point cloud, white the LiDAR point cloud, and yellow the ARS548 point cloud.
a) sunny, daytime, up to a distance of 80 meters

b) sunny, nighttime, up to a distance of 80 meters

c) rainy, daytime, up to a distance of 80 meters

d) cloudy, daytime, up to a distance of 80 meters

Figure 2. Data Visualization

The URLs listed below are useful for working with the **** dataset and benchmark:

  • Paper: **** paper and appendix [arXiv]

Sensor Configuration

Figure 3. Sensor Configuration and Coordinate Systems

  • The specification of the autonomous vehicle system platform

| Sensor | Range Res. | Azimuth Res. | Elevation Res. | Range FOV | Azimuth FOV | Elevation FOV | FPS |
| ------ | ---------- | ------------ | -------------- | --------- | ----------- | ------------- | --- |
| Camera | – | 1920 px | 1200 px | – | – | – | 10 |
| LiDAR | 0.05 m | 0.2° | 0.2° | 230 m | 360° | 40° | 10 |
| ARS548 RDI | 0.22 m | 0.1° | 0.1° | 300 m | ±60° | ±4° | 20 |
| Arbe Phoenix | 0.07 m | – | – | 300 m | 100° | 30° | 20 |

Table 1. The specification of the autonomous vehicle system platform

  • The statistics of the number of points per frame

| Transducer | Minimum | Average | Maximum |
| ---------- | ------- | ------- | ------- |
| LiDAR | 74,386 | 116,096 | 133,538 |
| Arbe Phoenix | 898 | 11,172 | 93,721 |
| ARS548 RDI | 243 | 523 | 800 |

Table 2. The statistics of the number of points per frame
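
For reference, statistics like those in Table 2 can be recomputed from the raw frames. The sketch below is our own illustration, not part of the official toolkit; it assumes KITTI-style float32 .bin files, and the channel count is an assumption (4 for LiDAR: x, y, z, intensity; the radar files may store extra channels such as Doppler and RCS):

```python
# Hypothetical helper (not part of the official toolkit) that recomputes
# per-frame point-count statistics from a folder of KITTI-style .bin files.
import numpy as np
from pathlib import Path

def point_count_stats(folder, channels=4):
    # channels=4 assumes (x, y, z, intensity) rows; radar files may differ.
    counts = [np.fromfile(f, dtype=np.float32).size // channels
              for f in sorted(Path(folder).glob("*.bin"))]
    return min(counts), round(sum(counts) / len(counts)), max(counts)

# Example (path follows the dataset layout shown later in this README):
# print(point_count_stats("Dataset/training/velodyne"))
```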

Data statistics

We separately counted the number of instances in each category of the Radeptset dataset, as well as the distribution of the different weather conditions.

Figure 4. Distribution of weather conditions.

Figure 5. Distribution of instance conditions.

Environment

This section documents how to use our detection frameworks with the Radeptset-Radar dataset. We tested the Radeptset-Radar detection frameworks in the following environment:

  • Python 3.8.16 (open3d does not support Python 3.10+)
  • Ubuntu 18.04/20.04
  • Torch 1.10.1+cu113
  • CUDA 11.3
  • opencv 4.2.0.32
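
Before building the framework, a short version probe can confirm this environment is in place. This is our own snippet, not part of the repository:

```python
# Environment sanity check (our own snippet, not shipped with the framework):
# confirms the tested versions listed above are importable and CUDA is visible.
import torch
import cv2
import open3d

print("torch:", torch.__version__, "| CUDA:", torch.version.cuda,
      "| GPU available:", torch.cuda.is_available())
print("opencv:", cv2.__version__, "| open3d:", open3d.__version__)
```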

Notice

[2022-09-30] The Radeptset-Radar dataset is made available via a network-attached storage (NAS) download link.

Preparing the Dataset

  • Via our server
  1. To download the dataset, log in to our server with the following credentials: ID: your email; Password: your password
  2. After all files are downloaded, please arrange the workspace directory with the following structure:

Organize your code structure as follows:

```
Frameworks
  ├── checkpoints
  ├── data
  ├── docs
  ├── Radeptsetdet
  ├── output
```

Organize the dataset according to the following file structure:

```
Dataset
  ├── ImageSets
  ├── training
        ├── arbe
        ├── ars548
        ├── calib
        ├── image_2
        ├── label_2
        ├── velodyne
  ├── testing
        ├── arbe
        ├── ars548
        ├── calib
        ├── image_2
        ├── velodyne
```
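
Before generating the data infos (see Train & Evaluation below), it can be worth sanity-checking this tree. The helper below is a minimal sketch of ours, not shipped with the repository; it only assumes that every sensor folder within a split holds one file per synchronized frame:

```python
# Minimal layout check for the dataset tree shown above (our own sketch).
from pathlib import Path

SPLITS = {
    "training": ["arbe", "ars548", "calib", "image_2", "label_2", "velodyne"],
    "testing":  ["arbe", "ars548", "calib", "image_2", "velodyne"],
}

def check_layout(root="Dataset"):
    for split, subdirs in SPLITS.items():
        counts = {}
        for sub in subdirs:
            folder = Path(root) / split / sub
            assert folder.is_dir(), f"missing folder: {folder}"
            counts[sub] = sum(1 for p in folder.iterdir() if p.is_file())
        # every sensor folder should hold one file per synchronized frame
        assert len(set(counts.values())) == 1, f"{split}: uneven counts {counts}"

check_layout()
```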

Requirements

  1. Clone the repository:

```
git clone https://github.com/****/Radeptset-Radar.git
cd Radeptsetdet
```

  2. Create a conda environment:

```
conda create -n Radeptsetdet python=3.8.16
conda activate Radeptsetdet
```

  3. Install PyTorch (we recommend PyTorch 1.10.1).

  4. Install the dependencies:

```
pip install -r requirements.txt
```

  5. Install spconv (our CUDA version is 11.3):

```
pip install spconv-cu113
```

  6. Build packages for Radeptsetdet:

```
python setup.py develop
```

Train & Evaluation

  • Generate the data infos by running the following commands:

```
python -m Radeptsetdet.datasets.Radeptset.Radeptset_dataset create_Radeptset_infos tools/cfgs/dataset_configs/Radeptset_dataset.yaml
# or, to use Arbe data:
python -m Radeptsetdet.datasets.Radeptset.Radeptset_dataset_arbe create_Radeptset_infos tools/cfgs/dataset_configs/Radeptset_dataset_arbe.yaml
# or ARS548:
python -m Radeptsetdet.datasets.Radeptset.Radeptset_dataset_ars create_Radeptset_infos tools/cfgs/dataset_configs/Radeptset_dataset_ars.yaml
```

  • To train the model on a single GPU, prepare the full dataset and run:

```
python train.py --cfg_file ${CONFIG_FILE}
```

  • To train the model on multiple GPUs, prepare the full dataset and run:

```
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE}
```

  • To evaluate the model on a single GPU, modify the path and run:

```
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}
```

  • To evaluate the model on multiple GPUs, modify the path and run:

```
sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
```

Quick Demo

Here we provide a quick demo to test a pretrained model on custom point cloud data and visualize the predicted results.

  • Download a pretrained model, as shown in Tables 4~8.
  • Make sure you have installed the open3d and mayavi visualization tools. If not, you can install them as follows:

```
pip install open3d
pip install mayavi
```

  • Prepare your point cloud data (a fuller sketch follows the demo command below):

```
points[:, 3] = 0
np.save('my_data.npy', points)
```

  • Run the demo with a pretrained model and your point cloud data as follows:

```
python demo.py --cfg_file ${CONFIG_FILE} \
    --ckpt ${CKPT} \
    --data_path ${POINT_CLOUD_DATA}
```
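
Putting the preparation step above together, a complete example of converting one frame to the demo's .npy input could look like the following. This is a hedged sketch: the input file name is a placeholder, and the 4-channel (x, y, z, intensity) layout is an assumption:

```python
# End-to-end sketch for preparing demo input (file name is a placeholder).
import numpy as np

# Assumed KITTI-style layout: float32 rows of (x, y, z, intensity).
points = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
points[:, 3] = 0  # zero the intensity channel, as in the step above
np.save("my_data.npy", points)

# Then run:
#   python demo.py --cfg_file ${CONFIG_FILE} --ckpt ${CKPT} --data_path my_data.npy
```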

Experimental Results

| Baseline | Data | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard | model pth |
| -------- | ---- | -------- | -------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| VFF | camera+LiDAR | 94.60 | 84.14 | 78.77 | 39.79 | 35.99 | 36.54 | 55.87 | 51.55 | 51.00 | link |
| VFF | camera+Arbe | 31.83 | 14.43 | 11.30 | 0.01 | 0.01 | 0.01 | 0.20 | 0.07 | 0.08 | link |
| VFF | camera+ARS548 | 12.60 | 6.53 | 4.51 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | link |
| M2Fusion | LiDAR+Arbe | 67.37 | 46.50 | 33.17 | 12.90 | 8.45 | 8.36 | 51.79 | 46.79 | 45.57 | link |
| M2Fusion | LiDAR+ARS548 | 69.56 | 49.13 | 35.43 | 12.69 | 9.80 | 9.67 | 42.42 | 40.92 | 39.98 | link |

Table 3. Multimodal experimental results (3D AP at IoU 0.7/0.5/0.5 for Car/Pedestrian/Cyclist)

| Baseline | Data | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard | model pth |
| -------- | ---- | -------- | -------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| VFF | camera+LiDAR | 94.60 | 84.28 | 80.55 | 40.32 | 36.59 | 37.28 | 55.87 | 51.55 | 51.00 | link |
| VFF | camera+Arbe | 36.09 | 17.20 | 13.23 | 0.01 | 0.01 | 0.01 | 0.20 | 0.08 | 0.08 | link |
| VFF | camera+ARS548 | 16.34 | 9.58 | 6.61 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | link |
| M2Fusion | LiDAR+Arbe | 81.36 | 70.31 | 54.09 | 17.45 | 12.67 | 12.59 | 53.06 | 47.83 | 46.32 | link |
| M2Fusion | LiDAR+ARS548 | 86.88 | 74.75 | 57.52 | 21.50 | 18.18 | 18.00 | 43.12 | 41.57 | 40.29 | link |

Table 4. Multimodal experimental results (BEV AP at IoU 0.7/0.5/0.5 for Car/Pedestrian/Cyclist)

| Baseline | Data | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard | model pth |
| -------- | ---- | -------- | -------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| PointPillars | LiDAR | 77.93 | 50.14 | 36.36 | 25.31 | 22.34 | 22.01 | 25.60 | 24.35 | 23.97 | link |
| PointPillars | Arbe | 11.28 | 4.60 | 2.75 | 0.00 | 0.00 | 0.00 | 0.19 | 0.12 | 0.12 | link |
| PointPillars | ARS548 | 10.64 | 9.09 | 9.09 | 0.00 | 0.00 | 0.00 | 0.99 | 0.63 | 0.58 | link |
| RDIoU | LiDAR | 62.33 | 37.56 | 27.43 | 20.40 | 17.47 | 17.03 | 38.26 | 35.62 | 35.02 | link |
| RDIoU | Arbe | 18.46 | 7.96 | 4.86 | 0.00 | 0.00 | 0.00 | 0.51 | 0.37 | 0.35 | link |
| RDIoU | ARS548 | 1.35 | 0.57 | 0.29 | 0.00 | 0.00 | 0.00 | 0.21 | 0.15 | 0.15 | link |
| VoxelRCNN | LiDAR | 84.19 | 54.19 | 38.53 | 36.66 | 32.08 | 31.66 | 38.89 | 35.13 | 34.52 | link |
| VoxelRCNN | Arbe | 20.49 | 9.06 | 5.75 | 0.00 | 0.00 | 0.00 | 0.15 | 0.06 | 0.06 | link |
| VoxelRCNN | ARS548 | 4.04 | 1.54 | 0.76 | 0.00 | 0.00 | 0.00 | 0.24 | 0.21 | 0.21 | link |
| Cas-V | LiDAR | 78.15 | 54.16 | 41.58 | 38.22 | 33.78 | 33.33 | 42.84 | 40.32 | 39.09 | link |
| Cas-V | Arbe | 6.66 | 2.00 | 1.05 | 0.00 | 0.00 | 0.00 | 0.05 | 0.04 | 0.04 | link |
| Cas-V | ARS548 | 1.61 | 0.49 | 0.19 | 0.00 | 0.00 | 0.00 | 0.08 | 0.06 | 0.06 | link |
| Cas-T | LiDAR | 72.46 | 42.62 | 30.77 | 40.61 | 34.87 | 34.45 | 35.42 | 33.78 | 33.36 | link |
| Cas-T | Arbe | 0.59 | 0.17 | 0.11 | 0.00 | 0.00 | 0.00 | 0.09 | 0.06 | 0.05 | link |
| Cas-T | ARS548 | 3.16 | 1.60 | 1.00 | 0.00 | 0.00 | 0.00 | 0.36 | 0.20 | 0.20 | link |

Table 5. Single-modality experimental results (3D AP at IoU 0.7/0.5/0.5 for Car/Pedestrian/Cyclist)

| Baseline | Data | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard | model pth |
| -------- | ---- | -------- | -------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| PointPillars | LiDAR | 81.60 | 54.08 | 43.98 | 31.00 | 27.66 | 27.38 | 38.78 | 38.74 | 38.42 | link |
| PointPillars | Arbe | 29.78 | 14.91 | 9.70 | 0.00 | 0.00 | 0.00 | 0.41 | 0.24 | 0.23 | link |
| PointPillars | ARS548 | 15.01 | 11.61 | 9.23 | 0.00 | 0.00 | 0.00 | 2.27 | 1.64 | 1.53 | link |
| RDIoU | LiDAR | 63.22 | 40.39 | 32.16 | 24.90 | 21.48 | 21.13 | 49.33 | 47.48 | 46.85 | link |
| RDIoU | Arbe | 35.01 | 17.08 | 11.09 | 0.00 | 0.00 | 0.00 | 0.84 | 0.66 | 0.65 | link |
| RDIoU | ARS548 | 3.45 | 2.14 | 1.25 | 0.00 | 0.00 | 0.00 | 0.61 | 0.46 | 0.44 | link |
| VoxelRCNN | LiDAR | 86.21 | 56.41 | 41.82 | 41.21 | 36.29 | 35.89 | 47.47 | 45.43 | 43.85 | link |
| VoxelRCNN | Arbe | 40.53 | 20.71 | 13.16 | 0.00 | 0.00 | 0.00 | 0.21 | 0.15 | 0.15 | link |
| VoxelRCNN | ARS548 | 9.66 | 4.03 | 2.63 | 0.00 | 0.00 | 0.00 | 0.33 | 0.30 | 0.30 | link |
| Cas-V | LiDAR | 80.25 | 58.46 | 49.17 | 42.88 | 38.11 | 37.58 | 51.51 | 50.03 | 49.35 | link |
| Cas-V | Arbe | 18.63 | 6.47 | 3.87 | 0.01 | 0.01 | 0.01 | 0.13 | 0.05 | 0.05 | link |
| Cas-V | ARS548 | 4.14 | 1.62 | 0.84 | 0.00 | 0.00 | 0.00 | 0.25 | 0.21 | 0.19 | link |
| Cas-T | LiDAR | 73.19 | 44.53 | 34.03 | 44.15 | 39.00 | 38.61 | 44.35 | 44.41 | 42.88 | link |
| Cas-T | Arbe | 3.27 | 1.57 | 1.12 | 0.00 | 0.00 | 0.00 | 0.17 | 0.08 | 0.08 | link |
| Cas-T | ARS548 | 4.21 | 2.21 | 1.49 | 0.00 | 0.00 | 0.00 | 0.68 | 0.43 | 0.42 | link |

Table 6. Single-modality experimental results (BEV AP at IoU 0.7/0.5/0.5 for Car/Pedestrian/Cyclist)

License

The **** dataset is published under the CC BY-NC-ND license, and all code is published under the Apache License 2.0.

Acknowledgement

Citation

If you find this work useful for your research, please consider citing: