This directory contains the data needed for training and benchmarking various navigation models.
- Download the data from the [dataset website](http://buildingparser.stanford.edu/dataset.html).
  - Raw meshes. We need the meshes which are in the `noXYZ` folder. Download the tar files and place them in the `stanford_building_parser_dataset_raw` folder. You need to download `area_1_noXYZ.tar`, `area_3_noXYZ.tar`, `area_5a_noXYZ.tar`, `area_5b_noXYZ.tar`, and `area_6_noXYZ.tar` for training, and `area_4_noXYZ.tar` for evaluation.
  - Annotations for setting up tasks. We will need the file called `Stanford3dDataset_v1.2.zip`. Place the file in the directory `stanford_building_parser_dataset_raw`. A quick check that everything is in place is sketched after this list.
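Before preprocessing, it is worth confirming that all the raw files landed where the scripts expect them. A minimal sketch, assuming the downloads were placed in a `stanford_building_parser_dataset_raw` folder under the current directory (adjust `RAW_DIR` if your layout differs):

```bash
#!/bin/bash
# Check that all required tars and the annotation zip are present.
# RAW_DIR is an assumption; point it at wherever you placed the downloads.
RAW_DIR=stanford_building_parser_dataset_raw
for f in area_1_noXYZ.tar area_3_noXYZ.tar area_5a_noXYZ.tar \
         area_5b_noXYZ.tar area_6_noXYZ.tar area_4_noXYZ.tar \
         Stanford3dDataset_v1.2.zip; do
  [ -f "$RAW_DIR/$f" ] || echo "MISSING: $RAW_DIR/$f"
done
```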
- Preprocess the data (a run-and-check sketch follows this list).
  - Extract meshes using `scripts/script_preprocess_meshes_S3DIS.sh`. After this, `ls data/stanford_building_parser_dataset/mesh` should show 6 folders, `area1`, `area3`, `area4`, `area5a`, `area5b`, `area6`, with textures and obj files within each directory.
  - Extract room information and semantics from the zip file using `scripts/script_preprocess_annoations_S3DIS.sh`. After this, there should be `room-dimension` and `class-maps` folders in `data/stanford_building_parser_dataset`. (If this script crashes with an exception in `np.loadtxt` while processing `Area_5/office_19/Annotations/ceiling_1.txt`, there is a special character on line 323474 that should be removed manually; one way to do this is sketched below.)
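Both preprocessing steps, together with the success checks described above, can be run in one go. A minimal sketch, assuming it is run from the repository root:

```bash
#!/bin/bash
# Run both preprocessing steps from the repository root.
bash scripts/script_preprocess_meshes_S3DIS.sh
bash scripts/script_preprocess_annoations_S3DIS.sh

# Verify the outputs described above.
ls data/stanford_building_parser_dataset/mesh   # expect: area1 area3 area4 area5a area5b area6
for d in room-dimension class-maps; do
  [ -d "data/stanford_building_parser_dataset/$d" ] || echo "MISSING: $d"
done
```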
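If the `np.loadtxt` crash does occur, the offending character can be inspected and removed without opening the large annotation file in an editor. A hedged sketch using GNU sed; the path is the one from the error message, so prefix it with wherever the zip was extracted:

```bash
# Show line 323474 with non-printing characters made visible.
sed -n '323474p' Area_5/office_19/Annotations/ceiling_1.txt | cat -v

# Strip non-printable characters from that line only, keeping a .bak backup.
# With multi-byte characters the [[:print:]] class is locale-dependent, so
# re-run the inspection above to confirm the line is clean afterwards.
sed -i.bak '323474s/[^[:print:]]//g' Area_5/office_19/Annotations/ceiling_1.txt
```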
- Download ImageNet pre-trained models. We used ResNet-v2-50 for representing images. For RGB images this is pre-trained on ImageNet. For depth images we distill the RGB model to depth images using paired RGB-D images. Both these models are available through `scripts/script_download_init_models.sh`.
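To fetch the checkpoints, run the script and then confirm what it wrote. The script's destination directory is not stated here, so the `find` below simply searches under `data/`; this is an assumption, and the script itself is the authority on the actual path:

```bash
#!/bin/bash
# Download the ImageNet-pretrained RGB model and the distilled depth model.
bash scripts/script_download_init_models.sh

# Locate the downloaded checkpoints. Searching under data/ is an assumption;
# check the script for the real destination if nothing shows up.
find data -name '*ckpt*'
```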