QVPR/DLfeature_PlaceRecog_icra2017

Dataset and official Caffe implementation for learning a condition-robust feature representation for long-term visual localization

Note

This fork is intended to generate HybridNet and AMOSNet descriptors given an image folder path. The codebase needs significant cleaning, but it serves the purpose for now. The original README is stored as README_original.md

How To

Only HPC and CPU execution are supported for now.

Setup HPC

Log on to the HPC and start an interactive job (set the parameters as per your requirements):

qsub -I -l ncpus=1,mem=10gb,walltime=12:00:00

When a job is assigned:

source /etc/profile.d/modules.sh
module load caffe/rc3-foss-2016a-7.5.18-python-2.7.11
cd /work/qvpr/workspace/DLfeature_PlaceRecog_icra2017/

Extract features:

python extract_feat_usingAMOS.py -p /work/qvpr/data/ready/gt_aligned/sample_2014-Multi-Lane-Road-Sideways-Camera/NIL/images/

Descriptors are stored in your current working directory; -p <imgDirPath> specifies the directory from which images are read. Additionally, one can add -u <uniId> to the above command to include a unique string identifier in the default save name.
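For example, to tag the saved descriptors with an identifier (the image path and the identifier run1 below are placeholders):

python extract_feat_usingAMOS.py -p /path/to/images/ -u run1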

Model choice

The default model is HybridNet; one can pass -m AmosNet to use the AmosNet model instead.
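For instance (the image path is a placeholder):

python extract_feat_usingAMOS.py -p /path/to/images/ -m AmosNet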

Layer choice

By default, features from the fc7 layer are extracted. Use the -l option to specify another layer, say, conv6 or conv3.
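For example, to extract conv6 features instead (the image path is a placeholder):

python extract_feat_usingAMOS.py -p /path/to/images/ -l conv6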

Run python extract_feat_usingAMOS.py -h to see all the available options.
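Putting it all together, a full invocation combining the options above might look like this (the image path and the identifier night_run are placeholders):

python extract_feat_usingAMOS.py -p /path/to/images/ -m AmosNet -l conv3 -u night_run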
