Commit 5b9d909

Implementation for Cognitive Mapping and Planning paper.
1 parent c136af6


51 files changed (+8476, -0 lines)
cognitive_mapping_and_planning/.gitignore (4 additions, 0 deletions)

deps
*.pyc
lib*.so
lib*.so*

cognitive_mapping_and_planning/README.md (122 additions, 0 deletions)
# Cognitive Mapping and Planning for Visual Navigation
**Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik**

**Computer Vision and Pattern Recognition (CVPR) 2017.**

**[ArXiv](https://arxiv.org/abs/1702.03920),
[Project Website](https://sites.google.com/corp/view/cognitive-mapping-and-planning/)**

### Citing
If you find this code base and models useful in your research, please consider
citing the following paper:
```
@inproceedings{gupta2017cognitive,
  title={Cognitive Mapping and Planning for Visual Navigation},
  author={Gupta, Saurabh and Davidson, James and Levine, Sergey and
          Sukthankar, Rahul and Malik, Jitendra},
  booktitle={CVPR},
  year={2017}
}
```

### Contents
1. [Requirements: software](#requirements-software)
2. [Requirements: data](#requirements-data)
3. [Test Pre-trained Models](#test-pre-trained-models)
4. [Train Your Own Models](#train-your-own-models)

### Requirements: software
1. Python Virtual Env Setup: All code is implemented in Python but depends on a
   small number of Python packages and a couple of C libraries. We recommend
   using a virtual environment to install these Python packages and the Python
   bindings for these C libraries.
   ```Shell
   VENV_DIR=venv
   pip install virtualenv
   virtualenv $VENV_DIR
   source $VENV_DIR/bin/activate

   # You may need to upgrade pip to install opencv-python.
   pip install --upgrade pip
   # Install simple dependencies.
   pip install -r requirements.txt

   # Patch bugs in dependencies.
   sh patches/apply_patches.sh
   ```

2. Install [Tensorflow](https://www.tensorflow.org/) inside this virtual
   environment. Typically done with `pip install --upgrade tensorflow-gpu`.
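   To confirm the install, a quick smoke test can be run from inside the
   virtual environment (a sketch, not part of the original instructions;
   `tf.test.is_gpu_available()` is TF 1.x-era API):
   ```Shell
   # Print the installed TensorFlow version.
   python -c "import tensorflow as tf; print(tf.__version__)"
   # For tensorflow-gpu, check that at least one GPU is visible (TF 1.x API).
   python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
   ```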

3. Swiftshader: We use
   [Swiftshader](https://github.com/google/swiftshader.git), a CPU-based
   renderer, to render the meshes. It is possible to use other renderers: to do
   so, replace `SwiftshaderRenderer` in `render/swiftshader_renderer.py` with
   bindings to your renderer.
   ```Shell
   mkdir -p deps
   git clone --recursive https://github.com/google/swiftshader.git deps/swiftshader-src
   cd deps/swiftshader-src && git checkout 91da6b00584afd7dcaed66da88e2b617429b3950
   mkdir build && cd build && cmake .. && make -j 16 libEGL libGLESv2
   cd ../../../
   cp deps/swiftshader-src/build/libEGL* libEGL.so.1
   cp deps/swiftshader-src/build/libGLESv2* libGLESv2.so.2
   ```
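   A minimal check (not from the original instructions) that the renamed
   libraries are present and loadable:
   ```Shell
   # Confirm the copied SwiftShader libraries exist and can be dlopen'ed;
   # ctypes.CDLL raises OSError if a shared library fails to load.
   ls -l libEGL.so.1 libGLESv2.so.2
   python -c "import ctypes; ctypes.CDLL('./libEGL.so.1'); ctypes.CDLL('./libGLESv2.so.2'); print('SwiftShader libraries load OK')"
   ```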

4. PyAssimp: We use [PyAssimp](https://github.com/assimp/assimp.git) to load
   meshes. It is possible to use other libraries to load meshes: to do so,
   replace `Shape` in `render/swiftshader_renderer.py` with bindings to your
   library for loading meshes.
   ```Shell
   mkdir -p deps
   git clone https://github.com/assimp/assimp.git deps/assimp-src
   cd deps/assimp-src
   git checkout 2afeddd5cb63d14bc77b53740b38a54a97d94ee8
   cmake CMakeLists.txt -G 'Unix Makefiles' && make -j 16
   cd port/PyAssimp && python setup.py install
   cd ../../../..
   cp deps/assimp-src/lib/libassimp* .
   ```
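   A quick import check (a sketch, not part of the original instructions):
   ```Shell
   # pyassimp should import and locate the native libassimp shared library;
   # pointing LD_LIBRARY_PATH at the working directory covers the copy above.
   LD_LIBRARY_PATH=. python -c "import pyassimp; print('pyassimp import OK')"
   ```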

5. graph-tool: We use the
   [graph-tool](https://git.skewed.de/count0/graph-tool) library for graph
   processing.
   ```Shell
   mkdir -p deps
   # If the following git clone command fails, you can also download the source
   # from https://downloads.skewed.de/graph-tool/graph-tool-2.2.44.tar.bz2
   git clone https://git.skewed.de/count0/graph-tool deps/graph-tool-src
   cd deps/graph-tool-src && git checkout 178add3a571feb6666f4f119027705d95d2951ab
   bash autogen.sh
   ./configure --disable-cairo --disable-sparsehash --prefix=$HOME/.local
   make -j 16
   make install
   cd ../../
   ```
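   A quick smoke test (a sketch; adjust the site-packages path below to your
   Python version, since `--prefix=$HOME/.local` installs outside the venv):
   ```Shell
   # Build a tiny graph to confirm graph-tool imports and works.
   PYTHONPATH=$HOME/.local/lib/python2.7/site-packages:$PYTHONPATH \
     python -c "import graph_tool as gt; g = gt.Graph(); g.add_vertex(2); print(g.num_vertices())"
   ```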

### Requirements: data
1. Download the Stanford 3D Indoor Spaces Dataset (S3DIS Dataset) and the
   ImageNet pre-trained models used for initializing the different models.
   Follow the instructions in `data/README.md`.

### Test Pre-trained Models
1. Download pre-trained models using
   `scripts/scripts_download_pretrained_models.sh`.

2. Test models using `scripts/script_test_pretrained_models.sh`.
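Put together, a minimal end-to-end run (assuming the virtual environment set up
above and the data from `data/README.md` are in place) looks like:
```Shell
# From the repository root, with dependencies and data already installed.
source venv/bin/activate
sh scripts/scripts_download_pretrained_models.sh
sh scripts/script_test_pretrained_models.sh
```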

### Train Your Own Models
All models were trained asynchronously with 16 workers, each worker using data
from a single floor. The default hyper-parameters correspond to this setting.
See [distributed training with
Tensorflow](https://www.tensorflow.org/deploy/distributed) for setting up
distributed training. Training with a single worker is possible with the
current code base but will require some minor changes to allow each worker to
load all training environments.
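For illustration only, the standard TF 1.x between-graph replication pattern
for such a setup looks like the sketch below; the script name `train.py` and
its flags follow the linked TensorFlow distributed-training guide and are
hypothetical here, not part of this repository:
```Shell
# Hypothetical launcher: one parameter server plus 16 workers,
# each worker assigned one training floor. Flags are illustrative.
python train.py --job_name=ps --task_index=0 &
for i in $(seq 0 15); do
  python train.py --job_name=worker --task_index=$i &
done
wait
```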

### Contact
For questions or issues, open an issue on the tensorflow/models [issues
tracker](https://github.com/tensorflow/models/issues). Please assign issues to
@s-gupta.

### Credits
This code was written by Saurabh Gupta (@s-gupta).

cognitive_mapping_and_planning/__init__.py

Whitespace-only changes.

cognitive_mapping_and_planning/cfgs/__init__.py

Whitespace-only changes.
