
Missing steps from installation instructions #10

Open
SharkWipf opened this issue Sep 19, 2023 · 7 comments

Comments

@SharkWipf

Describe the bug
I just set this project up, but the installation instructions were incomplete.
I'm not sure if this should go in the AutoRecon or the AutoDecomp docs, so I figured I'd make an issue rather than a PR.
There are 2 steps missing from the installation instructions to get this working: COLMAP and the LoFTR pretrained models.

COLMAP, if it isn't already present, can be installed through conda install -y -c conda-forge colmap, but the package doesn't always play nicely with conda, so it may be preferable to do conda install -y -c conda-forge mamba && mamba install -y -c conda-forge colmap instead, which can literally save hours at times.
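For reference, the two variants from the paragraph above as copy-pasteable commands:

```sh
# Plain conda (the dependency solver can take a very long time):
conda install -y -c conda-forge colmap

# Via mamba (much faster solver):
conda install -y -c conda-forge mamba
mamba install -y -c conda-forge colmap
```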

As for LoFTR, the installation section of their README links to downloads for the pretrained models this project uses: https://github.com/zju3dv/LoFTR#installation. The weights (or at least the outdoor weights this project uses) need to be extracted manually to AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/.
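A minimal sketch of that step, assuming the weights archive has already been fetched from the link on that page (the archive name is a placeholder; LoFTR distributes the weights via download links, not a fixed URL):

```sh
# Archive name is illustrative -- use whatever the LoFTR page gives you:
mkdir -p AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/
tar xvf loftr_weights.tar -C AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/
```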

After this, I can modify the low_res_demo script and successfully run it on my own data.

Sidenote: have you considered upstreaming your work into Nerfstudio properly? There are more methods in there with external dependencies, and getting your project upstreamed would let it benefit from the many improvements in upstream Nerfstudio while making it easier to stay up to date.

@SharkWipf SharkWipf changed the title from "Missing steps from installation instructoins" to "Missing steps from installation instructions" on Sep 19, 2023
@Haobo-Liu

Could you please offer a correct low_res_demo script? I also think there are some mistakes in the low_res_demo script. Thanks a lot!

@SharkWipf
Author

> Could you please offer a correct low_res_demo script? I also think there are some mistakes in the low_res_demo script. Thanks a lot!

The only change I had to make to the demo script was the INST_REL_DIR path, to point it at my own data. Other than that it worked fine for me (after installing COLMAP and the LoFTR weights as described above), and I got some very decent results. Not quite the level I was hoping for yet, but I'm hoping to get there by tweaking the settings a bit.

INST_REL_DIR should point at a folder containing an "images" folder with your images inside it. Keep in mind that by default it will only use up to 40 images from that folder.
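For illustration, the expected layout (every name except "images" is hypothetical):

```
data/                      # DATA_ROOT
└── my_scene/              # INST_REL_DIR, relative to DATA_ROOT
    └── images/
        ├── 0001.jpg
        ├── 0002.jpg
        └── ...
```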

You'll have to manually run the export command at the end after everything has run successfully, pointing it at the directory containing your generated config.yml file and the now-trained models.

It's a bit messy the way it's set up atm, but it does work.

@SharkWipf
Author

@Haobo-Liu some things I've noticed that might affect you:

  • Paths in the script aren't quoted properly, so you can't use paths with spaces or special characters without modifying the script (see the quoting sketch after this list).
  • If your input images are outside of the data dir, it won't work either unless you modify the script. Either use paths relative to the data dir, or modify the script to not use the data dir where it isn't necessary.
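
A minimal sketch of the quoting pitfall (the command shape is illustrative; the variable name matches the demo script):

```sh
INST_REL_DIR="my scene/co3d_chair"

# Unquoted: the shell splits the expansion on the space,
# so the path arrives as two separate arguments:
python inference_transformer.py inst_rel_dir=$INST_REL_DIR

# Quoted: the path survives intact:
python inference_transformer.py inst_rel_dir="$INST_REL_DIR"
```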

@Haobo-Liu

> @Haobo-Liu some things I've noticed that might affect you:
>
> • Paths in the script aren't quoted properly, so you can't use paths with spaces or special characters without modifying the script.
>
> • If your input images are outside of the data dir, it won't work either unless you modify the script. Either use paths relative to the data dir, or modify the script to not use the data dir where it isn't necessary.

Thanks a lot for your patient reply, but the error I ran into is a little strange.
I run:
bash ./exps/code-release/run_pipeline_demo_low-res.sh
and the following error occurs:
#################################################################
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/cl │
│ i/inference_transformer.py:26 in │
│ │
│ 23 │ SfMTransformerConfig, │
│ 24 ) │
│ 25 from auto_decomp.feature_extraction.dino_vit import extract_features as dino_extractor │
│ ❱ 26 from auto_decomp.sfm import colmap_from_idr, sfm │
│ 27 │
│ 28 SfmMode = Enum("SfmMode", ["sparse_recon", "idr2colmap"]) │
│ 29 │
│ │
│ /home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/sf │
│ m/colmap_from_idr.py:12 in │
│ │
│ 9 import hydra │
│ 10 import imagesize │
│ 11 import numpy as np │
│ ❱ 12 from hloc.utils.read_write_model import Camera, Image, rotmat2qvec, write_model │
│ 13 from hydra.core.config_store import ConfigStore │
│ 14 from hydra_zen import store │
│ 15 from loguru import logger │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'hloc'

#################################################################
Then I went into inference_transformer.py and added:

import sys
sys.path.append("/path/to/hloc")  # the path to the hloc checkout

That solved the "ModuleNotFoundError: No module named 'hloc'", but some other problems still occur:
##########################################
bash ./exps/code-release/run_pipeline_demo_low-res.sh
2023-09-21 18:34:06.665 | INFO | main:main:84 | - Running sfm w/ sparse features
2023-09-21 18:34:08,880 INFO worker.py:1642 -- Started a local Ray instance.
2023-09-21 18:34:09.148 | INFO | auto_decomp.sfm.sfm:reconstruct_instance:449 | - Reconstruction directory: sfm_spp-spg_sequential_np-10_nimgs-40
2023-09-21 18:34:09.148 | INFO | auto_decomp.sfm.sfm:read_write_cache:369 | - Cache updated: data/custom_data_example/co3d_chair/.cache.json
2023-09-21 18:34:09.152 | INFO | auto_decomp.sfm.sfm:evenly_sample_images:381 | - Images subsampled: 40 / 202
2023-09-21 18:34:09.152 | INFO | auto_decomp.sfm.sfm:reconstruct_instance:463 | - [custom_data_example/co3d_chair] #mapping_images: 40
[2023/09/21 18:34:09 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'},
'output': 'global-feats-netvlad',
'preprocessing': {'resize_max': 1024}}
[2023/09/21 18:34:09 hloc.extractors.netvlad INFO] Downloading the NetVLAD model with ['wget', 'https://cvg-data.inf.ethz.ch/hloc/netvlad/Pitts30K_struct.mat', '-O', '/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pitts30K.mat'].
--2023-09-21 18:34:09-- https://cvg-data.inf.ethz.ch/hloc/netvlad/Pitts30K_struct.mat
Connecting to 127.0.0.1:8080... connected.
(TemporaryActor pid=437494) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::SparseReconActor.init() (pid=437494, ip=192.168.1.35, actor_id=2f2a3113f33fea7c1d361be801000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class..TemporaryActor object at 0x7f5eccaa16f0>)
(TemporaryActor pid=437494) RuntimeError: The actor with name SparseReconActor failed to import on the worker. This may be because needed library dependencies are not installed in the worker environment:
(TemporaryActor pid=437494)
(TemporaryActor pid=437494) ray::SparseReconActor.init() (pid=437494, ip=192.168.1.35, actor_id=2f2a3113f33fea7c1d361be801000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class..TemporaryActor object at 0x7f5eccaa16f0>)
(TemporaryActor pid=437494) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/cloudpickle/cloudpickle.py", line 665, in subimport
(TemporaryActor pid=437494) import(name)
(TemporaryActor pid=437494) ModuleNotFoundError: No module named 'hloc'
(TemporaryActor pid=437541) RuntimeError: The actor with name FeatureActor failed to import on the worker. This may be because needed library dependencies are not installed in the worker environment:
Proxy request sent, awaiting response... 200 OK
Length: 554551295 (529M)
Saving to: ‘/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pitts30K.mat’

/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pi 100%[==================================================================================================================>] 528.86M 741KB/s in 15m 27s

2023-09-21 18:49:39 (584 KB/s) - ‘/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pitts30K.mat’ saved [554551295/554551295]

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40/40 [00:03<00:00, 12.02it/s]
[2023/09/21 18:49:47 hloc INFO] Finished exporting features.
[2023/09/21 18:49:47 hloc INFO] Extracting image pairs from a retrieval database.
[2023/09/21 18:49:47 hloc INFO] Found 600 pairs.
2023-09-21 18:49:47.160 | INFO | auto_decomp.sfm.pairs_from_sequential:main:144 | - Found 410 pairs (n_seq=280 | n_loop=343).
Error executing job with overrides: ['data_root=data', 'inst_rel_dir=custom_data_example/co3d_chair', 'sparse_recon.n_images=40', 'sparse_recon.force_rerun=True', 'sparse_recon.n_feature_workers=1', 'sparse_recon.n_recon_workers=1', 'triangulation.force_rerun=True', 'triangulation.n_feature_workers=1', 'triangulation.n_recon_workers=1', 'dino_feature.force_extract=True', 'dino_feature.n_workers=1']
(_QueueActor pid=437488) No module named 'auto_decomp'
(_QueueActor pid=437488) Traceback (most recent call last):
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 404, in deserialize_objects
(_QueueActor pid=437488) obj = self._deserialize_object(data, metadata, object_ref)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 270, in _deserialize_object
(_QueueActor pid=437488) return self._deserialize_msgpack_data(data, metadata_fields)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 225, in _deserialize_msgpack_data
(_QueueActor pid=437488) python_objects = self._deserialize_pickle5_data(pickle5_data)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 215, in _deserialize_pickle5_data
(_QueueActor pid=437488) obj = pickle.loads(in_band)
(_QueueActor pid=437488) ModuleNotFoundError: No module named 'auto_decomp'
(TemporaryActor pid=437541) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::FeatureActor.init() (pid=437541, ip=192.168.1.35, actor_id=343fbb1f9ce482f63ee3c5c201000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class..TemporaryActor object at 0x7f904ae2dd20>)
(TemporaryActor pid=437541)
(TemporaryActor pid=437541) ray::FeatureActor.init() (pid=437541, ip=192.168.1.35, actor_id=343fbb1f9ce482f63ee3c5c201000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class..TemporaryActor object at 0x7f904ae2dd20>)
(TemporaryActor pid=437541) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/cloudpickle/cloudpickle.py", line 665, in subimport
(TemporaryActor pid=437541) import(name)
(TemporaryActor pid=437541) ModuleNotFoundError: No module named 'hloc'
Traceback (most recent call last):
File "/home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/cli/inference_transformer.py", line 85, in main
sfm.main(cfg.sparse_recon)
File "/home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/sfm/sfm.py", line 610, in main
reconstruct_instance(copy.deepcopy(args), args.inst_rel_dir, *_queue_args, task_id=0)
File "/home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/sfm/sfm.py", line 548, in reconstruct_instance
feature_queue.put_nowait(feature_task)
File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/util/queue.py", line 198, in put_nowait
return self.put(item, block=False)
File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/util/queue.py", line 102, in put
ray.get(self.actor.put_nowait.remote(item))
File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 24, in auto_init_wrapper
return fn(*args, **kwargs)
File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/worker.py", line 2547, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError: ray::_QueueActor.put_nowait() (pid=437488, ip=192.168.1.35, actor_id=c7e69685e0714b469c05c7e701000000, repr=<ray.util.queue._QueueActor object at 0x7faa6f3bf850>)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RaySystemError: System error: No module named 'auto_decomp'
traceback: Traceback (most recent call last):
ModuleNotFoundError: No module named 'auto_decomp'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
###################################################
I don't know how to solve it. Please give me some help.

@SharkWipf
Author

This sounds like you missed some steps in the install instructions, did you fully follow the AutoDecomp installation guide linked from the AutoRecon install instructions?
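
If you want a stopgap for those specific errors in the meantime: a sys.path.append in the driver process doesn't reach the Ray worker processes, but environment variables set before launch do. A hedged sketch, with placeholder paths you'd need to point at your actual checkouts:

```sh
# Make hloc and auto_decomp importable in both the driver and the Ray workers:
export PYTHONPATH="$PYTHONPATH:/path/to/Hierarchical-Localization:/path/to/AutoRecon/third_party/AutoDecomp"
bash ./exps/code-release/run_pipeline_demo_low-res.sh
```

That said, the proper fix is to complete the installation steps so both packages are importable without path hacks.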

@Haobo-Liu

> This sounds like you missed some steps in the install instructions, did you fully follow the AutoDecomp installation guide linked from the AutoRecon install instructions?

I appreciate it a lot, I have solved my problems.

@Genshin-Impact-king

> This sounds like you missed some steps in the install instructions, did you fully follow the AutoDecomp installation guide linked from the AutoRecon install instructions?

Why does it always show "OSError: could not read bytes"? My script is the following:
DATA_ROOT=/root/autodl-tmp/AutoRecon/data/
INST_REL_DIR=custom_data_example/co3d_chair/
FORCE_RERUN=True

# Coarse decomposition
python third_party/AutoDecomp/auto_decomp/cli/inference_transformer.py --config-name=cvpr \
    data_root=$DATA_ROOT \
    inst_rel_dir=$INST_REL_DIR \
    sparse_recon.n_images=40 \
    sparse_recon.force_rerun=$FORCE_RERUN \
    sparse_recon.n_feature_workers=1 \
    sparse_recon.n_recon_workers=1 \
    triangulation.force_rerun=$FORCE_RERUN \
    triangulation.n_feature_workers=1 triangulation.n_recon_workers=1 \
    dino_feature.force_extract=$FORCE_RERUN dino_feature.n_workers=1

Can you help me, please?
