# ShapeShifter: 3D Variations Using Multiscale and Sparse Point-Voxel Diffusion [CVPR 2025]

Nissim Maruani<sup>1,2</sup>, Wang Yifan<sup>3</sup>, Matthew Fisher<sup>3</sup>, Pierre Alliez<sup>1,2</sup>, Mathieu Desbrun<sup>1,4</sup>

<sup>1</sup>Inria, <sup>2</sup>Université Côte d'Azur, <sup>3</sup>Adobe Research, <sup>4</sup>École polytechnique

*(Figure: ShapeShifter summary.)*

Please see our project page for the video presentation and 3D meshes.

## News

  • 2025-02 Accepted at CVPR 2025 🚀🚀🚀

## Getting Started

1. Clone the repository together with the fVDB submodule:

   ```shell
   git clone --recursive https://github.com/nissmar/ShapeShifter.git
   ```

2. Create the shapeshifter conda environment (tested with CUDA 12.1):

   ```shell
   conda env create -f dev_env.yml
   conda activate shapeshifter
   ```

3. Build fVDB, which can take a while (please refer to the original README for more details). `MAX_JOBS` limits the number of parallel compile jobs to roughly one per 2.5 GB of free memory; note the single quotes, which leave `$4` for awk instead of letting the shell expand it:

   ```shell
   cd openvdb/fvdb
   export MAX_JOBS=$(free -g | awk '/^Mem:/{jobs=int($4/2.5); if(jobs<1) jobs=1; print jobs}')
   pip install .
   cd ../..
   ```
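The `MAX_JOBS` heuristic above can be sanity-checked in Python. This is only a sketch; the helper names `max_build_jobs` and `free_mem_gb_linux` are ours, not part of the repository:

```python
import os

def max_build_jobs(free_mem_gb: float, gb_per_job: float = 2.5) -> int:
    """Mirror the MAX_JOBS heuristic: one compile job per ~2.5 GB of
    free memory, but never fewer than one job."""
    return max(1, int(free_mem_gb / gb_per_job))

# On Linux, an estimate of free memory in GB (analogous to `free -g`):
def free_mem_gb_linux() -> float:
    return os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 2**30
```

For example, `max_build_jobs(16)` yields 6 jobs on a machine with 16 GB of free memory.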

## Data and Pretrained Models

Download the data and/or pre-trained checkpoints here and place them at the root of this repository.

To process your own data, see here.

## Quickstart

Run the notebook `src/diffusion/sample_diffusion.ipynb`.

## Evaluation

Sample the diffusion models and extract point clouds:

```shell
python src/diffusion/sample_diffusion.py
```

Mesh the point clouds:

```shell
python src/meshing/mesh_with_poisson_or_apss.py
```

Compute the metrics with `src/eval/eval_metrics.ipynb`.

## Training

### Sequential training

Training takes roughly 12 minutes per level (upsampler + diffusion), so the 5 levels can be trained sequentially on a single machine in about an hour.

To train a single shape:

```shell
bash scripts/train_single_shape.sh acropolis
```

To reproduce the experiments of the paper:

```shell
bash scripts/train_all.sh
```
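For reference, the sequential schedule can be sketched as a list of per-level commands. This is our guess at what `scripts/train_single_shape.sh` runs, assembled from the per-level commands documented under Parallel training below; the `sequential_schedule` helper itself is not part of the repository:

```python
def sequential_schedule(shape: str, levels: int = 5) -> list[str]:
    """Build the per-level training commands in dependency order:
    level 0 trains a diffusion model only; each subsequent level
    trains an upsampler and then a diffusion model."""
    cmds = [f'python src/diffusion/train_diffusion.py -model_name "{shape}" '
            f'-level 0 -config "./configs/train_diffusion_0.yaml"']
    for level in range(1, levels):
        cmds.append(f'python src/diffusion/train_upsamplers.py -model_name "{shape}" '
                    f'-level {level} -config "./configs/train_upsampler.yaml"')
        cmds.append(f'python src/diffusion/train_diffusion.py -model_name "{shape}" '
                    f'-level {level} -config "./configs/train_diffusion_up.yaml"')
    return cmds
```

With 5 levels this yields 9 commands in total (1 for level 0, then 2 per subsequent level).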

### Parallel training

All of the levels (upsampler + diffusion) are independent of each other and can be trained in parallel.

For the first level:

```shell
python src/diffusion/train_diffusion.py -model_name "acropolis" -level 0 -config "./configs/train_diffusion_0.yaml"
```

For subsequent levels:

```shell
python src/diffusion/train_upsamplers.py -model_name "acropolis" -level 1 -config "./configs/train_upsampler.yaml"
python src/diffusion/train_diffusion.py -model_name "acropolis" -level 1 -config "./configs/train_diffusion_up.yaml"
```
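Since the levels are independent, they can be dispatched concurrently, e.g. with `concurrent.futures`. This is a minimal sketch: `train_level` is a stand-in for launching the actual per-level commands (e.g. via `subprocess.run`) and is not part of the ShapeShifter codebase:

```python
from concurrent.futures import ThreadPoolExecutor

def train_level(shape: str, level: int) -> str:
    # Stand-in for the real work: train the upsampler (for level > 0)
    # and then the diffusion model at this level.
    return f"{shape}-level{level}-done"

def train_all_levels(shape: str, levels: int = 5) -> list[str]:
    # Each level is independent of the others, so all can run at once.
    with ThreadPoolExecutor(max_workers=levels) as pool:
        futures = [pool.submit(train_level, shape, lv) for lv in range(levels)]
        return [f.result() for f in futures]
```

In practice each worker would typically target a separate GPU.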

## Acknowledgments

This work was supported by the French government through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR-23-IACL-0001), by Ansys, Adobe Research, and a Choose France Inria chair.
