
Human 3D Avatar Generation from an Image

This project implements a system that generates realistic 3D human avatars from a single RGB image. It builds on techniques from computer vision and deep learning, centered on pose-guided normal prediction and fine-detailed implicit surface reconstruction, and targets applications in virtual reality, fashion, animation, and gaming.


Team


Key Features

  • Pose-Guided Normal Prediction
    Leverages SMPL body models and neural networks to estimate accurate surface normals for clothed human bodies.

  • Fine-Detailed Reconstruction
    Combines local image features and body shape priors to create detailed 3D models with intricate clothing and surface textures (see the sketch after this list).

  • High-Quality 3D Models
    Capable of generating 3D meshes with high fidelity, validated on datasets like AGORA and THuman.
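To make the reconstruction idea concrete, here is a minimal PyTorch sketch of a pixel-aligned implicit function: 3D query points are projected into the image, local features are sampled at those pixels, and an MLP predicts occupancy. The class name, layer sizes, and inputs are illustrative assumptions, not the project's actual network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedImplicitFn(nn.Module):
    """Toy pixel-aligned implicit function, for illustration only."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Tiny image encoder standing in for the real feature backbone.
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=7, stride=2, padding=3)
        # MLP maps (pixel-aligned feature, point depth) -> occupancy in [0, 1].
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, image, points_xy, points_z):
        # image: (B, 3, H, W); points_xy: (B, N, 2) in [-1, 1]; points_z: (B, N, 1)
        feat = self.encoder(image)                                # (B, C, H', W')
        grid = points_xy.unsqueeze(2)                             # (B, N, 1, 2)
        sampled = F.grid_sample(feat, grid, align_corners=True)  # (B, C, N, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)            # (B, N, C)
        return self.mlp(torch.cat([sampled, points_z], dim=-1))  # (B, N, 1) occupancy

model = PixelAlignedImplicitFn()
occupancy = model(torch.randn(1, 3, 256, 256),
                  torch.rand(1, 1000, 2) * 2 - 1,
                  torch.rand(1, 1000, 1))
print(occupancy.shape)  # torch.Size([1, 1000, 1])

Extracting a mesh from such an occupancy field is typically done by evaluating it on a dense 3D grid and running marching cubes.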


Applications

  • Virtual Reality & Gaming: Create lifelike avatars for immersive environments.
  • Fashion & Animation: Design detailed 3D characters for creative industries.
  • Healthcare: Convert medical scans (e.g., CT/MRI) into 3D anatomical models.

Project Workflow

  1. Input Processing
    An RGB image is processed to estimate a parametric SMPL body mesh.

  2. Normal Map Prediction
    Pose-guided normal maps are generated to represent surface normals of the body.

  3. Iterative Refinement
    Normal maps are refined iteratively for enhanced accuracy.

  4. 3D Reconstruction
    Combines local features with normal maps to generate fine-detailed 3D meshes (a runnable skeleton of the full pipeline is sketched below).
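
The workflow can be summarized as a runnable Python skeleton. Every function here is a stub standing in for the corresponding stage described above; the real models are not part of this README.

def estimate_smpl_body(image):
    # Stage 1 (stub): fit a parametric SMPL body mesh to the input image.
    return {"pose": [0.0] * 72, "shape": [0.0] * 10}

def predict_normal_map(image, body):
    # Stage 2 (stub): pose-guided surface-normal prediction.
    return "initial_normal_map"

def refine_normal_map(image, body, normal_map):
    # Stage 3 (stub): one refinement pass over the normal map.
    return normal_map

def reconstruct_mesh(image, body, normal_map):
    # Stage 4 (stub): implicit-surface reconstruction from local features + normals.
    return {"vertices": [], "faces": []}

def generate_avatar(image, refinement_steps=3):
    body = estimate_smpl_body(image)
    normal_map = predict_normal_map(image, body)
    for _ in range(refinement_steps):
        normal_map = refine_normal_map(image, body, normal_map)
    return reconstruct_mesh(image, body, normal_map)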


Dataset

The system was trained and evaluated on the AGORA and THuman datasets. Additional synthetic renderings were used to supplement training data.


Installation

This section provides step-by-step instructions for setting up the PIFuHD system for high-resolution 3D human digitization.

Step 1: Conda Installation

If you do not already have Conda installed, follow this YouTube tutorial to install Conda.

Step 2: Clone the Repository

Clone this GitHub repository using the following commands:

git clone https://github.com/facebookresearch/pifuhd
cd pifuhd

Step 3: Create a Conda Environment

  1. Create a new Conda environment:

conda create -n pifuhd python=3.6

  2. Activate the environment:

conda activate pifuhd

Step 4: Install Dependencies

Using Conda

Install PyTorch and torchvision:

conda install -c pytorch pytorch
conda install -c pytorch torchvision
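
To verify the install, you can run a quick check (this one-liner is an added sanity check, not part of the original instructions):

python -c "import torch, torchvision; print(torch.__version__, torch.cuda.is_available())"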

Using pip

Install additional libraries:

pip install pillow
pip install scikit-image
pip install tqdm
pip install opencv-python
pip install trimesh

Note: json is part of the Python standard library and does not need to be installed with pip. For video export, install the ffmpeg binary itself (for example, conda install -c conda-forge ffmpeg) rather than the unrelated PyPI package named ffmpeg.

Install PyOpenGL

Download the PyOpenGL .whl file matching your Python version and platform, then install it. For example, on Windows with Python 3.6:

pip install C:/Users/user/Downloads/PyOpenGL-3.1.5-cp36-cp36m-win_amd64.whl
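
As a final sanity check (also an addition to the original steps), confirm that the Python dependencies import cleanly:

python -c "import PIL, skimage, tqdm, cv2, trimesh, OpenGL; print('dependencies OK')"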

Step 5: Setup Checkpoints

Create a directory for storing model checkpoints:

mkdir checkpoints

Download the pretrained model checkpoint into this directory. The upstream pifuhd repository provides a helper script for this:

sh ./scripts/download_trained_model.sh

Step 6: Run the Model

To test the setup, run the following commands:

1) Generate results from an input image:

python -m apps.simple_test

2) Render a turntable animation of the reconstruction:

python -m apps.render_turntable -f ./results/pifuhd_final/recon -ww 512 -hh 512
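
The reconstructed meshes are written under ./results/pifuhd_final/recon, which is the same path the turntable renderer reads in the second command.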

Results

(Result images: a turntable GIF of the reconstructed avatar and a 2D comparison image.)