
Commit

Add README
caizhongang authored and yl-1993 committed Dec 1, 2021
1 parent 819af78 commit a002976
Showing 28 changed files with 1,522 additions and 339 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -129,6 +129,9 @@ logs/
*.jpg
!demo/resources/*

# Resources as exception
!resources/*

# Body models
body_models

123 changes: 119 additions & 4 deletions README.md
@@ -1,7 +1,122 @@
# mmhuman3d
<div align="center">
<img src="resources/mmhuman3d-logo.png" width="400"/>
</div>

OpenMMLab 3D Human Toolbox and Benchmark.
## Introduction

## Installation
<!-- [![Documentation](https://readthedocs.org/projects/mmpose/badge/?version=latest)](https://mmpose.readthedocs.io/en/latest/?badge=latest)
[![actions](https://github.com/open-mmlab/mmpose/workflows/build/badge.svg)](https://github.com/open-mmlab/mmpose/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmpose/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmpose)
[![PyPI](https://img.shields.io/pypi/v/mmpose)](https://pypi.org/project/mmpose/)
[![LICENSE](https://img.shields.io/github/license/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/blob/master/LICENSE)
[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)
[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) -->

Please refer to [install.md](docs/install.md) for installation.
### Major Features

- **Reproducing popular methods with a modular framework**

MMHuman3D reimplements popular methods, allowing users to reproduce state-of-the-art results with a single line of code. The modular framework also makes rapid prototyping convenient: users can try different hyperparameter settings, and even different network architectures, without modifying the code (see the sketch after this list).

- **Supporting various datasets with a unified data convention**

With the help of a convention toolbox, a unified data format, *HumanData*, is used to align all supported datasets. Preprocessed data files are also available.

- **Versatile visualization toolbox**

A suite of differentiable visualization tools for human parametric model rendering (including part segmentation, depth maps and point clouds) and for conventional 2D/3D keypoints is available.
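
These features build on the OpenMMLab config system. Below is a minimal sketch of config-driven prototyping, assuming `mmcv` is installed and the repository root is the working directory; the field names mirror the HybrIK config included in this commit, and the override values are purely illustrative.

```python
from mmcv import Config

# Load one of the configs shipped with this commit.
cfg = Config.fromfile('configs/hybrik/resnet34_hybrik_mixed.py')

# Prototype by overriding hyperparameters in memory instead of editing files.
# (The values below are illustrative, not recommended settings.)
cfg.optimizer = dict(type='Adam', lr=5e-4, weight_decay=0)
cfg.data.samples_per_gpu = 16

# Inspect the fully resolved config before launching training.
print(cfg.pretty_text)
```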

## Benchmark and Model Zoo

More details can be found in [model_zoo.md](docs/model_zoo.md).

Supported methods:

<details open>
<summary>(click to collapse)</summary>

- [x] SMPLify (ECCV'2016)
- [x] SMPLify-X (CVPR'2019)
- [x] HMR (CVPR'2018)
- [x] SPIN (ICCV'2019)
- [x] VIBE (CVPR'2020)
- [x] HybrIK (CVPR'2021)

</details>

Supported datasets:

<details open>
<summary>(click to collapse)</summary>

- [x] 3DPW (ECCV'2018)
- [x] AGORA (CVPR'2021)
- [x] AMASS (ICCV'2019)
- [x] COCO (ECCV'2014)
- [x] COCO-WholeBody (ECCV'2020)
- [x] CrowdPose (CVPR'2019)
- [x] EFT (3DV'2021)
- [x] Human3.6M (TPAMI'2014)
- [x] InstaVariety (CVPR'2019)
- [x] LSP (BMVC'2010)
- [x] LSP-Extended (CVPR'2011)
- [x] MPI-INF-3DHP (3DV'2017)
- [x] MPII (CVPR'2014)
- [x] Penn Action (ICCV'2013)
- [x] PoseTrack18 (CVPR'2018)
- [x] SURREAL (CVPR'2017)
- [x] UP3D (CVPR'2017)

</details>

We will keep up with the latest progress of the community and support more popular methods and frameworks.

If you have any feature requests, please feel free to open an issue.

## Get Started

Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMHuman3D.

## License

This project is released under the [Apache 2.0 license](LICENSE). Some supported methods may carry [additional licenses](docs/additional_licenses.md).

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@misc{mmhuman3d,
title={OpenMMLab Human Pose and Shape Estimation Toolbox and Benchmark},
author={MMHuman3D Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmhuman3d}},
year={2021}
}
```

## Contributing

We appreciate all contributions to improve MMHuman3D. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.

## Acknowledgement

MMHuman3D is an open-source project contributed to by researchers and engineers from both academia and industry.
We appreciate all the contributors who implement their methods or add new features, as well as the users who provide valuable feedback.
We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new models.

## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition and understanding toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab next-generation toolbox for generative models.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
70 changes: 70 additions & 0 deletions configs/hmr/README.md
@@ -0,0 +1,70 @@
# HMR

## Introduction

We provide the config files for HMR: [End-to-End Recovery of Human Shape and Pose](https://arxiv.org/pdf/1712.06584.pdf).

```BibTeX
@inproceedings{HMR,
author = {Angjoo Kanazawa and
Michael J. Black and
David W. Jacobs and
Jitendra Malik},
title = {End-to-End Recovery of Human Shape and Pose},
booktitle = {CVPR},
year = {2018}
}
```

## Notes

- [SMPL](https://smpl.is.tue.mpg.de/) v1.0 is used in our experiments.
- [J_regressor_extra.npy](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/J_regressor_extra.npy?versionId=CAEQHhiBgIDD6c3V6xciIGIwZDEzYWI5NTBlOTRkODU4OTE1M2Y4YTI0NTVlZGM1)
- [J_regressor_h36m.npy](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/J_regressor_h36m.npy?versionId=CAEQHhiBgIDE6c3V6xciIDdjYzE3MzQ4MmU4MzQyNmRiZDA5YTg2YTI5YWFkNjRi)
- [smpl_mean_params.npz](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/smpl_mean_params.npz?versionId=CAEQHhiBgICN6M3V6xciIDU1MzUzNjZjZGNiOTQ3OWJiZTJmNThiZmY4NmMxMTM4)

Download the above resources and arrange them in the following file structure:

```text
mmhuman3d
├── mmhuman3d
├── docs
├── tests
├── tools
├── configs
└── data
├── body_models
│ ├── J_regressor_extra.npy
│ ├── J_regressor_h36m.npy
│ ├── smpl_mean_params.npz
│ └── smpl
│ ├── SMPL_FEMALE.pkl
│ ├── SMPL_MALE.pkl
│ └── SMPL_NEUTRAL.pkl
├── preprocessed_datasets
│ ├── cmu_mosh.npz
│ ├── coco_2014_train.npz
│ ├── h36m_mosh_train.npz
│ ├── lspet_train.npz
│ ├── lsp_train.npz
│ ├── mpi_inf_3dhp_train.npz
│ ├── mpii_train.npz
│ └── pw3d_test.npz
└── datasets
├── coco
├── h36m
├── lspet
├── lsp
├── mpi_inf_3dhp
├── mpii
└── pw3d
```
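
As a quick sanity check that the resources are arranged as shown above, a small helper along these lines can be run from the repository root. The script is not part of the repository; the paths simply follow the tree above and cover only a subset of the listed files.

```python
from pathlib import Path

# A subset of the files listed in the tree above.
REQUIRED = [
    'data/body_models/J_regressor_extra.npy',
    'data/body_models/J_regressor_h36m.npy',
    'data/body_models/smpl_mean_params.npz',
    'data/body_models/smpl/SMPL_NEUTRAL.pkl',
    'data/preprocessed_datasets/pw3d_test.npz',
]

missing = [p for p in REQUIRED if not Path(p).is_file()]
if missing:
    raise FileNotFoundError(f'Missing files: {missing}')
print('All listed resources are in place.')
```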

## Results and Models

We evaluate HMR on 3DPW. Values are MPJPE/PA-MPJPE in millimeters (see the metric sketch after the table).

| Config | 3DPW | Download |
|:------:|:-------:|:------:|
| [resnet50_hmr_pw3d.py](resnet50_hmr_pw3d.py) | 112.34 / 67.53 | [model](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hmr/resnet50_hmr_pw3d-04f40f58_20211201.pth?versionId=CAEQHhiBgMD6zJfR6xciIDE0ODQ3OGM2OWJjMTRlNmQ5Y2ZjMWZhMzRkOTFiZDFm) &#124; [log](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hmr/20211128_053633.log?versionId=CAEQHhiBgMDbzZfR6xciIGZkZjM2NWEwN2ExYzQ1NGViNzg2ODA0YTAxMmU4M2Vi) |
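
For reference, MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints after root alignment, and PA-MPJPE applies a Procrustes (similarity) alignment before measuring the same error. The NumPy sketch below illustrates both metrics under the assumption that `pred` and `gt` are `(J, 3)` joint arrays in millimeters with joint 0 as the root; it is an illustration, not the project's evaluation code.

```python
import numpy as np


def mpjpe(pred, gt):
    """Mean per-joint position error after aligning both poses at the root joint."""
    pred = pred - pred[:1]  # assumes joint 0 is the root/pelvis
    gt = gt - gt[:1]
    return np.linalg.norm(pred - gt, axis=-1).mean()


def pa_mpjpe(pred, gt):
    """MPJPE after a similarity (Procrustes) alignment of pred onto gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the cross-covariance matrix (Kabsch).
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # correct an improper rotation (reflection)
        Vt[-1] *= -1
        s[-1] *= -1
        R = Vt.T @ U.T
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=-1).mean()
```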
63 changes: 63 additions & 0 deletions configs/hybrik/README.md
@@ -0,0 +1,63 @@
# HybrIK

## Introduction

We provide the config files for HybrIK: [HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation](https://arxiv.org/pdf/2011.14672.pdf).

```BibTeX
@inproceedings{HybrIK,
author = {Jiefeng Li and
Chao Xu and
Zhicun Chen and
Siyuan Bian and
Lixin Yang and
Cewu Lu},
title = {{HybrIK}: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation},
booktitle = {CVPR},
year = {2021}
}
```

## Notes

- [SMPL](https://smpl.is.tue.mpg.de/) v1.0 is used in our experiments.
- [J_regressor_h36m.npy](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/J_regressor_h36m.npy?versionId=CAEQHhiBgIDE6c3V6xciIDdjYzE3MzQ4MmU4MzQyNmRiZDA5YTg2YTI5YWFkNjRi)
- [h36m_mean_beta.npy](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hybrik/h36m_mean_beta.npy?versionId=CAEQHhiBgMDnt_DV6xciIGM5MzM0MGI1NzBmYjRkNDU5MzUxMjdkM2Y1ZWRiZWM2)
- [basicModel_neutral_lbs_10_207_0_v1.0.0.pkl](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hybrik/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl?versionId=CAEQHhiBgIC_v.zV6xciIDkwMDE4M2NjZTRkMjRmMWRiNTY3MWQ5YjQ0YzllNDYz)

Download the above resources and arrange them in the following file structure:

```text
mmhuman3d
├── mmhuman3d
├── docs
├── tests
├── tools
├── configs
└── data
├── body_models
│ ├── basicModel_neutral_lbs_10_207_0_v1.0.0.pkl
│ ├── h36m_mean_beta.npy
│ ├── J_regressor_h36m.npy
│ └── smpl
│ ├── SMPL_FEMALE.pkl
│ ├── SMPL_MALE.pkl
│ └── SMPL_NEUTRAL.pkl
├── preprocessed_datasets
│ ├── hybrik_coco_2017_train.npz
│ ├── hybrik_h36m_train.npz
│ └── hybrik_mpi_inf_3dhp_train.npz
└── datasets
├── coco
├── h36m
└── mpi_inf_3dhp
```


## Results and Models

We evaluate HybrIK on 3DPW. Values are MPJPE/PA-MPJPE in millimeters.

| Config | 3DPW | Download |
|:------:|:-------:|:------:|
| [resnet34_hybrik_mixed.py](resnet34_hybrik_mixed.py) | 86.92 / 50.30 | [model](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hybrik/resnet34_hybrik_pw3d-b2f87fa5_20211201.pth?versionId=CAEQHhiBgID_vYnS6xciIDAwZTk1MDJhNmM0ZDRlZmI5MTk3ZTAzNzJkOTIwZTc3) &#124; [log](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmhuman3d/models/hybrik/20211109_164017.log?versionId=CAEQHhiBgICdvonS6xciIDdiNGYzY2Q3N2NiMTQ5MzdhOTZjYjEwZDM0ZjI3ODU1) |
29 changes: 5 additions & 24 deletions configs/hybrik/resnet34_hybrik_mixed.py
@@ -1,8 +1,4 @@
_base_ = ['../_base_/default_runtime.py']
# _base_ = [
# '../_base_/datasets/mixed_hybrik.py', '../_base_/schedulers/hybrik.py',
# '../_base_/default_runtime.py', 'resnet34_hybrik.py'
# ]

# optimizer
optimizer = dict(type='Adam', lr=1e-3, weight_decay=0)
@@ -47,8 +43,6 @@
dataset_type = 'HybrIKHumanImageDataset'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# img_norm_cfg = dict( # to achieve the same values as Hybrik
# mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=True)

data_keys = [
'trans_inv', 'intrinsic_param', 'joint_root', 'depth_factor',
@@ -145,7 +139,7 @@
]

data = dict(
samples_per_gpu=32, # 32
samples_per_gpu=32,
workers_per_gpu=1,
train=dict(
type='MixedDataset',
@@ -155,37 +149,24 @@
dataset_name='h36m',
data_prefix='data',
pipeline=train_pipeline,
ann_file='h36m_hybrik_train.npz'),
ann_file='hybrik_h36m_train.npz'),
dict(
type=dataset_type,
dataset_name='mpi_inf_3dhp',
data_prefix='data',
pipeline=train_pipeline,
ann_file='mpi_inf_3dhp_hybrik_train.npz'),
ann_file='hybrik_mpi_inf_3dhp_train.npz'),
dict(
type=dataset_type,
dataset_name='coco',
data_prefix='data',
pipeline=train_pipeline,
ann_file='coco_2017_hybrik_train.npz'),
ann_file='hybrik_coco_2017_train.npz'),
],
partition=[0.4, 0.1, 0.5]),
test=dict(
type=dataset_type,
dataset_name='pw3d',
data_prefix='data',
pipeline=test_pipeline,
ann_file='3dpw_hybrik_test.npz'),
# test=dict(
# type=dataset_type,
# dataset_name='h36m',
# data_prefix='data',
# pipeline=test_pipeline,
# ann_file='h36m_hybrik_valid_protocol2.npz'),
# test=dict(
# type=dataset_type,
# dataset_name='mpi_inf_3dhp',
# data_prefix='data',
# pipeline=test_hp3d_pipeline,
# ann_file='mpi_inf_3dhp_hybrik_test.npz'),
)
ann_file='hybrik_pw3d_test.npz'))

