
Commit 538b21a
[docs] Add pytorchvideo docs
Add pytorchvideo tutorial docs for using a pytorchvideo model as an encoder through the TorchVideoEncoder class.

ghstack-source-id: 7a30c75
Pull Request resolved: #1164

2 files changed: +103 -0 lines changed

Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
---
id: pytorchvideo
title: Using Pytorchvideo
sidebar_label: Using Pytorchvideo
---

MMF is integrating with Pytorchvideo!

This means you'll be able to use Pytorchvideo models, datasets, and transforms in multimodal models from MMF.
Pytorchvideo datasets and transforms are coming soon!

If you find PyTorchVideo useful in your work, please use the following BibTeX entry for citation.
```
@inproceedings{fan2021pytorchvideo,
  author = {Haoqi Fan and Tullie Murrell and Heng Wang and Kalyan Vasudev Alwala and Yanghao Li and Yilei Li and Bo Xiong and Nikhila Ravi and Meng Li and Haichuan Yang and Jitendra Malik and Ross Girshick and Matt Feiszli and Aaron Adcock and Wan-Yen Lo and Christoph Feichtenhofer},
  title = {{PyTorchVideo}: A Deep Learning Library for Video Understanding},
  booktitle = {Proceedings of the 29th ACM International Conference on Multimedia},
  year = {2021},
  note = {\url{https://pytorchvideo.org/}},
}
```

## Setup

To use Pytorchvideo in MMF, you need pytorchvideo installed.
You can install it by running

```
pip install pytorchvideo
```

For detailed instructions, consult https://github.com/facebookresearch/pytorchvideo/blob/main/INSTALL.md
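
As a quick sanity check, you can confirm the package imports and see which version was installed:

```python
# raises ImportError here if the install is broken
import pytorchvideo  # noqa: F401
import importlib.metadata

print(importlib.metadata.version("pytorchvideo"))
```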

## Using Pytorchvideo Models in MMF

Currently, Pytorchvideo models are supported as MMF encoders.
To use a Pytorchvideo model as the image encoder for your multimodal model,
use the MMF TorchVideoEncoder or write your own encoder that uses pytorchvideo directly (a sketch of the latter is shown below).
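
If you write your own encoder, a minimal sketch could look like the following. It assumes pytorchvideo's `slowfast_r50` hub builder and the model's `blocks` module list; the class name `MySlowFastEncoder` is hypothetical:

```python
from torch import nn
from pytorchvideo.models.hub import slowfast_r50


class MySlowFastEncoder(nn.Module):
    """Hypothetical encoder wrapping a pytorchvideo model directly."""

    def __init__(self, pretrained: bool = False):
        super().__init__()
        model = slowfast_r50(pretrained=pretrained)
        # keep everything except the final block (the classification head)
        self.blocks = model.blocks[:-1]

    def forward(self, x):
        # x is a [slow, fast] list of pathway tensors
        for block in self.blocks:
            x = block(x)
        return x
```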

The TorchVideoEncoder class is a wrapper around pytorchvideo models.
To instantiate a pytorchvideo model as an encoder, you can do:

```python
import torch

from mmf.modules import encoders
from omegaconf import OmegaConf

config = OmegaConf.create(
    {
        "name": "torchvideo",
        "model_name": "slowfast_r50",
        "random_init": True,
        "drop_last_n_layers": -1,
    }
)
encoder = encoders.TorchVideoEncoder(config)

# some video input; SlowFast models take a [slow, fast] list of pathway tensors
fast = torch.rand((1, 3, 32, 224, 224))
slow = torch.rand((1, 3, 8, 224, 224))
output = encoder([slow, fast])
```

In our config object, we specify that we want to build the `torchvideo` (`name`) encoder,
that we want to use the pytorchvideo model `slowfast_r50` (`model_name`),
without pretrained weights (`random_init: True`),
and that we want to remove the last module of the network (the head) (`drop_last_n_layers: -1`) to get just the hidden state.
This part depends on which model you're using and what you need it for.

In practice, this encoder is usually configured through your model's `model_config` yaml rather than built by hand, as in the sketch below.
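
As a rough sketch, the same settings expressed as yaml and loaded through OmegaConf might look like this (the exact nesting and surrounding keys depend on your model's config schema):

```python
from omegaconf import OmegaConf

from mmf.modules import encoders

# hypothetical excerpt from a model_config yaml entry
yaml_snippet = """
name: torchvideo
model_name: slowfast_r50
random_init: true
drop_last_n_layers: -1
"""
encoder = encoders.TorchVideoEncoder(OmegaConf.create(yaml_snippet))
```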

Suppose we want to use MViT as our image encoder and we only want the first hidden state (the cls token).
As the MViT model in pytorchvideo returns hidden states in the format (batch size, feature dim, num features),
we want to pass in custom MViT configs and choose the cls pooler.

```python
import torch

from mmf.modules import encoders
from omegaconf import OmegaConf

config = {
    "name": "torchvideo",
    "model_name": "mvit_base_32x3",
    "random_init": True,
    "drop_last_n_layers": 0,
    "pooler_name": "cls",
    "spatial_size": 224,
    "temporal_size": 8,
    "head": None,
    "embed_dim_mul": [[1, 2.0], [3, 2.0], [14, 2.0]],
    "atten_head_mul": [[1, 2.0], [3, 2.0], [14, 2.0]],
    "pool_q_stride_size": [[1, 1, 2, 2], [3, 1, 2, 2], [14, 1, 2, 2]],
    "pool_kv_stride_adaptive": [1, 8, 8],
    "pool_kvq_kernel": [3, 3, 3],
}
encoder = encoders.TorchVideoEncoder(OmegaConf.create(config))

# some video input of shape (batch, channels, time, height, width)
x = torch.rand((1, 3, 8, 224, 224))
output = encoder(x)
```

Here we use the TorchVideoEncoder class to build our MViT model and pick a pooler.
The remaining configs are passed on to the MViT pytorchvideo model.

website/sidebars.js

Lines changed: 1 addition & 0 deletions

@@ -26,6 +26,7 @@ module.exports = {
     'notes/model_zoo',
     'notes/pretrained_models',
     'notes/projects',
+    'tutorials/pytorchvideo',
   ],
   Tutorials: [
     'tutorials/dataset',
