Commit eb42d44: Fix typo in deep

1 parent 2955e26

5 files changed (+8, -7 lines)

BigGANdeep.py

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ def forward(self, x, y):
       h = self.upsample(h)
       x = self.upsample(x)
     # 3x3 convs
-    h = self.conv2(self.activation(self.bn2(h, y)))
+    h = self.conv2(h)
     h = self.conv3(self.activation(self.bn3(h, y)))
     # Final 1x1 conv
     h = self.conv4(self.activation(self.bn4(h, y)))
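Why this one-line change counts as a typo fix: earlier in the same `forward` method, `h` already passes through `self.bn2` and the activation before the upsampling shown in this hunk, so the old line ran that BN-ReLU a second time on the way into `conv2`. The sketch below shows how the corrected `GBlock.forward` reads for context; the lines outside the hunk (the `conv1`/`bn1` projection, the skip-channel drop, and the `if self.upsample:` guard) are a reconstruction, not a verbatim quote from this commit.

```python
# Sketch of the corrected BigGAN-deep GBlock.forward; everything outside the
# changed line is reconstructed for context, not quoted from the commit.
def forward(self, x, y):
    # Bottleneck 1x1 conv with class-conditional BN + activation.
    h = self.conv1(self.activation(self.bn1(x, y)))
    # The second BN + activation is applied HERE, before any upsampling.
    h = self.activation(self.bn2(h, y))
    # Drop extra channels on the skip path if the block narrows.
    if self.in_channels != self.out_channels:
        x = x[:, :self.out_channels]
    # Upsample both the residual and skip paths.
    if self.upsample:
        h = self.upsample(h)
        x = self.upsample(x)
    # 3x3 convs: h is already normalized and activated above, so the fix feeds
    # it straight into conv2 instead of applying bn2 + activation a second time.
    h = self.conv2(h)
    h = self.conv3(self.activation(self.bn3(h, y)))
    # Final 1x1 conv back up to the output width, then the residual sum.
    h = self.conv4(self.activation(self.bn4(h, y)))
    return h + x
```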

README.md

Lines changed: 7 additions & 6 deletions
@@ -15,7 +15,7 @@ You will need:
 - tqdm, scipy, and h5py
 - The ImageNet training set
 
-First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments needed to calculate FID. This can be done by modifying and running
+First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments needed to calculate FID. These can both be done by modifying and running
 
 ```sh
 sh scripts/utils/prepare_data.sh
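One quick sanity check after the data-prep step above is to open the resulting HDF5 file with h5py (already a listed dependency). This is only an illustrative sketch: the path `data/I128.hdf5` is taken from the note further down in this README, and the `imgs`/`labels` dataset names are assumptions about how the prep script lays out the file, so verify them against `f.keys()` on your own copy.

```python
import h5py

# Illustrative check of a prepared HDF5 dataset; the path and dataset names are
# assumptions -- inspect f.keys() to see what your prep script actually wrote.
with h5py.File('data/I128.hdf5', 'r') as f:
    print(list(f.keys()))              # e.g. ['imgs', 'labels'] (assumed names)
    imgs, labels = f['imgs'], f['labels']
    print(imgs.shape, imgs.dtype)      # for I128, roughly (N, 3, 128, 128) uint8
    print(labels.shape, labels.dtype)  # (N,) integer class labels
```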
@@ -30,17 +30,17 @@ full-sized BigGAN model with a batch size of 256 and 8 gradient accumulations, f
 You will first need to figure out the maximum batch size your setup can support. The pre-trained models provided here were trained on 8xV100 (16GB VRAM each) which can support slightly more than the BS256 used by default.
 Once you've determined this, you should modify the script so that the batch size times the number of gradient accumulations is equal to your desired total batch size (BigGAN defaults to 2048).
 
-Note also that this script also uses the `--load_in_mem` which loads the entire (~64GB) I128.hdf5 file into RAM for faster data loading. If you don't have enough RAM to support this (probably 96GB+), remove this argument.
+Note also that this script uses the `--load_in_mem` arg, which loads the entire (~64GB) I128.hdf5 file into RAM for faster data loading. If you don't have enough RAM to support this (probably 96GB+), remove this argument.
 
 During training, this script will output logs with training metrics and test metrics, will save multiple copies (2 most recent and 5 highest-scoring) of the model weights/optimizer params, and will produce samples and interpolations every time it saves weights.
 The logs folder contains scripts to process these logs and plot the results using MATLAB (sorry not sorry).
 
-After training, one can use `sample.py` to produce additional samples and interpolations, test with different truncation values, batch sizes, number of standing stat accumulations, etc. See the `sample_BigGAN_256x8.sh` script for an example.
+After training, one can use `sample.py` to produce additional samples and interpolations, test with different truncation values, batch sizes, number of standing stat accumulations, etc. See the `sample_BigGAN_bs256x8.sh` script for an example.
 
 By default, everything is saved to weights/samples/logs/data folders which are assumed to be in the same folder as this repo.
 You can point all of these to a different base folder using the `--base_root` argument, or pick specific locations for each of these with their respective arguments (e.g. `--logs_root`).
 
-There are scripts to run a model on CIFAR, and to run BigGAN-deep, SA-GAN (with EMA) and SN-GAN on ImageNet. The SA-GAN code assumes you have 4xTitanX (or equivalent in terms of GPU RAM) and will run with a batch size of 128 and 2 gradient accumulations.
+Additionally, we include scripts to run a model on CIFAR, and to run BigGAN-deep, SA-GAN (with EMA) and SN-GAN on ImageNet. The SA-GAN code assumes you have 4xTitanX (or equivalent in terms of GPU RAM) and will run with a batch size of 128 and 2 gradient accumulations.
 
 
 ## Pretrained models
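The batch-size guidance in the hunk above reduces to one piece of arithmetic: the effective (total) batch size is the per-step batch size times the number of gradient accumulations. A minimal check using the numbers quoted in the README text; the function and variable names here are purely illustrative, not actual script arguments.

```python
# Effective batch size = per-step batch size * gradient accumulation steps.
# Numbers come from the README text above; names are purely illustrative.
def effective_batch_size(per_step_batch: int, num_accumulations: int) -> int:
    return per_step_batch * num_accumulations

print(effective_batch_size(256, 8))  # 2048 -> the full BigGAN default
print(effective_batch_size(128, 2))  # 256  -> the SA-GAN example setup
```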
@@ -110,9 +110,10 @@ Want to work on or improve this code? There are a couple things this repo would
 See [This directory](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a) for ImageNet labels.
 
 ## Acknowledgments
-We would like to thank Jiahui Yu and Ming-Yu Liu of NVIDIA for helping run experiments. Thanks to Google for the generous cloud credit donations.
+Thanks to Google for the generous cloud credit donations.
+
+[SyncBN](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch) by Jiayuan Mao and Tete Xiao.
 
-[SyncBN](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch)] by Jiayuan Mao and Tete Xiao.
 [Progress bar](https://github.com/Lasagne/Recipes/tree/master/papers/densenet) originally from Jan Schlüter.
 
 Test metrics logger from [VoxNet.](https://github.com/dimatura/voxnet)

imgs/party.mp4 (2.4 MB): binary file not shown.

2 files renamed without changes.
