
Commit 7b65e82: Update converter
Parent: 19bf57b

12 files changed, +799 -647 lines

BigGAN.py (+6)

```diff
@@ -18,6 +18,12 @@
 # block at both resolution 32x32 and 64x64. Just '64' will apply at 64x64.
 def G_arch(ch=64, attention='64', ksize='333333', dilation='111111'):
   arch = {}
+  arch[512] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2, 1]],
+               'out_channels' : [ch * item for item in [16, 8, 8, 4, 2, 1, 1]],
+               'upsample' : [True] * 7,
+               'resolution' : [8, 16, 32, 64, 128, 256, 512],
+               'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])
+                              for i in range(3,10)}}
   arch[256] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2]],
                'out_channels' : [ch * item for item in [16, 8, 8, 4, 2, 1]],
                'upsample' : [True] * 6,
```
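The `'attention'` entry in the new `arch[512]` maps each power-of-two resolution (8 through 512, i.e. `2**i` for `i` in `range(3, 10)`) to whether a self-attention block is inserted at that stage. A standalone sketch of how that comprehension evaluates, copied out of the diff above:

```python
# With the default attention='64', only the 64x64 stage gets self-attention.
attention = '64'
attn = {2**i: (2**i in [int(item) for item in attention.split('_')])
        for i in range(3, 10)}
print(attn)
# {8: False, 16: False, 32: False, 64: True, 128: False, 256: False, 512: False}

# Multiple resolutions are joined with underscores, as the comment above
# the function ("at both resolution 32x32 and 64x64") suggests:
attention = '32_64'
attn = {2**i: (2**i in [int(item) for item in attention.split('_')])
        for i in range(3, 10)}
print(attn[32], attn[64])  # True True
```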

README.md (+3 -3)

```diff
@@ -12,7 +12,7 @@ This code is by Andy Brock and Alex Andonian.
 You will need:
 
 - [PyTorch](https://PyTorch.org/), version 1.0.1
-- tqdm, scipy, and h5py
+- tqdm, numpy, scipy, and h5py
 - The ImageNet training set
 
 First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments needed to calculate FID. These can both be done by modifying and running
@@ -92,11 +92,11 @@ but it looks like this particular model got a winning ticket. Regardless, we pro
 
 ## A Note On The Design Of This Repo
 This code is designed from the ground up to serve as an extensible, hackable base for further research code.
-I've put a lot of thought into making sure the abstractions are the *right* thickness for how I do research--not so thick as to be impenetrable, but not so thin as to be useless.
+We've put a lot of thought into making sure the abstractions are the *right* thickness for research--not so thick as to be impenetrable, but not so thin as to be useless.
 The key idea is that if you want to experiment with a SOTA setup and make some modification (try out your own new loss function, architecture, self-attention block, etc) you should be able to easily do so just by dropping your code in one or two places, without having to worry about the rest of the codebase.
 Things like the use of self.which_conv and functools.partial in the BigGAN.py model definition were put together with this in mind, as was the design of the Spectral Norm class inheritance.
 
-With that said, this is a somewhat large codebase for a single project. While I tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request.
+With that said, this is a somewhat large codebase for a single project. While we tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request.
 
 ## Feature Requests
 Want to work on or improve this code? There are a couple things this repo would benefit from, but which don't yet work.
```
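The `self.which_conv` idiom mentioned in the design note above is worth spelling out. A minimal sketch of the pattern (not the repo's exact classes): `functools.partial` pins the layer hyperparameters once, so swapping the conv implementation for a whole module is a one-line change.

```python
import functools
import torch.nn as nn

class TinyBlock(nn.Module):
    """Sketch of the which_conv pattern; in BigGAN.py the partial wraps a
    spectrally-normalized conv, here a plain nn.Conv2d stands in."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Change this one line and every conv in the block follows suit.
        self.which_conv = functools.partial(nn.Conv2d,
                                            kernel_size=3, padding=1)
        self.conv1 = self.which_conv(in_ch, out_ch)
        self.conv2 = self.which_conv(out_ch, out_ch)

    def forward(self, x):
        return self.conv2(self.conv1(x))
```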

TFHub/README.md (+14, new file)

```diff
@@ -0,0 +1,14 @@
+# BigGAN-PyTorch TFHub converter
+This dir contains scripts for taking the [pre-trained generator weights from TFHub](https://tfhub.dev/s?q=biggan) and porting them to BigGAN-PyTorch.
+
+In addition to the base libraries for BigGAN-PyTorch, to run this code you will need:
+
+- TensorFlow
+- TFHub
+- parse
+
+Note that this code is presently only set up to run the ported models without truncation--you'll need to accumulate standing stats at each truncation level yourself if you wish to employ it.
+
+To port the 128x128 model from TFHub, produce a pretrained weights .pth file, and generate samples, run
+
+python converter.py -r 128 --generate_samples
```
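For the standing-stats caveat above, here is a hedged sketch of what accumulating stats at one truncation level could look like. The `G(z, y)` call signature, `z_dim`, and `n_classes` are placeholder assumptions, not the converter's actual API; the idea is just to run the generator in train mode on truncated noise so its batchnorm running statistics settle at that truncation level.

```python
import torch

def truncated_z(batch, z_dim, truncation, device='cpu'):
    # Truncated normal via rejection: resample any coordinate that
    # falls outside [-truncation, truncation].
    z = torch.randn(batch, z_dim, device=device)
    while True:
        mask = z.abs() > truncation
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum().item()), device=device)

@torch.no_grad()
def accumulate_standing_stats(G, z_dim, n_classes, truncation,
                              batch=16, accumulations=100, device='cpu'):
    G.train()  # batchnorm updates its running stats in train mode
    for _ in range(accumulations):
        z = truncated_z(batch, z_dim, truncation, device)
        y = torch.randint(n_classes, (batch,), device=device)
        G(z, y)  # forward passes only; we just want the BN stats
    G.eval()
```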
