This code is by Andy Brock and Alex Andonian.
You will need:
- [PyTorch](https://PyTorch.org/), version 1.0.1
- tqdm, numpy, scipy, and h5py
- The ImageNet training set

First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments needed to calculate FID. These can both be done by modifying and running
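The exact commands are elided in this excerpt; as a hedged illustration only (the script names and flags below are assumptions, not confirmed here -- check the scripts in your checkout):

```sh
# Hypothetical invocations -- names and flags are assumptions:
# build an HDF5 copy of the dataset at 128x128 for faster I/O
python make_hdf5.py --dataset I128 --batch_size 256 --data_root data
# compute the Inception moments used to calculate FID
python calculate_inception_moments.py --dataset I128_hdf5 --data_root data
```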
## A Note On The Design Of This Repo
This code is designed from the ground up to serve as an extensible, hackable base for further research code.
We've put a lot of thought into making sure the abstractions are the *right* thickness for research--not so thick as to be impenetrable, but not so thin as to be useless.
The key idea is that if you want to experiment with a SOTA setup and make some modification (try out your own new loss function, architecture, self-attention block, etc.), you should be able to easily do so just by dropping your code in one or two places, without having to worry about the rest of the codebase.
Things like the use of `self.which_conv` and `functools.partial` in the `BigGAN.py` model definition were put together with this in mind, as was the design of the Spectral Norm class inheritance.
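For readers unfamiliar with this pattern, here is a minimal, illustrative sketch (not the repo's actual classes) of how `functools.partial` lets a model choose its layer types in one place:

```python
import functools
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Illustrative only: demonstrates the which_conv pattern, not BigGAN's real layers."""
    def __init__(self):
        super().__init__()
        # Bind the layer class and its common kwargs once; swapping in a custom
        # conv (e.g. a spectrally normalized one) is then a one-line change.
        self.which_conv = functools.partial(nn.Conv2d, kernel_size=3, padding=1)
        self.conv1 = self.which_conv(3, 64)
        self.conv2 = self.which_conv(64, 64)

    def forward(self, x):
        return self.conv2(self.conv1(x))
```

With this layout, an experiment that replaces every conv in the network touches a single line rather than every layer definition.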
With that said, this is a somewhat large codebase for a single project. While we tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request.
## Feature Requests
Want to work on or improve this code? There are a couple things this repo would benefit from, but which don't yet work.
## TFHub Porting Scripts

This dir contains scripts for taking the [pre-trained generator weights from TFHub](https://tfhub.dev/s?q=biggan) and porting them to BigGAN-PyTorch.
In addition to the base libraries for BigGAN-PyTorch, to run this code you will need:

- TensorFlow
- TFHub
- parse

Note that this code is only presently set up to run the ported models without truncation--you'll need to accumulate standing stats at each truncation level yourself if you wish to employ it.
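What that entails, roughly: run the generator in training mode for many batches at the desired truncation level so the BatchNorm running statistics settle, then switch back to eval. A hedged sketch follows (all names and the generator's call signature here are hypothetical; the repo's own utilities implement their version of this):

```python
import torch

def accumulate_standing_stats(G, dim_z, n_classes, truncation,
                              n_batches=100, batch_size=64):
    """Hypothetical helper: re-estimate BatchNorm statistics at one truncation level."""
    G.train()  # in train mode, BatchNorm layers update their running stats
    with torch.no_grad():
        for _ in range(n_batches):
            # Crude truncation via clamping; BigGAN proper resamples
            # out-of-range values from the normal instead.
            z = torch.randn(batch_size, dim_z).clamp_(-truncation, truncation)
            y = torch.randint(0, n_classes, (batch_size,))
            G(z, y)  # assumed (noise, label) interface
    G.eval()
```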
To port the 128x128 model from TFHub, produce a pretrained weights `.pth` file, and generate samples, run:
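(The exact command is elided in this excerpt; the following is a hedged guess that assumes a converter script taking a resolution flag and a sample-generation switch:)

```sh
# Hypothetical invocation -- script name and flags are assumptions:
python converter.py -r 128 --generate_samples
```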