A question on the dropout noise #18

Open
huangchaoxing opened this issue Feb 27, 2019 · 1 comment

@huangchaoxing

I am a little confused about how the noise is applied.
Does it mean that a dropout layer is applied in each component block after the LeakyReLU, with the dropout probabilities of different blocks selected randomly?
I am also not sure what kernel[2] means.

if kernel[2] > 0:
    keep_prob = 1.0 - kernel[2] if self.training else 1.0
    output = tf.nn.dropout(output, keep_prob=keep_prob, name='dropout_' + name, seed=seed)

@knazeri
Member

knazeri commented Feb 28, 2019

@huangchaoxing The original GAN formulation requires a noise vector as input to the network; the network then learns a transformation from the noise distribution (e.g. uniform) to the data distribution. In a CGAN (conditional GAN), since part of the data is already available as input (the grayscale image in our case), the input noise is not very effective! To keep the network from becoming completely deterministic, some authors suggest using dropout as a form of noise during training.
In our early experiments we used dropout for that purpose; however, we later found it not really effective! In the code, kernel[2] refers to the third element of each kernel-options tuple used to define a network, as you can see here:

(64, 1, 0), # [batch, 256, 256, ch] => [batch, 256, 256, 64]
(64, 2, 0), # [batch, 256, 256, 64] => [batch, 128, 128, 64]
(128, 2, 0), # [batch, 128, 128, 64] => [batch, 64, 64, 128]
(256, 2, 0), # [batch, 64, 64, 128] => [batch, 32, 32, 256]
(512, 2, 0), # [batch, 32, 32, 256] => [batch, 16, 16, 512]
(512, 2, 0), # [batch, 16, 16, 512] => [batch, 8, 8, 512]
(512, 2, 0), # [batch, 8, 8, 512] => [batch, 4, 4, 512]
(512, 2, 0) # [batch, 4, 4, 512] => [batch, 2, 2, 512]

Right now the dropout value is set to zero for every kernel option, meaning we are not applying dropout anywhere in the network!
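For illustration, here is a minimal sketch (assuming TF 1.x) of how such a (filters, stride, dropout) tuple could drive one encoder block with optional dropout after the LeakyReLU; the names conv_block, training, and seed are assumptions for this sketch, not the repository's actual code.

import tensorflow as tf

def conv_block(inputs, kernel, name, training, seed=None):
    # kernel is a (filters, stride, dropout) tuple, e.g. (64, 2, 0)
    filters, stride, dropout = kernel
    output = tf.layers.conv2d(inputs, filters, kernel_size=4, strides=stride,
                              padding='same', name='conv_' + name)
    output = tf.nn.leaky_relu(output, alpha=0.2)
    if dropout > 0:
        # dropout acts as noise only at training time; keep_prob = 1.0 disables it
        keep_prob = 1.0 - dropout if training else 1.0
        output = tf.nn.dropout(output, keep_prob=keep_prob,
                               name='dropout_' + name, seed=seed)
    return output

With dropout set to 0 in every tuple above, the if branch never runs, which is why no noise is injected anywhere in the network.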
