I am a little bit confused about the noise-applying method.
Does it mean that we apply a dropout layer in each component block after the LeakyReLU, while the dropout probabilities of different blocks are randomly selected?
I am also not sure what kernel[2] means.
@huangchaoxing The original GAN formulation requires a noise vector as input to the network. The network then learns a transformation from the noise (uniform) distribution to the data distribution. In CGAN (conditional GAN), since part of the data is already available as input (the grayscale image in our case), the input noise is not very effective! To prevent the network from becoming completely deterministic, some authors suggest using dropout as a form of noise during training!
In our early experiments we used dropout for that purpose. However, we later found it not really effective! In the code, kernel[2] refers to the third element (index 2) of the kernel options used to define a network, as you can see here:
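To illustrate the idea, here is a minimal sketch of a generator block that uses dropout as its noise source. This is not the repository's actual code: the function name `conv_block`, the layer sizes, and the `training` flag are illustrative assumptions, written against the TF 1.x-style API used in the snippet below.

```python
import tensorflow as tf  # TF 1.x-style API

def conv_block(inputs, filters, dropout_rate, training, name):
    """Hypothetical conv -> LeakyReLU -> dropout block.

    With no explicit noise vector, randomly zeroing activations via
    dropout is what keeps the conditional generator stochastic.
    """
    output = tf.layers.conv2d(inputs, filters, kernel_size=4,
                              strides=2, padding='same', name='conv_' + name)
    output = tf.nn.leaky_relu(output, alpha=0.2)
    if dropout_rate > 0:
        # Apply dropout only while training; at test time keep_prob = 1.0.
        keep_prob = 1.0 - dropout_rate if training else 1.0
        output = tf.nn.dropout(output, keep_prob=keep_prob,
                               name='dropout_' + name)
    return output
```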
```python
if kernel[2] > 0:
    # kernel[2] holds the dropout probability for this block;
    # dropout is applied only during training.
    keep_prob = 1.0 - kernel[2] if self.training else 1.0
    output = tf.nn.dropout(output, keep_prob=keep_prob, name='dropout_' + name, seed=seed)
```
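For example, the kernel options might be defined along these lines. The exact tuple layout here is an assumption for illustration only; check the repository's model definition for the real values:

```python
# Hypothetical kernel options: if each tuple were
# (filters, stride, dropout_rate), then kernel[2] is the dropout rate.
kernels = [
    (64, 2, 0.0),   # no dropout in this block
    (128, 2, 0.5),  # 50% dropout during training
    (256, 2, 0.5),
]

for kernel in kernels:
    print('dropout rate for this block:', kernel[2])
```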