I've noticed some (potentially harmful) inconsistencies in bias initializers when running a simple test of the keras package, namely using a shallow MLP to learn a sine wave on the interval [-1, 1].
Context
Most of the time (or for deep enough networks), the default zero-initialization for biases is fine. However, for this simple problem randomized biases are essential: without them the neurons end up being too similar (redundant) and training converges to a very poor local optimum.
The official guide suggests using weight initializers for biases as well.
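For reference, here is a minimal sketch of the kind of test I am running (the layer width, activation, sine frequency and training settings below are illustrative, not the exact values from my experiments):

```python
import numpy as np
import keras

# Toy data: a sine wave sampled on [-1, 1].
x = np.linspace(-1.0, 1.0, 256).reshape(-1, 1)
y = np.sin(np.pi * x)

# Shallow MLP; the bias initializer is set as the guide suggests,
# i.e. reusing a weight initializer for the biases.
model = keras.Sequential([
    keras.layers.Dense(64, activation="tanh",
                       bias_initializer="glorot_uniform"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=500, verbose=0)
print(model.evaluate(x, y, verbose=0))
```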
Now:
The default initialization in native PyTorch leads to good results that improve, as expected, as the network size grows (see the sketch after this list).
Several keras initializers are expected to behave similarly or identically to PyTorch (i.e. `VarianceScaling` and all its subclasses), yet they fail to produce good results, regardless of the number of neurons in the hidden layer.
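As a point of comparison, this is my understanding of PyTorch's default bias initialization for `nn.Linear` (a sketch reproducing the behavior, not the PyTorch source):

```python
import math
import torch

# To the best of my understanding, torch.nn.Linear draws its biases from
# U(-b, b) with b = 1 / sqrt(fan_in), where fan_in is the number of
# *inputs* to the layer, not the number of units in the layer itself.
layer = torch.nn.Linear(in_features=1, out_features=64)
bound = 1.0 / math.sqrt(layer.in_features)
print(layer.bias.min().item(), layer.bias.max().item(), bound)
# The sampled bias values fall within [-bound, bound].
```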
Issues
The issues stem from the fact that all `RandomInitializer` subclasses only have access, in their `__call__` method, to the shape they need to fill.
In the case of bias vectors for `Dense` layers, this shape is a one-element tuple, i.e. `(n,)`, where `n` is the number of units in the current layer.
In this case the `compute_fans` function reports a fan in of `n`, which is actually the number of units, i.e. the fan out.
Unfortunately, the correct fan in is not accessible, since the number of layer inputs is not included in the shape of the bias vector.
This makes the official description of the `VarianceScaling` initializer incorrect when applied to neuron biases. The same holds for the descriptions of the Glorot, He, and LeCun initializers, which are implemented as `VarianceScaling` subclasses.
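To make the consequence concrete, here is a small check using Glorot uniform as a bias initializer (the exact limit `sqrt(3 / n)` follows from my reading of `compute_fans`, which appears to treat the single bias dimension as both fan in and fan out):

```python
import numpy as np
import keras

init = keras.initializers.GlorotUniform(seed=0)

# When filling a bias vector, the initializer only sees the shape (n,).
# compute_fans then reports fan_in = fan_out = n, so the Glorot limit
# becomes sqrt(6 / (n + n)) = sqrt(3 / n), shrinking as the layer widens.
for n in (8, 64, 512):
    bias = np.asarray(init(shape=(n,)))
    print(n, float(np.abs(bias).max()), np.sqrt(3.0 / n))
```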
In my simple example, as soon as the shallow network has more than very few neurons, all size-dependent initializers have so little variability that they behave very similarly to a zero initialization (i.e. incredibly poorly). What stumped me (before understanding the problem) is that the larger the network, the worse the behavior.
About possible fixes
I can now easily fix the issue by computing bounds for `RandomUniform` initializers externally so as to replicate the default PyTorch behavior, but this is not an elegant solution -- and I am worried other users may have encountered similar problems without noticing.
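Concretely, the workaround looks something like this (`dense_with_torch_like_bias` is a hypothetical helper name; the bound mirrors what I believe PyTorch does by default):

```python
import math
import keras

def dense_with_torch_like_bias(units, fan_in, **kwargs):
    # Hypothetical helper: the bound is computed *outside* the initializer,
    # because inside __call__ only the bias shape (units,) is visible.
    bound = 1.0 / math.sqrt(fan_in)
    return keras.layers.Dense(
        units,
        bias_initializer=keras.initializers.RandomUniform(-bound, bound),
        **kwargs,
    )

# e.g. a hidden layer with 64 units fed by a single input feature
hidden = dense_with_torch_like_bias(64, fan_in=1, activation="tanh")
```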
If the goal is correctly computing the fan in, I am afraid I see no easy fix, short of restructuring the `RandomInitializer` API and giving it access to more information.
However, the real goal here is not actually computing the fan in, but preserving the properties that the size-dependent initializers were attempting to enforce. I would need to read more literature on the topic before suggesting a theoretically sound fix from this perspective. I would be willing to do that, if the keras team is open to going in this direction.
Thanks for providing detailed information. Could you please share standalone code with the model structure and the bias initializers you have been using, along with a sample output or error, to help us reproduce this issue?