Hi,
I'm working on a project in which I have to manually set the weights of some layers of my network. However, when I do so, the stored weights differ from the values I set (and not by simple rounding) starting at around the 8th significant digit, as seen in the picture.
This breaks my algorithm, and I am curious whether someone knows how this can happen or how it can be prevented. Since this is part of a very large project, a "minimal" working example would run to several thousand lines, so I will refrain from providing one. Maybe someone has had a similar problem and has some insights.
You're losing precision by converting from float64 to float32: Keras layer weights default to float32, which carries only about 7 significant decimal digits, so the divergence shows up around the 8th digit. You can override this per layer by passing dtype = "float64" when creating the layer.
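As a minimal sketch (the layer sizes and weight values here are placeholders, not from your project):

```r
library(keras)

# A single dense layer stored in float64, so manually supplied
# weights keep full double precision.
model <- keras_model_sequential() %>%
  layer_dense(units = 4, input_shape = 3, dtype = "float64")

w <- matrix(runif(3 * 4), nrow = 3)  # R doubles are already float64
b <- rep(0, 4)
set_weights(model, list(w, b))

# Should print 0: the weights round-trip exactly. With the default
# float32 dtype, this difference would be on the order of 1e-8.
max(abs(get_weights(model)[[1]] - w))
```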
You can also try setting a global default with keras::k_set_floatx("float64"), but be aware that it doesn't always propagate to every TensorFlow operation, so you may still have to hunt down stray float32 conversions in a large codebase.
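A sketch of the global approach (the same caveat about stray conversions applies; the model below is again just a placeholder):

```r
library(keras)

# Must run before any layers or tensors are created; objects that
# already exist keep their float32 dtype.
k_set_floatx("float64")
k_floatx()  # "float64"

# Layers created from here on default to float64.
model <- keras_model_sequential() %>%
  layer_dense(units = 4, input_shape = 3)
```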