Feature requests for Lowpass: non-trainable and/or homogeneous #15
Comments
You can make the layer non-trainable by passing `trainable=False` to the constructor (that's a general Keras feature, not something specific to keras-spiking). But allowing a single `tau` value would be good. It might make sense to implement this by adding a constraint parameter (similar to standard Keras layers), and then we could have a built-in constraint that enforces homogeneity.
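As a rough illustration of the constraint idea, here is a minimal sketch of a Keras weight constraint that enforces homogeneity by projecting the weight onto its mean after each update. The `Homogeneous` class name is hypothetical, and keras-spiking does not necessarily expose a `tau_constraint` parameter — this only shows what such a built-in constraint could look like:

```python
import tensorflow as tf


class Homogeneous(tf.keras.constraints.Constraint):
    """Hypothetical constraint: force all entries of a weight tensor
    to share a single value by replacing each with the mean."""

    def __call__(self, w):
        # Broadcast the mean back to the original shape, so every
        # entry is identical after each gradient update.
        return tf.broadcast_to(tf.reduce_mean(w), tf.shape(w))
```

If a `tau_constraint` parameter existed on `Lowpass` (an assumption, mirroring standard Keras layers like `Dense(kernel_constraint=...)`), usage might look like `Lowpass(units, tau_constraint=Homogeneous())`.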
If I wanted to do this just for the time-constant and not for the initial value, then I would need to do something like modify `tau_var.trainable` after the layer is built?
Hmm, I'm not positive whether modifying `tau_var.trainable` after the initial build would have an effect or not; you'd have to experiment. But yeah, `trainable=False` wouldn't work for that case.
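One way to sidestep the uncertainty about toggling `.trainable` post-build is to filter the variable out of the set passed to the optimizer in a custom training loop. A minimal sketch, using a plain `Dense` layer and its bias as a stand-in for the built `Lowpass` layer and its `tau` variable (both stand-ins are assumptions for illustration):

```python
import tensorflow as tf

# Stand-in layer; in keras-spiking this would be the built Lowpass
# layer, and `layer.bias` below would be the tau variable.
layer = tf.keras.layers.Dense(4)
layer.build((None, 3))

# Freeze one variable by excluding it from the optimizer's variable
# list, rather than flipping .trainable after build.
frozen = {layer.bias.ref()}
train_vars = [v for v in layer.trainable_variables if v.ref() not in frozen]

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
x = tf.ones((2, 3))
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(layer(x) ** 2)
grads = tape.gradient(loss, train_vars)
optimizer.apply_gradients(zip(grads, train_vars))
# The frozen variable keeps its initial value; the others update.
```

This keeps the frozen time-constant at exactly its initial value regardless of how Keras treats post-build `.trainable` changes.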
There are two features I've found myself needing lately w.r.t. the lowpass layer:

1. **Non-trainability:** The ability to make the time-constant non-trainable. `apply_during_training` is close, but setting this to `False` skips over the lowpass entirely during training. I still want the lowpass in the forward training pass; I just don't want its time-constants to be modified from the initial value that I've provided.
2. **Homogeneity:** The ability to learn only a single time-constant. Currently the initial `tau` is broadcast like so:

   keras-spiking/keras_spiking/layers.py, Lines 582 to 583 in 116fc6c

   such that a different `tau` is learned for each dimension. Sometimes prior knowledge can tell us that the time-constant should be the same across dimensions. This would also make trained lowpass filters compatible with NengoDL's converter (see Learn synaptic time-constants via backpropagation nengo-dl#60 (comment)). But even independently of that, I've encountered a situation where I'd like to learn just a single time-constant, and then change the shape of the data going into the layer at inference time (i.e., have a single lowpass that is broadcast across all of the dimensions, also with the same initial values).
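To make the homogeneity request concrete, here is a minimal sketch of a discrete lowpass update where a single scalar `tau` broadcasts over every feature dimension. This uses a simplified Euler discretization, not keras-spiking's actual update rule, and `lowpass_step` is a hypothetical helper:

```python
import tensorflow as tf


def lowpass_step(x, y, tau, dt=0.001):
    """One Euler-discretized lowpass update: y += (dt / tau) * (x - y).
    `tau` may be a scalar (homogeneous, broadcast over all dimensions)
    or a per-dimension vector (heterogeneous)."""
    return y + (dt / tau) * (x - y)


x = tf.ones((2, 4))   # batch of inputs with 4 features
y = tf.zeros((2, 4))  # filter state

# Homogeneous: one trainable scalar broadcasts over every dimension,
# so the feature count can change at inference without new weights.
tau_scalar = tf.Variable(0.1)
y_homog = lowpass_step(x, y, tau_scalar)

# Heterogeneous (current behavior): one tau per dimension.
tau_per_dim = tf.Variable([0.1, 0.1, 0.1, 0.1])
y_hetero = lowpass_step(x, y, tau_per_dim)
```

With identical initial values, the two parameterizations produce the same output; the difference is only in how many trainable parameters exist and whether the layer's weight shape is tied to the input shape.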