Feature requests for Lowpass: non-trainable and/or homogeneous #15

Open · arvoelke opened this issue Feb 5, 2021 · 4 comments

arvoelke commented Feb 5, 2021

There are two features I've found myself needing lately with respect to the Lowpass layer:

  1. (Non-Trainability) The ability to make the time-constant non-trainable. apply_during_training is close, but setting this to False skips over the lowpass entirely during training. I still want the lowpass in the forward training pass. I just don't want its time-constants to be modified from the initial value that I've provided.

  2. (Homogeneity) The ability to learn only a single time-constant. Currently the initial tau is broadcast like so:

    shape=[1] + self.state_size.as_list(),
    initializer=tf.initializers.constant(np.ones(self.state_size) * self.tau),

    such that a different tau is learned for each dimension. Sometimes prior knowledge tells us that the time-constant should be the same across all dimensions. This would also make trained lowpass filters compatible with NengoDL's converter (see Learn synaptic time-constants via backpropagation nengo-dl#60 (comment)). But even independently of that, I've encountered a situation where I'd like to learn just a single time-constant and then change the shape of the data going into the layer at inference time (i.e., have a single lowpass that is broadcast across all of the dimensions, with the same initial value); see the sketch below.
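
For concreteness, here's a minimal sketch (plain TensorFlow, not keras-spiking; all names are just for illustration) of why a single tau with a fully-broadcastable shape would allow the feature dimension to change at inference time:

```python
import tensorflow as tf

# One shared time-constant stored with a fully-broadcastable shape,
# instead of one tau per feature dimension.
tau = tf.Variable([[0.1]])  # shape (1, 1) rather than (1, units)

# exp(-dt/tau) is the usual discretized lowpass smoothing factor; the
# same (1, 1) variable broadcasts against inputs of any width.
dt = 0.001
x_train = tf.ones((32, 64))   # 64 features at training time
x_infer = tf.ones((32, 128))  # 128 features at inference time
y_train = x_train * tf.exp(-dt / tau)  # broadcasts to (32, 64)
y_infer = x_infer * tf.exp(-dt / tau)  # broadcasts to (32, 128)
```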


drasmuss commented Feb 5, 2021

You can make the layer non-trainable by passing trainable=False to the constructor (that's a general Keras feature, not something specific to keras-spiking). But allowing a single tau value would be good. It might make sense to implement this by adding a constraint parameter (similar to standard Keras layers), and then we could have a built-in constraint that enforces homogeneity.
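
For reference, a sketch of what such a constraint could look like as a standard tf.keras.constraints.Constraint (the class name Homogeneous is hypothetical, not an existing keras-spiking API):

```python
import tensorflow as tf

class Homogeneous(tf.keras.constraints.Constraint):
    """Re-tie all entries of a weight to their mean after each update,
    so that effectively a single shared value is learned."""

    def __call__(self, w):
        # Replace every element with the mean, preserving shape and dtype.
        return tf.ones_like(w) * tf.reduce_mean(w)
```

Since Keras applies constraints after each optimizer step, per-element gradients still flow as usual; the values are just averaged back together before the next forward pass.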


arvoelke commented Feb 5, 2021

> You can make the layer non-trainable by passing trainable=False to the constructor

If I wanted to do this just for the time-constant and not for the initial value, would I need to do layer.tau_var.trainable = False? (And how would I do that, given that tau_var doesn't exist until the layer is built?)


drasmuss commented Feb 5, 2021

Hmm, I'm not positive whether modifying tau_var.trainable after the initial build would have an effect or not; you'd have to experiment. But yeah, trainable=False wouldn't work for that case.
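
If it helps anyone experimenting with this, here's a rough way to check empirically whether the flag has any effect (an untested sketch; it assumes the time-constant is the first constructor argument, and it looks the tau variable up by name since the exact attribute path may differ):

```python
import numpy as np
import tensorflow as tf
import keras_spiking

inp = tf.keras.Input((None, 4))  # (batch, time, features)
layer = keras_spiking.Lowpass(0.1)
model = tf.keras.Model(inp, layer(inp))
model.compile(optimizer="sgd", loss="mse")

# Find the tau variable by name rather than assuming an attribute path.
tau_var = [v for v in model.weights if "tau" in v.name][0]
tau_before = tau_var.numpy().copy()

# The attempt in question (may or may not have any effect, or may error,
# since a tf.Variable's trainable flag is normally fixed at construction):
# tau_var.trainable = False

x = np.random.rand(8, 10, 4).astype(np.float32)
model.fit(x, x, epochs=1, verbose=0)
print("tau changed:", not np.allclose(tau_before, tau_var.numpy()))
```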

drasmuss commented

  2. (Homogeneous time constant) can now be done by setting tau_constraint=keras_spiking.constraints.Mean()
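
A usage sketch (assuming the time-constant is still passed as the first constructor argument):

```python
import keras_spiking

# Learn a single time-constant shared across all dimensions.
layer = keras_spiking.Lowpass(
    0.1, tau_constraint=keras_spiking.constraints.Mean()
)
```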
