RLM Development #13

@jkbchouinard

Description

Using the principles we explored in our meeting (Jan. 20), we can give the model a form of memory using recurrence. This can be integrated into our existing MELM as follows:

with nengo.Network() as model:

    # Existing MELM ensemble and connection construction

    # Recurrent connection: feed the representation ensemble back into itself
    rep_recur_con = nengo.Connection(
        rep_ens, rep_ens,
        transform=1.0,  # feedback gain (value to be tuned)
        synapse=0.1,    # recurrent synaptic time constant in seconds (value to be tuned)
    )

    # Existing MELM probing
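To build intuition for why this feedback gives the model memory, here is a hedged discrete-time sketch of a recurrent loop (illustrative only; Nengo's actual dynamics involve spiking neurons and synaptic filtering, and `feedback_gain` and `alpha` are hypothetical stand-ins for the `transform` and `synapse` parameters):

```python
def simulate_recurrence(inputs, feedback_gain=0.9, alpha=0.2):
    """Toy recurrent loop: x <- x + alpha * (feedback_gain * x + u - x).

    The state is a low-pass-filtered mix of the input and its own feedback,
    so it retains a decaying trace of past inputs -- a simple form of memory.
    """
    state = 0.0
    trajectory = []
    for u in inputs:
        state += alpha * (feedback_gain * state + u - state)
        trajectory.append(state)
    return trajectory

# A brief input pulse followed by silence: the state decays slowly rather
# than dropping to zero, showing that the recurrence holds information.
traj = simulate_recurrence([1.0] * 5 + [0.0] * 20)
```

With `feedback_gain` below 1 the trace decays; pushing it toward 1 makes the loop behave more like an integrator, which connects to the separate integrating block discussed below.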

We can also consider a separate integrating block that contributes to the output just like the representation ensemble:
[image: hand-drawn sketch of the proposed RLM architecture with a separate integrating block]
(I apologize, I drew this with my computer mouse and it is... not incredible)

Just like with the MELM, we're interested in seeing how this model performs when trained and tested on our spiking dataset. Since we're concerned with performance over time, plotting the error (absolute and squared) over time could be a useful exercise in visualizing where the model struggles.
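As a minimal sketch of the error-over-time analysis, the following computes per-timestep absolute and squared error; `predicted` and `target` are placeholder names for the decoded probe output and the dataset's target signal:

```python
def error_over_time(predicted, target):
    """Return (absolute_error, squared_error) lists, one value per timestep."""
    abs_err = [abs(p - t) for p, t in zip(predicted, target)]
    sq_err = [(p - t) ** 2 for p, t in zip(predicted, target)]
    return abs_err, sq_err

# Toy example: the model lags the target, so the error spikes mid-sequence.
predicted = [0.0, 0.5, 0.8, 1.0]
target = [0.0, 1.0, 1.0, 1.0]
abs_err, sq_err = error_over_time(predicted, target)
```

Plotting both traces against the simulation time axis (e.g. with matplotlib) should make it easy to see whether errors cluster around transients or accumulate steadily.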

Similarly, it would be useful to generate visualizations of the RLM model using nengo-gui for a more visual and intuitive way to interpret the underlying architecture (my drawing is not adequate).
