Plasticity-Scan: Visualizing Neural Network Plasticity

Overview

Plasticity-Scan is a small, self-contained research notebook for visualizing neural network plasticity on a controlled shifting regression (concept drift) task.
It is designed as a didactic, reproducible example rather than a benchmark, with a focus on:

  • Clear experimental setup (data, model, metrics, plots)
  • Interpretable internal metrics of plasticity
  • Minimal dependencies and simple code structure

The core logic lives in main.ipynb.

Problem setup

  • The model receives a continuous stream of 1D inputs \(x\) and must predict targets \(y\).
  • The mapping from \(x\) to \(y\) shifts once during training:
    • Steps 0–499: \(y = 2x + \varepsilon\)
    • Steps 500–999: \(y = -5x + \varepsilon\) (abrupt concept drift)
  • A small two-layer ReLU network is trained online with SGD.

While the loss tracks external performance, we also log internal metrics to probe how representations adapt across the drift.
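Concretely, the piecewise mapping can be generated roughly like this (a minimal NumPy sketch; parameter names such as drift_step and noise_std are illustrative and may differ from the actual generate_data in main.ipynb):

```python
import numpy as np

def generate_data(step, batch_size, drift_step=500, noise_std=0.1, rng=None):
    """Sample one batch from the drifting regression stream.

    Before drift_step the target is y = 2x + eps; afterwards y = -5x + eps.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(-1.0, 1.0, size=(batch_size, 1))
    slope = 2.0 if step < drift_step else -5.0  # abrupt concept drift
    eps = rng.normal(0.0, noise_std, size=(batch_size, 1))
    y = slope * x + eps
    return x.astype(np.float32), y.astype(np.float32)
```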

Key components (in main.ipynb)

  • Reproducible setup

    • set_seed(...) fixes Python, NumPy, and PyTorch RNGs for deterministic behavior.
  • Data generation

    • generate_data(step, batch_size) simulates concept drift by switching the regression slope at a chosen step.
    • Includes small Gaussian noise to avoid degenerate activations.
  • Model

    • SimpleRegressor is a 2-layer MLP (Linear(1 → 8), ReLU, Linear(8 → 1)).
    • Caches hidden activations on each forward pass for downstream plasticity analysis.
  • Plasticity metrics

    • get_metrics(model, activations) computes:
      • Dormant neuron ratio – share of hidden units whose average activation is effectively zero.
      • Effective rank (via SVD) – a scalar proxy for how many directions in activation space are meaningfully used.
  • Training loop

    • Streams batches step-by-step from generate_data.
    • Optimizes MSE loss with SGD.
    • Logs loss, dormant ratio, and effective rank at each step.
  • Visualization

    • 3D and 2D visualizations of the drifting data distribution.
    • Time series plots of:
      • Training loss (with the drift point marked).
      • Dormant neuron ratio.
      • Effective rank of hidden activations.
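The model and metrics above can be sketched as follows. This is not the notebook's exact code: the effective-rank formula here is the entropy-based definition (exponential of the entropy of the normalized singular values); the notebook may instead count singular values above a threshold.

```python
import torch
import torch.nn as nn

class SimpleRegressor(nn.Module):
    """2-layer MLP that caches hidden activations for plasticity analysis."""
    def __init__(self, hidden=8):
        super().__init__()
        self.fc1 = nn.Linear(1, hidden)
        self.fc2 = nn.Linear(hidden, 1)
        self.activations = None  # refreshed on every forward pass

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        self.activations = h.detach()  # cache for get_metrics
        return self.fc2(h)

def get_metrics(activations, dormant_eps=1e-3):
    """Dormant-neuron ratio and effective rank of a (batch, hidden) matrix."""
    # Dormant ratio: fraction of units whose mean activation is ~zero.
    mean_act = activations.mean(dim=0).abs()
    dormant_ratio = (mean_act < dormant_eps).float().mean().item()
    # Effective rank: exp of the entropy of the normalized singular values.
    s = torch.linalg.svdvals(activations)
    p = s / s.sum()
    p = p[p > 0]
    eff_rank = torch.exp(-(p * torch.log(p)).sum()).item()
    return dormant_ratio, eff_rank
```

For an identity-like activation matrix, all singular values are equal and the effective rank equals the number of units; as activations collapse onto fewer directions, it falls toward 1.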

Getting started

1. Environment

You will need a recent Python environment (3.9+ recommended) with the following libraries:

  • torch
  • numpy
  • matplotlib

You can install them via:

pip install torch numpy matplotlib

or, for conda users:

conda install pytorch numpy matplotlib -c pytorch

2. Running the notebook

  1. Start Jupyter (or VS Code / Cursor / any notebook frontend) in the repository root:

    jupyter notebook
  2. Open main.ipynb.

  3. Run the cells from top to bottom:

    • The early cells set up imports and seeds.
    • Middle cells define the data generator, model, metrics, and training loop.
    • Final cells render the visualizations and plasticity curves.
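The online-training loop that the middle cells implement follows roughly this pattern (a compressed stand-in with the data generator and model inlined; the learning rate, batch size, and noise level are illustrative, and metric logging is elided):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

losses = []
for step in range(1000):
    slope = 2.0 if step < 500 else -5.0          # abrupt drift at step 500
    x = torch.rand(32, 1) * 2 - 1                # 1D inputs in [-1, 1]
    y = slope * x + 0.1 * torch.randn(32, 1)     # targets with Gaussian noise
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())                   # log loss (metrics omitted here)
```

Plotting `losses` should show the characteristic spike at step 500 followed by re-adaptation.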

Interpreting the results

  • Loss curve
    Expect a spike or plateau around the drift step (~500), followed by re-adaptation as the network re-learns the new mapping.

  • Dormant neuron ratio

    • High values suggest many hidden units are effectively unused.
    • Changes around the drift hint at whether the network recruits previously idle capacity or effectively prunes units.
  • Effective rank

    • Higher rank indicates more diverse hidden representations.
    • A drop can signal collapse to low-dimensional structure; a rise after drift can reflect renewed exploration of representation space.

Taken together, these curves offer a compact "scan" of plasticity that goes beyond scalar loss.

Repository structure

  • main.ipynb – core experiment, including data generation, model, metrics, training, and plots.
  • LICENSE – project license.

Additional scripts, configs, or experiment variants can be added under new files or subdirectories as the project grows.

Contributing

Contributions are welcome. In particular, you might:

  • Add alternative plasticity metrics (e.g., weight-change norms, curvature, Fisher information).
  • Extend the dataset to more complex or gradual drifts.
  • Compare different optimizers or architectures under the same protocol.

Please open an issue or pull request describing:

  • What you changed,
  • Why it is useful, and
  • How to reproduce or verify it.

License

This project is licensed under the terms of the license in LICENSE. Please see that file for details.
