Testing theories about retinal coding in reinforcement learning environments

berenslab/retinal-rl

Retinal RL

A deep learning framework for vision research using deep reinforcement learning.

Apptainer Environment

Retinal-RL is designed to run in a containerized environment using Apptainer.

Installation

  1. Install Apptainer to run the containerized environment.

  2. Get the container:

  • Either pull the pre-built container:
apptainer pull retinal-rl.sif oras://ghcr.io/berenslab/retinal-rl:singularity-image
  • or build from source:
apptainer build retinal-rl.sif resources/retinal-rl.def

Running Experiments

The scan command prints info about the proposed neural network architecture:

apptainer exec retinal-rl.sif python main.py +experiment="{experiment}" command=scan

The experiment must always be specified with the +experiment flag. To train a model, use the train command:

apptainer exec retinal-rl.sif python main.py +experiment="{experiment}" command=train

In the commands above, apptainer can typically be replaced with singularity if you use Singularity instead of Apptainer.

Hydra Configuration

The project uses Hydra for configuration management.

Directory Structure

The structure of the ./config/ directory is as follows:

base/config.yaml     # General and system configurations
user/
├── brain/           # Neural network architectures
├── dataset/         # Dataset configurations
├── optimizer/       # Training optimizers
└── experiment/      # Experiment configurations
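As an illustration, an experiment config in this layout would typically compose the other config groups via a Hydra defaults list. The file name, group entries, and override values below are hypothetical examples, not actual configs from this repository:

```yaml
# user/experiment/my_experiment.yaml (hypothetical example)
# Compose an experiment from the brain, dataset, and optimizer config groups.
defaults:
  - /brain: simple_cnn      # hypothetical architecture config
  - /dataset: cifar10       # hypothetical dataset config
  - /optimizer: adam        # hypothetical optimizer config
  - _self_                  # apply this file's overrides last

# Experiment-specific overrides
optimizer:
  lr: 0.001
```

Because the file lives in the experiment group, it can then be selected with +experiment=my_experiment on the command line.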

Default Configuration

Template configs are available under ./resources/config_templates/user/..., and they also document the configuration variables themselves. Consult the Hydra documentation for more information on configuring your project.

Configuration Management

  1. Configuration templates may be copied to the user directory by running:
bash tests/ci/copy_configs.sh
  2. Template and custom configurations can be sanity-checked with:
bash tests/ci/scan_configs.sh

which runs the scan command for all experiments.

Weights & Biases Integration

Retinal-RL supports logging to Weights & Biases for experiment tracking.

Basic Configuration

By default, plots and analyses are saved locally. To enable Weights & Biases logging, set logging.use_wandb=True on the command line:

apptainer exec retinal-rl.sif python main.py +experiment="{experiment}" logging.use_wandb=True command=train

Parameter Sweeps

Wandb sweeps can be added to user/sweeps/{sweep}.yaml and launched from the command line:

apptainer exec retinal-rl.sif python main.py +experiment="{experiment}" +sweep="{sweep}" command=sweep

Typically the only command line arguments that need a + prefix will be +experiment and +sweep. Also note that .yaml extensions are dropped at the command line.
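For reference, a W&B sweep definition generally specifies a search method, a target metric, and the parameters to vary. The file name, metric name, and parameter path below are hypothetical, and how this repository maps sweep parameters onto Hydra overrides is an assumption:

```yaml
# user/sweeps/lr_sweep.yaml (hypothetical example)
method: bayes                 # grid, random, or bayes search
metric:
  name: validation_loss       # hypothetical metric name logged to W&B
  goal: minimize
parameters:
  optimizer.lr:               # hypothetical Hydra override path
    values: [0.01, 0.001, 0.0001]
```

With such a file in place, the sweep above would be launched as +sweep="lr_sweep" (note the dropped .yaml extension).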
