# Virtual Neural Lab (VNL) in MJX

Deep reinforcement learning environments for neuroscience, following the MuJoCo Playground API.
## Installation

To install vnl-mjx, first navigate to the project directory and run:

```bash
pip install -e ".[with-cuda]"
```

### Prerequisites

- Python 3.11 or 3.12
- uv package manager (recommended) or pip
- CUDA 12.x or 13.x (optional, for GPU support)
If you don't have uv installed:

```bash
# Linux/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or using pip
pip install uv
```

- Clone the repository:

  ```bash
  git clone https://github.com/talmolab/vnl-playground.git
  cd vnl-playground
  ```

- Create and activate a virtual environment:

  ```bash
  uv venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install the package with the optional dependencies that match your hardware. CUDA 12, CUDA 13, and CPU-only configurations are supported.
  For CUDA 12.x:

  ```bash
  uv pip install -e ".[cuda12]"
  ```

  For CUDA 13.x:

  ```bash
  uv pip install -e ".[cuda13]"
  ```

  For CPU-only:

  ```bash
  uv pip install -e .
  ```

  For development, include the `dev` extra in addition to the hardware extras:

  ```bash
  uv pip install -e ".[cuda13,dev]"
  ```

- Verify the installation:
  ```bash
  python -c "import jax; print(f'JAX version: {jax.__version__}'); print(f'Available devices: {jax.devices()}')"
  ```

- Register the environment as a Jupyter kernel:
  ```bash
  python -m ipykernel install --user --name=track-mjx --display-name="Python (track-mjx)"
  ```

## Design

We adopt the mujoco_playground approach to environment and task management: each task is tied to a specific walker, rather than treating tasks and walkers as separate entities (as in dm_control Composer). This lets environments make stronger assumptions about the body model definition, at the cost of some repeated environment logic.
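The tied task-and-walker pattern can be sketched as follows. This is a minimal stand-in, not the actual vnl-playground API: `RodentJoystick`, its asset path, and the toy dynamics are hypothetical placeholders, and real environments step MJX physics under JAX.

```python
from dataclasses import dataclass, replace
import random


@dataclass(frozen=True)
class State:
    """Minimal stand-in for a playground-style environment state."""
    obs: tuple
    reward: float
    done: bool


class RodentJoystick:
    """Hypothetical task: the environment class owns its walker
    (body model) instead of composing task and walker separately."""

    WALKER_XML = "rodent.xml"  # assumed asset path, illustrative only

    def reset(self, seed: int) -> State:
        rng = random.Random(seed)
        return State(obs=(rng.random(),), reward=0.0, done=False)

    def step(self, state: State, action: float) -> State:
        # Toy dynamics: penalize large actions; a real env would
        # advance the physics and compute a task-specific reward.
        return replace(state, reward=-abs(action), done=False)


env = RodentJoystick()
state = env.reset(seed=0)
state = env.step(state, action=0.1)
```

Because the walker is baked into the task class, reward and observation code can refer to specific bodies and joints by name without the indirection a generic composer would require.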
vnl-playground uses `mujoco.MjSpec` for model creation and editing. This allows us to generate environments procedurally, for example adding target locations for reaching tasks or randomizing terrain shapes.
RL training works out of the box with Brax and RSL-RL. Our demo notebooks are a work in progress; in the meantime, check out the MuJoCo Playground examples.