This repository contains a collection of Jupyter notebooks and Python modules to replicate the results presented in "Deterministic Model of Incremental Multi-Agent Boltzmann Q-Learning: Transient Cooperation, Metastability, and Oscillations" (D. Goll, J. Heitzig, W. Barfuss, 2024, arXiv). The notebooks can be used for simulating, analyzing, and visualizing learning dynamics in repeated games, with a focus on the Prisoner's Dilemma.
- `PaperCompanion1_I.ipynb`, `PaperCompanion1_II.ipynb`, ..., `PaperCompanion5.ipynb`: Jupyter notebooks for generating figures and running experiments as described in the paper. Each notebook corresponds to a specific figure.
- `agent_game_sim.py`: Core Python module containing classes and functions for agent-based simulations, Q-learning, and game setup.
- `requirements.txt`: List of required Python packages.
- `data/`: Directory for simulation data and intermediate results.
- `PaperFigures/`: Output directory for generated figures.
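The learning setup the notebooks study can be sketched in a few lines. The following is an illustrative stand-in only, not the actual `agent_game_sim.py` API: two Boltzmann (softmax) Q-learners play a repeated Prisoner's Dilemma, with payoff values and parameter defaults chosen for illustration.

```python
import numpy as np

# Illustrative sketch only -- NOT the repository's agent_game_sim.py API.
# Two Boltzmann (softmax) Q-learners play a repeated Prisoner's Dilemma.

PAYOFFS = {  # (row, col) payoffs; actions: 0 = cooperate, 1 = defect
    (0, 0): (3.0, 3.0),
    (0, 1): (0.0, 5.0),
    (1, 0): (5.0, 0.0),
    (1, 1): (1.0, 1.0),
}

def boltzmann_policy(q, temperature):
    """Softmax action probabilities at the given exploration temperature."""
    z = (q - q.max()) / temperature   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def simulate(steps=2000, alpha=0.05, gamma=0.9, temperature=1.0, seed=0):
    rng = np.random.default_rng(seed)
    q = [np.zeros(2), np.zeros(2)]    # one Q-vector per agent (stateless game)
    for _ in range(steps):
        acts = tuple(rng.choice(2, p=boltzmann_policy(q[i], temperature))
                     for i in range(2))
        rewards = PAYOFFS[acts]
        for i in range(2):
            # incremental update toward reward plus discounted continuation value
            target = rewards[i] + gamma * q[i].max()
            q[i][acts[i]] += alpha * (target - q[i][acts[i]])
    return q

q_values = simulate()
```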
- Python 3.10 or later
- Clone this repository:

  ```
  git clone <repository-url>
  cd PaperCompanion_DetModelMAQL
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```
- Open any of the `PaperCompanion*.ipynb` notebooks in JupyterLab or VS Code.
- Run the cells to reproduce the figures and analyses from the paper.
- Generated figures will be saved in the `PaperFigures/` directory.
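If you prefer to run a notebook non-interactively, the standard `jupyter nbconvert` command line can execute it end to end (assuming Jupyter and nbconvert are available in your environment):

```shell
# Execute a companion notebook headlessly; output cells are written back in place
jupyter nbconvert --to notebook --execute --inplace PaperCompanion3.ipynb
```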
To reproduce the deterministic learning trajectories in policy space (Figure 3):

- Open `PaperCompanion3.ipynb`.
- Run all cells.
- The resulting figures will be saved in `PaperFigures/`.
- Simulation data is saved to and loaded from the `data/` directory.
- If `load_data` is set to `False` in a notebook, new simulations will be run and data will be generated.
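The cache-or-simulate pattern behind the `load_data` flag looks roughly like this. The file name and helper function are hypothetical; the notebooks' actual loading code may differ:

```python
from pathlib import Path
import numpy as np

load_data = True                           # set to False to force a fresh simulation
cache = Path("data") / "example_run.npy"   # hypothetical cache file

def run_simulation():
    # stand-in for an expensive learning-dynamics simulation
    return np.linspace(0.0, 1.0, 100)

if load_data and cache.exists():
    trajectory = np.load(cache)            # reuse previously generated data
else:
    trajectory = run_simulation()          # simulate anew ...
    cache.parent.mkdir(exist_ok=True)
    np.save(cache, trajectory)             # ... and cache the result in data/
```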
- Deterministic and Stochastic Q-Learning: Simulate both deterministic and stochastic learning processes.
- Flexible Experimentation: Easily modify learning rates, discount factors, temperatures, and initial conditions.
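The deterministic variant replaces sampled actions and rewards with their expectations under the current softmax policies. Below is a minimal sketch of one such update step; the payoff matrix, the probability-weighted update, and all parameter values are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

# Symmetric Prisoner's Dilemma payoffs for the row player
# (actions: 0 = cooperate, 1 = defect); illustrative values.
R = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def softmax(q, temperature):
    z = (q - q.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def deterministic_step(q1, q2, alpha=0.05, gamma=0.9, temperature=1.0):
    p1, p2 = softmax(q1, temperature), softmax(q2, temperature)
    # expected immediate reward of each own action against the co-player's mix
    r1 = R @ p2          # row player
    r2 = R @ p1          # symmetric game: same own-payoff matrix applies
    # incremental updates weighted by how often each action would be played
    q1 = q1 + alpha * p1 * (r1 + gamma * q1.max() - q1)
    q2 = q2 + alpha * p2 * (r2 + gamma * q2.max() - q2)
    return q1, q2

q1 = q2 = np.zeros(2)
for _ in range(2000):
    q1, q2 = deterministic_step(q1, q2)
```

Because the step is deterministic, identical initial conditions yield identical trajectories for both agents, which makes parameter sweeps over `alpha`, `gamma`, and `temperature` exactly reproducible.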