This repository contains the code and resources for our paper, "Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models," published at CoLLAs 2024. The paper examines the effectiveness of rehearsal-free continual learning (RFCL) methods that use parameter-efficient finetuning (PEFT) techniques on pretrained models. We investigate the influence of query-based mechanisms in RFCL, revealing that simpler PEFT methods often match or exceed the performance of more complex systems. Our findings aim to provide a grounded understanding of RFCL with pretrained models.
To get started, clone this repository and set up your environment.
```bash
git clone https://github.com/username/repository.git
cd repository
```

Follow these steps to set up the environment and dependencies for this project.
Environment Setup:
```bash
conda create -n rfcl python=3.8
conda activate rfcl
```

Dependencies: Install the required libraries:
```bash
pip install -r requirements.txt
```

Once setup is complete, you can run the experiments as described below. Each script is organized to reproduce specific parts of the study.
To reproduce the results from the paper, run the following command:
```bash
python main.py --config-name <config_name>
```

We provide a list of configuration files for each experiment in the `configs` directory. You can specify the desired configuration file to run the corresponding experiment.
All config files default to the Split CIFAR100 benchmark. For further benchmarks, change the data config on line 2 of the config file to one of the config files in the `data` directory.
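Since the `++` override syntax used further below suggests a Hydra-based setup, the benchmark can likely also be switched from the command line instead of editing the file. The following is a minimal sketch, assuming the data configs form a Hydra config group named `data` and that an `imagenet_r` config exists (both assumptions; check the `data` directory for actual file names):

```bash
# Hypothetical sketch: switch the benchmark with a Hydra-style CLI override
# instead of editing line 2 of the config file. Assumes a `data` config group
# and an `imagenet_r` data config -- check the data directory for actual names.
python main.py --config-name <config_name> data=imagenet_r
```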
This section provides an outline for running the experiments described in the paper.
Experiments Table 1: In this section, we provide detailed instructions for running the experiments in Table 1 of the paper. To get the base performance of L2P, DP, CODA and HiDe, run the following command:
```bash
python run_experiment1.py --config configs/vit_<method>.yaml
```

replacing `<method>` with the desired method (`l2p`, `dp`, `coda`, `hide`).
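For example, to get the base performance of L2P on the default Split CIFAR100 benchmark:

```bash
# Base performance of L2P on Split CIFAR100
python run_experiment1.py --config configs/vit_l2p.yaml
```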
To reproduce our experiments with the oracle query function, replace the query config in the config file with `ViT-B_16_ft`:
```bash
python run_experiment1.py --config configs/vit_<method>.yaml ++module.model.query="ViT-B_16_ft"
```

Note that this is only implemented for the L2P, DP, and CODA methods on CIFAR100 and ImageNet-R, and it expects a ViT-B_16 model fine-tuned on the target task under `data/cifar100/cifar100_finetune.pt`.
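For example, to run L2P with the oracle query on Split CIFAR100 (assuming the fine-tuned checkpoint is already at the path above):

```bash
# L2P with the oracle query function; expects the fine-tuned checkpoint
# at data/cifar100/cifar100_finetune.pt
python run_experiment1.py --config configs/vit_l2p.yaml ++module.model.query="ViT-B_16_ft"
```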
Experiments Tables 2 and 3: In this section, we provide detailed instructions for running the experiments in Tables 2 and 3 of the paper. To reproduce the results presented in Tables 2 and 3, execute the following command:
```bash
python run_experiment2.py --config configs/vit_<method>.yaml
```

replacing `<method>` with the desired method (see the example after this list):
- `onlyprompt` for our proposed OnlyPrompt method
- `l2p` for Learning to Prompt (L2P)
- `dp` for DualPrompt (DP)
- `coda` for CODA-Prompt
- `hide` for HiDe-Prompt
- `linearprobe` for training only a linear classifier on the frozen features
- `simplecil` for SimpleCIL
- `lae_adapter` for LAE
- `adam_vpt` or `adam_adapter` for ADAM with VPT or Adapter (note that ADAM was renamed to APER after the publication of our paper)
- `rp` for RanPAC VPT
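For example, to run our OnlyPrompt method on the default Split CIFAR100 benchmark:

```bash
# OnlyPrompt on Split CIFAR100
python run_experiment2.py --config configs/vit_onlyprompt.yaml
```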
Experiments Table 5: In this section, we provide detailed instructions for running the experiments in Table 5 and Figure 2 of the paper. To reproduce the results presented there, execute the following command:
```bash
python run_experiment2.py --config configs/vit_onlyprompt_<reg_method>.yaml
```

replacing `<reg_method>` with the desired regularization method (see the example after this list):
- `ewc` for Elastic Weight Consolidation (EWC)
- `si` for Synaptic Intelligence (SI)
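For example, to run OnlyPrompt with EWC regularization:

```bash
# OnlyPrompt regularized with Elastic Weight Consolidation
python run_experiment2.py --config configs/vit_onlyprompt_ewc.yaml
```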
If you find this work useful, please consider citing our paper:
```bibtex
@inproceedings{YourLastName2024,
  title={Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models},
  author={Your Name and Collaborator Names},
  booktitle={Conference on Lifelong Learning Agents (CoLLAs)},
  year={2024},
  url={Link to paper or repository}
}
```
This codebase is based on the codebase of the LAE paper by Qiankun Gao and is licensed under the Apache-2.0 license. For the implementation of the baseline methods, code has been integrated from the following repositories: