This repository is the implementation of our paper "Oops, I Sampled it Again: Reinterpreting Confidence Intervals in Few-Shot Learning", published in TMLR 2024.
You can cite this work using:
@article{lafargue2024oops,
  title={Oops, I Sampled it Again: Reinterpreting Confidence Intervals in Few-Shot Learning},
  author={Raphael Lafargue and Luke A Smith and Franck VERMET and Matthias L{\"o}we and Ian Reid and Jack Valmadre and Vincent Gripon},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2024},
  url={https://openreview.net/forum?id=JxxkKt9yrx},
  note={}
}
This repository is a fork of the implementation of the ICML 2023 paper "A Closer Look at Few-shot Classification Again", as well as a PyTorch implementation of Meta-Dataset without any TensorFlow components.
We modified the sampler to sample only without replacement. The comparison with the predominant method was performed on the original branch of the repository. To recreate the experiments, follow the steps below.
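To illustrate the idea of the modified sampler, here is a minimal sketch of drawing disjoint tasks from a pool of examples without replacement. The function name, arguments, and partitioning scheme are illustrative assumptions, not the repository's actual sampler implementation:

```python
import random

def sample_tasks_without_replacement(pool, task_size, num_tasks, seed=0):
    """Partition a shuffled pool into disjoint tasks.

    Illustrative sketch only: each element of `pool` appears in at
    most one task, so no example is sampled twice across tasks.
    """
    rng = random.Random(seed)
    indices = list(range(len(pool)))
    rng.shuffle(indices)
    if num_tasks * task_size > len(indices):
        raise ValueError("pool too small to sample without replacement")
    # Slice the shuffled index list into consecutive, non-overlapping tasks.
    return [
        [pool[i] for i in indices[t * task_size:(t + 1) * task_size]]
        for t in range(num_tasks)
    ]
```

Sampling this way exhausts the pool instead of reusing examples, which is what changes the interpretation of the resulting confidence intervals.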
You can compare your performance to our results. To ensure a correct task-to-task comparison, check the results folder and measure the correlation with your results.
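A task-to-task comparison can be done by computing the Pearson correlation between aligned per-task accuracies. This is a small helper sketch, assuming you have loaded the reference accuracies from the results folder and your own measurements as two sequences in the same task order (the file layout itself is not specified here):

```python
import numpy as np

def task_correlation(reference, measured):
    """Pearson correlation between two aligned arrays of per-task accuracies.

    `reference` and `measured` must list tasks in the same order.
    """
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(reference, measured)[0, 1])
```

A correlation close to 1 indicates your per-task results track ours task by task, rather than merely matching on average.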
Install packages using pip:
$ pip install -r requirements.txt
Please follow the instructions in the original repository.
The config files for the benchmark can be found in .config/benchmark, but you can recreate them using:
$ python loop_yaml.py
You can then run the scripts by selecting which backbone you prefer and the number of shots:
./loop_exp.sh clip 10
It will test every dataset and store the results in the folder specified in loop_exp.sh.
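To sweep several configurations, you can wrap the script in a loop. This dry-run sketch only prints the commands it would execute; the backbone names other than clip are placeholders, so substitute the ones your setup actually supports:

```shell
# Print the command for each backbone/shot combination instead of running it.
# Remove the `echo` to launch the experiments for real.
for backbone in clip resnet12; do
  for shots in 1 5 10; do
    echo "./loop_exp.sh $backbone $shots"
  done
done
```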