Official PyTorch Implementation of Aux-NAS.
Please refer to our paper for more technical details:
Yuan Gao, Weizhong Zhang, Wenhan Luo, Lin Ma, Jin-Gang Yu, Gui-Song Xia, Jiayi Ma. Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost, International Conference on Learning Representations (ICLR), 2024. [arXiv]
If this code is helpful to your research, please consider citing our paper:
@inproceedings{auxnas2024,
  title={Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost},
  author={Yuan Gao and Weizhong Zhang and Wenhan Luo and Lin Ma and Jin-Gang Yu and Gui-Song Xia and Jiayi Ma},
  year={2024},
  booktitle={International Conference on Learning Representations (ICLR)}
}
Install the necessary dependencies:
$ pip install -r requirements.txt
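As an optional sanity check (not part of the official instructions), a minimal Python snippet can confirm that PyTorch installed correctly and that a GPU is visible:

# Quick post-install check: prints the PyTorch version and whether CUDA is available.
# Assumption: the commands below target GPU training, so CUDA should report True.
import torch
print(torch.__version__, torch.cuda.is_available())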
Follow the instructions here to prepare the dataset. Alternatively, download the preprocessed dataset here and unzip it in the Aux-NAS folder.
Train the single-task weights following the instructions below. Alternatively, download the converted PyTorch models from here and unzip them in the Aux-NAS folder.
When you are all set, you should have the following file structure:
Aux-NAS/datasets/nyu_train_val
Aux-NAS/ckpts/single_task/normal
Aux-NAS/ckpts/single_task/seg
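Before launching training, a minimal sketch like the following (not part of the repository; paths are taken directly from the layout above) can verify that everything is in place:

# Run from the Aux-NAS folder to check the expected directory layout.
import os

expected = [
    "datasets/nyu_train_val",      # preprocessed NYU dataset
    "ckpts/single_task/normal",    # single-task normal checkpoint
    "ckpts/single_task/seg",       # single-task segmentation checkpoint
]
for path in expected:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")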
The configuration files for the different experiments are provided in the configs directory.
$ CUDA_VISIBLE_DEVICES=0 python train.py --config-file configs/config_normal2.json # normal
$ CUDA_VISIBLE_DEVICES=0 python train.py --config-file configs/config_seg2.json # seg
$ CUDA_VISIBLE_DEVICES=0 python train.py --config-file configs/config_full2.json # normal + seg
$ CUDA_VISIBLE_DEVICES=0 python train.py --config-file configs/config_aux_nas2_prim_normal.json # normal (primary) + seg (auxiliary)
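The JSON schema of these configs is defined by the repository. Purely as an illustration of the --config-file pattern used above, a loader along these lines could inspect a config before training (a hypothetical sketch, not the actual train.py code):

# Illustrative only: parse --config-file and print the JSON config it points to.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--config-file", required=True, help="path to a JSON config")
args = parser.parse_args()

with open(args.config_file) as f:
    cfg = json.load(f)  # actual keys (tasks, lr, etc.) are defined by the repo
print(json.dumps(cfg, indent=2))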
The ViT implementation for multiple tasks is adapted from FocusOnDepth.