Hi authors, congratulations on your great work! I am currently trying to reproduce your results, but I’m running into a few issues and I’m not sure whether what I’m seeing is expected.
I ran the command from the README (with data downloaded from the VARC repository):
```
python offline_train_loop_ARC.py --architecture loop_vit --batch-size 32 --max-loop-steps 8
```
Here are the logs around the end of training:
```
Epoch 78 [#######################-------] 78.0% ETA 00h12m16s
epoch=78 | train_loss=0.2399 | train_acc=0.0169 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000042 | eval_loss=0.6326 | eval_acc=0.0000 | eval_steps=8.00
Epoch 79 [#######################-------] 79.0% ETA 00h11m43s
epoch=79 | train_loss=0.2384 | train_acc=0.0200 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000039 | eval_loss=0.6292 | eval_acc=0.0000 | eval_steps=8.00
Epoch 80 [#######################-------] 80.0% ETA 00h11m09s
epoch=80 | train_loss=0.2410 | train_acc=0.0108 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000035 | eval_loss=0.6265 | eval_acc=0.0000 | eval_steps=8.00
Epoch 81 [########################------] 81.0% ETA 00h10m36s
epoch=81 | train_loss=0.2403 | train_acc=0.0223 | avg_train_steps=8.00 | epoch_time=27.8s | lr=0.000032 | eval_loss=0.6331 | eval_acc=0.0000 | eval_steps=8.00
Epoch 82 [########################------] 82.0% ETA 00h10m02s
epoch=82 | train_loss=0.2346 | train_acc=0.0215 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000029 | eval_loss=0.6359 | eval_acc=0.0000 | eval_steps=8.00
Epoch 83 [########################------] 83.0% ETA 00h09m29s
epoch=83 | train_loss=0.2366 | train_acc=0.0169 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000026 | eval_loss=0.6367 | eval_acc=0.0000 | eval_steps=8.00
Epoch 84 [#########################-----] 84.0% ETA 00h08m55s
epoch=84 | train_loss=0.2331 | train_acc=0.0154 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000023 | eval_loss=0.6325 | eval_acc=0.0000 | eval_steps=8.00
Epoch 85 [#########################-----] 85.0% ETA 00h08m22s
epoch=85 | train_loss=0.2229 | train_acc=0.0169 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000020 | eval_loss=0.6371 | eval_acc=0.0000 | eval_steps=8.00
Epoch 86 [#########################-----] 86.0% ETA 00h07m48s
epoch=86 | train_loss=0.2382 | train_acc=0.0161 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000018 | eval_loss=0.6300 | eval_acc=0.0000 | eval_steps=8.00
Epoch 87 [##########################----] 87.0% ETA 00h07m15s
epoch=87 | train_loss=0.2361 | train_acc=0.0184 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000015 | eval_loss=0.6312 | eval_acc=0.0000 | eval_steps=8.00
Epoch 88 [##########################----] 88.0% ETA 00h06m41s
epoch=88 | train_loss=0.2344 | train_acc=0.0230 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000013 | eval_loss=0.6326 | eval_acc=0.0000 | eval_steps=8.00
Epoch 89 [##########################----] 89.0% ETA 00h06m08s
epoch=89 | train_loss=0.2335 | train_acc=0.0177 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000011 | eval_loss=0.6318 | eval_acc=0.0000 | eval_steps=8.00
Epoch 90 [##########################----] 90.0% ETA 00h05m35s
epoch=90 | train_loss=0.2302 | train_acc=0.0123 | avg_train_steps=8.00 | epoch_time=27.8s | lr=0.000009 | eval_loss=0.6351 | eval_acc=0.0000 | eval_steps=8.00
Epoch 91 [###########################---] 91.0% ETA 00h05m01s
epoch=91 | train_loss=0.2255 | train_acc=0.0207 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000007 | eval_loss=0.6350 | eval_acc=0.0000 | eval_steps=8.00
Epoch 92 [###########################---] 92.0% ETA 00h04m28s
epoch=92 | train_loss=0.2303 | train_acc=0.0154 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000006 | eval_loss=0.6325 | eval_acc=0.0000 | eval_steps=8.00
Epoch 93 [###########################---] 93.0% ETA 00h03m54s
epoch=93 | train_loss=0.2314 | train_acc=0.0138 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000004 | eval_loss=0.6333 | eval_acc=0.0000 | eval_steps=8.00
Epoch 94 [############################--] 94.0% ETA 00h03m21s
epoch=94 | train_loss=0.2259 | train_acc=0.0131 | avg_train_steps=8.00 | epoch_time=27.7s | lr=0.000003 | eval_loss=0.6323 | eval_acc=0.0000 | eval_steps=8.00
Epoch 95 [############################--] 95.0% ETA 00h02m47s
epoch=95 | train_loss=0.2341 | train_acc=0.0146 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000002 | eval_loss=0.6321 | eval_acc=0.0000 | eval_steps=8.00
Epoch 96 [############################--] 96.0% ETA 00h02m14s
epoch=96 | train_loss=0.2234 | train_acc=0.0192 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000001 | eval_loss=0.6319 | eval_acc=0.0000 | eval_steps=8.00
Epoch 97 [#############################-] 97.0% ETA 00h01m41s
epoch=97 | train_loss=0.2226 | train_acc=0.0192 | avg_train_steps=8.00 | epoch_time=27.8s | lr=0.000001 | eval_loss=0.6321 | eval_acc=0.0000 | eval_steps=8.00
Epoch 98 [#############################-] 98.0% ETA 00h01m07s
epoch=98 | train_loss=0.2240 | train_acc=0.0184 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000000 | eval_loss=0.6323 | eval_acc=0.0000 | eval_steps=8.00
Epoch 99 [#############################-] 99.0% ETA 00h00m34s
epoch=99 | train_loss=0.2266 | train_acc=0.0154 | avg_train_steps=8.00 | epoch_time=27.6s | lr=0.000000 | eval_loss=0.6323 | eval_acc=0.0000 | eval_steps=8.00
Epoch 100 [#############################-] 100.0% ETA 00h00m00s
epoch=100 | train_loss=0.2313 | train_acc=0.0177 | avg_train_steps=8.00 | epoch_time=27.5s | lr=0.000000 | eval_loss=0.6323 | eval_acc=0.0000 | eval_steps=8.00
```
The training accuracy stays around 1 to 2 percent and the evaluation accuracy remains 0 throughout. Is this expected for the default configuration, or does it indicate a setup or hyperparameter issue?
I also noticed that the paper mentions training on the re-ARC dataset, whereas this command appears to run without re-ARC by default. Is the README command missing a required flag or configuration? My assumption is that using re-ARC is important for reproducing the reported results.
In addition, could you share an estimate of the training compute used in the paper (GPU type, number of GPUs, wall-clock training time, and effective batch size)? With the current settings, a batch size of 32 seems quite small, and if I switch to re-ARC and keep everything else unchanged, my rough estimate is that training would take around 235 hours on a single A100 80GB, which makes me think this is not the exact setup used for the paper.
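For context, the default run itself is cheap; a back-of-envelope check from the per-epoch times in the log above (~27.6 s/epoch, 100 epochs):

```python
# Rough total runtime of the default configuration, using the
# epoch_time values reported in the training log above.
epochs = 100
sec_per_epoch = 27.6

total_minutes = epochs * sec_per_epoch / 60
print(f"default run: ~{total_minutes:.0f} minutes total")  # ~46 minutes
```

So the default README command finishes in well under an hour, which is another reason I suspect it is not the configuration behind the paper's numbers.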
I also ran into a couple of missing module errors:
```
Traceback (most recent call last):
  from src.ARC_UNet import ARCUNet
ModuleNotFoundError: No module named 'src.ARC_UNet'

  from utils.wandb_vis import grid_to_pil
ModuleNotFoundError: No module named 'utils.wandb_vis'
```
I fixed these locally by copying the corresponding files over from the VARC repository, but I wanted to mention them in case the repo's installation instructions are incomplete.
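In case it helps others, here is the quick diagnostic I used from the repo root to confirm which imports resolve (module names are taken from the tracebacks above; this is just a sketch, not part of the repo):

```python
# Check whether the modules imported by the training scripts can be
# resolved from the current working directory.
import importlib.util

for mod in ("src.ARC_UNet", "utils.wandb_vis"):
    try:
        # find_spec raises ModuleNotFoundError if a parent package is missing
        found = importlib.util.find_spec(mod) is not None
    except ModuleNotFoundError:
        found = False
    print(f"{mod}: {'found' if found else 'MISSING'}")
```

Both modules report MISSING on a fresh clone of this repo until the files are copied over from VARC.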
Sorry if I missed something obvious in the documentation. I would greatly appreciate any guidance on the correct command, dataset setup, and training configuration needed to reproduce your results.