This repository was archived by the owner on Jan 5, 2024. It is now read-only.

My experience uploading a model ;) #2

@qbilius


Hey guys,

Great job making an automated submission pipeline! I gave it a try because I wanted to score an updated CORnet-R version that merges CORnet-S's strengths with the full recurrence of CORnet-R. According to my internal measures, it's not as good as CORnet-S, but Martin encouraged me to submit it nonetheless because it may be helpful for somebody.

At any rate, in this issue I wanted to document several problems I ran into while wrapping the model for scoring (thanks for the helpful PytorchWrapper class!). Some of them are probably on my end, but some might need your attention.

  • README:
    • spelling "avaiable"
    • doesn't explain how exactly I'm supposed to run the tests
    • links to candidate-models and model-tools mixed up
    • might be a good idea to mention an easy way to get a virtual environment going:
      • python3 -m venv .venv
      • source .venv/bin/activate (at least in unix-like)
      • (now continue with pip install .)
    • It might also be good to mention a simple way to "package" your model by uploading the code and weights to GitHub. It took me a bit to figure out where I could host that huge weights file.
  • PytorchWrapper: Doesn't work for recurrent models that return not only outputs but also states
  • Tests:
    • Somehow the xarray version is not properly pinned. I initially got 0.15 or so installed (by just running pip install . as suggested), but then hit an error while running the tests, so I had to downgrade to 0.12.
    • I could not get the tests working because from test import test_modules kept importing a test module from somewhere else. I think this may be due to relative imports, but I'm not sure. See my submission for how I resolved it.
    • The tests try to download 9.83 GB of ImageNet. Maybe a minor point, but it seems like the tests could run with far fewer resources, or at least there could be a "light" version just to check that everything more or less works.
  • I ran out of memory on my instance when running with all layers (V1, V2, V4, IT). This is because the activations don't fit into memory during PCA (line 65 in activations/pca.py). Hopefully you guys have instances large enough for all the crazy models :)
  • When submitting, I chose a zip file to upload, and the page showed its path as C:\fakepath\...zip :D
  • Now let's see if my submission doesn't crash ;)
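For the recurrent-output point above, here is a minimal sketch of the kind of unwrapping that would make a tuple-returning model usable; `unwrap_output` and `TupleSafeWrapper` are illustrative names I made up, not part of the actual model-tools API:

```python
def unwrap_output(result):
    """Return the activation tensor from a layer's forward result.

    Recurrent PyTorch layers (e.g. nn.RNN/nn.LSTM) return (output, state)
    tuples, while feedforward layers return the tensor directly; code that
    assumes a plain tensor breaks on the former.
    """
    if isinstance(result, tuple):
        return result[0]  # drop the hidden/cell state
    return result


class TupleSafeWrapper:
    """Wrap a model so calling it always yields a plain activation."""

    def __init__(self, model):
        self.model = model

    def __call__(self, *args, **kwargs):
        return unwrap_output(self.model(*args, **kwargs))
```

For example, `TupleSafeWrapper(recurrent_model)` behaves like the original model except that only the output (not the state) is propagated to whatever records activations.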
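And for the PCA out-of-memory point, one low-tech workaround is to fit the projection on a bounded sample of activations instead of the full set. A NumPy-only sketch, assuming activations arrive in batches; this is not the actual activations/pca.py code, and the function and parameter names are made up:

```python
import numpy as np


def pca_axes_from_subset(batches, n_components, max_rows=1000):
    """Fit PCA on at most `max_rows` activation rows to bound memory.

    `batches` yields 2-D arrays of shape (n_stimuli, n_features);
    returns the top `n_components` principal axes as an array of
    shape (n_components, n_features).
    """
    kept, total = [], 0
    for batch in batches:  # stop reading batches once we have enough rows
        kept.append(np.asarray(batch, dtype=float))
        total += batch.shape[0]
        if total >= max_rows:
            break
    data = np.concatenate(kept, axis=0)[:max_rows]
    data -= data.mean(axis=0, keepdims=True)  # center before SVD
    # rows of vt are the principal axes, ordered by singular value
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:n_components]
```

The memory high-water mark is then `max_rows * n_features` floats regardless of how many stimuli there are, at the cost of fitting the components on a sample rather than on everything.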

Good luck!
