Hi there, I read the paper and was excited to see a method that is lightweight and efficient. However, when I tried it on my own dataset, the model's memory footprint surged to 600 GB! Do you have any idea why? Is it because my dataset has 128 channels and 2500 sampling points? I'm not that familiar with how the code works, so I wonder if you can help me :)
hello @Joscelin-666,
Is that 600 GB on the GPU?
You could use a larger kernel in the convolution module to reduce the scale of the data, which also helps capture better features for the Transformer module.
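As a rough illustration of the suggestion above (not the repository's actual code), here is a minimal sketch of how a larger convolution kernel/stride shrinks the number of tokens that reach the Transformer, whose attention memory grows roughly with the square of the sequence length. The layer sizes, kernel size, and module names below are illustrative assumptions for a recording with 128 channels and 2500 sampling points.

```python
# Hypothetical sketch: a strided Conv1d front end that downsamples the
# 2500-point signal before self-attention. Not the paper's exact architecture.
import torch
import torch.nn as nn

class ConvDownsampleFrontEnd(nn.Module):
    def __init__(self, in_channels=128, embed_dim=128,
                 kernel_size=25, stride=25):
        super().__init__()
        # Larger kernel_size/stride => fewer tokens for the Transformer.
        self.conv = nn.Conv1d(in_channels, embed_dim,
                              kernel_size=kernel_size, stride=stride)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

    def forward(self, x):           # x: (batch, 128, 2500)
        x = self.conv(x)            # -> (batch, embed_dim, ~100) with stride 25
        x = x.transpose(1, 2)       # -> (batch, ~100, embed_dim) token sequence
        return self.encoder(x)

x = torch.randn(2, 128, 2500)
print(ConvDownsampleFrontEnd()(x).shape)  # torch.Size([2, 100, 128])
```

With stride 25, the 2500-sample sequence becomes about 100 tokens, so the attention matrices are roughly (2500/100)^2 = 625x smaller than they would be on the raw samples.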