RuntimeError: Found dtype Long but expected Float #45

Open
greed2411 opened this issue Jul 28, 2021 · 1 comment
Labels
bug Something isn't working

Comments

@greed2411
Member

dependency versions:

simpletransformers (0.60.9)
torch (1.9.0)

This happens when one tries to create a classification model with just one unique label. I'm assuming simpletransformers handles the label tensor dtype differently when the number of unique labels is 1, as opposed to the usual scenario where it's greater than 1.
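The crash can be reproduced outside simpletransformers: Hugging Face classification heads treat `num_labels == 1` as a regression task and use `MSELoss`, whose backward pass expects Float targets, while classification labels arrive as Long tensors. A minimal sketch of the dtype mismatch and the cast that avoids it (the variable names are illustrative, not from the library):

```python
import torch

# Stand-in for the model's logits over a batch of 4 examples.
preds = torch.randn(4, requires_grad=True)

# Classification labels come in as Long tensors; with a single unique
# label the head is built as regression, so MSELoss is applied to them.
labels = torch.ones(4, dtype=torch.long)

# Passing `labels` directly makes loss.backward() raise
# "RuntimeError: Found dtype Long but expected Float".
# Casting the targets to float gives the dtype the loss expects:
loss = torch.nn.functional.mse_loss(preds, labels.float())
loss.backward()
```

So a fix on the library side would be to cast the label tensor to float whenever the model is constructed with one label.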

actual stack trace:

```
2021-07-28 22:24:07:851 slu [train.py:54] INFO Training started.
Epochs 0/1. Running Loss:    0.1531:   0%|                                                                                            | 0/2 [00:00<?, ?it/s]
Epoch 1 of 1:   0%|                                                                                                                   | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/root/jaivarsan/to_be_deleted/yellow/slu/dev/cli.py", line 100, in main
    train_intent_classifier(config, file_format=file_format)
  File "/root/jaivarsan/to_be_deleted/yellow/slu/dev/train.py", line 55, in train_intent_classifier
    model.train_model(
  File "/root/.pyenv/versions/3.8.9/envs/dgblue/lib/python3.8/site-packages/simpletransformers/classification/classification_model.py", line 463, in train_model
    global_step, training_details = self.train(
  File "/root/.pyenv/versions/3.8.9/envs/dgblue/lib/python3.8/site-packages/simpletransformers/classification/classification_model.py", line 720, in train
    loss.backward()
  File "/root/.pyenv/versions/3.8.9/envs/dgblue/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/root/.pyenv/versions/3.8.9/envs/dgblue/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Found dtype Long but expected Float
```
@greed2411 greed2411 added the bug Something isn't working label Jul 28, 2021
@greed2411
Member Author

Supposedly, every label should also have strictly more than one observation; otherwise one can face this error as well.

We need dataset validators on our end that state why training can fail (with warnings, maybe).
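A validator like the one proposed above could be sketched as follows. This is a hypothetical helper (`validate_labels` is not part of simpletransformers) that warns on the two conditions raised in this issue: fewer than two unique labels, and any label with fewer than two observations:

```python
import warnings
from collections import Counter

def validate_labels(labels):
    """Warn about label distributions known to break training.

    Hypothetical dataset validator; checks the two failure modes
    discussed in this issue and returns the label counts.
    """
    counts = Counter(labels)

    # A single unique label makes the model a regression head,
    # which then crashes on Long-typed classification labels.
    if len(counts) < 2:
        warnings.warn(
            f"Found only {len(counts)} unique label(s); classification "
            "needs at least 2, or training may crash with "
            "'Found dtype Long but expected Float'."
        )

    # Every label should have strictly more than one observation.
    for label, n in counts.items():
        if n < 2:
            warnings.warn(
                f"Label {label!r} has only {n} observation(s); "
                "each label needs at least 2."
            )

    return counts
```

Running this before `model.train_model(...)` would surface the failure reason up front instead of a dtype error deep inside autograd.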
