Commit 63bb3b5
Support torch.int32 as a dtype for quantize and dequantize (#289)
Summary:
Pull Request resolved: #289
Ops like `quantized_decomposed.quantize_per_tensor.default` did not support
int32 as the quantized type. Add support for it to the portable and ATen runtimes.
This is important for Turing, which uses int32 to represent uint16 (since the latter is not a
valid PyTorch dtype).
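
For context, here is a minimal sketch of what this enables at the Python level. This is not code from the diff: the registration import is PyTorch's private `torch.ao.quantization.fx._decomposed` module, and the scale/zero-point values are illustrative assumptions.

```python
# Sketch: round-tripping a float tensor through the decomposed
# quantize/dequantize ops with torch.int32 as the quantized dtype,
# while clamping to uint16's value range (the Turing use case).
import torch
import torch.ao.quantization.fx._decomposed  # noqa: F401 -- registers the quantized_decomposed ops

x = torch.randn(4)
scale, zero_point = 0.01, 32768  # illustrative quantization parameters
# uint16's range, carried in an int32 tensor since torch.uint16 is unavailable
quant_min, quant_max = 0, 65535

q = torch.ops.quantized_decomposed.quantize_per_tensor(
    x, scale, zero_point, quant_min, quant_max, torch.int32
)
assert q.dtype == torch.int32
dq = torch.ops.quantized_decomposed.dequantize_per_tensor(
    q, scale, zero_point, quant_min, quant_max, torch.int32
)
```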
Reviewed By: kimishpatel
Differential Revision: D49202048
fbshipit-source-id: 0faa89ce1d34b60ece443fb02fa14f02abf2d3761
2 files changed: +9 −1 lines changed