Unable to run inference with malformed ONNX graph after export #184
addisonklinke asked this question in Q&A
-
After additional testing, this appears to be a compatibility issue between PyTorch versions. Despite getting a valid ONNX graph with PyTorch 1.5.0, there are still warnings regarding the trace and integer division. I think the trace warning is benign since the same padding is only calculated once to initialize the model and therefore should generalize to other inputs. However, the division warning seems like it could be responsible for the incompatibility with later versions of PyTorch. Any thoughts @rwightman?
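For context on the division warning mentioned above: older PyTorch versions emitted a deprecation warning when `/` was applied to integer tensors, and that warning surfaces during tracing. A small sketch (not from the discussion) of the explicit, export-safe spelling; note `rounding_mode` is only available from PyTorch 1.8 onward, so this is an assumption about upgrading past the versions pinned in the question:

```python
import torch

a = torch.tensor(7)
b = torch.tensor(2)

# Old-style `a / b` on integer tensors triggered the deprecation warning
# during tracing; explicit floor division avoids the ambiguity.
q = torch.div(a, b, rounding_mode='floor')
print(q.item())
```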
-
The Problem
I successfully exported the PyTorch checkpoint for tf_efficientdet_lite0 to ONNX by following the official PyTorch tutorial. However, when I try to create an inference session using the newly created ONNX model, the graph around node Concat_42 appears to be malformed since its inputs do not have the same number of dimensions. Has anyone else been able to successfully export and run inference with an ONNX model?
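To illustrate the failure mode, here is a self-contained sketch (not the model from the question) that builds a tiny ONNX graph whose Concat inputs have mismatched ranks, the same class of error reported around Concat_42. The structural checker passes, but ONNX Runtime's shape inference rejects the graph at session creation:

```python
import onnx
from onnx import helper, TensorProto
import onnxruntime as ort

# Tiny graph reproducing the error class: Concat over inputs whose
# ranks differ (2-D vs 3-D), analogous to the reported Concat_42
a = helper.make_tensor_value_info('a', TensorProto.FLOAT, [1, 4])
b = helper.make_tensor_value_info('b', TensorProto.FLOAT, [1, 1, 4])
out = helper.make_tensor_value_info('out', TensorProto.FLOAT, None)
node = helper.make_node('Concat', ['a', 'b'], ['out'], axis=0)
graph = helper.make_graph([node], 'rank_mismatch', [a, b], [out])
model = helper.make_model(
    graph, opset_imports=[helper.make_operatorsetid('', 13)])

# Structural validation passes; the rank mismatch is only caught by
# shape inference when the inference session is created
onnx.checker.check_model(model)
try:
    ort.InferenceSession(model.SerializeToString(),
                         providers=['CPUExecutionProvider'])
    print('session created')
except Exception as exc:
    print('session creation failed:', type(exc).__name__)
</imports>
</imports>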
To Reproduce
Below is my complete code for initializing the model (which automatically fetches pretrained weights from Torch Hub) and exporting to ONNX. I pip installed onnx==1.8.0 and onnxruntime==1.6.0 in addition to the required effdet packages (i.e. torch==1.7.1 and timm==0.3.4).
Additional Context
Here is a Netron visualization of the ONNX graph with Concat_42 highlighted in red. It is towards the very beginning, in what appear to be some mean-subtraction and pixel-normalization preprocessing steps. Note that I enabled View > Show Attributes (Ctrl+D) and View > Show Names (Ctrl+U) before taking the screenshot.