Hi. I've found your repository very useful.
Regarding the different apps in your repo, I see you are interested in pose estimation models.
I'm trying to use the new MoveNet MultiPose model (https://tfhub.dev/google/movenet/multipose/lightning/1) from Google, but I've found that Jetson boards do not benefit from the GPU delegate.
This page (https://qengineering.eu/install-tensorflow-2-lite-on-jetson-nano.html) mentions the same thing. Yet in your repository you use the GPU delegate. Do you see an improvement from the GPU delegate over the standard CPU path or the XNNPACK delegate (I don't know whether XNNPACK even makes sense on a Jetson board)?
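For reference, this is the kind of comparison I have in mind (a minimal sketch; the model file name, the fixed 256x256 test resolution, and the GPU delegate library name are just assumptions about my own setup, not something from your repo):

```python
import time
import numpy as np
import tensorflow as tf

MODEL = "movenet_multipose_lightning.tflite"  # placeholder: my local TFLite export

def make_interpreter(use_gpu: bool) -> tf.lite.Interpreter:
    if use_gpu:
        # GPU delegate shared library; the exact .so name/path depends on how TFLite was built
        gpu = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
        return tf.lite.Interpreter(model_path=MODEL, experimental_delegates=[gpu])
    # default CPU path; XNNPACK is enabled by default in recent TF builds
    return tf.lite.Interpreter(model_path=MODEL, num_threads=4)

def benchmark(use_gpu: bool, runs: int = 50) -> float:
    interpreter = make_interpreter(use_gpu)
    inp = interpreter.get_input_details()[0]
    # the MultiPose model takes a dynamic input resolution, so pin it to a concrete
    # size before allocating tensors (256x256 is just the size I test with)
    interpreter.resize_tensor_input(inp["index"], [1, 256, 256, 3])
    interpreter.allocate_tensors()
    dummy = np.zeros([1, 256, 256, 3], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0  # ms per inference

print(f"CPU/XNNPACK: {benchmark(False):.1f} ms | GPU delegate: {benchmark(True):.1f} ms")
```

Timing repeated invoke() calls after a warm-up should make any delegate difference on the Nano visible, but I'd like to know what you actually observed.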
You also offer some pose estimation models in TensorRT format. The PINTO model zoo has an ONNX version of MoveNet MultiPose (https://github.com/PINTO0309/PINTO_model_zoo/tree/main/137_MoveNet_MultiPose). AFAIK the first thing you need for running a model with TensorRT is an ONNX export of the original model. And then? Do you need to make any changes to the ONNX model before building the engine?
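To make the question concrete, this is the conversion step I had in mind (a minimal sketch with the TensorRT Python API, assuming TensorRT 8.x and that the ONNX file parses as-is; file names are placeholders):

```python
import tensorrt as trt

ONNX_PATH = "movenet_multipose.onnx"      # placeholder: the ONNX file from the PINTO model zoo
ENGINE_PATH = "movenet_multipose.engine"  # placeholder: serialized engine output

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network definition
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 usually pays off on Jetson
serialized_engine = builder.build_serialized_network(network, config)

with open(ENGINE_PATH, "wb") as f:
    f.write(serialized_engine)
```

I guess the CLI equivalent would be `trtexec --onnx=... --saveEngine=... --fp16` on JetPack. And if the ONNX model keeps a dynamic input resolution, I assume an optimization profile is also needed, but that is part of what I'm asking.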
Sorry if this is too many questions, but I would like to get the best performance out of the Jetson boards, and that is not so trivial; I imagine you have run into similar problems. 😅