## Workflow
This graph shows the workflow of the original Python script (provided by NVIDIA). One difference in our scripts is that we use a file instead of the web camera as the source of the graph, so this container has to use ffmpeg to decode the video file. Also, since there is no graphical interface on the server/MOC/OpenShift, we removed the real-time progress display code and instead save the output to a file (output.avi) in the outgoing directory. This means ffmpeg is essential. The change is sketched in the example below.
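The following is only a rough sketch of that file-in/file-out change, not the actual NVIDIA script: the input name `sample.mp4`, the `process_frame()` placeholder, and the XVID codec are assumptions. OpenCV's `VideoCapture`/`VideoWriter` use ffmpeg under the hood, which is why ffmpeg must be available in the container.

```python
import cv2

def process_frame(frame):
    # Placeholder for the inference / drawing step of the original script.
    return frame

# Read from a video file (decoded via ffmpeg) instead of a web camera.
cap = cv2.VideoCapture("sample.mp4")            # instead of cv2.VideoCapture(0)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write results to the outgoing directory instead of showing them on screen.
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("outgoing/output.avi", fourcc, fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(process_frame(frame))          # replaces the cv2.imshow() progress window

cap.release()
writer.release()
```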
There is another output file called `FramePerSecondRecord.csv`. This file contains the benchmarking results of the plugin (a sketch of how such a record could be produced follows the table). The output should look like this:

maximum_fps | minimum_fps | average_fps |
---|---|---|
250.0 | 142.86 | 239.92 |
(If you run it multiple times, the newest result is appended as the last line of the file. The results above are from a `ppc64le` machine.)

If you want more research details about this project, check this tutorial.
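As a rough illustration of how such a CSV row could be produced, here is a minimal sketch. It is not the actual plugin code: the 300-frame loop with `time.sleep()` only stands in for the real decode-and-inference work.

```python
import csv
import os
import time

# Collect an instantaneous FPS value for every processed frame.
frame_fps = []
last = time.perf_counter()
for _ in range(300):                       # placeholder for the real frame loop
    time.sleep(0.004)                      # stand-in for per-frame decode + inference
    now = time.perf_counter()
    frame_fps.append(1.0 / (now - last))
    last = now

row = [round(max(frame_fps), 2),
       round(min(frame_fps), 2),
       round(sum(frame_fps) / len(frame_fps), 2)]

# Append to the CSV so repeated runs add the newest result as the last line.
new_file = not os.path.exists("FramePerSecondRecord.csv")
with open("FramePerSecondRecord.csv", "a", newline="") as f:
    writer = csv.writer(f)
    if new_file:
        writer.writerow(["maximum_fps", "minimum_fps", "average_fps"])
    writer.writerow(row)
```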
This shows the inference time for every frame. We think it reflects the data-bus latency from the CPU/main memory to the GPU. On the `ppc64le` machine, the typical inference time per frame is about 4 ms, whereas on the `x86_64` machine we measured about 6-7 ms per frame. We consider the difference significant (the PowerPC machine is roughly 40% faster than `x86_64` here).
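For reference, per-frame timing can be collected with a pattern like the one below. This is only a sketch under assumptions: `timed_inference` and `dummy_infer` are illustrative names, and the real inference call is assumed to block until the result is back on the host (otherwise an explicit GPU synchronization would be needed before stopping the timer, and the host-to-device transfer latency discussed above would not be captured).

```python
import time

def timed_inference(infer, frame):
    # Time a single blocking inference call and return (result, milliseconds).
    start = time.perf_counter()
    result = infer(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Example usage with a dummy inference function standing in for the real model call.
def dummy_infer(frame):
    time.sleep(0.004)                      # pretend the GPU takes ~4 ms per frame
    return frame

_, ms = timed_inference(dummy_infer, object())
print(f"inference time: {ms:.2f} ms")
```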