This should be the standard behavior: starting from a subject system, it first produces all the results; we can then exploit the new data.
I would start by integrating the "analysis" Python scripts first (to verify that the generated data are exploitable).
I strongly suggest a quick-and-dirty approach: simply integrate what we already have; we can refactor/modularize/optimize the Docker setup and the scripts afterwards.
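As a rough sketch of what that quick-and-dirty integration could look like, here is a minimal Dockerfile that just bundles the existing scripts as-is. All names here (analysis.py, requirements.txt, the results/ directory) are hypothetical placeholders, not actual paths from the project:

```dockerfile
# Quick-and-dirty image: bundle the existing analysis scripts without refactoring.
# (Hypothetical file names; adjust to the actual repository layout.)
FROM python:3.11-slim

WORKDIR /app

# Install whatever the existing scripts already depend on.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the scripts and the raw results produced for the subject system.
COPY analysis.py .
COPY results/ ./results/

# Run the analysis to check that the generated data are exploitable.
CMD ["python", "analysis.py", "--input", "results/"]
```

Once this runs end-to-end and confirms the data are usable, the image can be cleaned up (multi-stage build, modularized scripts, etc.) without blocking the rest of the work.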