diff --git a/doc/source/coverage.rst b/doc/source/coverage.rst
new file mode 100644
index 0000000..26bc66c
--- /dev/null
+++ b/doc/source/coverage.rst
@@ -0,0 +1,16 @@
+Test coverage report for dahu
+=============================
+
+Measured on *dahu* version 1.0.0, 21/09/2021
+
+.. csv-table:: Test suite coverage
+   :header: "Name", "Stmts", "Exec", "Cover"
+   :widths: 35, 8, 8, 8
+
+   "factory.py", "86", "29", "33.7 %"
+   "job.py", "330", "76", "23.0 %"
+   "plugin.py", "101", "51", "50.5 %"
+   "utils.py", "45", "4", "8.9 %"
+   "plugins/example.py", "35", "24", "68.6 %"
+
+   "dahu total", "597", "184", "30.8 %"
diff --git a/doc/source/dahu.rst b/doc/source/dahu.rst
index 60aa529..5bdf807 100644
--- a/doc/source/dahu.rst
+++ b/doc/source/dahu.rst
@@ -6,20 +6,20 @@ Dahu: online data analysis server
 The *dahu* server executes **jobs**:
 ------------------------------------
 
-* Each job lives in its own thread (yes, thread, not process, it the plugin developper to ensure the work he is doing is GIL-compliant).
-* Each job executes one plugin, provided by the plugin developper (i.e. the scientist)
+* Each job lives in its own thread (yes, thread, not process: it is up to the plugin's developer to ensure that the work performed is GIL-compliant).
+* Each job executes one plugin, provided by the plugin's developer (i.e. the scientist)
 * The job de/serialises JSON strings coming from/returning to Tango
 * Jobs are executed asynchronously, the request for calculation is answered instantaneously with a *jobid*.
-* The *jobid* can be used to poll the server for the status of the job or for manual synchronization (Tango can time-out!).
+* The *jobid* can be used to poll the server for the status of the job or for manual synchronization (mind that Tango can time-out!).
 * When jobs are finished, the client is notified via Tango events about the status
 * Results can be retrieved after the job has finished.
 
 Jobs execute **plugin**:
 ------------------------
 
-* Plugins are written in Python
-* Plugins can be classes or simple function
-* The input and output must be JSON-seriablisable as simple dictionnaries
+* Plugins are written in Python (extensions in Cython or OpenCL are common)
+* Plugins can be classes or simple functions
+* The input and output MUST be JSON-serialisable as simple dictionaries
 * Plugins are dynamically loaded from Python modules
 * Plugins can be profiled for performance analysis
 
@@ -30,12 +30,12 @@ All jobs can be run offline using the `dahu-reprocess` command line tool.
 This tool is not multithreaded and plugins are directly run, it is intended for:
 
 * offline developments
-* re-processing some failed online processing.
+* re-processing some failed online processing (where performance is less critical).
 
 Dahu is light !
 ---------------
 
 Dahu is a small project started at ESRF in 2013 with less than 1000 lines of code.
 It is used in production since then on a couple of beamlines.
 
-With its FIFO scheduler, dahu is very fast (1µs, 0.3ms from Tango)
+With its FIFO scheduler, `dahu` is very fast (1µs locally, 0.3ms from Tango)
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 09dcfbe..8a2702e 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -13,6 +13,7 @@ Contents:
 
    dahu
    installation
+   coverage
   api/modules
 
 Indices and tables
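The plugin contract described in the dahu.rst changes (a plain Python function or class whose input and output are JSON-serialisable dictionaries) could be sketched roughly as below. This is a hedged illustration only: the function name, the dictionary keys, and the wrapping code are all hypothetical and do not use the actual dahu API.

```python
import json

def example_plugin(input_data):
    """Hypothetical dahu-style plugin: a simple function that receives a
    dict and returns a dict, both of which must be JSON-serialisable."""
    # Read parameters from the input dictionary (keys are invented here)
    x = input_data.get("x", 0)
    y = input_data.get("y", 0)
    # Do some work and return the results as a plain dict
    return {"sum": x + y, "product": x * y}

# The JSON round-trip mimics what the server does with the strings
# exchanged over Tango (de/serialisation happens around the plugin):
request = json.loads('{"x": 3, "y": 4}')
result = example_plugin(request)
print(json.dumps(result))  # {"sum": 7, "product": 12}
```

Because the contract is just "dict in, dict out", such a function can be exercised directly in a test or via `dahu-reprocess`, without a running server.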