Currently, ChRIS_ui asks PFDCM, which asks the PACS to send images to oxidicom, which sends images to CUBE. This architecture is fragile: a large influx of requests can cause the system to slow down and drop data.
@chhsiao1981 proposed that CUBE manage PACS retrieval via a queue of jobs:
- ChRIS_ui asks CUBE to retrieve some data from the PACS. CUBE creates a `PacsRetrieveJob` and responds with the `JobId`.
- ChRIS_ui can ask CUBE for the status of a job using the `JobId`. It can also search for pending/active jobs by DICOM tags, e.g. it can ask CUBE: "are you currently trying to pull this series for patient A?"
- CUBE runs `PacsRetrieveJob`s as they are created. The maximum number of concurrently running `PacsRetrieveJob`s is configurable.
- While a `PacsRetrieveJob` is running, CUBE asks the PACS for the data requested by the job, then waits for the data to be received by oxidicom and registered to CUBE.
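The proposed queue could be sketched roughly as follows. This is a minimal in-memory illustration, not CUBE's actual implementation: the class and method names (`PacsRetrieveQueue`, `submit`, `run_next`) and the use of a semaphore for the concurrency limit are all assumptions made for the sake of the example.

```python
import queue
import threading
import uuid
from dataclasses import dataclass


@dataclass
class PacsRetrieveJob:
    job_id: str
    dicom_tags: dict          # e.g. {"PatientID": "A", "SeriesInstanceUID": "..."}
    status: str = "pending"   # pending -> running -> succeeded / failed


class PacsRetrieveQueue:
    """Hypothetical sketch of CUBE's proposed PACS retrieve job queue."""

    def __init__(self, max_concurrent: int = 2):
        self.jobs = {}                                  # job_id -> PacsRetrieveJob
        self._queue = queue.Queue()
        self._slots = threading.Semaphore(max_concurrent)  # configurable limit
        self._lock = threading.Lock()

    def submit(self, dicom_tags: dict) -> str:
        """Create a job and return its JobId (the handle ChRIS_ui keeps)."""
        with self._lock:
            # Reuse an equivalent pending/active job instead of pulling twice.
            existing = self.find_active(dicom_tags)
            if existing:
                return existing.job_id
            job = PacsRetrieveJob(job_id=str(uuid.uuid4()), dicom_tags=dicom_tags)
            self.jobs[job.job_id] = job
            self._queue.put(job)
            return job.job_id

    def find_active(self, dicom_tags: dict):
        """Answer queries like: 'are you already trying to pull this series?'"""
        for job in self.jobs.values():
            if job.status in ("pending", "running") and job.dicom_tags == dicom_tags:
                return job
        return None

    def status(self, job_id: str) -> str:
        return self.jobs[job_id].status

    def run_next(self, retrieve_fn) -> None:
        """Run one queued job; retrieve_fn stands in for the PACS request
        plus the wait for oxidicom to receive and register the data."""
        job = self._queue.get_nowait()
        with self._slots:  # at most max_concurrent jobs run at once
            job.status = "running"
            try:
                retrieve_fn(job.dicom_tags)
                job.status = "succeeded"
            except Exception:
                job.status = "failed"  # tracked state makes retries straightforward
```

Because job state lives in CUBE, a failed job can simply be re-queued, and a second request for the same series finds the in-flight job rather than starting another pull.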
The proposal has the following advantages:
- A work queue protects the entire system from being overloaded by a large influx of requests.
- Having CUBE track jobs makes it easier to manage job state and to retry jobs when they fail.
- When multiple users are active at the same time, each can query CUBE for pending/active jobs instead of triggering duplicate PACS pulls of the same data.