<!DOCTYPE html>
<html lang="en" class="scroll-smooth">
<!-- #include file="components/head.html" -->
<script>
document.head.innerHTML = document.head.innerHTML
.replace('${title}', 'MONAI - Mayo Clinic Case Study')
.replace('${description}', 'Learn how Mayo Clinic\'s Center for Augmented Intelligence in Imaging (CAII) uses MONAI to integrate AI models within clinical-imaging workflows.')
.replace('${canonical_url}', 'https://monai.io/mayo-case-study.html');
</script>
<body class="flex flex-col min-h-screen">
<!-- #include file="components/header.html" -->
<main class="flex-grow">
<section class="py-24 bg-white">
<div class="container">
<div class="grid grid-cols-1 lg:grid-cols-2 gap-16 items-center">
<div>
<div class="inline-flex items-center px-3 py-1 rounded-full text-sm font-medium bg-brand-primary/10 text-brand-primary mb-6">
Real-world Case Study
</div>
<h1 class="text-4xl font-bold text-gray-800 mb-8 relative inline-block pb-2">
Center for Augmented Intelligence in Imaging, Mayo Clinic Florida
<span class="absolute bottom-0 left-0 w-full h-0.5 bg-brand-primary"></span>
</h1>
<p class="text-lg text-gray-600 leading-relaxed mb-8">
Integrating and Deploying AI Models within Clinical-Imaging Workflows
</p>
</div>
<div class="hidden lg:flex lg:justify-center">
<div class="relative w-full max-w-[200px] transform hover:scale-105 transition-transform duration-300">
<img class="w-auto h-auto" src="assets/img/mayo_clinic_logo_hq.png" alt="Mayo Clinic Logo">
</div>
</div>
</div>
</div>
</section>
<section class="py-16 bg-brand-dark/15">
<div class="container">
<div class="prose max-w-none">
<p class="text-gray-600 mb-8">
Effective integration of imaging-related (pixel- and non-pixel-based) Artificial Intelligence (AI) models into existing clinical Radiology workflows is critical, since such additions can greatly impact (either positively or negatively) operational efficiencies or downstream decision making (e.g., surgery, pathology, interventions, and drug precautions) [1]. To facilitate seamless integration of imaging-AI capabilities, with minimal negative influence on existing Radiology workflows (Figure 1), the Center for Augmented Intelligence in Imaging (CAII) at Mayo Clinic Florida has developed infrastructure and modular software packages functionally compatible with MONAI [2] software packages (e.g., "MONAI Core" and "MONAI Deploy").
</p>
<figure class="bg-white p-8 md:p-12 rounded-lg shadow-sm hover:shadow-lg transition-all duration-300 mb-16">
<a href="assets/img/mayo-case-study-figure-1A.png" target="_blank" rel="noopener noreferrer" class="block group">
<img class="w-full h-auto rounded-lg transform transition-transform duration-300 group-hover:scale-[1.02]" src="assets/img/mayo-case-study-figure-1A.png" alt="Clinical workflow diagram">
</a>
<figcaption class="mt-6 text-sm text-gray-600 italic text-center max-w-3xl mx-auto">
Figure 1: A representative workflow (modeled after the IHE Scheduled Workflow) shows an examination order being generated, image data being acquired during patient scanning, the produced images being evaluated by a radiologist, and a report being generated by the image interpreter and then forwarded to the referring clinician for review. Clinician reviews leading to ordered biopsies or surgical interventions may result in associated digital pathology on excised tissue samples.
</figcaption>
</figure>
<p class="text-gray-600 mb-8">
AI-based infrastructure should be both indistinguishable from the existing IT environment and require, at most, minimal training of Radiology users (e.g., radiologists and technologists). Nevertheless, the introduction of such tools requires fostering trust among users as well as beneficiaries (e.g., patients and referring clinicians).
</p>
<p class="text-gray-600 mb-8">
As the leading discipline in utilizing AI in medicine, Radiology has already recognized the need for greater efficiencies in all aspects of imaging-AI application, including AI-model development, deployment, and adaptation to real-world encounters. Unfortunately, these processes remain prohibitively time-consuming, laborious, and costly, often significantly limiting meaningful imaging-AI use (Figure 2).
</p>
<div class="grid lg:grid-cols-2 gap-8 lg:gap-12 mb-16">
<figure class="bg-white p-8 md:p-12 rounded-lg shadow-sm hover:shadow-lg transition-all duration-300">
<a href="assets/img/mayo-case-study-figure-2.png" target="_blank" rel="noopener noreferrer" class="block group">
<img class="w-full h-auto rounded-lg transform transition-transform duration-300 group-hover:scale-[1.02]" src="assets/img/mayo-case-study-figure-2.png" alt="AI project development timeline">
</a>
<figcaption class="mt-6 text-sm text-gray-600 italic text-center">
Figure 2: Typical AI project development and time commitments
</figcaption>
</figure>
<figure class="bg-white p-8 md:p-12 rounded-lg shadow-sm hover:shadow-lg transition-all duration-300">
<a href="assets/img/mayo-case-study-figure-3.png" target="_blank" rel="noopener noreferrer" class="block group">
<img class="w-full h-auto rounded-lg transform transition-transform duration-300 group-hover:scale-[1.02]" src="assets/img/mayo-case-study-figure-3.png" alt="Example use cases">
</a>
<figcaption class="mt-6 text-sm text-gray-600 italic text-center">
Figure 3: Example use-cases: (a) MRI-unsafe device detection on chest x-ray, (b) Breast-density classification on mammography, (c) White matter disease segmentation on MRI, (d) Segmental coronary artery stenosis detection vs. exclusion on coronary CTA
</figcaption>
</figure>
</div>
<figure class="bg-white p-8 md:p-12 rounded-lg shadow-sm hover:shadow-lg transition-all duration-300 mb-16">
<a href="assets/img/mayo-case-study-figure-4.png" target="_blank" rel="noopener noreferrer" class="block group">
<img class="w-full h-auto rounded-lg transform transition-transform duration-300 group-hover:scale-[1.02]" src="assets/img/mayo-case-study-figure-4.png" alt="CAII infrastructure diagram">
</a>
<figcaption class="mt-6 text-sm text-gray-600 italic text-center max-w-3xl mx-auto">
Figure 4: CAII infrastructure and software packages
</figcaption>
</figure>
<p class="text-gray-600 mb-8">
Engineers, imaging scientists, and physicians working in the CAII have developed infrastructure and containerized software packages that enable imaging-AI models to be seamlessly integrated into the existing IT environment of a busy Department of Radiology [3-9]. The necessary interfaces and packages (Figure 3) can be deployed on-prem, in-cloud, or in hybrid settings (Figure 4). The goal is to require minimal user training and IT support while fostering confidence among users and beneficiaries.
</p>
<p class="text-gray-600 mb-4">
CAII at Mayo Clinic Florida has developed various capabilities to streamline the integration of imaging AI models into Radiology workflows. These capabilities include:
</p>
<ul class="space-y-2 mb-12 text-gray-600">
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Critical-results alerting</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Expert-in-the-loop AI-model deployment</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>On-demand model training in clinical settings</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Real-time user inference-results adjudication with feedback in clinical settings</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Monitoring of user satisfaction</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Data collection for FDA approvals</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Continuous Learning</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Federated Learning</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Standards-based communication (DICOM, FHIR, HL7, IHE) between clinical systems</span>
</li>
<li class="flex items-start gap-3">
<span class="w-6 h-6 rounded-sm flex items-center justify-center flex-shrink-0">
<svg class="w-4 h-4 text-brand-primary" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path>
</svg>
</span>
<span>Standards-based data collection regarding system and model performances</span>
</li>
</ul>
<div class="bg-white p-8 rounded-lg shadow-sm mb-12">
<h2 class="text-2xl font-bold text-gray-800 mb-6 relative inline-block pb-2">
References
<span class="absolute bottom-0 left-0 w-full h-0.5 bg-brand-primary"></span>
</h2>
<ol class="space-y-4 text-gray-600">
<li class="flex gap-4">
<span class="flex-none font-medium">1.</span>
<span>Gupta V, Erdal BS, Ramirez C, Floca R, Jackson L, Genereaux B, Bryson S et al. "Current State of Community-Driven Radiological AI Deployment in Medical Imaging." arXiv preprint arXiv:2212.14177 (2022).</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">2.</span>
<span>Cardoso J, Li W, Brown R, Ma N, Kerfoot E, Wang Y, Murrey B et al. "MONAI: An open-source framework for deep learning in healthcare." arXiv preprint arXiv:2211.02701 (2022).</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">3.</span>
<span>Testagrose C, Gupta V, Erdal BS, White RD, Maxwell RW, Liu X, Kahanda I, Elfayoumy S, Klostermeyer W, Demirer M. "Impact of Concatenation of Digital Craniocaudal Mammography Images on a Deep-Learning Breast-Density Classifier Using Inception-V3 and ViT." In 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 3399-3406. IEEE, 2022.</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">4.</span>
<span>White RD, Demirer M, Gupta V, Sebro RA, Kusumoto FM, Erdal BS. "Pre-deployment assessment of an AI model to assist radiologists in chest X-ray detection and identification of lead-less implanted electronic devices for pre-MRI safety screening: realized implementation needs and proposed operational solutions." Journal of Medical Imaging 9, no. 5 (2022): 054504.</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">5.</span>
<span>Gupta V, Demirer M, Maxwell RW, White RD, Erdal BS. "A multi-reconstruction study of breast density estimation using Deep Learning." arXiv preprint arXiv:2202.08238 (2022).</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">6.</span>
<span>Demirer M, White RD, Gupta V, Sebro RA, Erdal BS. "Cascading neural network methodology for artificial intelligence-assisted radiographic detection and classification of lead-less implanted electronic devices within the chest." arXiv preprint arXiv:2108.11954 (2021).</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">7.</span>
<span>White RD, Erdal BS, Demirer M, Gupta V, Bigelow MT, Dikici E, Candemir S, Galizia MS, Carpenter JL, O'Donnell TP, Halabi AH, Prevedello LM. Artificial Intelligence to Assist in Exclusion of Coronary Atherosclerosis During CCTA Evaluation of Chest Pain in the Emergency Department: Preparing an Application for Real-world Use. J Digit Imaging. 2021 Jun;34(3):554-571. doi: 10.1007/s10278-021-00441-6. Epub 2021 Mar 31. PMID: 33791909; PMCID: PMC8329136.</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">8.</span>
<span>Rockenbach MABC, Buch V, Gupta V, Kotecha GK, Laur O, Erdal BS, Yang D, Xu D, Ghoshajra BB, Flores MG, Dayan I, Roth H, White RD. Automatic detection of decreased ejection fraction and left ventricular hypertrophy on 4D cardiac CTA: Use of artificial intelligence with transfer learning to facilitate multi-site operations. Intelligence-Based Medicine. 2022; 6.</span>
</li>
<li class="flex gap-4">
<span class="flex-none font-medium">9.</span>
<span>Gupta V, Taylor C, Bonnet S, Prevedello LM, Hawley J, White RD, Flores MG, Erdal BS. Deep Learning Based Automatic Detection of Adequately Positioned Mammograms. Lecture Notes in Computer Science: Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health. 2021; 12968:239-250.</span>
</li>
</ol>
</div>
<div class="bg-white p-8 rounded-lg shadow-sm">
<h2 class="text-2xl font-bold text-gray-800 mb-6 relative inline-block pb-2">
Talks
<span class="absolute bottom-0 left-0 w-full h-0.5 bg-brand-primary"></span>
</h2>
<ul class="space-y-4">
<li class="flex items-center gap-3">
<svg class="w-5 h-5 text-brand-primary flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z"></path>
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z"></path>
</svg>
<a href="https://www.youtube.com/watch?v=mpVEiNW9qtw&t=1950s" target="_blank" rel="noopener noreferrer" class="text-brand-primary hover:text-brand-dark transition-colors">MONAI Bootcamp 2023</a>
</li>
<li class="flex items-center gap-3">
<svg class="w-5 h-5 text-brand-primary flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z"></path>
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z"></path>
</svg>
<a href="https://www.youtube.com/watch?v=pS68i8ShoOk" target="_blank" rel="noopener noreferrer" class="text-brand-primary hover:text-brand-dark transition-colors">MONAI Bootcamp 2021</a>
</li>
</ul>
</div>
</div>
</div>
</section>
</main>
<!-- #include file="components/footer.html" -->
<!-- #include file="components/scripts.html" -->
</body>
</html>