This repository contains the research paper examining the adoption of reporting guidelines and explainability standards in medical imaging AI literature from 2020-2025. The study analyzes citation patterns and identifies barriers to clinical adoption of AI systems in medical imaging.
Artificial intelligence (AI) has achieved strong results in medical imaging tasks such as segmentation, detection, and classification. However, clinical adoption remains limited, with concerns about trust and inconsistent reporting standards emerging as significant barriers. This review examines the literature from 2020 to 2025 to evaluate the uptake of reporting guidelines and to assess how explainability and trust principles are addressed. Citation analysis of PubMed-indexed papers shows that fewer than 11% of papers published in the last 10 years referenced these frameworks, with older standards such as TRIPOD (2015) and CLAIM (2020) contributing a significant share of the citations. Within the last 5 years, fewer than 7% of papers referenced or used these frameworks. Barriers include low awareness, weak enforcement, and the complexity of newer checklists. Combining these results with qualitative findings, this review highlights the need for transparent reporting and stronger policy enforcement to support compliance. Achieving this requires standardized practices that enable trustworthy and effective AI integration in clinical workflows.
- Low Guideline Adoption: Fewer than 11% of papers referenced reporting frameworks within the last 10 years
- Recent Decline: Only 6.8% of papers from 2020-2025 referenced these frameworks
- Citation Dominance: Older standards like TRIPOD (2015) and CLAIM (2020) dominate citations
- Barriers Identified: Low awareness, weak enforcement, and complexity of newer checklists
- Trust Gap: Despite high technical performance, clinical adoption remains limited due to transparency concerns
- Literature Review: Analysis of 30-40 publications from 2020-2025
- Citation Analysis: PubMed-based evaluation of reporting guideline adoption (see the query sketch after this list)
- Qualitative Synthesis: Assessment of barriers and emerging solutions
- Validation: Cross-reference with external studies on guideline adherence
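The exact search strategy and tooling behind the citation analysis are not documented in this repository, so the sketch below is only an illustration of how a PubMed query of this kind could be automated. It assumes Biopython's `Entrez` module, and the query terms and date window are placeholders; it matches guideline names in titles and abstracts rather than reproducing the paper's manual citation counting, so its output will not match the reported 6.8% or 11% figures.

```python
# Illustrative sketch only: the paper does not document its PubMed search
# strategy, so the query terms, date window, and use of Biopython's Entrez
# module here are assumptions, not the authors' actual pipeline.
from Bio import Entrez

Entrez.email = "your.email@example.org"  # required by NCBI; placeholder address

def pubmed_count(term, mindate, maxdate):
    """Return the number of PubMed records matching `term` in the date range."""
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate=mindate, maxdate=maxdate, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Denominator: medical-imaging AI papers in the 5-year window.
base_query = ('"artificial intelligence"[Title/Abstract] '
              'AND "medical imaging"[Title/Abstract]')
total = pubmed_count(base_query, "2020", "2025")

# Numerator: the subset that also mentions a reporting guideline by name.
# Note that acronym matching is noisy (e.g. "CLAIM" collides with the common word).
guideline_terms = ('("TRIPOD"[Title/Abstract] OR "CLAIM"[Title/Abstract] '
                   'OR "CONSORT-AI"[Title/Abstract] OR "SPIRIT-AI"[Title/Abstract])')
with_guidelines = pubmed_count(f"({base_query}) AND {guideline_terms}", "2020", "2025")

print(f"{with_guidelines}/{total} = {100 * with_guidelines / total:.1f}% mention a guideline")
```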
- `main.tex` - Complete LaTeX source file for the research paper
- `sample.bib` - Bibliography file containing all references
- `Citation Contribution (2015 - 2025).png` - Visualization of citation patterns over the past 10 years
- `Citation Contribution (2020 - 2025).png` - Visualization of citation patterns over the past 5 years
- `Non-Review Publications (2015-2025).png` - Publication guideline pie chart over the past 10 years
- `Non-Review Publications (2020-2025).png` - Publication guideline pie chart over the past 5 years
- Alan Weng
- Nipun Deelaka Pathirage
- Neha Keshan
medical imaging, artificial intelligence, explainable AI, reporting guidelines, clinical adoption, trust frameworks, reproducibility, medical AI standards
If you use this work in your research, please cite this repository.
This work is licensed under the terms specified in the LICENSE file.