1.
Sun JP, Bu CX, Dang JH, Lv QQ, Tao QY, Kang YM, Niu XY, Wen BH, Wang WJ, Wang KY, Cheng JL, Zhang Y. Enhanced image quality and lesion detection in FLAIR MRI of white matter hyperintensity through deep learning-based reconstruction. Asian J Surg 2024:S1015-9584(24)02201-2. [PMID: 39368951 DOI: 10.1016/j.asjsur.2024.09.156]
Abstract
OBJECTIVE To support deeper study of degenerative diseases, we investigated whether deep-learning reconstruction (DLR) can improve the evaluation of white matter hyperintensity (WMH) on 3.0T scanners, and compared its lesion detection capability with that of conventional reconstruction (CR). METHODS A total of 131 participants (mean age, 46 ± 17 years; 46 men) were included in the study. Their images were evaluated by readers blinded to clinical data. Two readers independently assessed subjective image indicators on a 4-point scale. The severity of WMH was assessed by four raters using the Fazekas scale. To compare the relative detection capability of each method, we used the Wilcoxon signed-rank test to compare scores between the DLR and CR groups. We also assessed inter-rater reliability using weighted κ statistics and the intraclass correlation coefficient to test consistency among the raters. RESULTS In subjective image scoring, the DLR group had significantly better scores than the CR group (P < 0.001). Regarding the severity of WMH, the DLR group performed better in detecting lesions, and the majority of readers agreed that the DLR group provided clearer visualization of the lesions than the CR group. CONCLUSION DLR offers notable advantages over CR in subjective image quality, lesion detection sensitivity, and inter-reader reliability.
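The weighted κ agreement statistic named in the abstract is easy to sketch. Below is a minimal, self-contained illustration of a quadratically weighted Cohen's kappa for two raters grading lesions on a 0-3 ordinal scale such as Fazekas; the ratings and the function name `weighted_kappa` are hypothetical illustrations, not taken from the study.

```python
from collections import Counter

def weighted_kappa(a, b, n_cat=4):
    """Quadratically weighted Cohen's kappa for two raters.

    a, b: equal-length lists of integer ratings in [0, n_cat).
    The weight for cell (i, j) is (i - j)**2, so larger disagreements
    are penalized more heavily - the standard choice for ordinal
    scales such as Fazekas grades.
    """
    n = len(a)
    obs = Counter(zip(a, b))   # observed joint rating counts
    row = Counter(a)           # marginal counts for rater A
    col = Counter(b)           # marginal counts for rater B
    # Observed weighted disagreement.
    num = sum((i - j) ** 2 * obs[(i, j)]
              for i in range(n_cat) for j in range(n_cat))
    # Expected weighted disagreement under chance agreement.
    den = sum((i - j) ** 2 * row[i] * col[j] / n
              for i in range(n_cat) for j in range(n_cat))
    return 1.0 - num / den

# Perfect agreement gives kappa = 1; complete reversal gives -1.
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3]))   # 1.0
print(weighted_kappa([0, 1, 2, 3], [3, 2, 1, 0]))   # -1.0
```

In practice a library routine (e.g. a kappa implementation with quadratic weights) would be used on the full rating table; the formula above is the underlying definition.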
Affiliation(s)
- Jie Ping Sun
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Chun Xiao Bu
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Jing Han Dang
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Qing Qing Lv
- Department of Radiology, The Third Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Qiu Ying Tao
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Yi Meng Kang
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Xiao Yu Niu
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Bao Hong Wen
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Wei Jian Wang
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Kai Yu Wang
- MR Research China, GE Healthcare, Beijing, China
- Jing Liang Cheng
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
- Yong Zhang
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, China
2.
Lin DJ, Doshi AM, Fritz J, Recht MP. Designing Clinical MRI for Enhanced Workflow and Value. J Magn Reson Imaging 2024; 60:29-39. [PMID: 37795927 DOI: 10.1002/jmri.29038]
Abstract
MRI is an expensive and traditionally time-intensive modality in imaging. With the paradigm shift toward value-based healthcare, radiology departments must examine the entire MRI process cycle to identify opportunities to optimize efficiency and enhance value for patients. Digital tools such as "frictionless scheduling" prioritize patient preference and convenience, thereby delivering patient-centered care. Recent advances in conventional and deep learning-based accelerated image reconstruction methods have reduced image acquisition time to such a degree that so-called nongradient time now constitutes a major percentage of total room time. For this reason, architectural design strategies that reconfigure patient preparation processes and decrease the turnaround time between scans can substantially impact overall throughput while also improving patient comfort and privacy. Real-time informatics tools that provide an enterprise-wide overview of MRI workflow and Picture Archiving and Communication System (PACS)-integrated instant messaging can complement these efforts by offering transparent, situational data and facilitating communication between radiology team members. Finally, long-term investment in training, recruiting, and retaining a highly skilled technologist workforce is essential for building a pipeline and team of technologists committed to excellence. Here, we highlight various opportunities for optimizing MRI workflow and enhancing value by offering many of our own on-the-ground experiences and conclude by anticipating some of the future directions for process improvement and innovation in clinical MR imaging. EVIDENCE LEVEL: N/A TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Dana J Lin
- Department of Radiology, NYU Grossman School of Medicine/NYU Langone Health, New York, New York, USA
- Ankur M Doshi
- Department of Radiology, NYU Grossman School of Medicine/NYU Langone Health, New York, New York, USA
- Jan Fritz
- Department of Radiology, NYU Grossman School of Medicine/NYU Langone Health, New York, New York, USA
- Michael P Recht
- Department of Radiology, NYU Grossman School of Medicine/NYU Langone Health, New York, New York, USA
3.
Belton N, Hagos MT, Lawlor A, Curran KM. Towards a unified approach for unsupervised brain MRI Motion Artefact Detection with few shot Anomaly Detection. Comput Med Imaging Graph 2024; 115:102391. [PMID: 38718561 DOI: 10.1016/j.compmedimag.2024.102391]
Abstract
Automated Motion Artefact Detection (MAD) in Magnetic Resonance Imaging (MRI) is a field of study that aims to automatically flag motion artefacts in order to prevent the need for a repeat scan. In this paper, we identify and tackle three current challenges in the field of automated MAD: (1) reliance on fully supervised training, meaning models require specific examples of Motion Artefacts (MA); (2) inconsistent use of benchmark datasets across different works and use of private datasets for testing and training of newly proposed MAD techniques; and (3) a lack of sufficiently large datasets for MRI MAD. To address these challenges, we demonstrate how MAs can be identified by formulating the problem as an unsupervised Anomaly Detection (AD) task. We compare the performance of three state-of-the-art AD algorithms (DeepSVDD, Interpolated Gaussian Descriptor, and FewSOME) on two open-source brain MRI datasets on the tasks of MAD and MA severity classification, with FewSOME achieving a MAD AUC >90% on both datasets and a Spearman rank correlation coefficient of 0.8 for MA severity classification. These models are trained in the few-shot setting, meaning large brain MRI datasets are not required to build robust MAD algorithms. This work also sets a standard protocol for testing MAD algorithms on open-source benchmark datasets. In addition, we demonstrate how our proposed 'anomaly-aware' scoring function improves FewSOME's MAD performance in the setting where one or two shots of the anomalous class are available for training. Code available at https://github.com/niamhbelton/Unsupervised-Brain-MRI-Motion-Artefact-Detection/.
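The Spearman rank correlation used for the severity task reduces to a textbook formula when ranks are untied. A minimal sketch, with hypothetical model severity scores rather than the paper's data:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for sequences without tied values.

    Uses rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the ranks of x_i and y_i.
    """
    def ranks(v):
        # Rank 1 = smallest value; no tie handling needed here.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical model scores vs. ordinal severity grades.
print(spearman_rho([0.1, 0.4, 0.5, 0.7, 0.9], [1, 2, 3, 4, 5]))  # 1.0
print(spearman_rho([0.9, 0.7, 0.5, 0.4, 0.1], [1, 2, 3, 4, 5]))  # -1.0
```

With tied severity grades (common in practice), average ranks or a library routine should be used instead of the simplified formula above.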
Affiliation(s)
- Niamh Belton
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Medicine, University College Dublin, Ireland
- Misgina Tsighe Hagos
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Computer Science, University College Dublin, Ireland
- Aonghus Lawlor
- School of Computer Science, University College Dublin, Ireland; Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
- Kathleen M Curran
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Medicine, University College Dublin, Ireland
4.
Demuth S, Paris J, Faddeenkov I, De Sèze J, Gourraud PA. Clinical applications of deep learning in neuroinflammatory diseases: A scoping review. Rev Neurol (Paris) 2024:S0035-3787(24)00522-8. [PMID: 38772806 DOI: 10.1016/j.neurol.2024.04.004]
Abstract
BACKGROUND Deep learning (DL) is an artificial intelligence technology that has aroused much excitement in predictive medicine due to its ability to process raw data modalities such as images, text, and time series of signals. OBJECTIVES Here, we intend to give the clinical reader the elements needed to understand this technology, taking neuroinflammatory diseases as an illustrative use case of clinical translation efforts. We reviewed the scope of this rapidly evolving field to obtain quantitative insight into which clinical applications attract the most effort and which data modalities are most commonly used. METHODS We queried the PubMed database for articles reporting DL algorithms for clinical applications in neuroinflammatory diseases, and the radiology.healthairegister.com website for commercial algorithms. RESULTS The review included 148 articles published between 2018 and 2024 and five commercial algorithms. The clinical applications could be grouped as computer-aided diagnosis, individual prognosis, functional assessment, segmentation of radiological structures, and optimization of data acquisition. Our review highlighted important discrepancies in effort: segmentation of radiological structures and computer-aided diagnosis currently attract the most work, with an overrepresentation of imaging. Various model architectures have been applied to different applications, typically with relatively low data volumes and diverse data modalities. We report the high-level technical characteristics of the algorithms and narratively synthesize the clinical applications. Predictive performances and some common a priori assumptions on this topic are finally discussed. CONCLUSION The currently reported efforts position DL as an information processing technology that enhances existing modalities of paraclinical investigation and brings perspectives to make innovative ones actionable for healthcare.
Affiliation(s)
- S Demuth
- Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; Inserm U1119 : biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France
- J Paris
- Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- I Faddeenkov
- Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- J De Sèze
- Inserm U1119 : biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France; Department of Neurology, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France; Inserm CIC 1434 Clinical Investigation Center, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
- P-A Gourraud
- Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; "Data clinic", Department of Public Health, University Hospital of Nantes, Nantes, France
5.
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128 DOI: 10.1053/j.sult.2024.02.004]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regard to clinical practice, education, and research opportunities. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process - from algorithm initiation and design to development and implementation - to maximize the benefits and minimize the harms this technology can enable.
Affiliation(s)
- Marta N Flory
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Sandy Napel
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Emily B Tsai
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
6.
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. [PMID: 37473825 DOI: 10.1016/j.semcancer.2023.07.002]
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs continues to remain subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
7.
Gao Y, Liu W(V), Li L, Liu C, Zha Y. Usefulness of T2-Weighted Images with Deep-Learning-Based Reconstruction in Nasal Cartilage. Diagnostics (Basel) 2023; 13:3044. [PMID: 37835786 PMCID: PMC10572289 DOI: 10.3390/diagnostics13193044]
Abstract
OBJECTIVE This study aims to evaluate the feasibility of visualizing nasal cartilage using deep-learning-based reconstruction (DLR) fast spin-echo (FSE) imaging in comparison to three-dimensional fast spoiled gradient-echo (3D FSPGR) images. MATERIALS AND METHODS This retrospective study included 190 image sets from 38 participants, comprising axial T1- and T2-weighted FSE images with DLR (T1WIDL and T2WIDL, together FSEDL) and without DLR (T1WIO and T2WIO, together FSEO), and 3D FSPGR images. Subjective evaluation (overall image quality, noise, contrast, artifacts, and identification of anatomical structures) was conducted independently by two radiologists. Objective evaluation, including signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), was conducted using manual region-of-interest (ROI)-based analysis. The coefficient of variation (CV) and Bland-Altman plots were used to demonstrate the intra-rater repeatability of cartilage thickness measurements on five different images. RESULTS Both qualitative and quantitative results confirmed that FSEDL was superior to 3D FSPGR images (both p < 0.05), improving the observers' diagnostic confidence. The lower lateral cartilage (LLC), upper lateral cartilage (ULC), and septal cartilage (SP) were relatively well delineated on T2WIDL, whereas 3D FSPGR depicted the septal cartilage poorly. For the repeatability of cartilage thickness measurements, T2WIDL showed the highest intra-observer agreement (%CV = 8.7% for SP, 9.5% for ULC, and 9.7% for LLC). In addition, the acquisition times for T1WIDL and T2WIDL were reduced by 14.2% and 29%, respectively, compared to 3D FSPGR (both p < 0.05). CONCLUSIONS Two-dimensional equivalent-thin-slice T1- and T2-weighted images using DLR showed better image quality and shorter scan time than 3D FSPGR and conventional reconstruction images of nasal cartilage. The anatomical details were preserved without losing clinical performance for diagnosis and prognosis, especially for pre-rhinoplasty planning.
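The objective ROI metrics in this abstract reduce to simple ratios of ROI statistics. A minimal sketch, using made-up pixel intensities rather than the study's data:

```python
from statistics import mean, pstdev

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over background noise SD."""
    return mean(signal_roi) / pstdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(mean(roi_a) - mean(roi_b)) / pstdev(noise_roi)

def cv_percent(values):
    """Coefficient of variation (%), used for the repeatability of
    repeated cartilage-thickness measurements."""
    return 100.0 * pstdev(values) / mean(values)

cartilage = [100.0, 102.0, 98.0, 100.0]   # hypothetical ROI intensities
muscle = [80.0, 82.0, 78.0, 80.0]
background = [2.0, -2.0, 2.0, -2.0]       # air/background noise ROI

print(snr(cartilage, background))          # 50.0
print(cnr(cartilage, muscle, background))  # 10.0
print(cv_percent([2.0, 2.0, 2.0]))         # 0.0 (identical measurements)
```

Definitions of SNR and CNR vary across the MRI literature (e.g. noise SD correction factors for magnitude images); the ratios above are the simplest common form.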
Affiliation(s)
- Yufan Gao
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Liang Li
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Changsheng Liu
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
8.
Wen BH, Liu ZJ, Zhang Y, Cheng JL. Using deep learning to accelerate temporomandibular joint MRI at 3 T: A case report. Asian J Surg 2023; 46:4110-4111. [PMID: 37295988 DOI: 10.1016/j.asjsur.2023.05.111]
Affiliation(s)
- Bao-Hong Wen
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Zi-Jun Liu
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Yong Zhang
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Jing-Liang Cheng
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
9.
Vakli P, Weiss B, Szalma J, Barsi P, Gyuricza I, Kemenczky P, Somogyi E, Nárai Á, Gál V, Hermann P, Vidnyánszky Z. Automatic brain MRI motion artifact detection based on end-to-end deep learning is similarly effective as traditional machine learning trained on image quality metrics. Med Image Anal 2023; 88:102850. [PMID: 37263108 DOI: 10.1016/j.media.2023.102850]
Abstract
Head motion artifacts in magnetic resonance imaging (MRI) are an important confounding factor concerning brain research as well as clinical practice. For this reason, several machine learning-based methods have been developed for the automatic quality control of structural MRI scans. Deep learning offers a promising solution to this problem, however, given its data-hungry nature and the scarcity of expert-annotated datasets, its advantage over traditional machine learning methods in identifying motion-corrupted brain scans is yet to be determined. In the present study, we investigated the relative advantage of the two methods in structural MRI quality control. To this end, we collected publicly available T1-weighted images and scanned subjects in our own lab under conventional and active head motion conditions. The quality of the images was rated by a team of radiologists from the point of view of clinical diagnostic use. We present a relatively simple, lightweight 3D convolutional neural network trained in an end-to-end manner that achieved a test set (N = 411) balanced accuracy of 94.41% in classifying brain scans into clinically usable or unusable categories. A support vector machine trained on image quality metrics achieved a balanced accuracy of 88.44% on the same test set. Statistical comparison of the two models yielded no significant difference in terms of confusion matrices, error rates, or receiver operating characteristic curves. Our results suggest that these machine learning methods are similarly effective in identifying severe motion artifacts in brain MRI scans, and underline the efficacy of end-to-end deep learning-based systems in brain MRI quality control, allowing the rapid evaluation of diagnostic utility without the need for elaborate image pre-processing.
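Balanced accuracy, the headline metric in this comparison, is simply the mean of sensitivity and specificity. A minimal sketch with hypothetical labels (1 = clinically unusable, 0 = usable), not the study's predictions:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity
    (recall on negatives). Unlike plain accuracy, it is robust to
    class imbalance, which matters when unusable scans are much
    rarer than usable ones."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical reference labels
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]   # hypothetical classifier output
print(balanced_accuracy(y_true, y_pred))  # 0.75
```

On a heavily imbalanced test set, a classifier that predicts "usable" for everything would score high plain accuracy but only 0.5 balanced accuracy, which is why the metric is the right choice here.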
Affiliation(s)
- Pál Vakli
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Béla Weiss
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- János Szalma
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Péter Barsi
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- István Gyuricza
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Péter Kemenczky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Eszter Somogyi
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Ádám Nárai
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Viktor Gál
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Petra Hermann
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Zoltán Vidnyánszky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
10.
Hoffmann M, Singh NM, Dalca AV, Fischl B, Frost R. Can we predict motion artifacts in clinical MRI before the scan completes? Proceedings of the International Society for Magnetic Resonance in Medicine Scientific Meeting and Exhibition 2023; 2023:1372. [PMID: 37692094 PMCID: PMC10490829]
Abstract
Subject motion can cause artifacts in clinical MRI, frequently necessitating repeat scans. We propose to alleviate this inefficiency by predicting artifact scores from partial multi-shot multi-slice acquisitions, which may guide the operator in aborting corrupted scans early.
Affiliation(s)
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Nalini M Singh
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Harvard-MIT Division of Health Sciences and Technology, MIT, Cambridge, MA 02139, USA
- Adrian V Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Harvard-MIT Division of Health Sciences and Technology, MIT, Cambridge, MA 02139, USA
- Robert Frost
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
11.
Pierre K, Haneberg AG, Kwak S, Peters KR, Hochhegger B, Sananmuang T, Tunlayadechanont P, Tighe PJ, Mancuso A, Forghani R. Applications of Artificial Intelligence in the Radiology Roundtrip: Process Streamlining, Workflow Optimization, and Beyond. Semin Roentgenol 2023; 58:158-169. [PMID: 37087136 DOI: 10.1053/j.ro.2023.02.003]
Abstract
There are many impactful applications of artificial intelligence (AI) in the electronic radiology roundtrip and the patient's journey through the healthcare system that go beyond diagnostic applications. These tools have the potential to improve quality and safety, optimize workflow, increase efficiency, and increase patient satisfaction. In this article, we review the role of AI in process improvement and workflow enhancement, covering applications from order entry and scan acquisition, through support of the image-interpretation task, to post-interpretation tasks such as result communication. These non-diagnostic workflow and process optimization tools are an important part of the arsenal of potential AI tools that can streamline day-to-day clinical practice and patient care.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Adam G Haneberg
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Sean Kwak
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL
- Keith R Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Thiparom Sananmuang
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Padcha Tunlayadechanont
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Patrick J Tighe
- Departments of Anesthesiology & Orthopaedic Surgery, University of Florida College of Medicine, Gainesville, FL
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
12
Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders-A Scoping Review. SENSORS (BASEL, SWITZERLAND) 2023; 23:3062. [PMID: 36991773 PMCID: PMC10053494 DOI: 10.3390/s23063062] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 03/09/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Artificial intelligence (AI) is a field of computer science that deals with the simulation of human intelligence using machines so that such machines gain problem-solving and decision-making capabilities similar to those of the human brain. Neuroscience is the scientific study of the structure and cognitive functions of the brain. Neuroscience and AI are mutually interrelated. These two fields help each other in their advancements. The theory of neuroscience has brought many distinct improvisations into the AI field. The biological neural network has led to the realization of complex deep neural network architectures that are used to develop versatile applications, such as text processing, speech recognition, object detection, etc. Additionally, neuroscience helps to validate the existing AI-based models. Reinforcement learning in humans and animals has inspired computer scientists to develop algorithms for reinforcement learning in artificial systems, which enables those systems to learn complex strategies without explicit instruction. Such learning helps in building complex applications, like robot-based surgery, autonomous vehicles, gaming applications, etc. In turn, with its ability to intelligently analyze complex data and extract hidden patterns, AI fits as a perfect choice for analyzing neuroscience data that are very complex. Large-scale AI-based simulations help neuroscientists test their hypotheses. Through an interface with the brain, an AI-based system can extract the brain signals and commands that are generated according to the signals. These commands are fed into devices, such as a robotic arm, which helps in the movement of paralyzed muscles or other human parts. AI has several use cases in analyzing neuroimaging data and reducing the workload of radiologists. The study of neuroscience helps in the early detection and diagnosis of neurological disorders. In the same way, AI can effectively be applied to the prediction and detection of neurological disorders. Thus, in this paper, a scoping review has been carried out on the mutual relationship between AI and neuroscience, emphasizing the convergence between AI and neuroscience in order to detect and predict various neurological disorders.
Affiliation(s)
- Edmond Prakash
  Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
- Chaminda Hewage
  Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
13
Cui L, Song Y, Wang Y, Wang R, Wu D, Xie H, Li J, Yang G. Motion artifact reduction for magnetic resonance imaging with deep learning and k-space analysis. PLoS One 2023; 18:e0278668. [PMID: 36603007 DOI: 10.1371/journal.pone.0278668] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Accepted: 11/22/2022] [Indexed: 01/06/2023] Open
Abstract
Motion artifacts deteriorate the quality of magnetic resonance (MR) images. This study proposes a new method to detect phase-encoding (PE) lines corrupted by motion and to remove motion artifacts from MR images. 67 cases containing 8710 slices of axial T2-weighted images from the IXI public dataset were split into three datasets, i.e., training (50 cases/6500 slices), validation (5/650), and test (12/1560) sets. First, motion-corrupted k-spaces and images were simulated using a pseudo-random sampling order and random motion tracks. A convolutional neural network (CNN) model was trained to filter the motion-corrupted images. Then, the k-space of the filtered image was compared with the motion-corrupted k-space line by line to detect the PE lines affected by motion. Finally, the unaffected PE lines were used to reconstruct the final image using compressed sensing (CS). For the simulated images with 35%, 40%, 45%, and 50% of PE lines unaffected, the mean peak signal-to-noise ratios (PSNRs) of the resulting images (mean ± standard deviation) were 36.129 ± 3.678, 38.646 ± 3.526, 40.426 ± 3.223, and 41.510 ± 3.167, respectively, and the mean structural similarity indices (SSIMs) were 0.950 ± 0.046, 0.964 ± 0.035, 0.975 ± 0.025, and 0.979 ± 0.023, respectively. For images with more than 35% of PE lines unaffected by motion, images reconstructed with the proposed algorithm exhibited better quality than those reconstructed with CS using 35% under-sampled data (PSNR 37.678 ± 3.261, SSIM 0.964 ± 0.028). These results show that deep learning and k-space analysis can detect the k-space PE lines affected by motion, and that CS can reconstruct images from the unaffected data, effectively alleviating motion artifacts.
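The pipeline this abstract describes (CNN filtering, line-by-line k-space comparison, reconstruction from the unaffected lines) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' code: the CNN-filtered image is taken as given, the relative-error threshold `rel_tol` is an invented parameter, and zero-filled inverse FFT stands in for the paper's compressed-sensing solver.

```python
import numpy as np

def detect_clean_pe_lines(corrupted_kspace, filtered_image, rel_tol=0.1):
    """Flag phase-encoding (PE) rows whose corrupted k-space data agree
    with the k-space of the CNN-filtered image (the comparison step)."""
    filtered_kspace = np.fft.fftshift(np.fft.fft2(filtered_image))
    # Per-row relative error between corrupted and filtered k-space
    err = np.linalg.norm(corrupted_kspace - filtered_kspace, axis=1)
    ref = np.linalg.norm(filtered_kspace, axis=1) + 1e-12
    return (err / ref) < rel_tol  # True = PE line unaffected by motion

def reconstruct_from_clean_lines(corrupted_kspace, clean_mask):
    """Reconstruct using only the unaffected PE lines. The paper solves a
    compressed-sensing problem here; zero-filled inverse FFT is a
    simplified stand-in."""
    kept = np.where(clean_mask[:, None], corrupted_kspace, 0)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kept)))
```

On a toy phantom, simply dropping the rows the detector flags already reduces artifact energy; the CS step then recovers the missing rows from a sparsity prior.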
Affiliation(s)
- Long Cui, Yang Song, Yida Wang, Rui Wang, Dongmei Wu, Haibin Xie, Jianqi Li, Guang Yang
  Department of Physics, Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China
14
Inkinen SI, Mäkelä T, Kaasalainen T, Peltonen J, Kangasniemi M, Kortesniemi M. Automatic head computed tomography image noise quantification with deep learning. Phys Med 2022; 99:102-112. [PMID: 35671678 DOI: 10.1016/j.ejmp.2022.05.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 04/02/2022] [Accepted: 05/25/2022] [Indexed: 10/18/2022] Open
Abstract
PURPOSE Computed tomography (CT) image noise is usually determined by the standard deviation (SD) of pixel values from uniform image regions. This study investigates how deep learning (DL) could be applied to head CT image noise estimation. METHODS Two approaches were investigated for noise image estimation from a single acquisition: direct noise image estimation using a supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with a denoising UNet CNN trained with supervised and unsupervised noise2noise approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. An anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. Mean square errors (MSE) and mean absolute percentage errors (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. An open-source clinical low-dose head CT dataset (Ntrain = 37, Ntest = 10 subjects) was used to demonstrate DL applicability in noise estimation from manually labeled uniform regions and in automated noise and contrast assessment. RESULTS Direct SD estimation using the 3D-CNN was the most accurate assessment method in the phantom dataset comparison (MAPE = 15.5%, MSE = 6.3 HU). The unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). The 2D-CNN and unsupervised UNet models provided the smallest MSE on clinically labeled uniform regions. CONCLUSIONS DL-based clinical image assessment is feasible and provides acceptable accuracy compared to true image noise. The noise2noise approach may be feasible in clinical use where no ground truth data are available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
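Both the ground truth this study uses (the SD of subtraction images from repeated scans) and the local SD maps its CNNs regress can be written down directly. A minimal NumPy/SciPy sketch; the function names and the window size are illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def subtraction_noise_sd(scan_a, scan_b):
    """Global noise SD from two repeated acquisitions: the subtraction
    image contains (ideally) only noise with variance 2*sigma^2, so
    dividing its SD by sqrt(2) recovers the per-image noise level."""
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff.std() / np.sqrt(2.0)

def local_sd_map(image, size=7):
    """Local standard-deviation map, the kind of target the SD-estimating
    CNNs in the abstract are trained to predict."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    # Var = E[x^2] - E[x]^2, clipped to avoid tiny negatives from rounding
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
```

On pure Gaussian noise of known sigma, both estimators recover sigma closely; on anatomy, the local SD map mixes noise with tissue texture, which is why the study restricts evaluation to uniform regions.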
Affiliation(s)
- Satu I Inkinen
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland.
- Teemu Mäkelä
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki, Finland
- Touko Kaasalainen
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Juha Peltonen
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Marko Kangasniemi
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Mika Kortesniemi
  HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
15
Monsour R, Dutta M, Mohamed AZ, Borkowski A, Viswanadhan NA. Neuroimaging in the Era of Artificial Intelligence: Current Applications. Fed Pract 2022; 39:S14-S20. [PMID: 35765692 PMCID: PMC9227741 DOI: 10.12788/fp.0231] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI increases efficiency and reduces errors, making it a valuable resource for physicians. With the increasing amount of data processing and image interpretation required, the ability to use AI to augment and aid the radiologist could improve the quality of patient care. OBSERVATIONS AI can predict patient wait times, which may allow more efficient patient scheduling. Additionally, AI can save time for repeat magnetic resonance neuroimaging and reduce the time spent during imaging. AI has the ability to read computed tomography, magnetic resonance imaging, and positron emission tomography with reduced or no contrast without significant loss in sensitivity for detecting lesions. Neuroimaging does raise important ethical considerations and is subject to bias. It is vital that users understand the practical and ethical considerations of the technology. CONCLUSIONS The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI's use for detecting neurologic conditions holds promise in combatting ever-increasing imaging volumes and providing timely diagnoses.
Affiliation(s)
- Robert Monsour
  University of South Florida Morsani College of Medicine, Tampa, Florida
- Mudit Dutta
  University of South Florida Morsani College of Medicine, Tampa, Florida
- Andrew Borkowski
  University of South Florida Morsani College of Medicine, Tampa, Florida
  James A. Haley Veterans’ Hospital, Tampa, Florida
- Narayan A. Viswanadhan
  University of South Florida Morsani College of Medicine, Tampa, Florida
  James A. Haley Veterans’ Hospital, Tampa, Florida
16
Tadavarthi Y, Makeeva V, Wagstaff W, Zhan H, Podlasek A, Bhatia N, Heilbrun M, Krupinski E, Safdar N, Banerjee I, Gichoya J, Trivedi H. Overview of Noninterpretive Artificial Intelligence Models for Safety, Quality, Workflow, and Education Applications in Radiology Practice. Radiol Artif Intell 2022; 4:e210114. [PMID: 35391770 DOI: 10.1148/ryai.210114] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 12/17/2021] [Accepted: 01/11/2022] [Indexed: 12/17/2022]
Abstract
Artificial intelligence has become a ubiquitous term in radiology over the past several years, and much attention has been given to applications that aid radiologists in the detection of abnormalities and diagnosis of diseases. However, there are many potential applications related to radiologic image quality, safety, and workflow improvements that present equal, if not greater, value propositions to radiology practices, insurance companies, and hospital systems. This review focuses on six major categories for artificial intelligence applications: study selection and protocoling, image acquisition, worklist prioritization, study reporting, business applications, and resident education. All of these categories can substantially affect different aspects of radiology practices and workflows. Each of these categories has different value propositions in terms of whether they could be used to increase efficiency, improve patient safety, increase revenue, or save costs. Each application is covered in depth in the context of both current and future areas of work. Keywords: Use of AI in Education, Application Domain, Supervised Learning, Safety © RSNA, 2022.
Affiliation(s)
- Yasasvi Tadavarthi, Valeria Makeeva, William Wagstaff, Henry Zhan, Anna Podlasek, Neil Bhatia, Marta Heilbrun, Elizabeth Krupinski, Nabile Safdar, Imon Banerjee, Judy Gichoya, Hari Trivedi
  Department of Medicine, Medical College of Georgia, Augusta, Ga (Y.T.); Department of Radiology and Imaging Sciences (V.M., W.W., H.Z., M.H., E.K., N.S., J.G., H.T.), School of Medicine (N.B.), and Department of Biomedical Informatics (I.B.), Emory University, 1364 E Clifton Rd NE, Atlanta, GA 30322; and Southend University Hospital NHS Foundation Trust, Westcliff-on-Sea, UK (A.P.)
17
Balagurunathan Y, Beers A, McNitt-Gray M, Hadjiiski L, Napel S, Goldgof D, Perez G, Arbelaez P, Mehrtash A, Kapur T, Yang E, Moon JW, Bernardino G, Delgado-Gonzalo R, Farhangi MM, Amini AA, Ni R, Feng X, Bagari A, Vaidhya K, Veasey B, Safta W, Frigui H, Enguehard J, Gholipour A, Castillo LS, Daza LA, Pinsky P, Kalpathy-Cramer J, Farahani K. Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3748-3761. [PMID: 34264825 PMCID: PMC9531053 DOI: 10.1109/tmi.2021.3097665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign and malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, was focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting an AUC between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume change estimate (p = .05 with Bonferroni-Holm correction).
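The evaluation criterion above, the AUC of nodule-wise malignancy scores, equals the probability that a randomly chosen malignant nodule outscores a randomly chosen benign one (the Mann-Whitney statistic). A self-contained sketch of that computation; the function name and toy scores are illustrative, not the challenge's scoring code:

```python
import numpy as np

def nodule_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (malignant, benign) pairs in which the malignant nodule receives
    the higher score, counting ties as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Example: two benign (label 0) and two malignant (label 1) nodules
print(nodule_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

This pairwise form is quadratic in the number of nodules but makes the ranking interpretation explicit; production scoring code typically uses a rank-based O(n log n) equivalent.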
Affiliation(s)
- Sandy Napel
  Dept. of Radiology, School of Medicine, Stanford University (SU), CA
- Gustavo Perez
  Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Pablo Arbelaez
  Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Alireza Mehrtash
  Robotics and Control Laboratory (RCL), Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC
  Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Tina Kapur
  Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Ehwa Yang
  Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jung Won Moon
  Human Medical Imaging & Intervention Center, Seoul 06524, Korea
- Gabriel Bernardino
  Centre Suisse d’Électronique et de Microtechnique, Neuchâtel, Switzerland
- M. Mehdi Farhangi
  Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
  Computer Engineering and Computer Science, University of Louisville
- Amir A. Amini
  Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
  Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Xue Feng
  Spingbok Inc
  Department of Biomedical Engineering, University of Virginia, Charlottesville
- Benjamin Veasey
  Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
  Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Wiem Safta
  Computer Engineering and Computer Science, University of Louisville
- Hichem Frigui
  Computer Engineering and Computer Science, University of Louisville
- Joseph Enguehard
  Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Ali Gholipour
  Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Laura Alexandra Daza
  Department of Biomedical Engineering, Universidad de los Andes, Bogota, Colombia
- Paul Pinsky
  Division of Cancer Prevention, National Cancer Institute (NCI), Washington DC
- Keyvan Farahani
  Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), Washington DC
18
Vrenken H, Jenkinson M, Pham DL, Guttmann CRG, Pareto D, Paardekooper M, de Sitter A, Rocca MA, Wottschel V, Cardoso MJ, Barkhof F. Opportunities for Understanding MS Mechanisms and Progression With MRI Using Large-Scale Data Sharing and Artificial Intelligence. Neurology 2021; 97:989-999. [PMID: 34607924 PMCID: PMC8610621 DOI: 10.1212/wnl.0000000000012884] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 09/09/2021] [Indexed: 11/15/2022] Open
Abstract
Patients with multiple sclerosis (MS) have heterogeneous clinical presentations, symptoms, and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using MRI. First, development of validated MS-specific image analysis methods can be boosted by verified reference, test, and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding disease processes could be greatly advanced through shared data of large MS cohorts with clinical, demographic, and treatment information. Relevant patterns in such data that may be imperceptible to a human observer could be detected through artificial intelligence techniques. This applies from image analysis (lesions, atrophy, or functional network changes) to large multidomain datasets (imaging, cognition, clinical disability, genetics). After reviewing data sharing and artificial intelligence, we highlight 3 areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties as well as specific recommendations to overcome them are discussed, in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging, and the understanding of MS.
Affiliation(s)
- Hugo Vrenken, Mark Jenkinson, Dzung L Pham, Charles R G Guttmann, Deborah Pareto, Michel Paardekooper, Alexandra de Sitter, Maria A Rocca
  From the MS Center Amsterdam (H.V., A.d.S., V.W.), Amsterdam Neuroscience, Department of Radiology and Nuclear Medicine, Amsterdam UMC (M.P.), the Netherlands; Wellcome Centre for Integrative Neuroimaging (WIN), FMRIB (M.J.), Nuffield Department of Clinical Neurosciences (NDCN), University of Oxford, UK; Human Imaging and Image Processing Core (D.L.P.), Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation, Bethesda, MD; Center for Neurological Imaging (C.R.G.G.), Department of Radiology, Brigham and Women's Hospital, Boston, MA; Section of Neuroradiology (Department of Radiology) (D.P.), Vall d'Hebron University Hospital and Research Institute (VHIR), Autonomous University Barcelona, Spain; Neuroimaging Research Unit (M.A.R.), Institute of Experimental Neurology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy; AMIGO (M.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London; and Institutes of Neurology & Healthcare Engineering (F.B.), UCL London, UK.
| | - Viktor Wottschel
- From the MS Center Amsterdam (H.V., A.d.S., V.W.), Amsterdam Neuroscience, Department of Radiology and Nuclear Medicine, Amsterdam UMC (M.P.), the Netherlands; Wellcome Centre for Integrative Neuroimaging (WIN), FMRIB (M.J.), Nuffield Department of Clinical Neurosciences (NDCN), University of Oxford, UK; Human Imaging and Image Processing Core (D.L.P.), Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation, Bethesda, MD; Center for Neurological Imaging (C.R.G.G.), Department of Radiology, Brigham and Women's Hospital, Boston, MA; Section of Neuroradiology (Department of Radiology) (D.P.), Vall d'Hebron University Hospital and Research Institute (VHIR), Autonomous University Barcelona, Spain; Neuroimaging Research Unit (M.A.R.), Institute of Experimental Neurology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy; AMIGO (M.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London; and Institutes of Neurology & Healthcare Engineering (F.B.), UCL London, UK
| | - M Jorge Cardoso
- From the MS Center Amsterdam (H.V., A.d.S., V.W.), Amsterdam Neuroscience, Department of Radiology and Nuclear Medicine, Amsterdam UMC (M.P.), the Netherlands; Wellcome Centre for Integrative Neuroimaging (WIN), FMRIB (M.J.), Nuffield Department of Clinical Neurosciences (NDCN), University of Oxford, UK; Human Imaging and Image Processing Core (D.L.P.), Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation, Bethesda, MD; Center for Neurological Imaging (C.R.G.G.), Department of Radiology, Brigham and Women's Hospital, Boston, MA; Section of Neuroradiology (Department of Radiology) (D.P.), Vall d'Hebron University Hospital and Research Institute (VHIR), Autonomous University Barcelona, Spain; Neuroimaging Research Unit (M.A.R.), Institute of Experimental Neurology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy; AMIGO (M.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London; and Institutes of Neurology & Healthcare Engineering (F.B.), UCL London, UK
| | - Frederik Barkhof
- From the MS Center Amsterdam (H.V., A.d.S., V.W.), Amsterdam Neuroscience, Department of Radiology and Nuclear Medicine, Amsterdam UMC (M.P.), the Netherlands; Wellcome Centre for Integrative Neuroimaging (WIN), FMRIB (M.J.), Nuffield Department of Clinical Neurosciences (NDCN), University of Oxford, UK; Human Imaging and Image Processing Core (D.L.P.), Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation, Bethesda, MD; Center for Neurological Imaging (C.R.G.G.), Department of Radiology, Brigham and Women's Hospital, Boston, MA; Section of Neuroradiology (Department of Radiology) (D.P.), Vall d'Hebron University Hospital and Research Institute (VHIR), Autonomous University Barcelona, Spain; Neuroimaging Research Unit (M.A.R.), Institute of Experimental Neurology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy; AMIGO (M.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London; and Institutes of Neurology & Healthcare Engineering (F.B.), UCL London, UK
19
Oura D, Ihara R, Myo E, Sato S, Sugimori H. Construction of super-rapid brain MRA using oblique transverse acquisition phase contrast angiography with tilted optimized non-saturated excitation pulse. Magn Reson Imaging 2021; 85:193-201. [PMID: 34715289] [DOI: 10.1016/j.mri.2021.10.037]
Abstract
[Background] Magnetic resonance angiography (MRA) is one of the most important sequences for evaluating cerebrovascular disease. We often encounter poor image quality due to slow arterial flow related to aging and to motion artifacts caused by disturbance of consciousness. We focused on phase contrast angiography (PCA) to overcome these difficulties. PCA can reduce scan time drastically by combining transverse acquisition with a partial slab setting covering the entire cerebral arterial tree. However, transverse acquisition in PCA produces a large difference in signal intensity between proximal and distal vessels. We therefore applied tilted optimized non-saturated excitation (TONE) to improve image quality. [Purpose] The purpose of this study was to investigate the usefulness of TONE for PCA. [Method] We assessed the efficacy of TONE in transverse-acquisition PCA by measuring signal intensity in the arteries, and visually compared image quality among 1-min PCA with and without TONE and time-of-flight (TOF) MRA. [Result] TONE reduced signal inhomogeneity across the cerebral arteries. PCA with TONE (5°-9°) demonstrated the highest image quality. [Conclusion] Oblique transverse acquisition PCA with TONE provides superior image quality compared with TOF at a similar scan time. TONE improved image quality by homogenizing vessel signal intensity from proximal to distal in oblique transverse acquisition PCA. Our MRA can be performed in about 1 min and provides sufficient quality to evaluate the brain vessels.
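The TONE scheme described above can be pictured as a linearly ramped excitation flip angle across the slab: a smaller angle on the arterial inflow side (where fresh, fully magnetized blood enters) and a larger angle on the distal side (where inflowing spins have already been partially saturated). A minimal sketch of such a ramp, assuming a simple linear parameterization between the 5°-9° range the abstract reports; the function name and interface are illustrative, not the paper's implementation:

```python
def tone_flip_angles(fa_proximal, fa_distal, n_slices):
    """Linearly ramped excitation flip angles across the slab (TONE idea):
    a smaller angle on the arterial inflow (proximal) side and a larger
    angle on the distal side, compensating the signal loss of partially
    saturated spins as they travel through the slab."""
    if n_slices == 1:
        return [(fa_proximal + fa_distal) / 2]
    step = (fa_distal - fa_proximal) / (n_slices - 1)
    return [fa_proximal + i * step for i in range(n_slices)]

# Ramp across the 5°-9° range reported to give the best image quality
angles = tone_flip_angles(5.0, 9.0, 5)  # [5.0, 6.0, 7.0, 8.0, 9.0]
```

The mean flip angle stays at the nominal value (7° here), so overall signal level is preserved while the proximal-to-distal intensity gradient is flattened.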
Affiliation(s)
- Daisuke Oura
- Otaru General Hospital, Otaru 047-0152, Japan; Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Riku Ihara
- Otaru General Hospital, Otaru 047-0152, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan.
20
Kitamura FC, Pan I, Ferraciolli SF, Yeom KW, Abdala N. Clinical Artificial Intelligence Applications in Radiology: Neuro. Radiol Clin North Am 2021; 59:1003-1012. [PMID: 34689869] [DOI: 10.1016/j.rcl.2021.07.002]
Abstract
Radiologists have been at the forefront of the digitization process in medicine. Artificial intelligence (AI) is a promising area of innovation, particularly in medical imaging, and the number of AI applications in neuroradiology has grown accordingly. This article illustrates some of these applications, reviews machine learning challenges related to neuroradiology, and discusses the first approval of reimbursement for an AI algorithm by the Centers for Medicare and Medicaid Services, covering a stroke software package for early detection of large vessel occlusion.
Affiliation(s)
- Felipe Campos Kitamura
- DasaInova, Diagnósticos da América SA (Dasa), São Paulo, São Paulo, Brazil; Universidade Federal de São Paulo, São Paulo, São Paulo, Brazil.
- Ian Pan
- DasaInova, Diagnósticos da América SA (Dasa), São Paulo, São Paulo, Brazil; Brigham and Women's Hospital, Boston, MA, USA
- Nitamar Abdala
- Universidade Federal de São Paulo, São Paulo, São Paulo, Brazil
21
Kontopodis EE, Papadaki E, Trivizakis E, Maris TG, Simos P, Papadakis GZ, Tsatsakis A, Spandidos DA, Karantanas A, Marias K. Emerging deep learning techniques using magnetic resonance imaging data applied in multiple sclerosis and clinical isolated syndrome patients (Review). Exp Ther Med 2021; 22:1149. [PMID: 34504594] [PMCID: PMC8393268] [DOI: 10.3892/etm.2021.10583]
Abstract
Computer-aided diagnosis systems aim to assist clinicians in the early identification of abnormal signs in order to optimize the interpretation of medical images and increase diagnostic precision. Multiple sclerosis (MS) and clinically isolated syndrome (CIS) are chronic inflammatory, demyelinating diseases affecting the central nervous system. Recent advances in deep learning (DL) techniques have led to novel computational paradigms in MS and CIS imaging designed for automatic segmentation and detection of areas of interest and automatic classification of anatomic structures, as well as optimization of neuroimaging protocols. To this end, there are several publications presenting artificial intelligence-based predictive models aiming to increase diagnostic accuracy and to facilitate optimal clinical management in patients diagnosed with MS and/or CIS. The current study presents a thorough review covering DL techniques that have been applied in MS and CIS during recent years, shedding light on their current advances and limitations.
Affiliation(s)
- Eleftherios E Kontopodis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Efrosini Papadaki
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Eleftherios Trivizakis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Thomas G Maris
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Panagiotis Simos
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Psychiatry and Behavioral Sciences, Medical School, University of Crete, 70013 Heraklion, Greece
- Georgios Z Papadakis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Aristidis Tsatsakis
- Centre of Toxicology Science and Research, Faculty of Medicine, University of Crete, 71003 Heraklion, Greece
- Demetrios A Spandidos
- Laboratory of Clinical Virology, Medical School, University of Crete, 71003 Heraklion, Greece
- Apostolos Karantanas
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Kostas Marias
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
22
Abstract
Clinical MRI systems have continually improved over the years since their introduction in the 1980s. In MRI technical development, the developments in each MRI system component, including data acquisition, image reconstruction, and hardware systems, have impacted the others. Progress in each component has induced new technology development opportunities in other components. New technologies outside of the MRI field, for example, computer science, data processing, and semiconductors, have been immediately incorporated into MRI development, which resulted in innovative applications. With high performance computing and MR technology innovations, MRI can now provide large volumes of functional and anatomical image datasets, which are important tools in various research fields. MRI systems are now combined with other modalities, such as positron emission tomography (PET) or therapeutic devices. These hybrid systems provide additional capabilities. In this review, MRI advances in the last two decades will be considered. We will discuss the progress of MRI systems, the enabling technology, established applications, current trends, and the future outlook.
Affiliation(s)
- Hiroyuki Kabasawa
- Department of Radiological Sciences, School of Health Sciences at Narita, International University of Health and Welfare
23
Bakker L, Aarts J, Uyl-de Groot C, Redekop W. Economic evaluations of big data analytics for clinical decision-making: a scoping review. J Am Med Inform Assoc 2021; 27:1466-1475. [PMID: 32642750] [PMCID: PMC7526472] [DOI: 10.1093/jamia/ocaa102]
Abstract
OBJECTIVE Much has been invested in big data analytics to improve health and reduce costs. However, it is unknown whether these investments have achieved the desired goals. We performed a scoping review to determine the health and economic impact of big data analytics for clinical decision-making. MATERIALS AND METHODS We searched Medline, Embase, Web of Science and the National Health Services Economic Evaluations Database for relevant articles. We included peer-reviewed papers that report the health economic impact of analytics that assist clinical decision-making. We extracted the economic methods and estimated impact and also assessed the quality of the methods used. In addition, we estimated how many studies assessed "big data analytics" based on a broad definition of this term. RESULTS The search yielded 12 133 papers but only 71 studies fulfilled all eligibility criteria. Only a few papers were full economic evaluations; many were performed during development. Papers frequently reported savings for healthcare payers but only 20% also included costs of analytics. Twenty studies examined "big data analytics" and only 7 reported both cost-savings and better outcomes. DISCUSSION The promised potential of big data is not yet reflected in the literature, partly since only a few full and properly performed economic evaluations have been published. This and the lack of a clear definition of "big data" limit policy makers and healthcare professionals from determining which big data initiatives are worth implementing.
Affiliation(s)
- Lytske Bakker
- Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands; Institute for Medical Technology Assessment, Erasmus University, Rotterdam, Netherlands
- Jos Aarts
- Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands
- Carin Uyl-de Groot
- Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands; Institute for Medical Technology Assessment, Erasmus University, Rotterdam, Netherlands
- William Redekop
- Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands; Institute for Medical Technology Assessment, Erasmus University, Rotterdam, Netherlands
24
Duong MT, Rauschecker AM, Mohan S. Diverse Applications of Artificial Intelligence in Neuroradiology. Neuroimaging Clin N Am 2020; 30:505-516. [PMID: 33039000] [DOI: 10.1016/j.nic.2020.07.003]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) hold promise to augment neuroimaging diagnosis for patients with brain tumors and stroke. Here, the authors review the diverse landscape of emerging neuroimaging applications of AI, including workflow optimization, lesion segmentation, and precision education. Given the many modalities used in diagnosing neurologic diseases, AI may be deployed to integrate across modalities (MR imaging, computed tomography, PET, electroencephalography, clinical and laboratory findings), facilitate crosstalk among specialists, and potentially improve diagnosis in patients with trauma, multiple sclerosis, epilepsy, and neurodegeneration. Together, there are myriad applications of AI for neuroradiology.
Affiliation(s)
- Michael Tran Duong
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA. https://twitter.com/MichaelDuongMD
- Andreas M Rauschecker
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, 513 Parnassus Avenue, Room S-261, San Francisco, CA 94143, USA. https://twitter.com/DrDreMDPhD
- Suyash Mohan
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA.
25
Lui YW, Chang PD, Zaharchuk G, Barboriak DP, Flanders AE, Wintermark M, Hess CP, Filippi CG. Artificial Intelligence in Neuroradiology: Current Status and Future Directions. AJNR Am J Neuroradiol 2020; 41:E52-E59. [PMID: 32732276] [PMCID: PMC7658873] [DOI: 10.3174/ajnr.a6681]
Abstract
Fueled by new techniques, computational tools, and broader availability of imaging data, artificial intelligence has the potential to transform the practice of neuroradiology. The recent exponential increase in publications related to artificial intelligence and the central focus on artificial intelligence at recent professional and scientific radiology meetings underscores the importance. There is growing momentum behind leveraging artificial intelligence techniques to improve workflow and diagnosis and treatment and to enhance the value of quantitative imaging techniques. This article explores the reasons why neuroradiologists should care about the investments in new artificial intelligence applications, highlights current activities and the roles neuroradiologists are playing, and renders a few predictions regarding the near future of artificial intelligence in neuroradiology.
Affiliation(s)
- Y W Lui
- From the Department of Radiology (Y.W.L.), New York University Langone Medical Center, New York, New York
- P D Chang
- Department of Radiology (P.D.C.), University of California Irvine Health Medical Center, Orange, California
- G Zaharchuk
- Department of Neuroradiology (G.Z., M.W.), Stanford University, Stanford, California
- D P Barboriak
- Department of Radiology (D.P.B.), Duke University Medical Center, Durham, North Carolina
- A E Flanders
- Department of Radiology (A.E.F.), Thomas Jefferson University Hospital, Philadelphia, Pennsylvania
- M Wintermark
- Department of Neuroradiology (G.Z., M.W.), Stanford University, Stanford, California
- C P Hess
- Department of Radiology and Biomedical Imaging (C.P.H.), University of California, San Francisco, San Francisco, California
- C G Filippi
- Department of Radiology (C.G.F.), Northwell Health, New York, New York.
26
Werth K, Ledbetter L. Artificial Intelligence in Head and Neck Imaging: A Glimpse into the Future. Neuroimaging Clin N Am 2020; 30:359-368. [PMID: 32600636] [DOI: 10.1016/j.nic.2020.04.004]
Abstract
Artificial intelligence, specifically machine learning and deep learning, is a rapidly developing field in imaging sciences with the potential to improve the efficiency and effectiveness of radiologists. This review covers common technical terms and basic concepts in imaging artificial intelligence and briefly reviews the application of these techniques to general imaging as well as head and neck imaging. Artificial intelligence has the potential to contribute improvements to all areas of patient care, including image acquisition, processing, segmentation, automated detection of findings, integration of clinical information, quality improvement, and research. Numerous challenges remain, however, before widespread imaging clinical adoption and integration occur.
Affiliation(s)
- Kyle Werth
- Department of Radiology, University of Kansas Medical Center, 3901 Rainbow Boulevard, Mailstop 4032, Kansas City, KS 66160, USA
- Luke Ledbetter
- Department of Radiology, David Geffen School of Medicine at UCLA, 757 Westwood Plaza, Suite 1621D, Los Angeles, CA 90095, USA.