1
Kim B, Mathai TS, Helm K, Summers RM. Automated classification of multi-parametric body MRI series. arXiv 2024; arXiv:2405.08247v1. [PMID: 38903740; PMCID: PMC11188138]
Abstract
Multi-parametric MRI (mpMRI) studies are widely used in clinical practice for the diagnosis of various diseases. As the volume of mpMRI exams grows each year, inaccuracies accumulate in the DICOM header fields of these exams. These errors preclude the use of header information for arranging the different series as part of the radiologist's hanging protocol, and clinician oversight is needed for correction. In this pilot work, we propose an automated framework to classify eight different series types in mpMRI studies. We used 1,363 studies acquired by three Siemens scanners to train a DenseNet-121 model with 5-fold cross-validation, then evaluated the performance of the DenseNet-121 ensemble on a held-out test set of 313 mpMRI studies. Our method achieved an average precision of 96.6%, sensitivity of 96.6%, specificity of 99.6%, and F1 score of 96.6% for the MRI series classification task. To the best of our knowledge, we are the first to develop a method to classify the series type in mpMRI studies acquired at the level of the chest, abdomen, and pelvis. Our method enables robust automation of hanging protocols in modern radiology practice.
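The 5-fold ensemble described above can be sketched as follows. The DenseNet-121 outputs are replaced here by random logits, and the function name is hypothetical; the abstract does not specify how fold outputs were combined, so simple softmax averaging is assumed:

```python
import numpy as np

def ensemble_predict(fold_logits):
    """Average softmax probabilities across cross-validation folds and
    return the predicted series class per study.

    fold_logits: array of shape (n_folds, n_samples, n_classes).
    """
    # Numerically stabilized softmax within each fold.
    z = fold_logits - fold_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    mean_probs = probs.mean(axis=0)  # average over the folds
    return mean_probs.argmax(axis=-1), mean_probs

# Toy example: 5 folds, 2 studies, 8 series classes (stand-in for real logits).
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 2, 8))
preds, probs = ensemble_predict(logits)
```

Averaging probabilities (rather than majority-voting hard labels) keeps a per-class confidence that a hanging-protocol system could threshold before falling back to manual review.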
Affiliation(s)
- Boah Kim, Tejas Sudharshan Mathai, Kimberly Helm, Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
2
You S, Wiest R, Reyes M. SaRF: Saliency regularized feature learning improves MRI sequence classification. Computer Methods and Programs in Biomedicine 2024; 243:107867. [PMID: 37866127; DOI: 10.1016/j.cmpb.2023.107867]
Abstract
BACKGROUND AND OBJECTIVE: Deep learning-based medical image analysis technologies have the potential to greatly improve the workflow of neuroradiologists who routinely deal with multi-sequence MRI. However, an essential step for current deep learning systems employing multi-sequence MRI is to ensure that the sequence type of each input is correctly assigned. This requirement is not easily satisfied in clinical practice, where labeling is subject to protocol variations and human error. Although deep learning models are promising for image-based sequence classification, robustness and reliability issues limit their application in clinical practice. METHODS: In this paper, we propose a novel method that uses saliency information to guide the learning of features for sequence classification. The method uses two self-supervised loss terms, first to enhance the distinctiveness among class-specific saliency maps and, second, to promote similarity between class-specific saliency maps and learned deep features. RESULTS: On a cohort of 2100 patient cases comprising six different MR sequences per case, our method improves mean accuracy by 4.4% (from 0.935 to 0.976), mean AUC by 1.2% (from 0.9851 to 0.9968), and mean F1 score by 20.5% (from 0.767 to 0.924). Furthermore, based on feedback from an expert neuroradiologist, we show that the proposed approach improves the interpretability of trained models as well as their calibration, with a reduced expected calibration error (by 30.8%, from 0.065 to 0.045). The code will be made publicly available. CONCLUSIONS: The proposed method improves accuracy, AUC, and F1 score, as well as the calibration and interpretability of the resulting saliency maps.
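A minimal sketch of the two self-supervised loss terms, assuming cosine similarity as the (dis)similarity measure and treating saliency and feature maps as flat arrays; the exact formulation in the paper may differ:

```python
import numpy as np

def cos(a, b, eps=1e-8):
    """Cosine similarity between two flattened maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def distinctiveness_loss(saliency):
    """Penalize overlap between class-specific saliency maps.
    saliency: array of shape (n_classes, H, W)."""
    n = len(saliency)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(saliency[i], saliency[j]) for i, j in pairs) / len(pairs)

def alignment_loss(saliency, features):
    """Encourage each class saliency map to match the learned features
    (negated so that higher similarity means lower loss)."""
    return -np.mean([cos(s, f) for s, f in zip(saliency, features)])
```

Minimizing `distinctiveness_loss` pushes class saliency maps apart, while minimizing `alignment_loss` pulls saliency toward the deep features, mirroring the two regularizers described in the abstract.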
Affiliation(s)
- Suhang You: ARTORG, Graduate School for Cellular and Biomedical Research, University of Bern, Murtenstrasse 50, Bern, 3008, Switzerland
- Roland Wiest: Support Center of Advanced Neuroimaging, Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, University of Bern, Freiburgstrasse 18, Bern, 3010, Switzerland
- Mauricio Reyes: ARTORG, Graduate School for Cellular and Biomedical Research, University of Bern, Murtenstrasse 50, Bern, 3008, Switzerland; Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, Bern, 3010, Switzerland
3
Mahmutoglu MA, Preetha CJ, Meredig H, Tonn JC, Weller M, Wick W, Bendszus M, Brugnara G, Vollmuth P. Deep learning-based identification of brain MRI sequences using a model trained on large multicentric study cohorts. Radiol Artif Intell 2024; 6:e230095. [PMID: 38166331; PMCID: PMC10831512; DOI: 10.1148/ryai.230095]
Abstract
Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63,327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on the ResNet-18 architecture to differentiate nine MRI sequence types: T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), gradient-recalled echo T2*-weighted, and dynamic susceptibility contrast-related images. The two-dimensional midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparisons of model performance were performed using χ2 tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model across all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy than ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional midsection images for any sequence type (P > .05).
Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2023.
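The per-class accuracies above are reported with 95% confidence intervals; a standard Wald binomial interval illustrates how such bounds are obtained (a generic sketch, not necessarily the CI method used in the study):

```python
import math

def wald_ci(correct, total, z=1.96):
    """Point estimate and 95% Wald confidence interval for an accuracy
    measured as `correct` successes out of `total` test samples."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)
```

Note how the interval tightens as the per-class test count grows, which is why the T2-weighted class (large n, 99.8%) has a much narrower CI than the susceptibility-weighted class (84.2%).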
Affiliation(s)
- Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth
- From the Department of Neuroradiology (M.A.M., C.J.P., H.M., M.B., G.B., P.V.), Department of Neuroradiology, Division for Computational Neuroimaging (M.A.M., C.J.P., H.M., G.B., P.V.), and Department of Neurology (W.W.), Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany; Department of Neurosurgery, University Hospital Munich LMU, Munich, Germany (J.C.T.); and Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland (M.W.)
4
Gao R, Luo G, Ding R, Yang B, Sun H. A lightweight deep learning framework for automatic MRI data sorting and artifacts detection. J Med Syst 2023; 47:124. [PMID: 37999807; DOI: 10.1007/s10916-023-02017-z]
Abstract
The purpose of this study was to develop a lightweight, easily deployable deep learning system for fully automated content-based brain MRI sorting and artifact detection. 22,092 MRI volumes from 4076 patients scanned between 2017 and 2021 were included in this retrospective study. The dataset mainly contains four common contrasts (T1-weighted (T1w), contrast-enhanced T1-weighted (T1c), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR)) in three planes (axial, coronal, and sagittal), plus magnetic resonance angiography (MRA), as well as three typical artifacts (motion, aliasing, and metal artifacts). In the proposed architecture, a pre-trained EfficientNetB0 with the fully connected layers removed was used as the feature extractor, and a multilayer perceptron (MLP) module with four hidden layers was used as the classifier. Precision, recall, F1 score, accuracy, the number of trainable parameters, and floating-point operations (FLOPs) were calculated to evaluate the proposed model, which was also compared with four other existing CNN-based models in terms of classification performance and model size. The overall precision, recall, F1 score, and accuracy of the proposed model were 0.983, 0.926, 0.950, and 0.991, respectively, outperforming the other four CNN-based models, and its trainable-parameter count and FLOPs were the smallest among the investigated models. The proposed model can accurately sort head MRI scans and identify artifacts with minimal computational resources, and can serve as a tool to support big medical imaging data research and large-scale database management.
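Since the abstract emphasizes trainable parameters and FLOPs, a small helper shows how both are counted for a dense MLP head like the four-hidden-layer classifier described. The layer widths below are hypothetical; 1280 is EfficientNetB0's pooled feature size:

```python
def mlp_params_flops(layer_sizes):
    """Trainable parameters and multiply-add FLOPs for a fully connected MLP.

    layer_sizes: widths from input to output,
    e.g. [1280, 512, 256, 128, 64, n_classes] (illustrative widths).
    """
    params = flops = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        params += fan_in * fan_out + fan_out  # weight matrix + bias vector
        flops += 2 * fan_in * fan_out         # one multiply + one add per weight
    return params, flops
```

Freezing the EfficientNetB0 backbone and training only such a head is one common way to keep the trainable-parameter count small, which is consistent with the lightweight design the authors describe.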
Affiliation(s)
- Ronghui Gao: Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Guoting Luo: Department of Radiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Renxin Ding: IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Bo Yang: IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Huaiqiang Sun: Department of Radiology and Huaxi MR Research Center, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, China
5
Baumgärtner GL, Hamm CA, Schulze-Weddige S, Ruppel R, Beetz NL, Rudolph M, Dräger F, Froböse KP, Posch H, Lenk J, Biessmann F, Penzkofer T. Metadata-independent classification of MRI sequences using convolutional neural networks: Successful application to prostate MRI. Eur J Radiol 2023; 166:110964. [PMID: 37453274; DOI: 10.1016/j.ejrad.2023.110964]
Abstract
PURPOSE: The ever-increasing volume of medical imaging data and interest in Big Data research bring challenges to data organization, categorization, and retrieval. Although the radiological value chain is almost entirely digital, data structuring has largely been performed pragmatically, with naming and metadata standards insufficient for the stringent needs of image analysis. To enable automated data management independent of naming and metadata, this study focused on developing a convolutional neural network (CNN) that classifies medical images based solely on voxel data. METHOD: A 3D CNN (3D-ResNet18) was trained on a dataset of 31,602 prostate MRI volumes covering 10 different sequence types from 1243 patients. A five-fold cross-validation approach with patient-based splits was chosen for training and testing. Training was repeated with a gradual reduction of training data, assessing classification accuracies to determine the minimum training data required for sufficient performance. The trained model and developed method were tested on three external datasets. RESULTS: The model achieved an overall accuracy of 99.88% ± 0.13% in classifying typical prostate MRI sequence types. When trained with approximately 10% of the original cohort (112 patients), the CNN still achieved an accuracy of 97.43% ± 2.10%. In external testing, the model achieved sensitivities of >90% for 10 of 15 tested sequence types. CONCLUSIONS: The developed CNN enabled automatic and reliable sequence identification in prostate MRI. Ultimately, such CNN models for voxel-based sequence identification could substantially enhance the management of medical imaging data, improve workflow efficiency and data quality, and enable robust clinical AI workflows.
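The patient-based splits mentioned above keep all volumes of a patient in the same fold to avoid leakage between training and testing; a sketch of such a split (function and variable names are illustrative):

```python
import random

def patient_based_folds(series_to_patient, n_folds=5, seed=0):
    """Assign MRI series to folds so that every series of a given patient
    lands in the same fold (no patient-level leakage across folds).

    series_to_patient: dict mapping series ID -> patient ID.
    """
    patients = sorted(set(series_to_patient.values()))
    random.Random(seed).shuffle(patients)          # reproducible shuffle
    fold_of_patient = {p: i % n_folds for i, p in enumerate(patients)}
    folds = [[] for _ in range(n_folds)]
    for series, patient in series_to_patient.items():
        folds[fold_of_patient[patient]].append(series)
    return folds
```

Splitting at the series level instead would let near-duplicate volumes of one patient appear on both sides of the split and inflate accuracy estimates.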
Affiliation(s)
- Georg L Baumgärtner, Sophia Schulze-Weddige, Richard Ruppel, Nick L Beetz, Madhuri Rudolph, Franziska Dräger, Konrad P Froböse, Helena Posch, Julian Lenk: Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Campus Virchow Klinikum, Augustenburgerplatz 1, 13353 Berlin, Germany
- Charlie A Hamm, Tobias Penzkofer: Department of Radiology, Charité - Universitätsmedizin Berlin, Campus Virchow Klinikum, Augustenburgerplatz 1, 13353 Berlin, Germany; Berlin Institute of Health (BIH), Anna-Louisa-Karsch-Straße 2, 10178 Berlin, Germany
- Felix Biessmann: Faculty VI - Informatics and Media, Berliner Hochschule für Technik (BHT), Einstein Center Digital Future, 13353 Berlin, Germany
6
Cluceru J, Lupo JM, Interian Y, Bove R, Crane JC. Improving the automatic classification of brain MRI acquisition contrast with machine learning. J Digit Imaging 2023; 36:289-305. [PMID: 35941406; PMCID: PMC9984597; DOI: 10.1007/s10278-022-00690-z]
Abstract
Automated quantification of data acquired as part of an MRI exam requires identification of the specific acquisition relevant to a particular analysis. This motivates the development of methods capable of reliably classifying MRI acquisitions according to their nominal contrast type, e.g., T1-weighted, T1 post-contrast, T2-weighted, T2-weighted FLAIR, and proton-density-weighted. Prior studies have investigated imaging-based and DICOM metadata-based methods, with success on cohorts of patients acquired as part of a clinical trial. This study compares the performance of these methods on heterogeneous clinical datasets acquired with many different scanners from many institutions. Random forest (RF) and convolutional neural network (CNN) models were trained on metadata and pixel data, respectively, and a combined RF model incorporated the CNN logits from the pixel-based model together with metadata. Four cohorts were used for model development and evaluation: MS research (n = 11,106 series), MS clinical (n = 3244 series), glioma research (n = 612 series, test/validation only), and ADNI PTSD (n = 477 series, training only). Together, these cohorts represent a broad range of acquisition contexts (scanners, sequences, institutions) and subject pathologies. The pixel-based CNN and combined models achieved accuracies between 97% and 98% on the clinical MS cohort, and validation/test accuracies on the glioma cohort were 99.7% (metadata only) and 98.4% (CNN). Accurate and generalizable classification of MRI acquisition contrast types was demonstrated; such methods are important for enabling automated data selection in high-throughput and big-data image analysis applications.
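The combined model feeds CNN logits together with DICOM metadata into a random forest. A sketch of how such a feature matrix might be assembled; the specific fields (echo time, repetition time, scanner model) are illustrative, as the abstract does not list the exact metadata features used:

```python
import numpy as np

def combined_features(cnn_logits, echo_time, repetition_time, scanner_ids, vocab):
    """Build a feature matrix for a combined random-forest classifier:
    per-series CNN class logits + numeric DICOM fields + one-hot scanner model.

    cnn_logits: (n_series, n_classes); echo_time/repetition_time: length n_series;
    scanner_ids: categorical labels drawn from `vocab`.
    """
    n = len(cnn_logits)
    onehot = np.zeros((n, len(vocab)))
    for row, sid in enumerate(scanner_ids):
        onehot[row, vocab.index(sid)] = 1.0
    numeric = np.column_stack([echo_time, repetition_time])
    return np.hstack([np.asarray(cnn_logits, float), numeric, onehot])
```

The resulting matrix can be passed directly to any tree-ensemble classifier; trees handle the mixed scales of logits, milliseconds, and one-hot indicators without normalization.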
Affiliation(s)
- Julia Cluceru, Janine M Lupo, Jason C Crane: Center for Intelligent Imaging, Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Yannet Interian: MS in Analytics Program, University of San Francisco, San Francisco, CA, USA
- Riley Bove: Department of Neurology, MS and Neuroinflammation Clinic, and Weill Institute for Neurosciences, University of California San Francisco, San Francisco, CA, USA
7
Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI 2023. [DOI: 10.3390/ai4010007]
Abstract
Annotation tools are an essential component in the creation of datasets for machine learning purposes. They have evolved greatly since the turn of the century and now commonly include collaborative features to divide labor efficiently, as well as automation to amplify human effort. Recent developments in machine learning models, such as Transformers, allow training on very large, sophisticated multimodal datasets and enable generalization across domains of knowledge. These models also herald an increasing emphasis on prompt engineering to provide qualitative fine-tuning of the model itself, adding a novel emerging layer of direct machine learning annotation. These capabilities enable machine intelligence to recognize, predict, and emulate human behavior with much greater accuracy and nuance, a noted shortfall that has contributed to algorithmic injustice in previous techniques. However, the scale and complexity of the training data required for multimodal models present engineering challenges, and best practices for conducting annotation for large multimodal models in a safe, ethical, and efficient manner have not been established. This paper presents a systematic literature review of crowd- and machine-learning-augmented behavioral annotation methods, cross-correlated across disciplines, to distill practices that may have value in multimodal implementations. Research questions were defined to provide an overview of the evolution of augmented behavioral annotation tools in relation to the present state of the art. (Contains five figures and four tables.)
8
Gai ND. Highly efficient and accurate deep learning-based classification of MRI contrast on a CPU and GPU. J Digit Imaging 2022; 35:482-495. [PMID: 35138509; PMCID: PMC9156587; DOI: 10.1007/s10278-022-00583-1]
Abstract
Classifying MR images based on their contrast mechanism can be useful in image segmentation where additional information from different contrast mechanisms can improve intensity-based segmentation and help separate the class distributions. In addition, automated processing of image type can be beneficial in archive management, image retrieval, and staff training. Different clinics and scanners have their own image labeling scheme, resulting in ambiguity when sorting images. Manual sorting of thousands of images would be a laborious task and prone to error. In this work, we used the power of transfer learning to modify pretrained residual convolution neural networks to classify MRI images based on their contrast mechanisms. Training and validation were performed on a total of 5169 images belonging to 10 different classes and from different MRI vendors and field strengths. Time for training and validation was 36 min. Testing was performed on a different data set with 2474 images. Percentage of correctly classified images (accuracy) was 99.76%. (A deeper version of the residual network was trained for 103 min and showed slightly lower accuracy of 99.68%.) In consideration of model deployment in the real world, performance on a single CPU computer was compared with GPU implementation. Highly accurate classification, training, and testing can be achieved without use of a GPU in a relatively short training time, through proper choice of a convolutional neural network and hyperparameters, making it feasible to improve accuracy by repeated training with cumulative training sets. Techniques to improve accuracy further are discussed and demonstrated. Derived heatmaps indicate areas of image used in decision making and correspond well with expert human perception. The methods used can be easily extended to other classification tasks with minimal changes.
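The abstract mentions derived heatmaps that indicate the image areas used in decision making. One standard way to obtain such maps from a residual network with global average pooling is the class activation map (CAM), sketched here; the paper does not state which heatmap method was used, so CAM is an assumption:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class activation map: weight the final convolutional feature maps by
    the fully connected weights of the chosen class, then normalize to [0, 1].

    feature_maps: (channels, H, W) activations from the last conv layer.
    fc_weights: (n_classes, channels) weights of the final linear layer.
    """
    w = fc_weights[class_idx]                    # (channels,)
    cam = np.tensordot(w, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)              # rescale for display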
Affiliation(s)
- Neville D. Gai: Systems Biology Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
9
Design and implementation of interactive platform for operation and maintenance of multimedia information system based on artificial intelligence and big data. Computational Intelligence and Neuroscience 2022; 2022:4620930. [PMID: 35571710; PMCID: PMC9095371; DOI: 10.1155/2022/4620930]
Abstract
To cope with the challenges operators face from the rise of diversified social channels in interactive services, this work builds a new generation of intelligent interactive system based on artificial intelligence, semantic understanding, and intention recognition. The proposed solution covers rich media content, omnichannel coverage, high-frequency knowledge updates, and consistent service responses at high quality and low cost. It provides overall business modeling for scenario design and a scenario-based knowledge expression system with fragmented-knowledge processing. Using complete text and voice information combined with pictures, text, audio, video, and other multimedia, the system interacts intelligently with users, allowing them to obtain required information and solve problems in a pleasant, relaxed interaction. The research and exploration of an intelligent interactive system architecture based on artificial intelligence is thus a useful practice and strong support for operators seeking to redefine the connotation and elements of "smart service" in the process of building "smart operation." In repeated tests, the language similarity reached 0.75549 on a scale where 1.0000 is a perfect match, which the authors present as evidence that the platform design was successful.
10
Ranjbar S, Singleton KW, Curtin L, Rickertsen CR, Paulson LE, Hu LS, Mitchell JR, Swanson KR. Weakly supervised skull stripping of magnetic resonance imaging of brain tumor patients. Frontiers in Neuroimaging 2022; 1:832512. [PMID: 37555156; PMCID: PMC10406204; DOI: 10.3389/fnimg.2022.832512]
Abstract
Automatic skull stripping is particularly challenging on magnetic resonance imaging (MRI) with marked pathologies, such as brain tumors, which usually cause large displacement, abnormal appearance, and deformation of brain tissue. Despite an abundance of prior literature on learning-based methodologies for MRI segmentation, few works have tackled skull stripping of brain tumor patient data. This gap can be attributed to the lack of publicly available data (owing to concerns about patient identification) and the labor-intensive nature of generating ground truth labels for model training. In this retrospective study, we assessed the performance of Dense-Vnet for skull stripping brain tumor patient MRI, trained on our large multi-institutional dataset: pretreatment MRI of 668 patients from our in-house institutional review board-approved multi-institutional brain tumor repository. In the absence of ground truth, we trained on imperfect labels generated automatically with the SPM12 software. We trained the network on MRI sequences common in oncology, T1-weighted with gadolinium contrast, T2-weighted fluid-attenuated inversion recovery, or both, and measured performance against 30 independent brain tumor test cases with available manual brain masks. All images were harmonized for voxel spacing and volumetric dimensions before training, which was performed with NiftyNet, a modularly structured deep learning platform tailored toward simplifying medical image analysis. Our results show the success of a weakly supervised deep learning approach for MRI brain extraction even in the presence of pathology: the best model achieved an average Dice score, sensitivity, and specificity of 94.5, 96.4, and 98.5%, respectively, on the multi-institutional independent brain tumor test set.
To further contextualize these results within the existing literature on healthy brain segmentation, we tested the model against healthy subjects from the benchmark LBPA40 dataset, achieving an average Dice score, sensitivity, and specificity of 96.2, 96.6, and 99.2%. These values are comparable to other publications, though slightly lower than those of models trained on healthy subjects; we attribute the drop to the use of brain tumor data for model training and its influence on brain appearance.
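The Dice score, sensitivity, and specificity reported above compare a predicted brain mask against a manual one voxel by voxel; a minimal sketch of how those metrics fall out of the confusion counts (toy flattened binary masks, not the paper's evaluation code):

```python
def mask_metrics(pred, truth):
    """Dice, sensitivity, specificity for flat binary masks (lists of 0/1)."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    tn = sum(not p and not t for p, t in zip(pred, truth))  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

pred  = [1, 1, 1, 0, 0, 0, 1, 0]  # predicted brain voxels
truth = [1, 1, 0, 0, 0, 0, 1, 1]  # manual ground-truth mask
print(mask_metrics(pred, truth))
```

In practice the masks are full 3D volumes flattened the same way; the arithmetic is identical.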
Affiliation(s)
- Sara Ranjbar, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Kyle W. Singleton, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lee Curtin, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Cassandra R. Rickertsen, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lisa E. Paulson, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Leland S. Hu, Mathematical NeuroOncology Lab, Department of Neurosurgery, and Department of Diagnostic Imaging and Interventional Radiology, Mayo Clinic, Phoenix, AZ, United States
- Joseph Ross Mitchell, Department of Medicine, Faculty of Medicine & Dentistry and the Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB, Canada; Provincial Clinical Excellence Portfolio, Alberta Health Services, Edmonton, AB, Canada
- Kristin R. Swanson, Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
11
Big Data Analysis and Prediction System Based on Improved Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4564247. [PMID: 35310582 PMCID: PMC8930225 DOI: 10.1155/2022/4564247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/07/2022] [Accepted: 02/08/2022] [Indexed: 11/18/2022]
Abstract
This paper presents a big data analysis and prediction system based on convolutional neural networks (CNNs). Continuous template matching is used to analyze the distributed data structure of big data, and information fusion of cloud service combination big data is combined with matching-based detection, frequent-item detection, and association-rule feature extraction from the high-dimensional fused data; a clustering method then classifies and mines the cloud service portfolio data. Separately, the hardware a vehicle needs to sense its surroundings is complicated, so combining a CNN with a camera for environment perception has become a research hotspot. However, simply feeding camera data through a CNN to control the vehicle's steering angle suffers from long training times and low accuracy, and an improved CNN is therefore proposed. Experimental results show that the accuracy of data mining with this method is 12.43% and 21.76% higher than that of two traditional methods, with fewer iteration steps, indicating more timely mining. The improved network structure effectively increases both training speed and accuracy.
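The convolutional layers underlying the improved network slide a small kernel across the input to produce feature maps; a minimal pure-Python sketch of that core operation (single channel, valid padding; the example kernel is an illustrative vertical-edge detector, not the paper's architecture):

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A 3x3 vertical-edge kernel over a 4x4 image yields a 2x2 feature map.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(conv2d(image, kernel))
```

Real networks stack many such layers (with learned kernels, nonlinearities, and pooling); the sliding-window arithmetic is the same.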
12
The Application of the Big Data Medical Imaging System in Improving the Medical and Health Examination. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:8251702. [PMID: 34567488 PMCID: PMC8463181 DOI: 10.1155/2021/8251702] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 09/08/2021] [Indexed: 11/27/2022]
Abstract
To explore the effect of a big data medical imaging tertiary diagnostic system on improving medical and health examination, cases from township health centers were collected through the system. Clinical cases examined with the tertiary diagnostic system were set as the observation group, and clinical cases not involving the system as the control group. The qualified (pass) rate, film positive rate, and film diagnostic accuracy of the two groups were compared, with X-ray fluoroscopy, X-ray examination, and CT imaging used in both groups. In the observation group, the pass rate was 86.57%, the positive rate 72.32%, and the diagnostic accuracy 80.17%, all higher than in the control group (P < 0.05). X-ray film was the most cost-effective; CT examination had high diagnostic sensitivity and could clearly distinguish benign from malignant disease. The tertiary diagnosis system for medical imaging significantly improved the technical level of medical and health examination and has good practical value.
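The three rates compared above are simple proportions of film counts; a small sketch with hypothetical counts (the variable names and numbers are illustrative, not the study's data):

```python
def rate(numerator: int, denominator: int) -> float:
    """Proportion as a percentage, rounded to two decimals."""
    return round(100 * numerator / denominator, 2)

# Hypothetical film counts for an observation group of 1,000 examinations.
total_films = 1000
qualified = 866    # films meeting image-quality criteria
positive = 723     # films with positive findings
correct_dx = 802   # film diagnoses later confirmed correct

print(rate(qualified, total_films),
      rate(positive, total_films),
      rate(correct_dx, total_films))
```

The study's significance claim then rests on comparing each rate between groups with an appropriate statistical test (P < 0.05).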
13
Using DICOM Metadata for Radiological Image Series Categorization: a Feasibility Study on Large Clinical Brain MRI Datasets. J Digit Imaging 2021; 33:747-762. [PMID: 31950302 DOI: 10.1007/s10278-019-00308-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
Abstract
The growing interest in machine learning (ML) in healthcare is driven by the promise of improved patient care. However, how many ML algorithms are currently being used in clinical practice? While the technology is present, as demonstrated in a variety of commercial products, clinical integration is hampered by a lack of infrastructure, processes, and tools. In particular, automating the selection of relevant series for a particular algorithm remains challenging. In this work, we propose a methodology to automate the identification of brain MRI sequences so that we can automatically route the relevant inputs for further image-related algorithms. The method relies on metadata required by the Digital Imaging and Communications in Medicine (DICOM) standard, resulting in generalizability and high efficiency (less than 0.4 ms/series). To support our claims, we test our approach on two large brain MRI datasets (40,000 studies in total) from two different institutions on two different continents. We demonstrate high levels of accuracy (ranging from 97.4 to 99.96%) and generalizability across the institutions. Given the complexity and variability of brain MRI protocols, we are confident that similar techniques could be applied to other forms of radiological imaging.
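The approach described above keys on DICOM header fields rather than pixel data, which is what makes it so fast per series. A minimal rule-based sketch in that spirit, operating on a plain dict of already-extracted header values (the field names mirror standard DICOM attributes, but the thresholds, labels, and rules here are illustrative assumptions, not the paper's actual classifier):

```python
def classify_series(meta: dict) -> str:
    """Guess a brain-MRI series type from DICOM-derived metadata."""
    desc = meta.get("SeriesDescription", "").lower()
    te = meta.get("EchoTime", 0.0)       # echo time, ms
    ti = meta.get("InversionTime", 0.0)  # inversion time, ms
    if "flair" in desc or ti > 1500:
        return "FLAIR"
    if "dwi" in desc or meta.get("DiffusionBValue", 0) > 0:
        return "DWI"
    if te > 80:
        return "T2w"
    if te < 30:
        return "T1w"
    return "unknown"

print(classify_series({"SeriesDescription": "Ax T2 FLAIR",
                       "EchoTime": 120.0, "InversionTime": 2500.0}))
print(classify_series({"SeriesDescription": "SAG 3D T1 MPRAGE",
                       "EchoTime": 3.0}))
```

The paper's method replaces hand-written rules like these with a model learned from metadata across two institutions, which is what gives it the reported generalizability.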
14
Pandey B, Kumar Pandey D, Pratap Mishra B, Rhmann W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.01.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]