1. Bajaj S, Bala M, Angurala M. A comparative analysis of different augmentations for brain images. Med Biol Eng Comput 2024. PMID: 38782880. DOI: 10.1007/s11517-024-03127-7.
Abstract
Deep learning (DL) models require a large amount of training data to perform well and avoid overfitting. When only a small dataset is available, its effective size can be increased through augmentation, and the augmentation approaches chosen must enhance the model's performance during learning. Several types of transformations can be applied to medical images, either to the entire dataset or to a subset of the data, depending on the desired outcome. In this study, we categorize data augmentation methods into four groups: absent augmentation, where no modifications are made; basic augmentation, which includes brightness and contrast adjustments; intermediate augmentation, which adds a wider array of transformations such as rotation, flipping, and shifting to the brightness and contrast adjustments; and advanced augmentation, where all transformation layers are employed. We conduct a comprehensive analysis to determine which group performs best when applied to brain CT images, aiming to identify the augmentation group that most improves model accuracy, minimizes diagnostic errors, and ensures the robustness of the model in brain CT image analysis.
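The four groups lend themselves to a simple layered implementation. The sketch below assembles them as Keras preprocessing pipelines; the specific layers and jitter factors are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Absent augmentation corresponds to applying no pipeline at all.

# Basic augmentation: brightness and contrast adjustments only.
basic = tf.keras.Sequential([
    layers.RandomBrightness(factor=0.2),
    layers.RandomContrast(factor=0.2),
])

# Intermediate augmentation: rotation, flipping, and shifting
# on top of the brightness and contrast adjustments.
intermediate = tf.keras.Sequential([
    layers.RandomBrightness(factor=0.2),
    layers.RandomContrast(factor=0.2),
    layers.RandomRotation(factor=0.05),
    layers.RandomFlip("horizontal"),
    layers.RandomTranslation(height_factor=0.1, width_factor=0.1),
])

# Advanced augmentation: all transformation layers employed, here
# approximated by adding a zoom layer to the intermediate set.
advanced = tf.keras.Sequential(intermediate.layers + [layers.RandomZoom(0.1)])
```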
Affiliation(s)
- Shilpa Bajaj
- Applied Sciences (Computer Applications), I.K. Gujral Punjab Technical University, Jalandhar, Kapurthala, India.
- Manju Bala
- Department of Computer Science and Engineering, Khalsa College of Engineering and Technology, Amritsar, India.
- Mohit Angurala
- Apex Institute of Technology (CSE), Chandigarh University, Gharuan, Mohali, Punjab, India.
2. Jiménez-Gaona Y, Rodríguez Álvarez MJ, Castillo-Malla D, García-Jaen S, Carrión-Figueroa D, Corral-Domínguez P, Lakshminarayanan V. BraNet: a mobil application for breast image classification based on deep learning algorithms. Med Biol Eng Comput 2024. PMID: 38693328. DOI: 10.1007/s11517-024-03084-1.
Abstract
Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was trained for synthetic image generation, and these images were then used to pre-train the SAM segmentation and ResNet18 classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for digital mammography (DM) and ultrasound (US) breast imaging classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and reader agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited its highest accuracy in classifying benign and malignant US images (94.7%/93.6%) compared with DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). These results contrast with the radiologists' accuracy of 29% for DM and 70% for US for both readers, who achieved higher accuracy classifying US ROIs than DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This suggests that the amount of training data is not the only essential factor in training deep learning algorithms; the variety of abnormalities must also be considered, especially in mammography data, where several BI-RADS categories (microcalcifications, nodules, masses, asymmetry, and dense breasts) are present and can affect the accuracy of the model.
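Inter-reader agreement of the kind reported here is commonly computed with Cohen's kappa. The sketch below shows the calculation on made-up labels; the reader calls are invented for demonstration and are not from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical tissue-type calls by two readers on the same six ROIs.
reader_1 = ["benign", "malignant", "benign", "benign", "malignant", "benign"]
reader_2 = ["benign", "malignant", "malignant", "benign", "malignant", "malignant"]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"kappa = {kappa:.2f}")  # 0.40 here: moderate agreement
```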
Affiliation(s)
- Yuliana Jiménez-Gaona
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n, CP 1101608, Loja, Ecuador.
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada.
- María José Rodríguez Álvarez
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Darwin Castillo-Malla
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n, CP 1101608, Loja, Ecuador.
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada.
- Santiago García-Jaen
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n, CP 1101608, Loja, Ecuador.
- Patricio Corral-Domínguez
- Corporación Médica Monte Sinaí-CIPAM (Centro Integral de Patología Mamaria), Facultad de Ciencias Médicas, Universidad de Cuenca, Cuenca, 010203, Ecuador.
- Vasudevan Lakshminarayanan
- Department of Systems Design Engineering, Physics, and Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada.
3. Hampole P, Harding T, Gillies D, Orlando N, Edirisinghe C, Mendez LC, D'Souza D, Velker V, Correa R, Helou J, Xing S, Fenster A, Hoover DA. Deep learning-based ultrasound auto-segmentation of the prostate with brachytherapy implanted needles. Med Phys 2024;51:2665-2677. PMID: 37888789. DOI: 10.1002/mp.16811.
Abstract
BACKGROUND Accurate segmentation of the clinical target volume (CTV) corresponding to the prostate with or without proximal seminal vesicles is required on transrectal ultrasound (TRUS) images during prostate brachytherapy procedures. Implanted needles cause artifacts that may make this task difficult and time-consuming. Thus, previous studies have focused on the simpler problem of segmentation in the absence of needles at the cost of reduced clinical utility. PURPOSE To use a convolutional neural network (CNN) algorithm for segmentation of the prostatic CTV in TRUS images post-needle insertion obtained from prostate brachytherapy procedures to better meet the demands of the clinical procedure. METHODS A dataset consisting of 144 3-dimensional (3D) TRUS images with implanted metal brachytherapy needles and associated manual CTV segmentations was used for training a 2-dimensional (2D) U-Net CNN using a Dice Similarity Coefficient (DSC) loss function. These were split by patient, with 119 used for training and 25 reserved for testing. The 3D TRUS training images were resliced at radial (around the axis normal to the coronal plane) and oblique angles through the center of the 3D image, as well as axial, coronal, and sagittal planes to obtain 3689 2D TRUS images and masks for training. The network generated boundary predictions on 300 2D TRUS images obtained from reslicing each of the 25 3D TRUS images used for testing into 12 radial slices (15° apart), which were then reconstructed into 3D surfaces. Performance metrics included DSC, recall, precision, unsigned and signed volume percentage differences (VPD/sVPD), mean surface distance (MSD), and Hausdorff distance (HD). In addition, we studied whether providing algorithm-predicted boundaries to the physicians and allowing modifications increased the agreement between physicians. This was performed by providing a subset of 3D TRUS images of five patients to five physicians who segmented the CTV using clinical software and repeated this at least 1 week apart. The five physicians were given the algorithm boundary predictions and allowed to modify them, and the resulting inter- and intra-physician variability was evaluated. RESULTS Median DSC, recall, precision, VPD, sVPD, MSD, and HD of the 3D-reconstructed algorithm segmentations were 87.2 [84.1, 88.8]%, 89.0 [86.3, 92.4]%, 86.6 [78.5, 90.8]%, 10.3 [4.5, 18.4]%, 2.0 [-4.5, 18.4]%, 1.6 [1.2, 2.0] mm, and 6.0 [5.3, 8.0] mm, respectively. Segmentation time for a set of 12 2D radial images was 2.46 [2.44, 2.48] s. With and without U-Net starting points, the intra-physician median DSCs were 97.0 [96.3, 97.8]%, and 94.4 [92.5, 95.4]% (p < 0.0001), respectively, while the inter-physician median DSCs were 94.8 [93.3, 96.8]% and 90.2 [88.7, 92.1]%, respectively (p < 0.0001). The median segmentation time for physicians, with and without U-Net-generated CTV boundaries, were 257.5 [211.8, 300.0] s and 288.0 [232.0, 333.5] s, respectively (p = 0.1034). CONCLUSIONS Our algorithm performed at a level similar to physicians in a fraction of the time. The use of algorithm-generated boundaries as a starting point and allowing modifications reduced physician variability, although it did not significantly reduce the time compared to manual segmentations.
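The 2D U-Net here is trained with a Dice Similarity Coefficient loss. A minimal soft-Dice implementation in PyTorch is sketched below; the smoothing constant and tensor layout are assumed conventions, not taken from the paper.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred:   sigmoid probabilities, shape (N, 1, H, W)
    target: binary ground-truth masks, same shape
    """
    intersection = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()  # minimize 1 - DSC
```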
Affiliation(s)
- Prakash Hampole
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Thomas Harding
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Derek Gillies
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Lucas C Mendez
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- David D'Souza
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Vikram Velker
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Rohann Correa
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Joelle Helou
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Shuwei Xing
- Robarts Research Institute, Western University, London, ON, Canada
- School of Biomedical Engineering, Western University, London, ON, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Department of Medical Imaging, Western University, London, ON, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Western University, London, ON, Canada
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
4. Maramraju S, Kowalczewski A, Kaza A, Liu X, Singaraju JP, Albert MV, Ma Z, Yang H. AI-organoid integrated systems for biomedical studies and applications. Bioeng Transl Med 2024;9:e10641. PMID: 38435826. PMCID: PMC10905559. DOI: 10.1002/btm2.10641.
Abstract
In this review, we explore the growing role of artificial intelligence (AI) in advancing the biomedical applications of human pluripotent stem cell (hPSC)-derived organoids. These stem cell-derived miniature organ replicas have become essential tools for disease modeling, drug discovery, and regenerative medicine. However, analyzing the vast and intricate datasets generated from these organoids can be inefficient and error-prone. AI techniques offer a promising solution for efficiently extracting insights and making predictions from diverse data types, including microscopy images, transcriptomics, metabolomics, and proteomics. This review offers a brief overview of organoid characterization and fundamental concepts in AI while focusing on a comprehensive exploration of AI applications in organoid-based disease modeling and drug evaluation. It provides insights into the future possibilities of AI in enhancing the quality control of organoid fabrication, label-free organoid recognition, and three-dimensional image reconstruction of complex organoid structures. Finally, it presents the challenges and potential solutions in AI-organoid integration, focusing on the establishment of reliable AI model decision-making processes and the standardization of organoid research.
Affiliation(s)
- Sudhiksha Maramraju
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Andrew Kowalczewski
- Department of Biomedical & Chemical Engineering, Syracuse University, Syracuse, New York, USA
- BioInspired Institute for Material and Living Systems, Syracuse University, Syracuse, New York, USA
- Anirudh Kaza
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Xiyuan Liu
- Department of Mechanical & Aerospace Engineering, Syracuse University, Syracuse, New York, USA
- Jathin Pranav Singaraju
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Mark V. Albert
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Department of Computer Science and Engineering, University of North Texas, Denton, Texas, USA
- Zhen Ma
- Department of Biomedical & Chemical Engineering, Syracuse University, Syracuse, New York, USA
- BioInspired Institute for Material and Living Systems, Syracuse University, Syracuse, New York, USA
- Huaxiao Yang
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
5. Jahangir R, Kamali-Asl A, Arabi H, Zaidi H. Strategies for deep learning-based attenuation and scatter correction of brain 18F-FDG PET images in the image domain. Med Phys 2024;51:870-880. PMID: 38197492. DOI: 10.1002/mp.16914.
Abstract
BACKGROUND Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE In this study, different input settings were considered in model training to investigate deep learning-based AC in the image space. METHODS Three deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (segmentation-based AC, SegAC-PET), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict the ground-truth PET images attenuation-corrected with computed tomography (CTAC-PET). Since a simple two-class AC map can easily be generated from NonAC-PET images, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model for the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that used only NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.
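The Double-Channel setting amounts to stacking the two PET volumes as input channels of a single regression network. A minimal sketch of that input construction is shown below; the array shapes and function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def make_double_channel_input(nonac_pet, segac_pet):
    """Stack NonAC-PET and SegAC-PET slices channel-wise.

    nonac_pet, segac_pet: float arrays of shape (H, W) in SUV units.
    Returns an (H, W, 2) array that a CNN can regress against the
    CT-based attenuation-corrected target (CTAC-PET).
    """
    return np.stack([nonac_pet, segac_pet], axis=-1)
```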
Affiliation(s)
- Reza Jahangir
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
6. Gusinu G, Frau C, Trunfio GA, Solla P, Sechi LA. Segmentation of Substantia Nigra in Brain Parenchyma Sonographic Images Using Deep Learning. J Imaging 2023;10:1. PMID: 38276318. PMCID: PMC11154334. DOI: 10.3390/jimaging10010001.
Abstract
Currently, Parkinson's Disease (PD) is diagnosed primarily on the basis of symptoms by expert clinicians. Neuroimaging exams represent an important tool to confirm the clinical diagnosis. Among them, Brain Parenchyma Sonography (BPS) is used to evaluate the hyperechogenicity of the Substantia Nigra (SN), found in more than 90% of PD patients. In this article, we exploit a new dataset of BPS images to investigate an automatic segmentation approach for the SN that can increase the accuracy of the exam and its practicability in clinical routine. This study achieves state-of-the-art performance in SN segmentation of BPS images: the modified U-Net network scores a Dice coefficient of 0.859 ± 0.037. The results demonstrate the feasibility and usefulness of automatic SN segmentation in BPS medical images, to the point that this study can be considered the first stage of the development of an end-to-end CAD (Computer Aided Detection) system. Furthermore, the dataset used, which will be further enriched in the future, has proven very effective in supporting the training of CNNs and may pave the way for future studies in the field of CAD applied to PD.
Affiliation(s)
- Giansalvo Gusinu
- Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy
- Claudia Frau
- Department of Medicine, Surgery and Pharmacy, University of Sassari, Viale San Pietro 8, 07100 Sassari, Italy
- Giuseppe A. Trunfio
- Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy
- Paolo Solla
- Department of Medicine, Surgery and Pharmacy, University of Sassari, Viale San Pietro 8, 07100 Sassari, Italy
- Leonardo Antonio Sechi
- Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy
7. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023;23:9872. PMID: 38139718. PMCID: PMC10748263. DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance capabilities in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented, mixed, and virtual reality (AR/MR/VR) are enhancing performance in IGS. Finally, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
8. Singh A, Paruthy SB, Belsariya V, Chandra JN, Singh SK, Manivasagam SS, Choudhary S, Kumar MA, Khera D, Kuraria V. Revolutionizing Breast Healthcare: Harnessing the Role of Artificial Intelligence. Cureus 2023;15:e50203. PMID: 38192969. PMCID: PMC10772314. DOI: 10.7759/cureus.50203.
Abstract
Breast cancer has the highest incidence and second-highest mortality rate among all cancers. The management of breast cancer is being revolutionized by artificial intelligence (AI), which is improving early detection, pathological diagnosis, risk assessment, individualized treatment recommendations, and treatment response prediction. Nuclear medicine has used artificial intelligence (AI) for over 50 years, but more recent advances in machine learning (ML) and deep learning (DL) have given AI in nuclear medicine additional capabilities. AI accurately analyzes breast imaging scans for early detection, minimizing false negatives while offering radiologists reliable, swift image processing assistance. It smoothly fits into radiology workflows, which may result in early treatments and reduced expenditures. In pathological diagnosis, artificial intelligence improves the quality of diagnostic data by ensuring accurate diagnoses, lowering inter-observer variability, speeding up the review process, and identifying errors or poor slides. By taking into consideration nutritional, genetic, and environmental factors, providing individualized risk assessments, and recommending more regular tests for higher-risk patients, AI aids with the risk assessment of breast cancer. The integration of clinical and genetic data into individualized treatment recommendations by AI facilitates collaborative decision-making and resource allocation optimization while also enabling patient progress monitoring, drug interaction consideration, and alignment with clinical guidelines. AI is used to analyze patient data, imaging, genomic data, and pathology reports in order to forecast how a treatment would respond. These models anticipate treatment outcomes, make sure that clinical recommendations are followed, and learn from historical data. The implementation of AI in medicine is hampered by issues with data quality, integration with healthcare IT systems, data protection, bias reduction, and ethical considerations, necessitating transparency and constant surveillance. Protecting patient privacy, resolving biases, maintaining transparency, identifying fault for mistakes, and ensuring fair access are just a few examples of ethical considerations. To preserve patient trust and address the effect on the healthcare workforce, ethical frameworks must be developed. The amazing potential of AI in the treatment of breast cancer calls for careful examination of its ethical and practical implications. We aim to review the comprehensive role of artificial intelligence in breast cancer management.
Affiliation(s)
- Arun Singh
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Shivani B Paruthy
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Vivek Belsariya
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Nemi Chandra J
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Sunil Kumar Singh
- Surgical Oncology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Sushila Choudhary
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- M Anil Kumar
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Dhananjay Khera
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Vaibhav Kuraria
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
9. Babaeipour R, Ouriadov A, Fox MS. Deep Learning Approaches for Quantifying Ventilation Defects in Hyperpolarized Gas Magnetic Resonance Imaging of the Lung: A Review. Bioengineering (Basel) 2023;10:1349. PMID: 38135940. PMCID: PMC10740978. DOI: 10.3390/bioengineering10121349.
Abstract
This paper provides an in-depth overview of deep neural networks and their application in the segmentation and analysis of lung Magnetic Resonance Imaging (MRI) scans, focusing specifically on hyperpolarized gas MRI and the quantification of lung ventilation defects. A grounding in deep neural networks is presented first, laying the groundwork for the exploration of their use in this setting. Five distinct studies are then examined, each leveraging unique deep learning architectures and data augmentation techniques to optimize model performance; these encompass a range of approaches, including 3D convolutional neural networks, cascaded U-Net models, generative adversarial networks, and nnU-Net for hyperpolarized gas MRI segmentation. The findings highlight the potential of deep learning methods in the segmentation and analysis of lung MRI scans and emphasize the need for consensus on lung ventilation segmentation methods.
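Ventilation defect quantification in this literature typically reduces to thresholding the ventilation signal inside a lung mask. The sketch below computes a simple ventilation defect percentage (VDP); the 60%-of-mean threshold is one commonly cited convention, assumed here for illustration rather than taken from this review.

```python
import numpy as np

def ventilation_defect_percentage(ventilation, lung_mask, threshold=0.6):
    """VDP: percentage of lung voxels whose ventilation signal falls
    below a fraction of the mean whole-lung signal."""
    lung_signal = ventilation[lung_mask > 0]
    defect = lung_signal < threshold * lung_signal.mean()
    return 100.0 * defect.sum() / lung_signal.size
```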
Affiliation(s)
- Ramtin Babaeipour
- School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada
- Alexei Ouriadov
- School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada
- Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada
- Lawson Health Research Institute, London, ON N6C 2R5, Canada
- Matthew S. Fox
- Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada
- Lawson Health Research Institute, London, ON N6C 2R5, Canada
10. Osuala R, Skorupko G, Lazrak N, Garrucho L, García E, Joshi S, Jouide S, Rutherford M, Prior F, Kushibar K, Díaz O, Lekadir K. medigan: a Python library of pretrained generative models for medical image synthesis. J Med Imaging (Bellingham) 2023;10:061403. PMID: 36814939. PMCID: PMC9940031. DOI: 10.1117/1.JMI.10.6.061403.
Abstract
Purpose Deep learning has shown great promise as the backbone of clinical decision support systems. Synthetic data generated by generative models can enhance the performance and capabilities of data-hungry deep learning models. However, (1) the availability of (synthetic) datasets is limited and (2) generative models are complex to train, which hinders their adoption in research and clinical applications. To reduce this entry barrier, we explore generative model sharing to allow more researchers to access, generate, and benefit from synthetic data. Approach We propose medigan, a one-stop shop for pretrained generative models, implemented as an open-source, framework-agnostic Python library. After gathering end-user requirements, design decisions based on usability, technical feasibility, and scalability were formulated. Subsequently, we implemented medigan based on modular components for generative model (i) execution, (ii) visualization, (iii) search & ranking, and (iv) contribution. We integrated pretrained models with applications across modalities such as mammography, endoscopy, x-ray, and MRI. Results The scalability and design of the library are demonstrated by its growing number of integrated and readily usable pretrained generative models, which include 21 models utilizing nine different generative adversarial network architectures trained on 11 different datasets. We further analyze three medigan applications, which include (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), we extract Fréchet inception distances (FID), demonstrating FID variability based on image normalization and radiology-specific feature extractors. Conclusion medigan allows researchers and developers to create, increase, and domain-adapt their training data in just a few lines of code. Capable of enriching and accelerating the development of clinical machine learning models, medigan demonstrates its viability as a platform for generative model sharing. Our multi-model synthetic data experiments uncover standards for assessing and reporting metrics, such as FID, in image synthesis studies.
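As a usage illustration, the abstract's "few lines of code" claim maps onto a generate call against a model ID. The sketch below follows the library's documented interface as we understand it; the specific model ID and parameter names should be treated as assumptions rather than verified API details.

```python
from medigan import Generators

generators = Generators()

# Inspect the pretrained models integrated into the library.
print(generators.list_models())

# Sample synthetic mammography ROIs from one of them.
generators.generate(
    model_id="00001_DCGAN_MMG_CALC_ROI",  # assumed example model ID
    num_samples=8,
    output_path="synthetic_rois/",
)
```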
Affiliation(s)
- Richard Osuala
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Grzegorz Skorupko
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Noussair Lazrak
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Lidia Garrucho
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Eloy García
- Universitat de Barcelona, Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Smriti Joshi
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Socayna Jouide
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Michael Rutherford
- University of Arkansas for Medical Sciences, Department of Biomedical Informatics, Little Rock, Arkansas, United States
- Fred Prior
- University of Arkansas for Medical Sciences, Department of Biomedical Informatics, Little Rock, Arkansas, United States
- Kaisar Kushibar
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Oliver Díaz
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Karim Lekadir
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
11. Zhou Z, Qian X, Hu J, Chen G, Zhang C, Zhu J, Dai Y. An artificial intelligence-assisted diagnosis modeling software (AIMS) platform based on medical images and machine learning: a development and validation study. Quant Imaging Med Surg 2023;13:7504-7522. PMID: 37969634. PMCID: PMC10644131. DOI: 10.21037/qims-23-20.
Abstract
Background Supervised machine learning methods [both radiomics and convolutional neural network (CNN)-based deep learning] are usually employed to develop artificial intelligence models with medical images for computer-assisted diagnosis and prognosis of diseases. A classical machine learning-based modeling workflow involves a series of interconnected components and various algorithms, but this makes it challenging, tedious, and labor intensive for radiologists and researchers to build customized models for specific clinical applications if they lack expertise in machine learning methods. Methods We developed a user-friendly artificial intelligence-assisted diagnosis modeling software (AIMS) platform, which supplies standardized machine learning-based modeling workflows for computer-assisted diagnosis and prognosis systems with medical images. In contrast to other existing software platforms, AIMS contains both radiomics and CNN-based deep learning workflows, making it an all-in-one software platform for machine learning-based medical image analysis. The modular design of AIMS allows users to build machine learning models easily, test models comprehensively, and fairly compare the performance of different models in a specific application. The graphical user interface (GUI) enables users to process large numbers of medical images without programming or script addition. Furthermore, AIMS also provides a flexible image processing toolkit (e.g., semiautomatic segmentation, registration, morphological operations) to rapidly create lesion labels for multiphase analysis, multiregion analysis of an individual tumor (e.g., tumor mass and peritumor), and multimodality analysis. Results The functionality and efficiency of AIMS were demonstrated in 3 independent experiments in radiation oncology, where multiphase, multiregion, and multimodality analyses were performed, respectively. For clear cell renal cell carcinoma (ccRCC) Fuhrman grading with multiphase analysis (sample size = 187), the area under the curve (AUC) value of the AIMS was 0.776; for ccRCC Fuhrman grading with multiregion analysis (sample size = 177), the AUC value of the AIMS was 0.848; for prostate cancer Gleason grading with multimodality analysis (sample size = 206), the AUC value of the AIMS was 0.980. Conclusions AIMS provides a user-friendly infrastructure for radiologists and researchers, lowering the barrier to building customized machine learning-based computer-assisted diagnosis models for medical image analysis.
Affiliation(s)
- Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xusheng Qian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Jisu Hu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Guangqiang Chen
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Caiyuan Zhang
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Jianbing Zhu
- Suzhou Science & Technology Town Hospital, Suzhou Hospital, Affiliated Hospital of Medical School, Nanjing University, Suzhou, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Suzhou Guoke Kangcheng Medical Technology Co., Ltd., Suzhou, China
12. Astley JR, Biancardi AM, Hughes PJC, Marshall H, Collier GJ, Chan H, Saunders LC, Smith LJ, Brook ML, Thompson R, Rowland-Jones S, Skeoch S, Bianchi SM, Hatton MQ, Rahman NM, Ho L, Brightling CE, Wain LV, Singapuri A, Evans RA, Moss AJ, McCann GP, Neubauer S, Raman B, Wild JM, Tahir BA. Implementable Deep Learning for Multi-sequence Proton MRI Lung Segmentation: A Multi-center, Multi-vendor, and Multi-disease Study. J Magn Reson Imaging 2023;58:1030-1044. PMID: 36799341. PMCID: PMC10946727. DOI: 10.1002/jmri.28643.
Abstract
BACKGROUND Recently, deep learning via convolutional neural networks (CNNs) has largely superseded conventional methods for proton (1H) MRI lung segmentation. However, previous deep learning studies have utilized single-center data and limited acquisition parameters. PURPOSE To develop a generalizable CNN for lung segmentation in 1H-MRI, robust to pathology, acquisition protocol, vendor, and center. STUDY TYPE Retrospective. POPULATION A total of 809 1H-MRI scans from 258 participants with various pulmonary pathologies (median age (range): 57 (6-85); 42% females) and 31 healthy participants (median age (range): 34 (23-76); 34% females), split into training (593 scans (74%); 157 participants (55%)), testing (50 scans (6%); 50 participants (17%)), and external validation (164 scans (20%); 82 participants (28%)) sets. FIELD STRENGTH/SEQUENCE 1.5-T and 3-T/3D spoiled-gradient recalled and ultrashort echo-time 1H-MRI. ASSESSMENT 2D and 3D CNNs, trained on single-center, multi-sequence data, and the conventional spatial fuzzy c-means (SFCM) method were compared to manually delineated expert segmentations. Each method was validated on external data originating from several centers. Dice similarity coefficient (DSC), average boundary Hausdorff distance (Average HD), and relative error (XOR) metrics were used to assess segmentation performance. STATISTICAL TESTS Kruskal-Wallis tests assessed the significance of differences between acquisitions in the testing set. Friedman tests with post hoc multiple comparisons assessed differences between the 2D CNN, 3D CNN, and SFCM. Bland-Altman analyses assessed agreement with manually derived lung volumes. A P value of <0.05 was considered statistically significant. RESULTS The 3D CNN significantly outperformed its 2D analog and SFCM, yielding a median (range) DSC of 0.961 (0.880-0.987), Average HD of 1.63 mm (0.65-5.45), and XOR of 0.079 (0.025-0.240) on the testing set, and a DSC of 0.973 (0.866-0.987), Average HD of 1.11 mm (0.47-8.13), and XOR of 0.054 (0.026-0.255) on external validation data. DATA CONCLUSION The 3D CNN generated accurate 1H-MRI lung segmentations on a heterogeneous dataset, demonstrating robustness to disease pathology, sequence, vendor, and center. EVIDENCE LEVEL 4. TECHNICAL EFFICACY Stage 1.
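The three reported metrics are straightforward to compute on binary masks. The sketch below is one plausible implementation; in particular, reading the XOR metric as the symmetric difference normalized by the reference volume is an assumption on our part.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(pred, ref):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

def xor_error(pred, ref):
    """Relative error: symmetric difference over the reference volume."""
    return np.logical_xor(pred, ref).sum() / ref.sum()

def hausdorff(boundary_pred, boundary_ref):
    """Symmetric Hausdorff distance between (n_points, 3) boundary
    coordinate arrays, in voxel units."""
    return max(directed_hausdorff(boundary_pred, boundary_ref)[0],
               directed_hausdorff(boundary_ref, boundary_pred)[0])
```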
Affiliation(s)
- Joshua R. Astley
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- Alberto M. Biancardi
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Paul J. C. Hughes
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Helen Marshall
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Guilhem J. Collier
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Ho-Fung Chan
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Laura C. Saunders
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Laurie J. Smith
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Martin L. Brook
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Roger Thompson
- Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Sarah Skeoch
- Royal National Hospital for Rheumatic Diseases, Royal United Hospital NHS Foundation Trust, Bath, UK
- Arthritis Research UK Centre for Epidemiology, Division of Musculoskeletal and Dermatological Sciences, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Sciences Centre, Manchester, UK
- Najib M. Rahman
- Division of Cardiovascular Medicine, Radcliffe Department of Medicine, National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC), University of Oxford, Oxford, UK
- Ling-Pei Ho
- MRC Human Immunology Unit, University of Oxford, Oxford, UK
- Chris E. Brightling
- The Institute for Lung Health, NIHR Leicester Biomedical Research Centre, University of Leicester, Leicester, UK
- Louise V. Wain
- The Institute for Lung Health, NIHR Leicester Biomedical Research Centre, University of Leicester, Leicester, UK
- Department of Health Sciences, University of Leicester, Leicester, UK
- Amisha Singapuri
- The Institute for Lung Health, NIHR Leicester Biomedical Research Centre, University of Leicester, Leicester, UK
- Rachael A. Evans
- University Hospitals of Leicester NHS Trust, University of Leicester, Leicester, UK
- Alastair J. Moss
- The Institute for Lung Health, NIHR Leicester Biomedical Research Centre, University of Leicester, Leicester, UK
- Department of Cardiovascular Sciences, University of Leicester, Leicester, UK
- Gerry P. McCann
- The Institute for Lung Health, NIHR Leicester Biomedical Research Centre, University of Leicester, Leicester, UK
- Department of Cardiovascular Sciences, University of Leicester, Leicester, UK
- Stefan Neubauer
- Division of Cardiovascular Medicine, Radcliffe Department of Medicine, National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC), University of Oxford, Oxford, UK
- Betty Raman
- Division of Cardiovascular Medicine, Radcliffe Department of Medicine, National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC), University of Oxford, Oxford, UK
- Jim M. Wild
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Insigneo Institute for In Silico Medicine, The University of Sheffield, Sheffield, UK
- Bilal A. Tahir
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- Insigneo Institute for In Silico Medicine, The University of Sheffield, Sheffield, UK
13. Constant C, Aubin CE, Kremers HM, Garcia DVV, Wyles CC, Rouzrokh P, Larson AN. The use of deep learning in medical imaging to improve spine care: A scoping review of current literature and clinical applications. N Am Spine Soc J 2023;15:100236. PMID: 37599816. PMCID: PMC10432249. DOI: 10.1016/j.xnsj.2023.100236.
Abstract
Background Artificial intelligence is a revolutionary technology that promises to assist clinicians in improving patient care. In radiology, deep learning (DL) is widely used in clinical decision aids due to its ability to analyze complex patterns and images. It allows for rapid, enhanced data and imaging analysis, from diagnosis to outcome prediction. The purpose of this study was to evaluate the current literature and clinical utilization of DL in spine imaging. Methods This study is a scoping review that utilized the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to review the scientific literature from 2012 to 2021. A search in the PubMed, Web of Science, Embase, and IEEE Xplore databases with syntax specific to DL and medical imaging in spine care applications was conducted to collect all original publications on the subject. Specific data were extracted from the available literature, including algorithm application, algorithms tested, database type and size, algorithm training method, and outcome of interest. Results A total of 365 studies (total sample of 232,394 patients) were included and grouped into 4 general applications: diagnostic tools, clinical decision support tools, automated clinical/instrumentation assessment, and clinical outcome prediction. Notable disparities exist in the selected algorithms and the training across multiple disparate databases. The most frequently used algorithms were U-Net and ResNet. A DL model was developed and validated in 92% of included studies, while a pre-existing DL model was investigated in 8%. Of all developed models, only 15% have been externally validated. Conclusions Based on this scoping review, DL in spine imaging is used in a broad range of clinical applications, particularly for diagnosing spinal conditions. There is a wide variety of DL algorithms, database characteristics, and training methods. Future studies should focus on external validation of existing models before bringing them into clinical use.
Affiliation(s)
- Caroline Constant
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- AO Research Institute Davos, Clavadelerstrasse 8, CH 7270, Davos, Switzerland
- Carl-Eric Aubin
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- Hilal Maradit Kremers
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Diana V. Vera Garcia
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Cody C. Wyles
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Pouria Rouzrokh
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Radiology Informatics Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Annalise Noelle Larson
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
14. Cui Y, Arimura H, Yoshitake T, Shioyama Y, Yabuuchi H. Deep learning model fusion improves lung tumor segmentation accuracy across variable training-to-test dataset ratios. Phys Eng Sci Med 2023;46:1271-1285. PMID: 37548886. DOI: 10.1007/s13246-023-01295-8.
Abstract
This study aimed to investigate the robustness of a deep learning (DL) fusion model for low training-to-test ratio (TTR) datasets in the segmentation of gross tumor volumes (GTVs) in three-dimensional planning computed tomography (CT) images for lung cancer stereotactic body radiotherapy (SBRT). A total of 192 patients with lung cancer (solid tumor, 118; part-solid tumor, 53; ground-glass opacity, 21) who underwent SBRT were included in this study. Regions of interest around the GTVs were cropped from planning CT images based on GTV centroids. Three DL models, 3D U-Net, V-Net, and dense V-Net, were trained to segment the GTV regions. Nine fusion models were constructed from logical AND, logical OR, and voting of the two or three outputs of the three DL models. TTR was defined as the ratio of the number of cases in a training dataset to that in a test dataset. The Dice similarity coefficients (DSCs) and Hausdorff distances (HDs) of the 12 models were assessed at TTRs of 1.00 (training data:validation data:test data = 40:20:40), 0.791 (35:20:45), 0.531 (31:10:59), 0.291 (20:10:70), and 0.116 (10:5:85). The voting fusion model achieved the highest DSCs, 0.829 to 0.798 across all TTRs, among the 12 models, together with an HD of 5.40 ± 3.00 to 6.07 ± 3.26 mm, better than any single DL model, whereas the other models showed DSCs of 0.818 to 0.804 at a TTR of 1.00 and 0.788 to 0.742 at a TTR of 0.116. The findings suggest that the proposed voting fusion model is a robust approach for low TTR datasets in segmenting GTVs in planning CT images for lung cancer SBRT.
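The AND, OR, and voting fusions of binary masks reduce to simple voxel-wise counting. Below is a minimal sketch of the three fusion rules for three model outputs; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def fuse_masks(m1, m2, m3, mode="vote"):
    """Fuse three boolean GTV masks voxel-wise."""
    votes = np.stack([m1, m2, m3]).astype(int).sum(axis=0)
    if mode == "and":
        return votes == 3   # logical AND: all three models agree
    if mode == "or":
        return votes >= 1   # logical OR: any model segments the voxel
    return votes >= 2       # majority vote of the three models
```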
Affiliation(s)
- Yunhao Cui
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Tadamasa Yoshitake
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshiyuki Shioyama
- Saga International Heavy Ion Cancer Treatment Foundation, 3049 Harakogamachi, Tosu-shi, 841-0071, Saga, Japan
- Hidetake Yabuuchi
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
15. Astley JR, Biancardi AM, Marshall H, Hughes PJC, Collier GJ, Hatton MQ, Wild JM, Tahir BA. A hybrid model- and deep learning-based framework for functional lung image synthesis from multi-inflation CT and hyperpolarized gas MRI. Med Phys 2023;50:5657-5670. PMID: 36932692. PMCID: PMC10946819. DOI: 10.1002/mp.16369.
Abstract
BACKGROUND Hyperpolarized gas MRI is a functional lung imaging modality capable of visualizing regional lung ventilation with exceptional detail within a single breath. However, this modality requires specialized equipment and exogenous contrast, which limits widespread clinical adoption. CT ventilation imaging employs various metrics to model regional ventilation from non-contrast CT scans acquired at multiple inflation levels and has demonstrated moderate spatial correlation with hyperpolarized gas MRI. Recently, deep learning (DL)-based methods, utilizing convolutional neural networks (CNNs), have been leveraged for image synthesis applications. Hybrid approaches integrating computational modeling and data-driven methods have been utilized in cases where datasets are limited with the added benefit of maintaining physiological plausibility. PURPOSE To develop and evaluate a multi-channel DL-based method that combines modeling and data-driven approaches to synthesize hyperpolarized gas MRI lung ventilation scans from multi-inflation, non-contrast CT and quantitatively compare these synthetic ventilation scans to conventional CT ventilation modeling. METHODS In this study, we propose a hybrid DL configuration that integrates model- and data-driven methods to synthesize hyperpolarized gas MRI lung ventilation scans from a combination of non-contrast, multi-inflation CT and CT ventilation modeling. We used a diverse dataset comprising paired inspiratory and expiratory CT and helium-3 hyperpolarized gas MRI for 47 participants with a range of pulmonary pathologies. We performed six-fold cross-validation on the dataset and evaluated the spatial correlation between the synthetic ventilation and real hyperpolarized gas MRI scans; the proposed hybrid framework was compared to conventional CT ventilation modeling and other non-hybrid DL configurations. Synthetic ventilation scans were evaluated using voxel-wise evaluation metrics such as Spearman's correlation and mean square error (MSE), in addition to clinical biomarkers of lung function such as the ventilated lung percentage (VLP). Furthermore, regional localization of ventilated and defect lung regions was assessed via the Dice similarity coefficient (DSC). RESULTS We showed that the proposed hybrid framework is capable of accurately replicating ventilation defects seen in the real hyperpolarized gas MRI scans, achieving a voxel-wise Spearman's correlation of 0.57 ± 0.17 and an MSE of 0.017 ± 0.01. The hybrid framework significantly outperformed CT ventilation modeling alone and all other DL configurations using Spearman's correlation. The proposed framework was capable of generating clinically relevant metrics such as the VLP without manual intervention, resulting in a Bland-Altman bias of 3.04%, significantly outperforming CT ventilation modeling. Relative to CT ventilation modeling, the hybrid framework yielded significantly more accurate delineations of ventilated and defect lung regions, achieving a DSC of 0.95 and 0.48 for ventilated and defect regions, respectively. CONCLUSION The ability to generate realistic synthetic ventilation scans from CT has implications for several clinical applications, including functional lung avoidance radiotherapy and treatment response mapping. CT is an integral part of almost every clinical lung imaging workflow and hence is readily available for most patients; therefore, synthetic ventilation from non-contrast CT can provide patients with wider access to ventilation imaging worldwide.
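The voxel-wise agreement metrics reported here (Spearman's correlation and MSE within the lung) can be sketched as follows; the masking convention and function name are assumptions for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def voxelwise_agreement(synthetic, real, lung_mask):
    """Compare a synthetic ventilation scan against the real
    hyperpolarized gas MRI scan inside the lung mask."""
    s = synthetic[lung_mask > 0]
    r = real[lung_mask > 0]
    rho, _ = spearmanr(s, r)            # Spearman's correlation
    mse = float(np.mean((s - r) ** 2))  # mean square error
    return rho, mse
```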
Collapse
Affiliation(s)
- Joshua R Astley
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Alberto M Biancardi
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Helen Marshall
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Paul J C Hughes
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Guilhem J Collier
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Matthew Q Hatton
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
| | - Jim M Wild
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Insigneo Institute for In Silico Medicine, The University of Sheffield, Sheffield, UK
| | - Bilal A Tahir
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Insigneo Institute for In Silico Medicine, The University of Sheffield, Sheffield, UK
| |
Collapse
|
16
|
Boehringer AS, Sanaat A, Arabi H, Zaidi H. An active learning approach to train a deep learning algorithm for tumor segmentation from brain MR images. Insights Imaging 2023; 14:141. [PMID: 37620554 PMCID: PMC10449747 DOI: 10.1186/s13244-023-01487-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 07/22/2023] [Indexed: 08/26/2023] Open
Abstract
PURPOSE This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. METHODS The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentation were used as input for the model. The data were split into a training set of 1151 cases and a testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases, followed by two additional models using only 575 cases and 100 cases. The resulting predicted segmentations of these two additional models on the remaining training cases were then appended to the training dataset for additional training. RESULTS It was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while only requiring manual annotation for 28.6% of the data. CONCLUSION The active learning approach, when applied to model training, can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT Active learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS • This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. • The active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.
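The training procedure described above follows a classic active-learning pattern: train on a small annotated seed set, predict on the unannotated remainder, and fold the reviewed predictions back into the training pool. A schematic, runnable PyTorch skeleton of that loop, with a stand-in convolution in place of a real U-Net and random tensors in place of BraTS data:

    # Schematic active-learning loop; helpers and data are stand-ins, not the
    # authors' NiftyNet pipeline.
    import torch
    import torch.nn as nn

    model = nn.Conv3d(1, 2, 3, padding=1)          # stand-in for a real U-Net
    opt = torch.optim.Adam(model.parameters(), 1e-3)
    loss_fn = nn.CrossEntropyLoss()

    labeled = [(torch.randn(1, 1, 16, 16, 16),
                torch.randint(0, 2, (1, 16, 16, 16)))
               for _ in range(4)]                  # small manually annotated seed set
    pool = [torch.randn(1, 1, 16, 16, 16) for _ in range(8)]  # unannotated cases

    for al_round in range(2):                      # a couple of AL rounds
        for img, seg in labeled:                   # (re)train on current labels
            opt.zero_grad()
            loss_fn(model(img), seg).backward()
            opt.step()
        with torch.no_grad():                      # pseudo-label the pool; in the
            new = [(img, model(img).argmax(1)) for img in pool]  # study these are
        labeled += new                             # reviewed before reuse
        pool = []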
Collapse
Affiliation(s)
- Andrew S Boehringer
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland.
- Geneva University Neurocenter, University of Geneva, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
| |
Collapse
|
17
|
Seth I, Bulloch G, Joseph K, Hunter-Smith DJ, Rozen WM. Use of Artificial Intelligence in the Advancement of Breast Surgery and Implications for Breast Reconstruction: A Narrative Review. J Clin Med 2023; 12:5143. [PMID: 37568545 PMCID: PMC10419723 DOI: 10.3390/jcm12155143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 07/28/2023] [Accepted: 08/04/2023] [Indexed: 08/13/2023] Open
Abstract
BACKGROUND Breast reconstruction is a pivotal part of the recuperation process following a mastectomy and aims to restore both the physical aesthetic and emotional well-being of breast cancer survivors. In recent years, artificial intelligence (AI) has emerged as a revolutionary technology across numerous medical disciplines. This narrative review of the current literature explores the role of AI in the domain of breast reconstruction, outlining its potential to refine surgical procedures, enhance outcomes, and streamline decision making. METHODS A systematic search of the Medline (via PubMed), Cochrane Library, Web of Science, Google Scholar, Clinical Trials, and Embase databases from January 1901 to June 2023 was conducted. RESULTS By meticulously evaluating a selection of recent studies and engaging with inherent challenges and prospective trajectories, this review spotlights the promising role AI plays in advancing the techniques of breast reconstruction. However, issues concerning data quality, privacy, and ethical considerations pose hurdles to the seamless integration of AI in the medical field. CONCLUSION The future research agenda comprises dataset standardization, AI algorithm refinement, the implementation of prospective clinical trials, and the fostering of cross-disciplinary partnerships. The fusion of AI with other emergent technologies, such as augmented reality and 3D printing, could further propel progress in breast surgery.
Collapse
Affiliation(s)
- Ishith Seth
- Department of Plastic Surgery, Peninsula Health, Melbourne, VIC 3199, Australia
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
| | - Gabriella Bulloch
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
| | - Konrad Joseph
- Faculty of Medicine, The University of Wollongong, Wollongong, NSW 2500, Australia
| | | | - Warren Matthew Rozen
- Department of Plastic Surgery, Peninsula Health, Melbourne, VIC 3199, Australia
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
| |
Collapse
|
18
|
Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Med Image Anal 2023; 88:102802. [PMID: 37315483 DOI: 10.1016/j.media.2023.102802] [Citation(s) in RCA: 69] [Impact Index Per Article: 69.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/11/2023] [Accepted: 03/23/2023] [Indexed: 06/16/2023]
Abstract
Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey, we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges, provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
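The global context mentioned above comes from self-attention, in which every token (for example, an embedded image patch) attends to every other token. A minimal illustration of the operation:

    # Minimal self-attention sketch: the mechanism that gives Transformers a
    # global receptive field, in contrast to the local kernels of CNNs.
    import torch

    def self_attention(x, wq, wk, wv):
        q, k, v = x @ wq, x @ wk, x @ wv
        att = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return att @ v

    tokens = torch.randn(1, 196, 64)   # e.g., 14x14 image patches embedded to 64-d
    w = [torch.randn(64, 64) for _ in range(3)]
    out = self_attention(tokens, *w)   # every patch attends to every other patch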
Collapse
Affiliation(s)
- Fahad Shamshad
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates.
| | - Salman Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; CECS, Australian National University, Canberra ACT 0200, Australia
| | - Syed Waqas Zamir
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | | | - Munawar Hayat
- Faculty of IT, Monash University, Clayton VIC 3800, Australia
| | - Fahad Shahbaz Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Computer Vision Laboratory, Linköping University, Sweden
| | - Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| |
Collapse
|
19
|
Astley JR, Biancardi AM, Marshall H, Smith LJ, Hughes PJC, Collier GJ, Saunders LC, Norquay G, Tofan MM, Hatton MQ, Hughes R, Wild JM, Tahir BA. PhysVENeT: a physiologically-informed deep learning-based framework for the synthesis of 3D hyperpolarized gas MRI ventilation. Sci Rep 2023; 13:11273. [PMID: 37438406 DOI: 10.1038/s41598-023-38105-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 07/03/2023] [Indexed: 07/14/2023] Open
Abstract
Functional lung imaging modalities such as hyperpolarized gas MRI ventilation enable visualization and quantification of regional lung ventilation; however, these techniques require specialized equipment and exogenous contrast, limiting clinical adoption. Physiologically-informed techniques to map proton (1H)-MRI ventilation have been proposed. These approaches have demonstrated moderate correlation with hyperpolarized gas MRI. Recently, deep learning (DL) has been used for image synthesis applications, including functional lung image synthesis. Here, we propose a 3D multi-channel convolutional neural network that employs physiologically-informed ventilation mapping and multi-inflation structural 1H-MRI to synthesize 3D ventilation surrogates (PhysVENeT). The dataset comprised paired inspiratory and expiratory 1H-MRI scans and corresponding hyperpolarized gas MRI scans from 170 participants with various pulmonary pathologies. We performed fivefold cross-validation on 150 of these participants and used 20 participants with a previously unseen pathology (post COVID-19) for external validation. Synthetic ventilation surrogates were evaluated using voxel-wise correlation and structural similarity metrics; the proposed PhysVENeT framework significantly outperformed conventional 1H-MRI ventilation mapping and other DL approaches which did not utilize structural imaging and ventilation mapping. PhysVENeT can accurately reflect ventilation defects and exhibits minimal overfitting on external validation data compared to DL approaches that do not integrate physiologically-informed mapping.
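As a concrete illustration of the evaluation described above, the following sketch scores a synthetic ventilation volume against an acquired one with a voxel-wise Spearman correlation and a structural similarity metric; SciPy and scikit-image are assumptions here, not necessarily the authors' tooling:

    # Hedged sketch of voxel-wise and structural evaluation of a synthetic
    # ventilation volume against a real scan (random arrays as stand-ins).
    import numpy as np
    from scipy.stats import spearmanr
    from skimage.metrics import structural_similarity

    real = np.random.rand(32, 32, 32).astype(np.float32)
    synth = np.random.rand(32, 32, 32).astype(np.float32)

    rho, _ = spearmanr(real.ravel(), synth.ravel())   # voxel-wise correlation
    ssim = structural_similarity(real, synth, data_range=1.0)
    print(f"Spearman rho={rho:.3f}, SSIM={ssim:.3f}")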
Collapse
Affiliation(s)
- Joshua R Astley
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Alberto M Biancardi
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Helen Marshall
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Laurie J Smith
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Paul J C Hughes
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Guilhem J Collier
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Laura C Saunders
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Graham Norquay
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Malina-Maria Tofan
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
| | - Matthew Q Hatton
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
| | - Rod Hughes
- Early Development Respiratory Medicine, AstraZeneca, Cambridge, UK
| | - Jim M Wild
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Insigneo Institute for in Silico Medicine, The University of Sheffield, Sheffield, UK
| | - Bilal A Tahir
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK.
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK.
- Insigneo Institute for in Silico Medicine, The University of Sheffield, Sheffield, UK.
| |
Collapse
|
20
|
Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023; 107:102241. [PMID: 37201475 DOI: 10.1016/j.compmedimag.2023.102241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 05/03/2023] [Accepted: 05/03/2023] [Indexed: 05/20/2023]
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer. This is because individual patient biology is unique, and employing a single approach for all is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has significantly increased in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only reduce workload but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity of medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
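The defining feature shared by the U-Net and its variants benchmarked here is the skip connection that concatenates encoder features into the decoder. A toy 2D PyTorch version of that structure (the paper evaluates full 3D models):

    # Toy U-Net sketch: encoder, bottleneck, and a skip connection feeding
    # encoder features into the decoder via channel concatenation.
    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Conv2d(1, 8, 3, padding=1)
            self.down = nn.MaxPool2d(2)
            self.mid = nn.Conv2d(8, 8, 3, padding=1)
            self.up = nn.Upsample(scale_factor=2)
            self.dec = nn.Conv2d(16, 1, 3, padding=1)  # 16 = 8 skip + 8 upsampled

        def forward(self, x):
            e = torch.relu(self.enc(x))
            m = torch.relu(self.mid(self.down(e)))
            u = self.up(m)
            return self.dec(torch.cat([e, u], dim=1))  # skip connection

    print(TinyUNet()(torch.randn(1, 1, 32, 32)).shape)  # -> (1, 1, 32, 32)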
Collapse
Affiliation(s)
- Shrajan Bhandary
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria.
| | - Dejan Kuhn
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
| | - Zahra Babaiee
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
| | - Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
| | - Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
| | - Constantinos Zamboglou
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
| | - Anca-Ligia Grosu
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
| | - Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
| |
Collapse
|
21
|
Pitarch C, Ribas V, Vellido A. AI-Based Glioma Grading for a Trustworthy Diagnosis: An Analytical Pipeline for Improved Reliability. Cancers (Basel) 2023; 15:3369. [PMID: 37444479 DOI: 10.3390/cancers15133369] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 06/22/2023] [Accepted: 06/25/2023] [Indexed: 07/15/2023] Open
Abstract
Glioma is the most common type of tumor originating in the human brain. According to the World Health Organization, gliomas can be graded on a four-stage scale, ranging from the most benign to the most malignant. The grading of these tumors from image information is a far from trivial task for radiologists and one in which they could be assisted by machine-learning-based decision support. However, the machine learning analytical pipeline is also fraught with perils stemming from different sources, such as inadvertent data leakage, adequacy of 2D image sampling, or classifier assessment biases. In this paper, we analyze a glioma database sourced from multiple datasets using a simple classifier, aiming to obtain a reliable tumor grading, and, on the way, we provide a few guidelines to ensure such reliability. Our results reveal that by focusing on the tumor region of interest and using data augmentation techniques, we significantly enhanced the accuracy and confidence in tumor classifications. Evaluation on an independent test set resulted in an AUC-ROC of 0.932 in the discrimination of low-grade gliomas from high-grade gliomas, and an AUC-ROC of 0.893 in the classification of grades 2, 3, and 4. The study also highlights the importance of providing, beyond generic classification performance, measures of how reliable and trustworthy the model's output is, thus assessing the model's certainty and robustness.
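Two of the pipeline perils named above, data leakage from 2D slice sampling and biased classifier assessment, are commonly addressed by splitting at the patient level and reporting AUC-ROC. An illustrative scikit-learn sketch with synthetic data:

    # Sketch of a leakage-safe split and AUC-ROC reporting; features, labels,
    # and scores are synthetic stand-ins.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))       # features of 200 slices
    y = rng.integers(0, 2, 200)          # low- vs high-grade labels
    patient = rng.integers(0, 40, 200)   # 40 patients, several slices each

    split = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train, test = next(split.split(X, y, groups=patient))  # no patient straddles sets

    scores = rng.random(len(test))       # stand-in for classifier probabilities
    print("AUC-ROC:", roc_auc_score(y[test], scores))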
Collapse
Affiliation(s)
- Carla Pitarch
- Computer Science Department, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain
- Eurecat, Technology Centre of Catalonia, Digital Health Unit, 08005 Barcelona, Spain
| | - Vicent Ribas
- Computer Science Department, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain
| | - Alfredo Vellido
- Computer Science Department, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Intelligent Data Science and Artificial Intelligence Research Center (IDEAI-UPC), 08034 Barcelona, Spain
| |
Collapse
|
22
|
Yuan D, Xu Z, Tian B, Wang H, Zhan Y, Lukasiewicz T. μ-Net: Medical image segmentation using efficient and effective deep supervision. Comput Biol Med 2023; 160:106963. [PMID: 37150087 DOI: 10.1016/j.compbiomed.2023.106963] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/30/2023] [Accepted: 04/18/2023] [Indexed: 05/09/2023]
Abstract
Although existing deep supervised solutions have achieved great successes in medical image segmentation, they have the following shortcomings: (i) the semantic difference problem: since they are obtained by very different convolution or deconvolution processes, the intermediate masks and predictions in deep supervised baselines usually contain semantics at different depths, which hinders the models' learning capabilities; (ii) the low learning efficiency problem: additional supervision signals inevitably make the training of the models more time-consuming. Therefore, in this work, we first propose two deep supervised learning strategies, U-Net-Deep and U-Net-Auto, to overcome the semantic difference problem. Then, to resolve the low learning efficiency problem, building upon the above two strategies, we further propose a new deep supervised segmentation model, called μ-Net, which achieves not only effective but also efficient deep supervised medical image segmentation by introducing a tied-weight decoder to generate pseudo-labels with more diverse information and to speed up convergence in training. Finally, three different types of μ-Net-based deep supervision strategies are explored, and a Similarity Principle of Deep Supervision is derived to guide future research in deep supervised learning. Experimental studies on four public benchmark datasets show that μ-Net greatly outperforms all the state-of-the-art baselines, including the state-of-the-art deeply supervised segmentation models, in terms of both effectiveness and efficiency. Ablation studies confirm the soundness of the proposed Similarity Principle of Deep Supervision, the necessity and effectiveness of the tied-weight decoder, and the value of using both the segmentation and reconstruction pseudo-labels for deep supervised learning.
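For readers unfamiliar with deep supervision, the baseline idea that μ-Net builds on is to attach auxiliary losses to intermediate decoder outputs. A generic PyTorch sketch of such a loss (not the authors' exact formulation; weights and scales are hypothetical):

    # Generic deep-supervision loss: auxiliary predictions from several decoder
    # depths are upsampled and each compared with the ground truth.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    loss_fn = nn.CrossEntropyLoss()

    def deep_supervision_loss(aux_logits, target, weights=(1.0, 0.5, 0.25)):
        total = 0.0
        for logits, w in zip(aux_logits, weights):
            up = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                               align_corners=False)
            total = total + w * loss_fn(up, target)
        return total

    target = torch.randint(0, 2, (1, 64, 64))
    aux = [torch.randn(1, 2, s, s) for s in (64, 32, 16)]  # decoder outputs
    print(deep_supervision_loss(aux, target))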
Collapse
Affiliation(s)
- Di Yuan
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
| | - Zhenghua Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China.
| | - Biao Tian
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
| | - Hening Wang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
| | - Yuefu Zhan
- Department of Radiology, Hainan Women and Children's Medical Center, Haikou, China
| | - Thomas Lukasiewicz
- Institute of Logic and Computation, TU Wien, Vienna, Austria; Department of Computer Science, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
23
|
Astley JR, Biancardi AM, Marshall H, Hughes PJC, Collier GJ, Smith LJ, Eaden JA, Hughes R, Wild JM, Tahir BA. A Dual-Channel Deep Learning Approach for Lung Cavity Estimation From Hyperpolarized Gas and Proton MRI. J Magn Reson Imaging 2023; 57:1878-1890. [PMID: 36373828 PMCID: PMC10947587 DOI: 10.1002/jmri.28519] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 10/24/2022] [Accepted: 10/24/2022] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND Hyperpolarized gas MRI can quantify regional lung ventilation via biomarkers, including the ventilation defect percentage (VDP). VDP is computed from segmentations derived from spatially co-registered functional hyperpolarized gas and structural proton (1H)-MRI. Although acquired at similar lung inflation levels, they are frequently misaligned, requiring a lung cavity estimation (LCE). Recently, single-channel, mono-modal deep learning (DL)-based methods have shown promise for pulmonary image segmentation problems. Multichannel, multimodal approaches may outperform single-channel alternatives. PURPOSE We hypothesized that a DL-based dual-channel approach, leveraging both 1H-MRI and xenon-129 MRI (129Xe-MRI), can generate LCEs more accurately than single-channel alternatives. STUDY TYPE Retrospective. POPULATION A total of 480 corresponding 1H-MRI and 129Xe-MRI scans from 26 healthy participants (median age [range]: 11 [8-71]; 50% females) and 289 patients with pulmonary pathologies (median age [range]: 47 [6-83]; 51% females) were split into training (422 scans [88%]; 257 participants [82%]) and testing (58 scans [12%]; 58 participants [18%]) sets. FIELD STRENGTH/SEQUENCE 1.5-T, three-dimensional (3D) spoiled gradient-recalled 1H-MRI and 3D steady-state free-precession 129Xe-MRI. ASSESSMENT We developed a multimodal DL approach, integrating 129Xe-MRI and 1H-MRI, in a dual-channel convolutional neural network. We compared this approach to single-channel alternatives using manually edited LCEs as a benchmark. We further assessed a fully automatic DL-based framework to calculate VDPs and compared it to manually generated VDPs. STATISTICAL TESTS Friedman tests with post hoc Bonferroni correction for multiple comparisons compared single-channel and dual-channel DL approaches using Dice similarity coefficient (DSC), average boundary Hausdorff distance (average HD), and relative error (XOR) metrics. Bland-Altman analysis and paired t-tests compared manual and DL-generated VDPs. A P value < 0.05 was considered statistically significant. RESULTS The dual-channel approach significantly outperformed single-channel approaches, achieving a median (range) DSC, average HD, and XOR of 0.967 (0.867-0.978), 1.68 mm (0.778-37.0), and 0.066 (0.045-0.246), respectively. DL-generated VDPs were statistically indistinguishable from manually generated VDPs (P = 0.710). DATA CONCLUSION Our dual-channel approach generated LCEs, which could be integrated with ventilated lung segmentations to produce biomarkers such as the VDP without manual intervention. EVIDENCE LEVEL 4. TECHNICAL EFFICACY Stage 1.
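Once an LCE and a ventilated-lung segmentation are available, the VDP follows from simple mask arithmetic. A toy NumPy illustration (masks are synthetic stand-ins):

    # Toy VDP computation: the share of the lung cavity estimate (LCE) that
    # is not ventilated.
    import numpy as np

    lce = np.zeros((32, 32, 32), dtype=bool)
    lce[4:28, 4:28, 4:28] = True                 # toy lung cavity mask
    ventilated = lce.copy()
    ventilated[10:16, 10:16, 10:16] = False      # toy ventilation defect

    vdp = 100.0 * (lce & ~ventilated).sum() / lce.sum()
    print(f"VDP = {vdp:.1f}%")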
Collapse
Affiliation(s)
- Joshua R. Astley
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
| | - Alberto M. Biancardi
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Helen Marshall
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Paul J. C. Hughes
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Guilhem J. Collier
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Laurie J. Smith
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - James A. Eaden
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
| | - Rod Hughes
- Early Development Respiratory, AstraZeneca, Cambridge, UK
| | - Jim M. Wild
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Insigneo Institute for in Silico Medicine, The University of Sheffield, Sheffield, UK
| | - Bilal A. Tahir
- POLARIS, Department of Infection, Immunity & Cardiovascular Disease, The University of Sheffield, Sheffield, UK
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, UK
- Insigneo Institute for in Silico Medicine, The University of Sheffield, Sheffield, UK
| |
Collapse
|
24
|
Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23114993. [PMID: 37299724 DOI: 10.3390/s23114993] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 05/15/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
Deep learning-aided medical imaging is currently a focal point of applied AI and a likely direction for the future development of precision neuroscience. This review aims to provide comprehensive and informative insights into recent progress in deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts by providing an overview of the current methods for brain imaging, highlighting their limitations and introducing the potential benefits of using deep learning techniques to overcome these limitations. It then delves into the details of deep learning, explaining the basic concepts and providing examples of how it can be used in medical imaging. A particular focus is the discussion of the different types of deep learning models that can be used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), as applied to magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other imaging modalities. Overall, this review of deep learning-aided medical imaging for brain monitoring and regulation provides a useful reference for work at the intersection of deep learning-aided neuroimaging and brain regulation.
Collapse
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
| | - Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
| | - Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
| |
Collapse
|
25
|
Chakrabarty S, Abidi SA, Mousa M, Mokkarala M, Hren I, Yadav D, Kelsey M, LaMontagne P, Wood J, Adams M, Su Y, Thorpe S, Chung C, Sotiras A, Marcus DS. Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-Oncology (I3CR-WANO). JCO Clin Cancer Inform 2023; 7:e2200177. [PMID: 37146265 PMCID: PMC10281444 DOI: 10.1200/cci.22.00177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 01/25/2023] [Accepted: 03/06/2023] [Indexed: 05/07/2023] Open
Abstract
PURPOSE Efforts to use growing volumes of clinical imaging data to generate tumor evaluations continue to require significant manual data wrangling, owing to data heterogeneity. Here, we propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data to extract quantitative tumor measurements. MATERIALS AND METHODS Our end-to-end framework (1) classifies MRI sequences using an ensemble classifier, (2) preprocesses the data in a reproducible manner, (3) delineates tumor tissue subtypes using convolutional neural networks, and (4) extracts diverse radiomic features. Moreover, it is robust to missing sequences and adopts an expert-in-the-loop approach in which the segmentation results may be manually refined by radiologists. After the implementation of the framework in Docker containers, it was applied to two retrospective glioma data sets collected from the Washington University School of Medicine (WUSM; n = 384) and The University of Texas MD Anderson Cancer Center (MDA; n = 30), comprising preoperative MRI scans from patients with pathologically confirmed gliomas. RESULTS The scan-type classifier yielded an accuracy of >99%, correctly identifying sequences from 380 of 384 and 30 of 30 sessions from the WUSM and MDA data sets, respectively. Segmentation performance was quantified using the Dice Similarity Coefficient between the predicted and expert-refined tumor masks. The mean Dice scores were 0.882 (±0.244) and 0.977 (±0.04) for whole-tumor segmentation for WUSM and MDA, respectively. CONCLUSION This streamlined framework automatically curated, processed, and segmented raw MRI data of patients with varying grades of gliomas, enabling the curation of large-scale neuro-oncology data sets and demonstrating high potential for integration as an assistive tool in clinical practice.
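The Dice Similarity Coefficient used here to quantify segmentation performance is DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy implementation:

    # Minimal Dice Similarity Coefficient between two binary masks.
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.zeros((16, 16), bool); pred[2:10, 2:10] = True
    ref = np.zeros((16, 16), bool); ref[4:12, 4:12] = True
    print(round(dice(pred, ref), 3))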
Collapse
Affiliation(s)
- Satrajit Chakrabarty
- Department of Electrical and Systems Engineering, Washington University in St Louis, St Louis, MO
| | - Syed Amaan Abidi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| | - Mina Mousa
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| | - Mahati Mokkarala
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| | - Isabelle Hren
- Department of Computer Science & Engineering, Washington University in St Louis, St Louis, MO
| | - Divya Yadav
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Matthew Kelsey
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| | - Pamela LaMontagne
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| | - John Wood
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Michael Adams
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Yuzhuo Su
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Sherry Thorpe
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Caroline Chung
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Aristeidis Sotiras
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Institute for Informatics, Washington University School of Medicine, St Louis, MO
| | - Daniel S. Marcus
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
| |
Collapse
|
26
|
Wang G, Luo X, Gu R, Yang S, Qu Y, Zhai S, Zhao Q, Li K, Zhang S. PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107398. [PMID: 36773591 DOI: 10.1016/j.cmpb.2023.107398] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 11/29/2022] [Accepted: 02/01/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Open-source deep learning toolkits are one of the driving forces for developing medical image segmentation models that are essential for computer-assisted diagnosis and treatment procedures. Existing toolkits mainly focus on fully supervised segmentation, which assumes that full and accurate pixel-level annotations are available. Such annotations are time-consuming and difficult to acquire for segmentation tasks, which makes learning from imperfect labels highly desirable for reducing the annotation cost. We aim to develop a new deep learning toolkit to support annotation-efficient learning for medical image segmentation, which can accelerate and simplify the development of deep learning models with a limited annotation budget, e.g., learning from partial, sparse or noisy annotations. METHODS Our proposed toolkit, named PyMIC, is a modular deep learning library for medical image segmentation tasks. In addition to basic components that support the development of high-performance models for fully supervised segmentation, it contains several advanced components tailored for learning from imperfect annotations, such as loading annotated and unannotated images, loss functions for unannotated, partially or inaccurately annotated images, and training procedures for co-learning between multiple networks. PyMIC is built on the PyTorch framework and supports the development of semi-supervised, weakly supervised and noise-robust learning methods for medical image segmentation. RESULTS We present several illustrative medical image segmentation tasks based on PyMIC: (1) achieving competitive performance on fully supervised learning; (2) semi-supervised cardiac structure segmentation with only 10% of training images annotated; (3) weakly supervised segmentation using scribble annotations; and (4) learning from noisy labels for chest radiograph segmentation. CONCLUSIONS The PyMIC toolkit is easy to use and facilitates efficient development of medical image segmentation models with imperfect annotations. It is modular and flexible, which enables researchers to develop high-performance models with low annotation cost. The source code is available at: https://github.com/HiLab-git/PyMIC.
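As one example of the noise-robust objectives such a toolkit can provide (illustrative only; this is not PyMIC's actual API), the generalized cross-entropy loss L_q = (1 - p_y^q) / q interpolates between cross-entropy (q → 0) and mean absolute error (q = 1):

    # Generalized cross-entropy sketch for possibly noisy pixel labels;
    # a generic PyTorch implementation, not code from the PyMIC library.
    import torch

    def generalized_ce(logits, target, q=0.7):
        probs = torch.softmax(logits, dim=1)
        p_y = probs.gather(1, target.unsqueeze(1)).clamp_min(1e-7)
        return ((1.0 - p_y.pow(q)) / q).mean()

    logits = torch.randn(2, 3, 8, 8)          # batch of 3-class pixel logits
    target = torch.randint(0, 3, (2, 8, 8))   # possibly noisy labels
    print(generalized_ce(logits, target))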
Collapse
Affiliation(s)
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China.
| | - Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
| | - Ran Gu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Shuojue Yang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
| | - Yijie Qu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Shuwei Zhai
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Qianfei Zhao
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
| |
Collapse
|
27
|
García-García F, Lee DJ, Mendoza-Garcés FJ, Irigoyen-Miró S, Legarreta-Olabarrieta MJ, García-Gutiérrez S, Arostegui I. Automated location of orofacial landmarks to characterize airway morphology in anaesthesia via deep convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 232:107428. [PMID: 36870169 DOI: 10.1016/j.cmpb.2023.107428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 02/13/2023] [Accepted: 02/15/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND A reliable anticipation of a difficult airway may notably enhance safety during anaesthesia. In current practice, clinicians use bedside screenings based on manual measurements of patients' morphology. OBJECTIVE To develop and evaluate algorithms for the automated extraction of orofacial landmarks, which characterize airway morphology. METHODS We defined 27 frontal + 13 lateral landmarks. We collected n=317 pairs of pre-surgery photos from patients undergoing general anaesthesia (140 females, 177 males). As ground truth reference for supervised learning, landmarks were independently annotated by two anaesthesiologists. We trained two ad hoc deep convolutional neural network architectures based on InceptionResNetV2 (IRNet) and MobileNetV2 (MNet), to predict simultaneously: (a) whether each landmark is visible or not (occluded, out of frame), (b) its 2D-coordinates (x,y). We implemented successive stages of transfer learning, combined with data augmentation. We added custom top layers on top of these networks, whose weights were fully tuned for our application. Performance in landmark extraction was evaluated by 10-fold cross-validation (CV) and compared against 5 state-of-the-art deformable models. RESULTS With annotators' consensus as the 'gold standard', our IRNet-based network performed comparably to humans in the frontal view: median CV loss L=1.277·10⁻³, inter-quartile range (IQR) [1.001, 1.660]; versus median 1.360, IQR [1.172, 1.651], and median 1.352, IQR [1.172, 1.619], for each annotator against consensus, respectively. MNet yielded slightly worse results: median 1.471, IQR [1.139, 1.982]. In the lateral view, both networks attained performances statistically poorer than humans: median CV loss L=2.141·10⁻³, IQR [1.676, 2.915], and median 2.611, IQR [1.898, 3.535], respectively; versus median 1.507, IQR [1.188, 1.988], and median 1.442, IQR [1.147, 2.010] for both annotators. However, standardized effect sizes in CV loss were small: 0.0322 and 0.0235 (non-significant) for IRNet, and 0.1431 and 0.1518 (p<0.05) for MNet; therefore quantitatively similar to humans. The best performing state-of-the-art model (a deformable regularized Supervised Descent Method, SDM) behaved comparably to our DCNNs in the frontal scenario, but markedly worse in the lateral view. CONCLUSIONS We successfully trained two DCNN models for the recognition of 27 + 13 orofacial landmarks pertaining to the airway. Using transfer learning and data augmentation, they were able to generalize without overfitting, reaching expert-like performances in CV. Our IRNet-based methodology achieved a satisfactory identification and location of landmarks, particularly in the frontal view, at the level of anaesthesiologists. In the lateral view, its performance decayed, although with a non-significant effect size. Independent authors have also reported lower lateral performances, as certain landmarks may not be clearly salient points, even for a trained human eye.
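A hedged sketch of the transfer-learning setup described, using the Keras applications API: a pretrained InceptionResNetV2 backbone with custom top layers predicting per-landmark visibility and normalized (x, y) coordinates. The head sizes and loss choices are guesses, not the authors' configuration:

    # Sketch of a frozen pretrained backbone with a custom two-headed top;
    # all hyperparameters here are assumptions.
    import tensorflow as tf

    N_LANDMARKS = 27  # frontal view
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    base.trainable = False                    # first TL stage: frozen backbone

    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    vis = tf.keras.layers.Dense(N_LANDMARKS, activation="sigmoid", name="vis")(x)
    xy = tf.keras.layers.Dense(2 * N_LANDMARKS, activation="sigmoid", name="xy")(x)

    model = tf.keras.Model(base.input, [vis, xy])
    model.compile(optimizer="adam",
                  loss={"vis": "binary_crossentropy", "xy": "mse"})
    model.summary()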
Collapse
Affiliation(s)
| | - Dae-Jin Lee
- Basque Center for Applied Mathematics (BCAM) - Bilbao, Basque Country, Spain; IE University, School of Science and Technology - Madrid, Madrid, Spain.
| | - Francisco J Mendoza-Garcés
- Galdakao-Usansolo University Hospital, Anaesthesia & Resuscitation Service - Galdakao, Basque Country, Spain.
| | - Sofía Irigoyen-Miró
- Galdakao-Usansolo University Hospital, Anaesthesia & Resuscitation Service - Galdakao, Basque Country, Spain.
| | | | | | - Inmaculada Arostegui
- Basque Center for Applied Mathematics (BCAM) - Bilbao, Basque Country, Spain; University of the Basque Country (UPV/EHU), Department of Mathematics - Leioa, Basque Country, Spain.
| |
Collapse
|
28
|
Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. MULTIMEDIA SYSTEMS 2023; 29:1603-1627. [PMID: 37261262 PMCID: PMC10039775 DOI: 10.1007/s00530-023-01083-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/20/2023] [Indexed: 06/02/2023]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the impact of the disease appears to have waned, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction (RT-PCR) test or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Collapse
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
| | - Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
| |
Collapse
|
29
|
Thanellas A, Peura H, Lavinto M, Ruokola T, Vieli M, Staartjes VE, Winklhofer S, Serra C, Regli L, Korja M. Development and External Validation of a Deep Learning Algorithm to Identify and Localize Subarachnoid Hemorrhage on CT Scans. Neurology 2023; 100:e1257-e1266. [PMID: 36639236 PMCID: PMC10033159 DOI: 10.1212/wnl.0000000000201710] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 11/07/2022] [Indexed: 01/15/2023] Open
Abstract
BACKGROUND AND OBJECTIVES In medical imaging, a limited number of trained deep learning algorithms have been externally validated and released publicly. We hypothesized that a deep learning algorithm can be trained to identify and localize subarachnoid hemorrhage (SAH) on head computed tomography (CT) scans and that the trained model performs satisfactorily when tested using external and real-world data. METHODS We used noncontrast head CT images of patients admitted to Helsinki University Hospital between 2012 and 2017. We manually segmented (i.e., delineated) SAH on 90 head CT scans and used the segmented CT scans together with 22 negative (no SAH) control CT scans in training an open-source convolutional neural network (U-Net) to identify and localize SAH. We then tested the performance of the trained algorithm by using external data sets (137 SAH and 1,242 control cases) collected in 2 foreign countries and also by creating a data set of consecutive emergency head CT scans (8 SAH and 511 control cases) performed during on-call hours in 5 different domestic hospitals in September 2021. We assessed the algorithm's capability to identify SAH by calculating patient- and slice-level performance metrics, such as sensitivity and specificity. RESULTS In the external validation set of 1,379 cases, the algorithm identified 136 of 137 SAH cases correctly (sensitivity 99.3% and specificity 63.2%). Of the 49,064 axial head CT slices, the algorithm identified and localized SAH in 1,845 of 2,110 slices with SAH (sensitivity 87.4% and specificity 95.3%). Of 519 consecutive emergency head CT scans imaged in September 2021, the algorithm identified all 8 SAH cases correctly (sensitivity 100.0% and specificity 75.3%). The slice-level (27,167 axial slices in total) sensitivity and specificity were 87.3% and 98.8%, respectively, as the algorithm identified and localized SAH in 58 of 77 slices with SAH. The performance of the algorithm can be tested through a web service. DISCUSSION We show that the shared algorithm identifies SAH cases with a high sensitivity and that the slice-level specificity is high. In addition to openly sharing a high-performing deep learning algorithm, our work presents infrequently used approaches in designing, training, testing, and reporting deep learning algorithms developed for medical imaging diagnostics. CLASSIFICATION OF EVIDENCE This study provides Class III evidence that a deep learning algorithm correctly identifies the presence of subarachnoid hemorrhage on CT scan.
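The patient-level metrics reported for the external validation set can be recomputed from the stated counts; a quick check with scikit-learn, where the 785/457 split of controls is back-calculated from the 63.2% specificity:

    # Recomputing the reported patient-level sensitivity and specificity
    # from the stated case counts (control split is back-calculated).
    from sklearn.metrics import confusion_matrix

    # 137 SAH cases, 136 flagged; 1,242 controls, specificity 63.2% -> ~785 TN
    y_true = [1] * 137 + [0] * 1242
    y_pred = [1] * 136 + [0] * 1 + [0] * 785 + [1] * 457
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")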
Collapse
Affiliation(s)
- Antonios Thanellas
- From the Department of Information Management (A.T.), Helsinki University Hospital, Helsinki, Finland; Department of Neurosurgery, University of Helsinki and Helsinki University Hospital (H.P., M.K.), Helsinki, Finland; CGI (M.L., T.R.), Helsinki, Finland; Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery (M.V., V.E.S., S.W., L.R.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Department of Neuroradiology (C.S.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Heikki Peura
- From the Department of Information Management (A.T.), Helsinki University Hospital, Helsinki, Finland; Department of Neurosurgery, University of Helsinki and Helsinki University Hospital (H.P., M.K.), Helsinki, Finland; CGI (M.L., T.R.), Helsinki, Finland; Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery (M.V., V.E.S., S.W., L.R.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Department of Neuroradiology (C.S.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Mikko Lavinto
- From the Department of Information Management (A.T.), Helsinki University Hospital, Helsinki, Finland; Department of Neurosurgery, University of Helsinki and Helsinki University Hospital (H.P., M.K.), Helsinki, Finland; CGI (M.L., T.R.), Helsinki, Finland; Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery (M.V., V.E.S., S.W., L.R.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Department of Neuroradiology (C.S.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Tomi Ruokola
- From the Department of Information Management (A.T.), Helsinki University Hospital, Helsinki, Finland; Department of Neurosurgery, University of Helsinki and Helsinki University Hospital (H.P., M.K.), Helsinki, Finland; CGI (M.L., T.R.), Helsinki, Finland; Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery (M.V., V.E.S., S.W., L.R.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Department of Neuroradiology (C.S.), Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Moira Vieli (affiliations as above)
- Victor E Staartjes (affiliations as above)
- Sebastian Winklhofer (affiliations as above)
- Carlo Serra (affiliations as above)
- Luca Regli (affiliations as above)
- Miikka Korja (affiliations as above)
30
Nagami N, Arimura H, Nojiri J, Yunhao C, Ninomiya K, Ogata M, Oishi M, Ohira K, Kitamura S, Irie H. Dual segmentation models for poorly and well-differentiated hepatocellular carcinoma using two-step transfer deep learning on dynamic contrast-enhanced CT images. Phys Eng Sci Med 2023; 46:83-97. [PMID: 36469246 DOI: 10.1007/s13246-022-01202-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 11/17/2022] [Indexed: 12/12/2022]
Abstract
The aim of this study was to develop dual segmentation models for poorly and well-differentiated hepatocellular carcinoma (HCC), using two-step transfer learning (TSTL) based on dynamic contrast-enhanced (DCE) computed tomography (CT) images. From 2013 to 2019, DCE-CT images of 128 patients with 80 poorly differentiated and 48 well-differentiated HCCs were selected at our hospital. In the first transfer learning (TL) step, a segmentation model pre-trained on 192 CT images of lung cancer patients was retrained as a poorly differentiated HCC model. In the second TL step, a well-differentiated HCC model was built from the poorly differentiated HCC model. The average three-dimensional Dice similarity coefficient (3D-DSC) and the 95th percentile of the Hausdorff distance (95% HD) were employed to evaluate segmentation accuracy, based on a nested fourfold cross-validation test. The DSC denotes the degree of regional similarity between the HCC reference regions and the regions estimated using the proposed models. The 95% HD is the 95th percentile of the Hausdorff distance, a measure of how far two subsets of a metric space are from each other. The average 3D-DSC and 95% HD were 0.849 ± 0.078 and 1.98 ± 0.71 mm, respectively, for poorly differentiated HCC regions, and 0.811 ± 0.089 and 2.01 ± 0.84 mm, respectively, for well-differentiated HCC regions. The average 3D-DSC for both regions was 1.2 times higher than that calculated without the TSTL. The proposed model using TSTL from the lung cancer dataset showed the potential to segment poorly and well-differentiated HCC regions on DCE-CT images.
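As an illustration of the two evaluation metrics reported above, here is a minimal sketch (not the authors' code; mask shapes, voxel spacing, and function names are illustrative) of computing a 3D Dice similarity coefficient and a voxel-based 95th-percentile Hausdorff distance for binary segmentation masks:

```python
# Illustrative metric implementations; assumes non-empty binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_3d(pred: np.ndarray, ref: np.ndarray) -> float:
    """Regional overlap between predicted and reference masks (0..1)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric mask-to-mask distances, in mm."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    d_to_ref = distance_transform_edt(~ref, sampling=spacing)    # distance to nearest ref voxel
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)  # distance to nearest pred voxel
    return float(np.percentile(np.concatenate([d_to_ref[pred], d_to_pred[ref]]), 95))

# Toy check with two slightly shifted spheres.
zz, yy, xx = np.ogrid[:64, :64, :64]
ref = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
pred = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
print(dice_3d(pred, ref), hd95(pred, ref))
```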
Affiliation(s)
- Noriyuki Nagami
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Hidetaka Arimura
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan.
- Junichi Nojiri
- Medical Corporation Kouhoukai, Takagi Hospital, 141-11, Sakemi, Okawa City, Fukuoka, 831-0016, Japan
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Cui Yunhao
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
- Kenta Ninomiya
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
- Manabu Ogata
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Mitsutoshi Oishi
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Keiichi Ohira
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Shigetoshi Kitamura
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
- Hiroyuki Irie
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
31
Al-Dulaimi K, Banks J, Al-Sabaawi A, Nguyen K, Chandran V, Tomeo-Reyes I. Classification of HEp-2 Staining Pattern Images Using Adapted Multilayer Perceptron Neural Network-Based Intra-Class Variation of Cell Shape. SENSORS (BASEL, SWITZERLAND) 2023; 23:2195. [PMID: 36850793 PMCID: PMC9959868 DOI: 10.3390/s23042195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Revised: 02/01/2023] [Accepted: 02/08/2023] [Indexed: 06/18/2023]
Abstract
There is growing interest in the clinical practice research communities in developing methods to automate the classification of HEp-2 stained cells from histopathological images. Challenges faced by these methods include variations in cell densities and cell patterns, overfitting of features, large-scale data volumes, and staining variability. In this paper, a multi-class multilayer perceptron technique is adapted by adding a new hidden layer to calculate the variation in the mean, scale, kurtosis and skewness of higher-order spectra features of the cell shape information. The adapted technique is then jointly trained, and the classification probability is calculated using a softmax activation function. This method is proposed to address the overfitting, staining, and large-scale data volume problems, and to classify HEp-2 staining cells into six classes. An extensive experimental analysis was conducted to verify the results of the proposed method. The technique was trained and tested on the Task-1 datasets from the ICPR-2014 and ICPR-2016 competitions. The experimental results show that the proposed model achieved an accuracy of 90.3% with data augmentation, compared with 87.5% without data augmentation. In addition, the proposed framework was compared with existing methods, as well as with the methods used in the ICPR-2014 and ICPR-2016 competitions. The results demonstrate that the proposed method effectively outperforms recent methods.
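A minimal sketch of the classifier shape described above, under assumed layer sizes (the feature dimension, hidden widths, and module name are illustrative, not taken from the paper): a multi-class MLP with an added hidden layer and softmax class probabilities for the six HEp-2 classes.

```python
# Hedged sketch: multi-class MLP over (mean, scale, kurtosis, skewness)-style
# statistics of higher-order-spectra shape features; all sizes are assumptions.
import torch
import torch.nn as nn

class HEp2MLP(nn.Module):
    def __init__(self, n_features: int = 64, n_classes: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),   # the additional hidden layer
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(x), dim=1)  # per-class probabilities

probs = HEp2MLP()(torch.randn(8, 64))  # batch of 8 feature vectors
```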
Affiliation(s)
- Khamael Al-Dulaimi
- School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Jasmine Banks
- School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Aiman Al-Sabaawi
- School of Computer Science, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Kien Nguyen
- School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Vinod Chandran
- School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Inmaculada Tomeo-Reyes
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
32
Raymond C, Jurkiewicz MT, Orunmuyi A, Liu L, Dada MO, Ladefoged CN, Teuho J, Anazodo UC. The performance of machine learning approaches for attenuation correction of PET in neuroimaging: A meta-analysis. J Neuroradiol 2023; 50:315-326. [PMID: 36738990 DOI: 10.1016/j.neurad.2023.01.157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 01/28/2023] [Indexed: 02/05/2023]
Abstract
PURPOSE This systematic review provides a consensus on the clinical feasibility of machine learning (ML) methods for brain PET attenuation correction (AC). The performance of ML-AC was compared to clinical standards. METHODS Two hundred and eighty studies were identified through electronic searches of brain PET studies published between January 1, 2008, and August 1, 2022. Reported outcomes for image quality, tissue classification performance, and regional and global bias were extracted to evaluate ML-AC performance. The methodological quality of the included studies and the quality of evidence of the analysed outcomes were assessed using QUADAS-2 and GRADE, respectively. RESULTS A total of 19 studies (2371 participants) met the inclusion criteria. Overall, the global bias of ML methods was 0.76 ± 1.2%. For image quality, the relative mean square error (RMSE) was 0.20 ± 0.4, while for tissue classification the Dice similarity coefficients (DSC) for bone/soft tissue/air were 0.82 ± 0.1 / 0.95 ± 0.03 / 0.85 ± 0.14. CONCLUSIONS In general, ML-AC performance is within acceptable limits for clinical PET imaging. The sparse information on ML-AC robustness and its limited qualitative clinical evaluation may hinder clinical implementation in neuroimaging, especially for PET/MRI or emerging brain PET systems where standard AC approaches are not readily available.
Affiliation(s)
- Confidence Raymond
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
- Michael T Jurkiewicz
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Akintunde Orunmuyi
- Kenyatta University Teaching, Research and Referral Hospital, Nairobi, Kenya
- Linshan Liu
- Lawson Health Research Institute, London, ON, Canada
- Claes N Ladefoged
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Jarmo Teuho
- Turku PET Centre, Turku University, Turku, Finland; Turku University Hospital, Turku, Finland
- Udunna C Anazodo
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Montreal Neurological Institute, 3801 Rue University, Montreal, QC H3A 2B4, Canada.
33
Kulseng CPS, Nainamalai V, Grøvik E, Geitung JT, Årøen A, Gjesdal KI. Automatic segmentation of human knee anatomy by a convolutional neural network applying a 3D MRI protocol. BMC Musculoskelet Disord 2023; 24:41. [PMID: 36650496 PMCID: PMC9847207 DOI: 10.1186/s12891-023-06153-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 01/10/2023] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND To study deep learning segmentation of knee anatomy with 13 anatomical classes using a magnetic resonance (MR) protocol of four three-dimensional (3D) pulse sequences, and to evaluate possible clinical usefulness. METHODS The sample comprised 40 healthy right-knee volumes from adult participants. In addition, a recently injured left knee with a previously known ACL reconstruction was included as a test subject. The MR protocol consisted of the following 3D pulse sequences: T1 TSE, PD TSE, PD FS TSE, and Angio GE. The DenseVNet neural network was used for these experiments. Five input combinations of sequences, (i) T1, (ii) T1 and FS, (iii) PD and FS, (iv) T1, PD, and FS, and (v) T1, PD, FS and Angio, were trained using the deep learning algorithm. The Dice similarity coefficient (DSC), Jaccard index, and Hausdorff distance were used to compare the performance of the networks. RESULTS Combining all sequences performed significantly better than the other alternatives. The following DSCs (± standard deviation) were obtained for the test dataset: bone medulla 0.997 (±0.002), PCL 0.973 (±0.015), ACL 0.964 (±0.022), muscle 0.998 (±0.001), cartilage 0.966 (±0.018), bone cortex 0.980 (±0.010), arteries 0.943 (±0.038), collateral ligaments 0.919 (±0.069), tendons 0.982 (±0.005), meniscus 0.955 (±0.032), adipose tissue 0.998 (±0.001), veins 0.980 (±0.010) and nerves 0.921 (±0.071). The deep learning network correctly identified the anterior cruciate ligament (ACL) tear of the left knee, indicating its potential as a future aid to orthopaedics. CONCLUSIONS The convolutional neural network proves highly capable of correctly labeling all anatomical structures of the knee joint when applied to 3D MR sequences. We have demonstrated that this deep learning model is capable of automated segmentation that may provide 3D models and reveal pathology, both useful for preoperative evaluation.
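The five input combinations amount to stacking co-registered sequences as input channels; below is a hedged sketch under assumed array shapes (names and sizes are illustrative, and this is not the DenseVNet code itself):

```python
# Building multi-sequence inputs: one channel per co-registered 3D pulse sequence.
import numpy as np

t1, pd_, fs, angio = (np.zeros((160, 256, 256), dtype=np.float32) for _ in range(4))

def stack_sequences(*volumes: np.ndarray) -> np.ndarray:
    """Return a (C, D, H, W) array with one channel per MR sequence."""
    return np.stack(volumes, axis=0)

x_i = stack_sequences(t1)                  # input combination (i): T1 only
x_v = stack_sequences(t1, pd_, fs, angio)  # input combination (v): all sequences
```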
Affiliation(s)
- Varatharajan Nainamalai
- Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025 Norway
- Endre Grøvik
- Norwegian University of Science and Technology, Høgskoleringen 5, Trondheim, 7491 Norway; Møre og Romsdal Hospital Trust, Postboks 1600, Ålesund, 6025 Norway
- Jonn-Terje Geitung
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010 Norway; Faculty of Medicine, University of Oslo, Klaus Torgårds vei 3, Oslo, 0372 Norway; Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478 Norway
- Asbjørn Årøen
- Department of Orthopedic Surgery, Institute of Clinical Medicine, Akershus University Hospital, Problemveien 7, Oslo, 0315 Norway; Oslo Sports Trauma Research Center, Norwegian School of Sport Sciences, Postboks 4014 Ullevål Stadion, Oslo, 0806 Norway
- Kjell-Inge Gjesdal
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010 Norway; Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025 Norway; Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478 Norway
34
Advancing 3D Medical Image Analysis with Variable Dimension Transform based Supervised 3D Pre-training. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
35
Liu F, Demosthenes P. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol 2022; 22:287. [PMID: 36335315 PMCID: PMC9636688 DOI: 10.1186/s12874-022-01768-6] [Citation(s) in RCA: 71] [Impact Index Per Article: 35.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 10/22/2022] [Indexed: 11/07/2022] Open
Abstract
Background
The increased adoption of the internet, social media, wearable devices, e-health services, and other technology-driven services in medicine and healthcare has led to the rapid generation of various types of digital data, providing a valuable data source beyond the confines of traditional clinical trials, epidemiological studies, and lab-based experiments.
Methods
We provide a brief overview of the types and sources of real-world data and of the common models and approaches used to utilize and analyze them. We discuss the challenges and opportunities of using real-world data for evidence-based decision making. This review does not aim to be comprehensive or to cover all aspects of the intriguing topic of real-world data (RWD), from both the research and practical perspectives, but serves as a primer and provides useful sources for readers who are interested in this topic.
Results and Conclusions
Real-world data hold great potential for generating real-world evidence for designing and conducting confirmatory trials and answering questions that may not be addressed otherwise. The voluminosity and complexity of real-world data also call for the development of more appropriate, sophisticated, and innovative data processing and analysis techniques while maintaining scientific rigor in research findings, and for attention to data ethics to harness the power of real-world data.
36
Deep Learning-Based Water-Fat Separation from Dual-Echo Chemical Shift-Encoded Imaging. Bioengineering (Basel) 2022; 9:bioengineering9100579. [PMID: 36290546 PMCID: PMC9598080 DOI: 10.3390/bioengineering9100579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 10/12/2022] [Accepted: 10/15/2022] [Indexed: 11/26/2022] Open
Abstract
Conventional water–fat separation approaches suffer from long computational times and are prone to water/fat swaps. To solve these problems, we propose a deep learning-based dual-echo water–fat separation method. With IRB approval, raw data from 68 pediatric clinically indicated dual-echo scans were analyzed, corresponding to 19,382 contrast-enhanced images. A densely connected hierarchical convolutional network was constructed, in which dual-echo images and corresponding echo times were used as input, and water/fat images obtained using the projected power method were regarded as references. Models were trained and tested using knee images with 8-fold cross-validation and validated on out-of-distribution data from the ankle, foot, and arm. Using the proposed method, the average computational time for a volumetric dataset with ~400 slices was reduced from 10 min to under one minute. High fidelity was achieved (correlation coefficient of 0.9969, l1 error of 0.0381, SSIM of 0.9740, pSNR of 58.6876) and water/fat swaps were mitigated. It is of particular interest that metal artifacts were substantially reduced, even when the training set contained no images with metallic implants. Using the models trained with only contrast-enhanced images, water/fat images were predicted from non-contrast-enhanced images with high fidelity. The proposed water–fat separation method has been demonstrated to be fast and robust, and it has the added capability of compensating for metal artifacts.
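For orientation, the arithmetic behind dual-echo (two-point Dixon) water–fat separation in its idealized textbook form is a sum and a difference of the in-phase and opposed-phase echoes; this simplified sketch is not the projected power method used as the reference here, and it ignores field-map and phase errors:

```python
# Idealized two-point Dixon decomposition (no B0/phase correction).
import numpy as np

def dixon_two_point(in_phase: np.ndarray, opposed_phase: np.ndarray):
    water = 0.5 * (in_phase + opposed_phase)  # W = (IP + OP) / 2
    fat = 0.5 * (in_phase - opposed_phase)    # F = (IP - OP) / 2
    return water, fat
```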
37
Pati S, Baid U, Edwards B, Sheller MJ, Foley P, Reina GA, Thakur S, Sako C, Bilello M, Davatzikos C, Martin J, Shah P, Menze B, Bakas S. The federated tumor segmentation (FeTS) tool: an open-source solution to further solid tumor research. Phys Med Biol 2022; 67:10.1088/1361-6560/ac9449. [PMID: 36137534 PMCID: PMC9592188 DOI: 10.1088/1361-6560/ac9449] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 09/22/2022] [Indexed: 11/11/2022]
Abstract
Objective. De-centralized data analysis becomes an increasingly preferred option in the healthcare domain, as it alleviates the need for sharing primary patient data across collaborating institutions. This highlights the need for consistent harmonized data curation, pre-processing, and identification of regions of interest based on uniform criteria. Approach. Towards this end, this manuscript describes the Federated Tumor Segmentation (FeTS) tool, in terms of software architecture and functionality. Main results. The primary aim of the FeTS tool is to facilitate this harmonized processing and the generation of gold standard reference labels for tumor sub-compartments on brain magnetic resonance imaging, and further enable federated training of a tumor sub-compartment delineation model across numerous sites distributed across the globe, without the need to share patient data. Significance. Building upon existing open-source tools such as the Insight Toolkit and Qt, the FeTS tool is designed to enable training deep learning models targeting tumor delineation in either centralized or federated settings. The target audience of the FeTS tool is primarily the computational researcher interested in developing federated learning models, and interested in joining a global federation towards this effort. The tool is open sourced at https://github.com/FETS-AI/Front-End.
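FeTS ships its own federation stack; purely as orientation, here is a generic sketch of the federated-averaging idea that underlies such federated training (an illustrative function, not FeTS's API):

```python
# Generic FedAvg sketch: average per-site model weights, weighted by local
# sample counts, without any site sharing its patient data.
import numpy as np

def federated_average(site_weights, site_sizes):
    """site_weights: list (per site) of lists of weight arrays."""
    total = float(sum(site_sizes))
    return [
        sum(w[k] * (n / total) for w, n in zip(site_weights, site_sizes))
        for k in range(len(site_weights[0]))
    ]

w_a = [np.ones((3, 3)), np.zeros(3)]
w_b = [3 * np.ones((3, 3)), np.ones(3)]
global_w = federated_average([w_a, w_b], site_sizes=[100, 300])
```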
Affiliation(s)
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
- Ujjwal Baid
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Siddhesh Thakur
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michel Bilello
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Bjoern Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
38
Comparison of Three Automated Approaches for Classification of Amyloid-PET Images. Neuroinformatics 2022; 20:1065-1075. [PMID: 35622223 DOI: 10.1007/s12021-022-09587-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/25/2022] [Indexed: 01/27/2023]
Abstract
Automated amyloid-PET image classification can support clinical assessment and increase diagnostic confidence. Three automated approaches, using global cut-points derived from receiver operating characteristic (ROC) analysis, machine learning (ML) algorithms with regional SUVr values, and a deep learning (DL) network with 3D image input, were compared under various conditions: number of training data, radiotracers, and cohorts. 276 [11C]PiB and 209 [18F]AV45 PET images from the ADNI database and our local cohort were used. Global mean and maximum SUVr cut-points were derived using ROC analysis. 68 ML models were built using regional SUVr values, and one DL network was trained with classifications from two visual assessments: the manufacturer's recommendations (gray-scale) and visually guided reference region scaling (rainbow-scale). ML-based classification achieved similarly high accuracy as ROC classification, but had better convergence between training and unseen data with a smaller number of training data. Naïve Bayes performed the best overall among the 68 ML algorithms. Classification with maximum SUVr cut-points yielded higher accuracy than with mean SUVr cut-points, particularly for cohorts showing more focal uptake. DL networks can classify definite cases accurately but performed poorly for equivocal cases. Rainbow-scale standardized image intensity scaling and improved inter-rater agreement, whereas gray-scale detects focal accumulation better, thus classifying more amyloid-positive scans. All three approaches generally achieved higher accuracy when trained with rainbow-scale classification, and further work may lead to even more accurate ML methods.
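A sketch of the first approach named above, deriving a global SUVr cut-point from ROC analysis (here via Youden's J; the data are synthetic placeholders and the study's actual cut-point derivation may differ):

```python
# ROC-derived global cut-point classification on synthetic SUVr values.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
suvr = np.concatenate([rng.normal(1.1, 0.1, 100), rng.normal(1.6, 0.2, 80)])
labels = np.concatenate([np.zeros(100), np.ones(80)])  # 1 = amyloid-positive

fpr, tpr, thresholds = roc_curve(labels, suvr)
cut_point = thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = TPR - FPR
predicted_positive = suvr >= cut_point
```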
39
Wang H, Zhang S, Xing X, Yue Q, Feng W, Chen S, Zhang J, Xie D, Chen N, Liu Y. Radiomic study on preoperative multi-modal magnetic resonance images identifies IDH-mutant TERT promoter-mutant gliomas. Cancer Med 2022; 12:2524-2537. [PMID: 36176070 PMCID: PMC9939206 DOI: 10.1002/cam4.5097] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 06/20/2022] [Accepted: 07/13/2022] [Indexed: 11/09/2022] Open
Abstract
OBJECTIVES Gliomas with comutations of isocitrate dehydrogenase (IDH) genes and telomerase reverse transcriptase (TERT) gene promoter (IDHmut pTERTmut) show distinct biological features and respond to first-line treatment differently in comparison with other gliomas. This study aimed to characterize the IDHmut pTERTmut gliomas in multimodal MRI using the radiomic method and establish a precise diagnostic model identifying this group of gliomas. METHODS A total of 140 patients with untreated primary gliomas were admitted between 2016 and 2020 to West China Hospital as a discovery cohort, including 22 IDHmut pTERTmut patients. Thirty-four additional cases from a different hospital were included in the study as an independent validation cohort. A total of 3654 radiomic features were extracted from the preoperative multimodal MRI images (T1c, FLAIR, and ADC maps) and filtered in a data-driven approach. The discovery cohort was split into training and test sets by a 4:1 ratio. A diagnostic model (multilayer perceptron classifier) for detecting the IDHmut pTERTmut gliomas was trained using an automatic machine-learning algorithm named tree-based pipeline optimization tool (TPOT). The most critical radiomic features in the model were identified and visualized. RESULTS The model achieved an area under the receiver-operating curve (AUROC) of 0.971 (95% CI, 0.902-1.000), the sensitivity of 0.833 (95% CI, 0.333-1.000), and the specificity of 0.966 (95% CI, 0.931-1.000) in the test set. The area under the precision-recall curve (AUCPR) was 0.754 (95% CI, 0.572-0.833) and the F1 score was 0.833 (95% CI, 0.500-1.000). In the independent validation set, the model reached 0.952 AUROC, 0.714 sensitivity, 0.963 specificity, 0.841 AUCPR, and 0.769 F1 score. MR radiomic features of the IDHmut pTERTmut gliomas represented homogenous low-complexity texture in three modalities. CONCLUSIONS An accurate diagnostic model was constructed for detecting IDHmut pTERTmut gliomas using multimodal radiomic features. The most important features were associated with the homogenous simple texture of IDHmut pTERTmut gliomas in MRI images transformed using Laplacian of Gaussian and wavelet filters.
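Since the abstract names TPOT explicitly, the following shows how such a search is typically launched with TPOT's documented fit/export interface; the feature matrix, labels, and parameter values are placeholders, not the study's data or settings:

```python
# Hedged TPOT sketch: automated pipeline search over radiomic features.
import numpy as np
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 50))       # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=140)     # placeholder labels (1 = IDHmut pTERTmut)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tpot = TPOTClassifier(generations=5, population_size=20, cv=5, random_state=0)
tpot.fit(X_tr, y_tr)
print(tpot.score(X_te, y_te))
tpot.export("best_pipeline.py")      # writes out the winning sklearn pipeline
```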
Affiliation(s)
- Haoyu Wang
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China; Department of Neurosurgery, Xinhua Hospital, Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shuxin Zhang
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China; Department of Head and Neck Surgery, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Xiang Xing
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Qiang Yue
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Wentao Feng
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Siliang Chen
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Jun Zhang
- Frontier Science Center for Disease Molecular Network, State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, Chengdu, China
- Dan Xie
- Frontier Science Center for Disease Molecular Network, State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, Chengdu, China
- Ni Chen
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Yanhui Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
40
Tang Y, Gao X, Wang W, Dan Y, Zhou L, Su S, Wu J, Lv H, He Y. Automated Detection of Epiretinal Membranes in OCT Images Using Deep Learning. Ophthalmic Res 2022; 66:238-246. [PMID: 36170844 DOI: 10.1159/000525929] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Accepted: 06/08/2022] [Indexed: 11/19/2022]
Abstract
INTRODUCTION Development and validation of a deep learning algorithm to automatically identify and locate epiretinal membrane (ERM) regions in OCT images. METHODS OCT images of 468 eyes were retrospectively collected from a total of 404 ERM patients. One expert manually annotated the ERM regions in all images. A total of 422 images (90%) were used as the training dataset and the remaining 46 images (10%) as the validation dataset for deep learning algorithm training and validation. One senior and one junior clinician read the images, and their diagnostic results were compared. RESULTS The algorithm accurately segmented and located the ERM regions in OCT images, with an image-level accuracy of 95.65% and an ERM region-level accuracy of 90.14%. In the comparison experiments, the accuracies of the junior clinician improved from 85.00% and 61.29% without the assistance of the algorithm to 100.00% and 90.32% with it. The corresponding results of the senior clinician were 96.15% and 95.00% without the assistance of the algorithm, and 96.15% and 97.50% with it. CONCLUSIONS The developed deep learning algorithm can accurately segment ERM regions in OCT images. This deep learning approach may help clinicians reach clinical diagnoses with better accuracy and efficiency.
Affiliation(s)
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xiaorong Gao
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yujiao Dan
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Linjing Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Song Su
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Hongbin Lv
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Yue He
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
41
Gholamiankhah F, Mostafapour S, Abdi Goushbolagh N, Shojaerazavi S, Layegh P, Tabatabaei SM, Arabi H. Automated Lung Segmentation from Computed Tomography Images of Normal and COVID-19 Pneumonia Patients. IRANIAN JOURNAL OF MEDICAL SCIENCES 2022; 47:440-449. [PMID: 36117575 PMCID: PMC9445870 DOI: 10.30476/ijms.2022.90791.2178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Revised: 10/01/2021] [Accepted: 12/10/2021] [Indexed: 11/30/2022]
Abstract
BACKGROUND Automated image segmentation is an essential step in quantitative image analysis. This study assesses the performance of a deep learning-based model for lung segmentation from computed tomography (CT) images of normal and COVID-19 patients. METHODS A descriptive-analytical study was conducted from December 2020 to April 2021 on the CT images of patients from various educational hospitals affiliated with Mashhad University of Medical Sciences (Mashhad, Iran). Of the selected images and corresponding lung masks of 1,200 confirmed COVID-19 patients, 1,080 were used to train a residual neural network. The performance of the residual network (ResNet) model was evaluated on two distinct external test datasets, namely the remaining 120 COVID-19 patients and 120 normal patients. Different evaluation metrics such as the Dice similarity coefficient (DSC), mean absolute error (MAE), relative mean Hounsfield unit (HU) difference, and relative volume difference were calculated to assess the accuracy of the predicted lung masks. The Mann-Whitney U test was used to assess the difference between the corresponding values in the normal and COVID-19 patients. P<0.05 was considered statistically significant. RESULTS The ResNet model achieved DSCs of 0.980 and 0.971 and relative mean HU differences of -2.679% and -4.403% for the normal and COVID-19 patients, respectively. Comparable performance in lung segmentation of normal and COVID-19 patients indicated the model's accuracy in identifying lung tissue in the presence of COVID-19-associated infections, although slightly better performance was observed in normal patients. CONCLUSION The ResNet model provides accurate and reliable automated segmentation of COVID-19-infected lung tissue. A preprint version of this article was published on arXiv before formal peer review (https://arxiv.org/abs/2104.02042).
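The statistical comparison described above can be reproduced in outline with SciPy's Mann-Whitney U test; the per-patient Dice values below are synthetic stand-ins:

```python
# Mann-Whitney U test on per-patient Dice scores (synthetic values).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
dsc_normal = rng.normal(0.980, 0.008, 120)
dsc_covid = rng.normal(0.971, 0.012, 120)

stat, p_value = mannwhitneyu(dsc_normal, dsc_covid, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4g}")  # significant if p < 0.05
```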
Affiliation(s)
- Faeze Gholamiankhah
- Department of Medical Physics, School of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
- Samaneh Mostafapour
- Department of Radiology Technology, School of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Nouraddin Abdi Goushbolagh
- Department of Medical Physics, School of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
- Seyedjafar Shojaerazavi
- Department of Cardiology, Ghaem Hospital, Mashhad University of Medical Sciences, Mashhad, Iran
- Parvaneh Layegh
- Department of Radiology, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Seyyed Mohammad Tabatabaei
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Clinical Research Development Unit, Imam Reza Hospital, Mashhad University of Medical Sciences, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
42
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 04/29/2022] [Accepted: 05/30/2022] [Indexed: 11/28/2022]
43
Žerovnik Mekuč M, Bohak C, Boneš E, Hudoklin S, Romih R, Marolt M. Automatic segmentation and reconstruction of intracellular compartments in volumetric electron microscopy data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 223:106959. [PMID: 35763876 DOI: 10.1016/j.cmpb.2022.106959] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 02/02/2022] [Accepted: 06/14/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVES In recent years, electron microscopy is enabling the acquisition of volumetric data with resolving power to directly observe the ultrastructure of intracellular compartments. New insights and knowledge about cell processes that are offered by such data require a comprehensive analysis which is limited by the time-consuming manual segmentation and reconstruction methods. METHOD We present methods for automatic segmentation, reconstruction, and analysis of intracellular compartments from volumetric data obtained by the dual-beam electron microscopy. We specifically address segmentation of fusiform vesicles and the Golgi apparatus, reconstruction of mitochondria and fusiform vesicles, and morphological analysis of the reconstructed mitochondria. RESULTS AND CONCLUSION Evaluation on the public UroCell dataset demonstrated high accuracy of the proposed methods for segmentation of fusiform vesicles and the Golgi apparatus, as well as for reconstruction of mitochondria and analysis of their shapes, while reconstruction of fusiform vesicles proved to be more challenging. We published an extension of the UroCell dataset with all of the data used in this work, to further contribute to research on automatic analysis of the ultrastructure of intracellular compartments.
Affiliation(s)
- Manca Žerovnik Mekuč
- Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, Ljubljana 1000, Slovenia.
- Ciril Bohak
- Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, Ljubljana 1000, Slovenia; Visual Computing Center, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia.
- Eva Boneš
- Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, Ljubljana 1000, Slovenia.
- Samo Hudoklin
- Institute of Cell Biology, Faculty of Medicine, University of Ljubljana, Vrazov trg 2, Ljubljana 1000, Slovenia.
- Rok Romih
- Institute of Cell Biology, Faculty of Medicine, University of Ljubljana, Vrazov trg 2, Ljubljana 1000, Slovenia.
- Matija Marolt
- Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, Ljubljana 1000, Slovenia.
44
ω-net: Dual supervised medical image segmentation with multi-dimensional self-attention and diversely-connected multi-scale convolution. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.05.053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
45
Nai YH, Loi HY, O'Doherty S, Tan TH, Reilhac A. Comparison of the performances of machine learning and deep learning in improving the quality of low dose lung cancer PET images. Jpn J Radiol 2022; 40:1290-1299. [PMID: 35809210 DOI: 10.1007/s11604-022-01311-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 06/19/2022] [Indexed: 11/29/2022]
Abstract
PURPOSE To compare the performance of machine learning (ML) and deep learning (DL) in improving the quality of low-dose (LD) lung cancer PET images, and to determine the minimum counts required. MATERIALS AND METHODS 33 standard-dose (SD) PET images were used to simulate LD PET images at seven count levels: 0.25, 0.5, 1, 2, 5, 7.5 and 10 million (M) counts. Image quality transfer (IQT), an ML algorithm that uses decision trees and patch sampling, was compared to two DL networks: HighResNet (HRN) and deep-boosted regression (DBR). Supervised training was performed by training the ML and DL algorithms with matched pairs of SD and LD images. Image quality evaluation and clinical lesion detection tasks were performed by three readers. Bias in 53 radiomic features, including mean SUV, was evaluated for all lesions. RESULTS ML- and DL-estimated images showed higher signal and smaller error than LD images, with optimal image quality recovery achieved using LD images down to 5 M counts. The true positive rate and false discovery rate were fairly stable beyond 5 M counts for the detection of small and large true lesions. Readers gave average or higher ratings only to images estimated from LD images above 5 M counts, with higher confidence in detecting true lesions. CONCLUSION LD images with a minimum of 5 M counts (8.72 MBq for a 10 min scan or 25 MBq for a 3 min scan) are required for optimal clinical use of ML and DL, with slightly better but more varied performance shown by DL.
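One common way such count levels are simulated, shown purely as an assumption-laden sketch (the paper's exact simulation procedure is its own), is binomial thinning of the recorded counts to a target total:

```python
# Binomial count thinning: keep each recorded count with probability
# target/total to emulate a low-dose acquisition.
import numpy as np

def thin_counts(counts: np.ndarray, target_counts: float, seed: int = 42) -> np.ndarray:
    keep_prob = min(1.0, target_counts / counts.sum())
    return np.random.default_rng(seed).binomial(counts.astype(np.int64), keep_prob)

sd_counts = np.random.default_rng(0).poisson(300, size=(180, 256)).astype(np.int64)
ld_5m = thin_counts(sd_counts, 5e6)   # e.g., the 5 M count level
```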
Affiliation(s)
- Ying-Hwey Nai
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore.
- Hoi Yin Loi
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Sophie O'Doherty
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore
- Teng Hwee Tan
- Department of Radiation Oncology, National University Cancer Institute, Singapore, Singapore
- Anthonin Reilhac
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore
46
Large-scale investigation of deep learning approaches for ventilated lung segmentation using multi-nuclear hyperpolarized gas MRI. Sci Rep 2022; 12:10566. [PMID: 35732795 PMCID: PMC9217976 DOI: 10.1038/s41598-022-14672-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 06/10/2022] [Indexed: 11/08/2022] Open
Abstract
Respiratory diseases are leading causes of mortality and morbidity worldwide. Pulmonary imaging is an essential component of the diagnosis, treatment planning, monitoring, and treatment assessment of respiratory diseases. Insights into numerous pulmonary pathologies can be gleaned from functional lung MRI techniques. These include hyperpolarized gas ventilation MRI, which enables visualization and quantification of regional lung ventilation with high spatial resolution. Segmentation of the ventilated lung is required to calculate clinically relevant biomarkers. Recent research in deep learning (DL) has shown promising results for numerous segmentation problems. Here, we evaluate several 3D convolutional neural networks to segment ventilated lung regions on hyperpolarized gas MRI scans. The dataset consists of 759 helium-3 (3He) or xenon-129 (129Xe) volumetric scans and corresponding expert segmentations from 341 healthy subjects and patients with a wide range of pathologies. We evaluated segmentation performance for several DL experimental methods via overlap, distance and error metrics and compared them to conventional segmentation methods, namely, spatial fuzzy c-means (SFCM) and K-means clustering. We observed that training on combined 3He and 129Xe MRI scans using a 3D nn-UNet outperformed other DL methods, achieving a mean ± SD Dice coefficient of 0.963 ± 0.018, average boundary Hausdorff distance of 1.505 ± 0.969 mm, Hausdorff 95th percentile of 5.754 ± 6.621 mm and relative error of 0.075 ± 0.039. Moreover, limited differences in performance were observed between 129Xe and 3He scans in the testing set. Combined training on 129Xe and 3He yielded statistically significant improvements over the conventional methods (p < 0.0001). In addition, we observed very strong correlation and agreement between DL and expert segmentations, with Pearson correlation of 0.99 (p < 0.0001) and Bland-Altman bias of - 0.8%. The DL approach evaluated provides accurate, robust and rapid segmentations of ventilated lung regions and successfully excludes non-lung regions such as the airways and artefacts. This approach is expected to eliminate the need for, or significantly reduce, subsequent time-consuming manual editing.
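As a point of reference for the conventional baselines mentioned, a K-means intensity clustering of a ventilation scan can be set up in a few lines (placeholder volume; the paper's SFCM baseline adds spatial fuzzy membership on top of this idea):

```python
# K-means baseline: cluster voxel intensities into two classes and take the
# brighter cluster as ventilated lung (hyperpolarized gas signal is bright).
import numpy as np
from sklearn.cluster import KMeans

volume = np.random.default_rng(0).random((32, 64, 64))  # placeholder scan
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(volume.reshape(-1, 1))
ventilated_label = int(np.argmax(km.cluster_centers_))
mask = (km.labels_ == ventilated_label).reshape(volume.shape)
```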
47
Zaker N, Haddad K, Faghihi R, Arabi H, Zaidi H. Direct inference of Patlak parametric images in whole-body PET/CT imaging using convolutional neural networks. Eur J Nucl Med Mol Imaging 2022; 49:4048-4063. [PMID: 35716176 PMCID: PMC9525418 DOI: 10.1007/s00259-022-05867-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 06/09/2022] [Indexed: 11/20/2022]
Abstract
Purpose This study proposed and investigated the feasibility of estimating Patlak-derived influx rate constant (Ki) from standardized uptake value (SUV) and/or dynamic PET image series. Methods Whole-body 18F-FDG dynamic PET images of 19 subjects consisting of 13 frames or passes were employed for training a residual deep learning model with SUV and/or dynamic series as input and Ki-Patlak (slope) images as output. The training and evaluation were performed using a nine-fold cross-validation scheme. Owing to the availability of SUV images acquired 60 min post-injection (20 min total acquisition time), the data sets used for the training of the models were split into two groups: “With SUV” and “Without SUV.” For the “With SUV” group, the model was first trained using only SUV images and then the passes (starting from pass 13, the last pass, to pass 9) were added to the training of the model (one pass each time). For this group, 6 models were developed with input data consisting of SUV, SUV plus pass 13, SUV plus passes 13 and 12, SUV plus passes 13 to 11, SUV plus passes 13 to 10, and SUV plus passes 13 to 9. For the “Without SUV” group, the same trend was followed, but without using the SUV images (5 models were developed with input data of passes 13 to 9). For model performance evaluation, the mean absolute error (MAE), mean error (ME), mean relative absolute error (MRAE%), relative error (RE%), mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the predicted Ki-Patlak images by the two groups and the reference Ki-Patlak images generated through Patlak analysis using the whole acquired data sets. For specific evaluation of the method, regions of interest (ROIs) were drawn on representative organs, including the lung, liver, brain, and heart and around the identified malignant lesions. Results The MRAE%, RE%, PSNR, and SSIM indices across all patients were estimated as 7.45 ± 0.94%, 4.54 ± 2.93%, 46.89 ± 2.93, and 1.00 ± 6.7 × 10⁻⁷, respectively, for models predicted using SUV plus passes 13 to 9 as input. The predicted parameters using passes 13 to 11 as input exhibited almost similar results compared to the predicted models using SUV plus passes 13 to 9 as input. Yet, the bias was continuously reduced by adding passes until pass 11, after which the magnitude of error reduction was negligible. Hence, the predicted model with SUV plus passes 13 to 9 had the lowest quantification bias. Lesions invisible in one or both of SUV and Ki-Patlak images appeared similarly through visual inspection in the predicted images with tolerable bias. Conclusion This study concluded the feasibility of a direct deep learning-based approach to estimate Ki-Patlak parametric maps without requiring the input function and with fewer passes. This would lead to shorter acquisition times for WB dynamic imaging with acceptable bias and comparable lesion detectability performance. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05867-w.
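The reference quantity the network learns to predict is the Patlak slope; for one tissue curve it is the least-squares slope of C_t(t)/C_p(t) against the normalized integral of the plasma input, as in this self-contained sketch with a toy input function:

```python
# Reference Patlak analysis for a single tissue time-activity curve.
import numpy as np

def patlak_ki(ct: np.ndarray, cp: np.ndarray, t: np.ndarray) -> float:
    """Slope of ct/cp versus cumulative-integral(cp)/cp (trapezoidal integral)."""
    cum_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    slope, _intercept = np.polyfit(cum_cp / cp, ct / cp, 1)
    return float(slope)

t = np.linspace(1, 60, 13)            # minutes; 13 frames/passes as in the study
cp = 10 * np.exp(-0.05 * t) + 1.0     # toy plasma input function
cum = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.03 * cum + 0.2 * cp            # toy tissue curve with true Ki = 0.03
print(patlak_ki(ct, cp, t))           # recovers ~0.03
```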
Affiliation(s)
- Neda Zaker
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Kamal Haddad
- School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Reza Faghihi
- School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
48
Validation of deep learning-based nonspecific estimates for amyloid burden quantification with longitudinal data. Phys Med 2022; 99:85-93. [PMID: 35665624 DOI: 10.1016/j.ejmp.2022.05.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 05/26/2022] [Accepted: 05/27/2022] [Indexed: 11/20/2022] Open
Abstract
PURPOSE To validate our previously proposed method of quantifying amyloid-beta (Aβ) load using nonspecific (NS) estimates generated with convolutional neural networks (CNNs), using [18F]Florbetapir scans from longitudinal and multicenter ADNI data. METHODS 188 paired MR (T1-weighted and T2-weighted) and PET images were downloaded from the ADNI3 dataset, of which 49 subjects had scans at 2 time points. 40 Aβ- subjects with low specific uptake were selected for training. Multimodal ScaleNet (SN) and monomodal HighRes3DNet (HRN, using either T1-weighted or T2-weighted MR images as input) were trained to map structural MR to NS-PET images. The optimized SN and HRN networks were used to estimate the NS for all scans, which was then subtracted from the SUVr images to determine the specific amyloid load (SAβL) images. The association of SAβL with various cognitive and functional test scores was evaluated using Spearman analysis, as were the differences in SAβL with cognitive test scores for the 49 subjects with 2 time-point scans, together with a sensitivity analysis. RESULTS SAβL derived from both SN and HRN showed higher association with memory-related cognitive test scores than SUVr. However, for longitudinal scans, only SAβL estimated from multimodal SN consistently performed better than SUVr for all memory-related cognitive test scores. CONCLUSIONS Our proposed method of quantifying Aβ load using NS estimated from CNNs correlated better than SUVr with cognitive decline for both static and longitudinal data and was able to estimate the NS component of [18F]Florbetapir. We suggest employing multimodal networks with both T1-weighted and T2-weighted MR images for better NS estimation.
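The quantification step itself is a voxelwise subtraction followed by rank correlation; here is a hedged sketch with synthetic arrays (shapes and the score model are placeholders, not ADNI data):

```python
# SAβL = SUVr - CNN-estimated nonspecific (NS) image, then Spearman analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
suvr = rng.normal(1.3, 0.2, size=(50, 16, 16, 16))  # 50 subjects, toy volumes
ns = rng.normal(1.0, 0.1, size=(50, 16, 16, 16))    # CNN NS estimates (placeholder)

sabl = suvr - ns                                    # specific amyloid load images
global_sabl = sabl.mean(axis=(1, 2, 3))
memory_score = -2.0 * global_sabl + rng.normal(0, 0.1, 50)  # synthetic scores
rho, p = spearmanr(global_sabl, memory_score)
```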
49
Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106752. [PMID: 35338887 DOI: 10.1016/j.cmpb.2022.106752] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 02/16/2022] [Accepted: 03/11/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as priors. This method consists of three subnetworks. The first subnetwork uses an improved principal curve-based model to obtain data sequences consisting of the seed points and their corresponding projection indices. The second subnetwork uses an improved differential evolution-based artificial neural network for training to decrease the model error. The third subnetwork uses the parameters of the artificial neural network to provide a smooth mathematical description of the prostate contour. The performance of the H-ProSeg method was assessed in 55 brachytherapy patients using Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC) values. RESULTS The H-ProSeg method achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Under the influence of Gaussian noise (standard deviation of the Gaussian function, σ = 50), the DSC, Ω, and ACC values remained as high as 93.3%, 91.9%, and 93%, respectively. As σ increased from 10 to 50, the DSC, Ω, and ACC values fluctuated by a maximum of approximately 2.5%, demonstrating the excellent robustness of our method. CONCLUSIONS Here, we present a hybrid method for accurate and robust prostate ultrasound image segmentation. The H-ProSeg method achieved superior performance compared with current state-of-the-art techniques. Knowledge of the precise boundaries of the prostate is crucial for the conservation of risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
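A loose reconstruction of the second and third subnetworks' idea, offered only as a sketch under stated assumptions (tiny network, toy seed points, plain mean-squared loss): a small ANN maps the projection index along the principal curve to a smooth contour point, and differential evolution searches the ANN weights instead of backpropagation.

```python
# Differential evolution fitting a tiny ANN that parameterizes a smooth contour.
import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0, 2 * np.pi, 30)              # projection indices of seeds
seeds = np.c_[np.cos(theta), 0.7 * np.sin(theta)]  # toy "radiologist" seed points
H = 6                                              # hidden units (assumption)

def ann(t, w):
    w1, b1 = w[:H].reshape(1, H), w[H:2 * H]
    w2, b2 = w[2 * H:4 * H].reshape(H, 2), w[4 * H:4 * H + 2]
    return np.tanh(t[:, None] @ w1 + b1) @ w2 + b2  # smooth (x, y) points

def loss(w):
    return float(np.mean((ann(theta, w) - seeds) ** 2))

res = differential_evolution(loss, bounds=[(-3, 3)] * (4 * H + 2), seed=0, maxiter=100)
contour = ann(np.linspace(0, 2 * np.pi, 200), res.x)  # dense smooth boundary
```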
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
- Jing Qin
- Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
50
Thibeau-Sutre E, Díaz M, Hassanaly R, Routier A, Dormont D, Colliot O, Burgos N. ClinicaDL: An open-source deep learning software for reproducible neuroimaging processing. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106818. [PMID: 35483271 DOI: 10.1016/j.cmpb.2022.106818] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 02/14/2022] [Accepted: 04/14/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE As deep learning faces a reproducibility crisis and studies on deep learning applied to neuroimaging are contaminated by methodological flaws, there is an urgent need to provide a safe environment for deep learning users to help them avoid common pitfalls that will bias and discredit their results. Several tools have been proposed to help deep learning users design their framework for neuroimaging data sets. SOFTWARE OVERVIEW We present here ClinicaDL, one of these software tools. ClinicaDL interacts with BIDS, a standard format in the neuroimaging field, and its derivatives, so it can be used with a large variety of data sets. Moreover, it checks the absence of data leakage when inferring the results of new data with trained networks, and saves all necessary information to guarantee the reproducibility of results. The combination of ClinicaDL and its companion project Clinica allows performing an end-to-end neuroimaging analysis, from the download of raw data sets to the interpretation of trained networks, including neuroimaging preprocessing, quality check, label definition, architecture search, and network training and evaluation. CONCLUSIONS We implemented ClinicaDL to bring answers to three common issues encountered by deep learning users who are not always familiar with neuroimaging data: (1) the format and preprocessing of neuroimaging data sets, (2) the contamination of the evaluation procedure by data leakage and (3) a lack of reproducibility. We hope that its use by researchers will allow producing more reliable and thus valuable scientific studies in our field.
Affiliation(s)
- Elina Thibeau-Sutre
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France
- Mauricio Díaz
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France
- Ravi Hassanaly
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France
- Alexandre Routier
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France
- Didier Dormont
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, DMU DIAMENT, Paris, F-75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, F-75013, France.