1. Rajendran P, Chen Y, Qiu L, Niedermayr T, Liu W, Buyyounouski M, Bagshaw H, Han B, Yang Y, Kovalchuk N, Gu X, Hancock S, Xing L, Dai X. Auto-delineation of treatment target volume for radiation therapy using large language model-aided multimodal learning. Int J Radiat Oncol Biol Phys 2024:S0360-3016(24)02971-7. PMID: 39117164. DOI: 10.1016/j.ijrobp.2024.07.2149.
Abstract
PURPOSE: Artificial intelligence (AI)-aided methods have made significant progress in the auto-delineation of normal tissues. However, these approaches struggle with the auto-contouring of radiotherapy target volumes. Our goal is to model target volume delineation as a clinical decision-making problem, resolved by leveraging large language model-aided multimodal learning approaches.
METHODS AND MATERIALS: A vision-language model, termed Medformer, was developed, employing a hierarchical vision transformer as its backbone and incorporating large language models to extract text-rich features. The contextually embedded linguistic features are integrated into visual features for language-aware visual encoding through a visual-language attention module. Metrics including Dice similarity coefficient (DSC), intersection over union (IOU), and 95th percentile Hausdorff distance (HD95) were used to quantitatively evaluate model performance. The evaluation was conducted on an in-house prostate cancer dataset and a public oropharyngeal carcinoma (OPC) dataset, totaling 668 subjects.
RESULTS: Medformer achieved a DSC of 0.81 ± 0.10 versus 0.72 ± 0.10, an IOU of 0.73 ± 0.12 versus 0.65 ± 0.09, and an HD95 of 9.86 ± 9.77 mm versus 19.13 ± 12.96 mm for delineation of the gross tumor volume (GTV) on the prostate cancer dataset. Similarly, on the OPC dataset, it achieved a DSC of 0.77 ± 0.11 versus 0.72 ± 0.09, an IOU of 0.70 ± 0.09 versus 0.65 ± 0.07, and an HD95 of 7.52 ± 4.8 mm versus 13.63 ± 7.13 mm, representing significant improvements (p < 0.05). For delineating the clinical target volume (CTV), Medformer achieved a DSC of 0.91 ± 0.04, an IOU of 0.85 ± 0.05, and an HD95 of 2.98 ± 1.60 mm, comparable to other state-of-the-art algorithms.
CONCLUSIONS: Auto-delineation of the treatment target based on multimodal learning outperforms conventional approaches that rely purely on visual features. Our method could be adopted into routine practice to rapidly contour the CTV/GTV.
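The DSC, IOU, and HD95 figures reported above are standard segmentation measures; as a toy illustration (not the authors' implementation), they can be computed on flattened binary masks and boundary point sets as follows:

```python
def dice(a, b):
    """Dice similarity coefficient of two flattened binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def iou(a, b):
    """Intersection over union (Jaccard index) of two flattened binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

def hd95(pts_a, pts_b):
    """95th-percentile (nearest-rank) symmetric Hausdorff distance, 1D point sets."""
    d = [min(abs(p - q) for q in pts_b) for p in pts_a]
    d += [min(abs(q - p) for p in pts_a) for q in pts_b]
    d.sort()
    return d[min(len(d) - 1, int(0.95 * (len(d) - 1)))]

pred, truth = [1, 1, 0, 0], [1, 0, 0, 0]
print(dice(pred, truth))  # 2*1 / (2+1) ≈ 0.667
print(iou(pred, truth))   # 1 / 2 = 0.5
```

Real evaluations compute HD95 over 3D boundary voxel coordinates with physical spacing in mm; the 1D form above only shows the percentile logic.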
Affiliation(s)
- Yizheng Chen, Liang Qiu, Thomas Niedermayr, Wu Liu, Mark Buyyounouski, Hilary Bagshaw, Bin Han, Yong Yang, Nataliya Kovalchuk, Xuejun Gu, Steven Hancock, Lei Xing, Xianjin Dai: Department of Radiation Oncology, Stanford University, Stanford, California
2. Priya J, Raja SKS, Kiruthika SU. State-of-art technologies, challenges, and emerging trends of computer vision in dental images. Comput Biol Med 2024; 178:108800. PMID: 38917534. DOI: 10.1016/j.compbiomed.2024.108800.
Abstract
Computer vision falls under the broad umbrella of artificial intelligence that mimics human vision, and it plays a vital role in dental imaging. Dental practitioners visualize and interpret the teeth and their surrounding structures and detect abnormalities by manually examining various dental imaging modalities. Because medical data are complex and cognitively difficult to comprehend, human error makes correct diagnosis difficult. Automated diagnosis may help alleviate delays, hasten practitioners' interpretation of positive cases, and lighten their workload. This survey briefly describes the medical imaging modalities employed in dentistry, such as X-rays, CT scans, and color images. Dentists employ dental imaging as a diagnostic tool in several specialties, including orthodontics, endodontics, and periodontics. In the discipline of dentistry, computer vision has progressed from classic image processing to machine learning with mathematical approaches and robust deep learning techniques. Dental radiograph analysis now uses conventional image processing techniques on their own, in conjunction with intelligent machine learning algorithms, and within sophisticated deep learning architectures. This study provides a detailed summary of several tasks, including anatomical segmentation and the identification and categorization of different dental anomalies, together with their shortfalls and future perspectives in this field.
Affiliation(s)
- J Priya: ECE Department, Easwari Engineering College, Ramapuram, Chennai, Tamilnadu, India
- S Kanaga Suba Raja: CSE Department, SRM Institute of Science and Technology, Tiruchirappalli, Tamilnadu, India
- S Usha Kiruthika: CSE Department, National Institute of Technology, Tiruchirappalli, Tamilnadu, India
3. Tuccio G, Afrakhteh S, Iacca G, Demi L. Time Efficient Ultrasound Localization Microscopy Based on a Novel Radial Basis Function 2D Interpolation. IEEE Trans Med Imaging 2024; 43:1690-1701. PMID: 38145542. DOI: 10.1109/tmi.2023.3347261.
Abstract
Ultrasound localization microscopy (ULM) allows for the generation of super-resolved (SR) images of the vasculature by precisely localizing intravenously injected microbubbles. Although SR images may be useful for diagnosing and treating patients, their use in the clinical context is limited by the need for prolonged acquisition times and high frame rates. The primary goal of our study is to relax the requirement of high frame rates to obtain SR images. To this end, we propose a new time-efficient ULM (TEULM) pipeline built on a cutting-edge interpolation method. More specifically, we suggest employing radial basis functions (RBFs) as interpolators to estimate the missing values in the 2-dimensional (2D) spatio-temporal structures. To evaluate this strategy, we first mimic data acquisition at a reduced frame rate by applying a down-sampling factor (DS = 2, 4, 8, and 10) to high frame rate ULM data. Then, we up-sample the data to the original frame rate using the suggested interpolation to reconstruct the missing frames. Finally, using both the original high frame rate data and the interpolated data, we reconstruct SR images using the ULM framework steps. We evaluate the proposed TEULM on four in vivo datasets: a rat brain (dataset A), a rat kidney (dataset B), a rat tumor (dataset C), and a rat brain bolus (dataset D), interpolating at the in-phase and quadrature (IQ) level. Results demonstrate the effectiveness of TEULM in recovering vascular structures, even at a DS rate of 10 (corresponding to a frame rate of sub-100 Hz). In conclusion, the proposed technique successfully reconstructs accurate SR images while requiring frame rates one order of magnitude lower than standard ULM.
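The core interpolation step can be sketched with a small Gaussian RBF interpolator. This is a simplified 1D temporal stand-in for the paper's 2D spatio-temporal scheme; the Gaussian kernel choice and the `eps` shape parameter are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def rbf_interpolate(t_known, y_known, t_query, eps=1.0):
    """Gaussian RBF interpolation: fit weights on kept frames, evaluate anywhere."""
    t_known = np.asarray(t_known, dtype=float)
    y_known = np.asarray(y_known, dtype=float)
    t_query = np.asarray(t_query, dtype=float)
    # Kernel matrix between kept sample times, phi(r) = exp(-(eps * r)**2)
    A = np.exp(-(eps * (t_known[:, None] - t_known[None, :])) ** 2)
    w = np.linalg.solve(A, y_known)  # interpolation weights
    B = np.exp(-(eps * (t_query[:, None] - t_known[None, :])) ** 2)
    return B @ w                     # estimated values at the missing frames

# Frames kept after down-sampling, then recovery at an arbitrary time
kept_t = [0.0, 2.0, 4.0, 6.0]
kept_y = [0.0, 1.0, 0.0, -1.0]
print(rbf_interpolate(kept_t, kept_y, [2.0]))  # reproduces the known frame, ~[1.0]
```

Exact interpolation reproduces the kept frames and smoothly fills the dropped ones; the paper applies the same idea to the IQ data before the ULM localization steps.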
4. Alavi H, Seifi M, Rouhollahei M, Rafati M, Arabfard M. Development of Local Software for Automatic Measurement of Geometric Parameters in the Proximal Femur Using a Combination of a Deep Learning Approach and an Active Shape Model on X-ray Images. J Imaging Inform Med 2024; 37:633-652. PMID: 38343246. PMCID: PMC11031524. DOI: 10.1007/s10278-023-00953-3.
Abstract
Proximal femur geometry is an important risk factor for diagnosing and predicting hip and femur injuries. Hence, the development of an automated approach for measuring these parameters could help physicians with the early identification of hip and femur ailments. This paper presents a technique that combines the active shape model (ASM) and deep learning methodologies. First, the femur boundary is extracted by a deep learning neural network. Then, the femur's anatomical landmarks are fitted to the extracted border using the ASM method. Finally, the geometric parameters of the proximal femur, including femur neck axis length (FNAL), femur head diameter (FHD), femur neck width (FNW), shaft width (SW), neck shaft angle (NSA), and alpha angle (AA), are calculated by measuring the distances and angles between the landmarks. The dataset consisted of 428 hip radiographic images, from 208 men and 220 women, which were split into training and testing sets. The deep learning network and ASM were then trained on the training set. On the testing set, the automatic measurement of the FNAL, FHD, FNW, SW, NSA, and AA parameters resulted in mean errors of 1.19%, 1.46%, 2.28%, 2.43%, 1.95%, and 4.53%, respectively.
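Once the landmarks are fitted, each geometric parameter reduces to a distance or an angle between landmark coordinates; a minimal sketch with hypothetical 2D points (not the authors' software):

```python
import math

def distance(p, q):
    """Euclidean distance between two 2D landmarks, e.g. for FHD or FNW."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_deg(center, a, b):
    """Angle at `center` between rays to `a` and `b`, e.g. for the neck-shaft angle."""
    v1 = (a[0] - center[0], a[1] - center[1])
    v2 = (b[0] - center[0], b[1] - center[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Toy landmarks: the two axes meet at an intersection point
print(distance((0, 0), (3, 4)))           # 5.0
print(angle_deg((0, 0), (1, 0), (0, 1)))  # 90.0
```

In practice the points come from the ASM fit in pixel coordinates and are converted to physical units via the radiograph's calibration.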
Affiliation(s)
- Hamid Alavi, Mehdi Seifi: Department of Radiology, Health Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Mahboubeh Rouhollahei: School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran; Chemical Injuries Research Center, Systems Biology and Poisonings Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Mehravar Rafati: Department of Medical Physics and Radiology, Faculty of Paramedicine, Kashan University of Medical Sciences, Kashan, Iran
- Masoud Arabfard: Chemical Injuries Research Center, Systems Biology and Poisonings Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
5. Kim HS, Kim H, Kim S, Cha Y, Kim JT, Kim JW, Ha YC, Yoo JI. Precise individual muscle segmentation in whole thigh CT scans for sarcopenia assessment using U-net transformer. Sci Rep 2024; 14:3301. PMID: 38331977. PMCID: PMC10853213. DOI: 10.1038/s41598-024-53707-8.
Abstract
The study aims to develop a deep learning-based automatic segmentation approach using the UNETR (U-Net Transformer) architecture to quantify the volume of individual thigh muscles (27 muscles in 5 groups) for sarcopenia assessment. By automating the segmentation process, this approach improves the efficiency and accuracy of muscle volume calculation, facilitating a comprehensive understanding of muscle composition and its relationship to sarcopenia. The study utilized a dataset of 72 whole thigh CT scans from hip fracture patients, annotated by two radiologists. The UNETR model was trained to perform precise voxel-level segmentation, and metrics such as Dice score, average symmetric surface distance, volume correlation, relative absolute volume difference, and Hausdorff distance were employed to evaluate the model's performance. Additionally, the correlation between sarcopenia and individual thigh muscle volumes was examined. The proposed model demonstrated superior segmentation performance compared to the baseline model, achieving higher Dice scores (DC = 0.84) and lower average symmetric surface distances (ASSD = 1.4191 ± 0.91). Individual thigh muscle volumes were negatively correlated with sarcopenia in the male group, and the correlation analysis of grouped thigh muscles likewise showed negative associations with sarcopenia in the male participants. This study presents a deep learning-based automatic segmentation approach for quantifying individual thigh muscle volume in sarcopenia assessment. The results highlight the associations between sarcopenia and specific individual muscles as well as grouped thigh muscle regions, particularly in males. The proposed method improves the efficiency and accuracy of muscle volume calculation, contributing to a comprehensive evaluation of sarcopenia. This research enhances our understanding of muscle composition and performance, providing valuable insights for effective interventions in sarcopenia management.
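Given a voxel-level mask, muscle volume follows directly from the foreground voxel count and the scan's voxel spacing; a schematic calculation with illustrative values (not taken from the paper):

```python
def volume_cm3(mask, spacing_mm):
    """Volume of a binary 3D mask given voxel spacing (dx, dy, dz) in mm."""
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return n_voxels * voxel_mm3 / 1000.0  # mm^3 -> cm^3

# A 2x2x2 block of foreground voxels at 1 x 1 x 5 mm spacing
mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
print(volume_cm3(mask, (1.0, 1.0, 5.0)))  # 8 voxels * 5 mm^3 = 0.04 cm^3
```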
Affiliation(s)
- Hyeon Su Kim, Hyunbin Kim, Shinjune Kim: Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Yonghan Cha: Department of Orthopaedic Surgery, Daejeon Eulji Medical Center, Daejeon, South Korea
- Jung-Taek Kim: Department of Orthopedic Surgery, Ajou University School of Medicine, Suwon, South Korea
- Jin-Woo Kim: Department of Orthopaedic Surgery, Nowon Eulji Medical Center, Seoul, South Korea
- Yong-Chan Ha: Department of Orthopaedic Surgery, Seoul Bumin Hospital, Seoul, South Korea
- Jun-Il Yoo: Department of Orthopedic Surgery, School of Medicine, Inha University Hospital, Incheon, South Korea
6. Petrov Y, Malik B, Fredrickson J, Jemaa S, Carano RAD. Deep Ensembles Are Robust to Occasional Catastrophic Failures of Individual DNNs for Organs Segmentations in CT Images. J Digit Imaging 2023; 36:2060-2074. PMID: 37291384. PMCID: PMC10502003. DOI: 10.1007/s10278-023-00857-2.
Abstract
Deep neural networks (DNNs) have recently shown remarkable performance in various computer vision tasks, including the classification and segmentation of medical images. Deep ensembles (the aggregated prediction of multiple DNNs) have been shown to improve a DNN's performance in various classification tasks. Here we explore how deep ensembles perform in the image segmentation task, in particular organ segmentation in CT (computed tomography) images. Ensembles of V-Nets were trained to segment multiple organs using several in-house and publicly available clinical studies. The ensemble segmentations were tested on images from a different set of studies, and the effects of ensemble size and other ensemble parameters were explored for various organs. Compared to single models, deep ensembles significantly improved the average segmentation accuracy, especially for those organs where the accuracy was lower. More importantly, deep ensembles strongly reduced the occasional "catastrophic" segmentation failures characteristic of single models, as well as the image-to-image variability of segmentation accuracy. To quantify this, we defined "high-risk images": images for which at least one model produced an outlier metric (in the lowest 5th percentile). These images comprised about 12% of the test images across all organs. Ensembles performed without outliers for 68%-100% of the "high-risk images", depending on the performance metric used.
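The aggregation step can be illustrated by averaging per-voxel foreground probabilities across ensemble members before thresholding; a generic sketch (the paper's actual pipeline uses V-Nets on 3D CT volumes):

```python
def ensemble_segmentation(prob_maps, threshold=0.5):
    """Average flattened per-voxel probabilities from several models, then threshold."""
    n_models = len(prob_maps)
    mean_probs = [sum(p) / n_models for p in zip(*prob_maps)]
    return [1 if p >= threshold else 0 for p in mean_probs]

# Three models vote on two voxels; one member's miss on voxel 0 is averaged out
maps = [
    [0.9, 0.2],
    [0.8, 0.3],
    [0.1, 0.4],  # a "catastrophic" failure of a single model on voxel 0
]
print(ensemble_segmentation(maps))  # [1, 0]: mean probabilities are 0.6 and 0.3
```

This is why an outlier prediction from one member rarely survives in the ensemble output: it is diluted by the majority before the threshold is applied.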
Affiliation(s)
- Yury Petrov, Bilal Malik, Skander Jemaa: Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
7. Mai DVC, Drami I, Pring ET, Gould LE, Lung P, Popuri K, Chow V, Beg MF, Athanasiou T, Jenkins JT. A systematic review of automated segmentation of 3D computed-tomography scans for volumetric body composition analysis. J Cachexia Sarcopenia Muscle 2023; 14:1973-1986. PMID: 37562946. PMCID: PMC10570079. DOI: 10.1002/jcsm.13310.
Abstract
Automated computed tomography (CT) scan segmentation (labelling of pixels according to tissue type) is now possible. This technique is being adapted to achieve three-dimensional (3D) segmentation of CT scans, as opposed to a single L3 slice alone. This systematic review evaluates the feasibility and accuracy of automated segmentation of 3D CT scans for volumetric body composition (BC) analysis, as well as current limitations and pitfalls that clinicians and researchers should be aware of. OVID Medline, Embase, and grey literature databases up to October 2021 were searched. Original studies investigating automated skeletal muscle, visceral adipose tissue, and subcutaneous adipose tissue segmentation from CT were included. Seven of the 92 studies met the inclusion criteria. Variation existed in the expertise and number of humans performing the ground-truth segmentations used to train the algorithms. There was heterogeneity in the patient characteristics, pathology, and CT phases upon which segmentation algorithms were developed. Reporting of anatomical CT coverage varied, with confusing terminology. Six studies covered volumetric regional slabs rather than the whole body. One study stated the use of whole-body CT, but it was not clear whether this truly meant head-to-fingertip-to-toe. Two studies used conventional computer algorithms. The other five used deep learning (DL), an artificial intelligence technique in which algorithms are organized similarly to neuronal pathways in the brain. Six of seven reported excellent segmentation performance (Dice similarity coefficients > 0.9 per tissue). Internal testing on unseen scans was performed for only four of seven algorithms, whilst only three were tested externally. Trained DL algorithms achieved full CT segmentation in 12 to 75 s, versus 25 min for non-DL techniques. DL enables opportunistic, rapid, and automated volumetric BC analysis of CT performed for clinical indications. However, most CT scans do not cover head-to-fingertip-to-toe; further research must validate the use of common CT regions to estimate true whole-body BC, with direct comparison to the single lumbar slice. Given the successes of DL, we expect progressively more algorithms to materialize in addition to the seven discussed in this paper. Researchers and clinicians in the field of BC must therefore be aware of the pitfalls. High Dice similarity coefficients do not indicate the degree to which BC tissues may be under- or overestimated, nor do they reflect algorithm precision. Consensus is needed to define accuracy and precision standards for ground-truth labelling. Creation of a large international, multicentre common CT dataset with BC ground-truth labels from multiple experts could be a robust solution.
Affiliation(s)
- Dinh Van Chi Mai: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; Department of Surgery and Cancer, Imperial College, London, UK
- Ioanna Drami: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; Department of Metabolism, Digestion and Reproduction, Imperial College, London, UK
- Edward T. Pring: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; Department of Surgery and Cancer, Imperial College, London, UK
- Laura E. Gould: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; School of Cancer Sciences, College of Medical, Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
- Phillip Lung: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; Department of Surgery and Cancer, Imperial College, London, UK
- Karteek Popuri: Department of Computer Science, Memorial University of Newfoundland, St John's, Canada
- Vincent Chow, Mirza F. Beg: School of Engineering Science, Simon Fraser University, Burnaby, Canada
- John T. Jenkins: Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK; Department of Surgery and Cancer, Imperial College, London, UK
8. Lin YC, Lin G, Pandey S, Yeh CH, Wang JJ, Lin CY, Ho TY, Ko SF, Ng SH. Fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer on MRI using deep learning. Eur Radiol 2023; 33:6548-6556. PMID: 37338554. PMCID: PMC10415433. DOI: 10.1007/s00330-023-09827-2.
Abstract
OBJECTIVES: To use convolutional neural networks for fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer (HPC) tumors on MRI.
METHODS: MR images were collected from 222 HPC patients, of whom 178 were used for training and 44 for testing. U-Net and DeepLab V3+ architectures were used for training the models. Model performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index, and average surface distance. The reliability of the radiomics parameters of the tumor extracted by the models was assessed using the intraclass correlation coefficient (ICC).
RESULTS: The tumor volumes predicted by the DeepLab V3+ and U-Net models were highly correlated with those delineated manually (p < 0.001). The DSC of the DeepLab V3+ model was significantly higher than that of the U-Net model (0.77 vs 0.75, p < 0.05), particularly for small tumor volumes of < 10 cm3 (0.74 vs 0.70, p < 0.001). For radiomics extraction of the first-order features, both models exhibited high agreement (ICC: 0.71-0.91) with manual delineation. The radiomics extracted by the DeepLab V3+ model had significantly higher ICCs than those extracted by the U-Net model for 7 of 19 first-order features and for 8 of 17 shape-based features (p < 0.05).
CONCLUSION: Both the DeepLab V3+ and U-Net models produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images, with DeepLab V3+ performing better than U-Net.
CLINICAL RELEVANCE STATEMENT: The deep learning model DeepLab V3+ exhibited promising performance in automated tumor segmentation and radiomics extraction for hypopharyngeal cancer on MRI. This approach holds great potential for enhancing the radiotherapy workflow and facilitating prediction of treatment outcomes.
KEY POINTS:
• DeepLab V3+ and U-Net models produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images.
• The DeepLab V3+ model was more accurate than U-Net in automated segmentation, especially on small tumors.
• DeepLab V3+ exhibited higher agreement than U-Net for about half of the first-order and shape-based radiomics features.
Affiliation(s)
- Yu-Chun Lin: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Gigin Lin: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Sumit Pandey, Chih-Hua Yeh, Shu-Hang Ng: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Jiun-Jie Wang: Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Chien-Yu Lin: Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, Taoyuan, Taiwan
- Tsung-Ying Ho: Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan
- Sheung-Fat Ko: Department of Radiology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan
9. Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. PMID: 37648728. PMCID: PMC10468506. DOI: 10.1038/s41598-023-41331-x.
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) on multi-institutional MRI data and compared its performance to the well-established convolutional neural network nnU-Net. TraBS and nnU-Net were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnU-Net was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnU-Net than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolution-based models like nnU-Net. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
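The average symmetric surface distance used here averages nearest-neighbour distances in both directions between the two segmentation surfaces; a 1D toy version of the definition (real implementations operate on 3D surface voxels with physical spacing):

```python
def assd(surface_a, surface_b):
    """Average symmetric surface distance between two point sets (1D toy version)."""
    d_ab = [min(abs(a - b) for b in surface_b) for a in surface_a]  # a -> nearest b
    d_ba = [min(abs(b - a) for a in surface_a) for b in surface_b]  # b -> nearest a
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))

print(assd([0.0, 2.0], [1.0]))  # (1 + 1 + 1) / 3 = 1.0
```

Unlike the Dice score, this metric is sensitive to boundary placement, which is why the two papers' rankings of models can differ between the overlap and distance measures.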
Affiliation(s)
- Gustav Müller-Franzes, Fritz Müller-Franzes, Luisa Huck, Vanessa Raaff, Eva Kemmer, Firas Khader, Soroosh Tayebi Arasteh, Teresa Lemainque, Sven Nebelung, Christiane Kuhl, Daniel Truhn: Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather: Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany; Department of Medicine III, University Hospital RWTH, Aachen, Germany
10. Kaba Ş, Haci H, Isin A, Ilhan A, Conkbayir C. The Application of Deep Learning for the Segmentation and Classification of Coronary Arteries. Diagnostics (Basel) 2023; 13:2274. PMID: 37443668. DOI: 10.3390/diagnostics13132274.
Abstract
In recent years, coronary artery disease (CAD) has become one of the leading causes of death around the world. Accurate stenosis detection in the coronary arteries is crucial for timely treatment. Cardiologists use visual estimation when reading coronary angiography images to diagnose stenosis. As a result, they face various challenges, including high workloads, long processing times, and human error. Computer-aided segmentation of the coronary arteries and classification of whether stenosis is present significantly reduce the workload of cardiologists and the human errors caused by manual processes. Moreover, deep learning techniques have been shown to aid medical experts in diagnosing diseases from biomedical imaging. Thus, this study proposes automatic segmentation of coronary arteries using U-Net, ResUNet-a, and UNet++ models, and classification using DenseNet201, EfficientNet-B0, MobileNet-v2, ResNet101, and Xception models. For segmentation, a comparative analysis of the three models showed that U-Net achieved the highest score, with a 0.8467 Dice score and a 0.7454 Jaccard index, in comparison with UNet++ and ResUNet-a. Evaluation of the classification models' performance showed that DenseNet201 performed better than the other pretrained models, with 0.9000 accuracy, 0.9833 specificity, 0.9556 positive predictive value (PPV), 0.7746 Cohen's kappa, and 0.9694 area under the curve (AUC).
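The classification figures quoted above all derive from confusion-matrix counts; a compact sketch with hypothetical counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, specificity, PPV, and Cohen's kappa from confusion-matrix counts."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive value (precision)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return accuracy, specificity, ppv, kappa

print(classification_metrics(tp=40, fp=10, tn=40, fn=10))
# accuracy, specificity, and PPV are each 0.8; kappa is about 0.6
```

Kappa is the most conservative of the four because it discounts the agreement a random classifier would achieve by chance.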
Affiliation(s)
- Şerife Kaba: Department of Biomedical Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Huseyin Haci: Department of Electrical-Electronic Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Ali Isin: Department of Biomedical Engineering, Cyprus International University, TRNC Mersin 10, Nicosia 99138, Turkey
- Ahmet Ilhan: Department of Computer Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Cenk Conkbayir: Department of Cardiology, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
Collapse
11
Ghazi N, Aarabi MH, Soltanian-Zadeh H. Deep Learning Methods for Identification of White Matter Fiber Tracts: Review of State-of-the-Art and Future Prospective. Neuroinformatics 2023; 21:517-548. [PMID: 37328715 DOI: 10.1007/s12021-023-09636-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 05/20/2023] [Indexed: 06/18/2023]
Abstract
Quantitative analysis of white matter fiber tracts from diffusion Magnetic Resonance Imaging (dMRI) data is of great significance in health and disease. For example, analysis of fiber tracts corresponding to anatomically meaningful fiber bundles is in high demand for pre-surgical and treatment planning, and surgical outcomes depend on accurate segmentation of the desired tracts. Currently, this process is mainly done through time-consuming manual identification performed by neuro-anatomical experts. However, there is broad interest in automating the pipeline so that it is fast, accurate, easy to apply in clinical settings, and free of intra-reader variability. Following the advancements in medical image analysis using deep learning techniques, there has been growing interest in applying these techniques to the task of tract identification as well. Recent reports on this application show that deep learning-based tract identification approaches outperform the existing state of the art. This paper presents a review of current tract identification approaches based on deep neural networks. First, we review recent deep learning methods for tract identification. Next, we compare them with respect to their performance, training process, and network properties. Finally, we close with a critical discussion of open challenges and possible directions for future work.
Affiliation(s)
- Nayereh Ghazi
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran
- Mohammad Hadi Aarabi
- Department of Neuroscience, University of Padova, Padova, Italy
- Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Hamid Soltanian-Zadeh
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran
- Medical Image Analysis Laboratory, Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, 48202, USA
12
El-Melegy MT, Kamel RM, Abou El-Ghar M, Alghamdi NS, El-Baz A. Kidney Segmentation from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Integrating Deep Convolutional Neural Networks and Level Set Methods. Bioengineering (Basel) 2023; 10:755. [PMID: 37508782 PMCID: PMC10375962 DOI: 10.3390/bioengineering10070755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/16/2023] [Revised: 06/20/2023] [Accepted: 06/21/2023] [Indexed: 07/30/2023]
Abstract
The dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) technique has taken on a significant and increasing role in diagnostic procedures and treatments for patients who suffer from chronic kidney disease. Careful segmentation of the kidneys from DCE-MRI scans is an essential early step towards the evaluation of kidney function. Recently, deep convolutional neural networks have grown in popularity for medical image segmentation. To this end, in this paper we propose a new, fully automated two-phase approach that integrates convolutional neural networks and level set methods to delineate kidneys in DCE-MRI scans. We first develop two convolutional neural networks based on the U-Net structure (UNT) to predict a pixel-wise kidney probability map for DCE-MRI scans. Then, to improve segmentation performance, the kidney probability map predicted by the deep model is combined with shape prior information in a level set method that guides the contour evolution towards the target kidney. Real DCE-MRI datasets of 45 subjects are used for training, validating, and testing the proposed approach. The evaluation results demonstrate the high performance of the two-phase approach, which achieves a Dice similarity coefficient of 0.95 ± 0.02, an intersection over union of 0.91 ± 0.03, and a 95th percentile Hausdorff distance of 1.54 ± 1.6. Our intensive experiments confirm the potential and effectiveness of this approach over both UNT models and numerous recent level set-based methods.
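The two-phase idea (a CNN probability map followed by contour refinement) can be sketched as follows. Morphological opening and closing are used here only as a crude stand-in for the paper's actual level-set evolution with shape priors, and the function name is hypothetical; SciPy is assumed available:

```python
import numpy as np
from scipy import ndimage

def refine_probability_map(prob_map, threshold=0.5, iterations=2):
    """Crude stand-in for the second phase of a CNN + level-set pipeline:
    threshold a pixel-wise kidney probability map to initialize a region,
    then smooth its contour with morphological opening and closing."""
    mask = prob_map >= threshold  # initial region from the CNN output
    structure = ndimage.generate_binary_structure(2, 1)
    # Opening removes small false-positive islands; closing fills pinholes.
    mask = ndimage.binary_opening(mask, structure, iterations=iterations)
    mask = ndimage.binary_closing(mask, structure, iterations=iterations)
    return mask
```

A true level-set refinement would instead evolve a signed distance function under image- and shape-driven forces, but the pipeline shape (probability map in, cleaned binary mask out) is the same.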
Affiliation(s)
- Moumen T El-Melegy
- Electrical Engineering Department, Assiut University, Assiut 71515, Egypt
- Rasha M Kamel
- Computer Science Department, Assiut University, Assiut 71515, Egypt
- Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
13
Yousef R, Khan S, Gupta G, Siddiqui T, Albahlal BM, Alajlan SA, Haq MA. U-Net-Based Models towards Optimal MR Brain Image Segmentation. Diagnostics (Basel) 2023; 13:1624. [PMID: 37175015 PMCID: PMC10178263 DOI: 10.3390/diagnostics13091624] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 03/28/2023] [Revised: 04/14/2023] [Accepted: 04/25/2023] [Indexed: 05/15/2023]
Abstract
Brain tumor segmentation from MRIs has always been a challenging task for radiologists; therefore, an automatic and generalized system to address this task is needed. Among the deep learning techniques used in medical imaging, U-Net-based variants are the most widely used models in the literature for segmenting medical images across different modalities. The goal of this paper is therefore to examine the numerous advancements and innovations in the U-Net architecture, as well as recent trends, with the aim of highlighting the ongoing potential of U-Net to improve the performance of brain tumor segmentation. Furthermore, we provide a quantitative comparison of different U-Net architectures to highlight the performance and the evolution of this network from an optimization perspective. In addition, we experimented with four U-Net architectures (3D U-Net, Attention U-Net, R2 Attention U-Net, and modified 3D U-Net) on the BraTS 2020 dataset for brain tumor segmentation to provide a better overview of this architecture's performance in terms of Dice score and 95th percentile Hausdorff distance. Finally, we analyze the limitations and challenges of medical image analysis and provide a critical discussion of the importance of developing new architectures from an optimization standpoint.
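One of the compared variants, Attention U-Net, re-weights skip-connection features with an additive attention gate before they are concatenated into the decoder. A simplified NumPy sketch, with dense matrices standing in for the 1x1 convolutions and all names illustrative:

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Simplified additive attention gate of the kind used in Attention U-Net.

    x:        skip-connection features, shape (C, P) for P pixels
    g:        gating signal from the coarser decoder level, shape (C, P)
    w_x, w_g: (F, C) projection matrices (stand-ins for 1x1 convolutions)
    psi:      (1, F) projection producing one attention coefficient per pixel
    """
    q = np.maximum(w_x @ x + w_g @ g, 0.0)    # ReLU(W_x x + W_g g)
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))  # sigmoid coefficients in (0, 1)
    return alpha * x                          # re-weighted skip features
```

Because the coefficients lie in (0, 1), the gate can only attenuate skip features, letting the decoder suppress irrelevant background regions.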
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
- Gaurav Gupta
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Tamanna Siddiqui
- Department of Computer Science, Aligarh Muslim University, Aligarh 202001, India
- Bader M Albahlal
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Saad Abdullah Alajlan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mohd Anul Haq
- Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
14
Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766 DOI: 10.1088/1361-6560/acba74] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/05/2022] [Accepted: 02/08/2023] [Indexed: 02/10/2023]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray, and ultrasound images; fast MRI or low-dose CT imaging; and CT- or MRI-only radiotherapy planning. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code, and only one study published its pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
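Unpaired I2I methods of the kind reviewed here (CycleGAN being the canonical example) typically rely on a cycle-consistency loss: translating to the other domain and back should recover the input. A toy NumPy sketch with the two generators abstracted as plain callables (names are illustrative):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency loss used by unpaired I2I translation methods.

    G maps domain X -> Y and F maps Y -> X; without paired data, the
    constraint F(G(x)) ~= x and G(F(y)) ~= y regularizes both mappings.
    """
    loss_x = np.mean(np.abs(F(G(x)) - x))  # forward cycle X -> Y -> X
    loss_y = np.mean(np.abs(G(F(y)) - y))  # backward cycle Y -> X -> Y
    return loss_x + loss_y
```

In a full model this term is added to the adversarial losses of both generators; here it simply quantifies how far a pair of mappings is from being mutually inverse.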
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
15
El-Melegy M, Kamel R, Abou El-Ghar M, Alghamdi NS, El-Baz A. Variational Approach for Joint Kidney Segmentation and Registration from DCE-MRI Using Fuzzy Clustering with Shape Priors. Biomedicines 2022; 11:6. [PMID: 36672514 PMCID: PMC9856100 DOI: 10.3390/biomedicines11010006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/18/2022] [Revised: 12/10/2022] [Accepted: 12/12/2022] [Indexed: 12/24/2022]
Abstract
The dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) technique has great potential in the diagnosis, therapy, and follow-up of patients with chronic kidney disease (CKD). Towards that end, precise kidney segmentation from DCE-MRI data becomes a prerequisite processing step. Exploiting the useful information about the kidney's shape in this step requires a registration operation beforehand to relate the shape model coordinates to those of the image to be segmented. Imprecise alignment of the shape model induces errors in the segmentation results. In this paper, we propose a new variational formulation to jointly segment and register DCE-MRI kidney images based on fuzzy c-means clustering embedded within a level-set (LSet) method. The image pixels' fuzzy memberships and the spatial registration parameters are simultaneously updated in each evolution step to direct the LSet contour toward the target kidney. Results on real medical datasets of 45 subjects demonstrate the superior performance of the proposed approach, which reports a Dice similarity coefficient of 0.94 ± 0.03, an intersection-over-union of 0.89 ± 0.05, and a 95th percentile Hausdorff distance of 2.2 ± 2.3. Extensive experiments show that our approach outperforms several state-of-the-art LSet-based methods as well as two UNet-based deep neural models trained for the same task in terms of accuracy and consistency.
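The fuzzy c-means half of such an alternating optimization updates pixel memberships from the current cluster centers. A minimal NumPy sketch of the standard FCM membership update (not the paper's full variational formulation, which couples this with level-set and registration terms; the function name is illustrative):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership update.

    X: (N,) data points (e.g. pixel intensities), centers: (C,) cluster
    centers, m: fuzzifier (> 1). Returns a (C, N) membership matrix whose
    columns sum to 1: u_ck = 1 / sum_j (d_ck / d_jk)^(2/(m-1)).
    """
    d = np.abs(X[None, :] - centers[:, None]) + 1e-12  # (C, N) distances
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))  # (C, C, N)
    return 1.0 / ratio.sum(axis=1)  # (C, N)
```

In the joint formulation described above, these memberships would be recomputed at every contour-evolution step, alongside the registration parameters.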
Affiliation(s)
- Moumen El-Melegy
- Electrical Engineering Department, Assiut University, Assiut 71515, Egypt
- Rasha Kamel
- Computer Science Department, Assiut University, Assiut 71515, Egypt
- Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Norah S. Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Correspondence:
16
Automated distinction of neoplastic from healthy liver parenchyma based on machine learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07599-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/15/2022]
17
Müller D, Soto-Rey I, Kramer F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res Notes 2022; 15:210. [PMID: 35725483 PMCID: PMC9208116 DOI: 10.1186/s13104-022-06096-y] [Citation(s) in RCA: 55] [Impact Index Per Article: 27.5] [Received: 02/12/2022] [Accepted: 06/07/2022] [Indexed: 11/10/2022]
Abstract
In the last decade, research on artificial intelligence has seen rapid growth with deep learning models, especially in the field of medical image segmentation. Various studies demonstrated that these models have powerful prediction capabilities and achieve results similar to clinicians. However, recent studies revealed that evaluation in image segmentation studies lacks reliable model performance assessment and introduces statistical bias through incorrect metric implementation or usage. Thus, this work provides an overview and interpretation guide for the following metrics for medical image segmentation evaluation in both binary and multi-class problems: Dice similarity coefficient, Jaccard index, sensitivity, specificity, Rand index, ROC curves, Cohen's kappa, and Hausdorff distance. Furthermore, common issues such as class imbalance and statistical as well as interpretation biases in evaluation are discussed. In summary, we propose a guideline for standardized medical image segmentation evaluation to improve evaluation quality, reproducibility, and comparability in the research field.
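The overlap and distance metrics that recur throughout this list (Dice, Jaccard/IoU, and 95th percentile Hausdorff distance) can be computed as below. Note this sketch measures distances over all foreground voxels rather than extracted surface voxels, a common simplification; SciPy's distance transform is assumed available:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hd95(a, b):
    """95th percentile Hausdorff distance via Euclidean distance transforms."""
    a, b = a.astype(bool), b.astype(bool)
    # dt_a[p] = distance from voxel p to the nearest foreground voxel of a.
    dt_a = ndimage.distance_transform_edt(~a)
    dt_b = ndimage.distance_transform_edt(~b)
    # Distances in both directions, pooled before taking the percentile.
    return np.percentile(np.hstack([dt_b[a], dt_a[b]]), 95)
```

Taking the 95th percentile instead of the maximum makes the distance robust to a handful of outlier voxels, which is why HD95 is preferred over the plain Hausdorff distance in most of the papers listed here.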
Affiliation(s)
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
- Iñaki Soto-Rey
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
18
Oyibo P, Jujjavarapu S, Meulah B, Agbana T, Braakman I, van Diepen A, Bengtson M, van Lieshout L, Oyibo W, Vdovine G, Diehl JC. Schistoscope: An Automated Microscope with Artificial Intelligence for Detection of Schistosoma haematobium Eggs in Resource-Limited Settings. Micromachines (Basel) 2022; 13:643. [PMID: 35630110 PMCID: PMC9146062 DOI: 10.3390/mi13050643] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 03/13/2022] [Revised: 04/13/2022] [Accepted: 04/15/2022] [Indexed: 02/01/2023]
Abstract
For many parasitic diseases, the microscopic examination of clinical samples such as urine and stool still serves as the diagnostic reference standard, primarily because microscopes are accessible and cost-effective. However, conventional microscopy is laborious, requires highly skilled personnel, and is highly subjective. The requirement for skilled operators, coupled with the cost and maintenance needs of the microscopes (maintenance that is rarely performed in endemic countries), severely limits access to the diagnosis of parasitic diseases in resource-limited settings. The urgent need to manage tropical diseases such as schistosomiasis, for which efforts are now focused on elimination, has underscored the critical need for easy-to-use diagnostics for case detection, community mapping, and surveillance. In this paper, we present a low-cost automated digital microscope, the Schistoscope, which is capable of automatically focusing on and scanning regions of interest in prepared microscope slides and automatically detecting Schistosoma haematobium eggs in captured images. The device was developed using widely accessible distributed manufacturing methods and off-the-shelf components to enable local manufacturability and ease of maintenance. As a proof of principle, we created a Schistosoma haematobium egg dataset of over 5000 images captured from spiked and clinical urine samples from field settings and demonstrated the automatic detection of Schistosoma haematobium eggs using a trained deep neural network model. The experiments and results presented in this paper collectively illustrate the robustness, stability, and optical performance of the device, making it suitable for use in the monitoring and evaluation of schistosomiasis control programs in endemic settings.
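Automatic focusing of the kind described usually ranks a focal stack of images with a sharpness score. A variance-of-Laplacian sketch of this idea (the Schistoscope's actual focus criterion is not specified in the abstract, so treat this as an illustrative stand-in; SciPy assumed available):

```python
import numpy as np
from scipy import ndimage

def focus_score(image):
    """Variance-of-Laplacian sharpness score: the Laplacian responds to
    edges, and blur suppresses edges, so sharper images score higher."""
    lap = ndimage.laplace(image.astype(float))
    return lap.var()

def pick_best_focus(stack):
    """Return the index of the sharpest image in a focal stack."""
    return int(np.argmax([focus_score(im) for im in stack]))
```

A motorized stage can then step through z-positions, score each captured frame, and settle on the position that maximizes the score.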
Affiliation(s)
- Prosper Oyibo
- Delft Center for Systems and Control, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, 2628 CD Delft, The Netherlands
- ANDI Centre of Excellence for Malaria Diagnosis, College of Medicine, University of Lagos, Lagos 101017, Nigeria
- Satyajith Jujjavarapu
- Department of Sustainable Design Engineering, Faculty of Industrial Design Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
- Brice Meulah
- Department of Parasitology, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Centre de Recherches Medicales des Lambaréné, CERMEL, Lambarene BP 242, Gabon
- Tope Agbana
- Delft Center for Systems and Control, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, 2628 CD Delft, The Netherlands
- Ingeborg Braakman
- Department of Sustainable Design Engineering, Faculty of Industrial Design Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
- Angela van Diepen
- Department of Parasitology, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Michel Bengtson
- Department of Parasitology, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Lisette van Lieshout
- Department of Parasitology, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Wellington Oyibo
- ANDI Centre of Excellence for Malaria Diagnosis, College of Medicine, University of Lagos, Lagos 101017, Nigeria
- Gleb Vdovine
- Delft Center for Systems and Control, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, 2628 CD Delft, The Netherlands
- Jan-Carel Diehl
- Department of Sustainable Design Engineering, Faculty of Industrial Design Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
- Correspondence: Tel.: +31-614-015-469
19
Cardiac Magnetic Resonance Left Ventricle Segmentation and Function Evaluation Using a Trained Deep-Learning Model. Appl Sci (Basel) 2022. [DOI: 10.3390/app12052627] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 11/16/2022]
Abstract
Cardiac MRI is the gold standard for evaluating left ventricular myocardial mass (LVMM), end-systolic volume (LVESV), end-diastolic volume (LVEDV), stroke volume (LVSV), and ejection fraction (LVEF). Deep convolutional neural networks (CNNs) can provide automatic segmentation of the LV myocardium (LVM) and blood cavity (LVC) and quantification of LV function; however, performance typically degrades when the networks are applied to new datasets. A 2D U-net with Monte-Carlo dropout was trained on 45 cine MR images, and the model was used to segment 10 subjects from the ACDC dataset. The initial segmentations were post-processed using a continuous kernel-cut method. The refined segmentations were then employed to update the trained model. This procedure was iterated several times, and the final updated U-net model was used to segment the remaining 90 ACDC subjects. Algorithm and manual segmentations were compared using the Dice coefficient (DSC) and average symmetric surface distance (ASSD). The relationships between algorithm and manual LV indices were evaluated using the Pearson correlation coefficient (r), Bland-Altman analyses, and paired t-tests. Direct application of the pre-trained model yielded a DSC of 0.74 ± 0.12 for LVM and 0.87 ± 0.12 for LVC. After fine-tuning, the DSC was 0.81 ± 0.09 for LVM and 0.90 ± 0.09 for LVC. Algorithm LV function measurements were strongly correlated with manual analyses (r = 0.86–0.99, p < 0.0001) with minimal biases of −8.8 g for LVMM, −0.9 mL for LVEDV, −0.2 mL for LVESV, −0.7 mL for LVSV, and −0.6% for LVEF. The procedure required ∼12 min for fine-tuning and approximately 1 s to contour a new image on a Linux (Ubuntu 14.02) desktop (Intel(R) CPU i7-7770, 4.2 GHz, 16 GB RAM) with a GPU (GeForce GTX TITAN X, 12 GB memory). This approach provides a way to adapt a trained CNN to segment and quantify previously unseen cardiac MR datasets without needing manual annotation of the unseen datasets.
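The Bland-Altman analysis used to report the biases above reduces to the mean and spread of paired differences between algorithm and manual measurements. A minimal NumPy sketch (illustrative only; the function name is hypothetical):

```python
import numpy as np

def bland_altman(algorithm, manual):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean bias (algorithm minus manual) and the 95% limits
    of agreement, bias +/- 1.96 * SD of the differences.
    """
    diff = np.asarray(algorithm, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Unlike a correlation coefficient, which only measures association, the bias and limits of agreement say how far the two methods disagree in the units of the measurement itself (grams, mL, or %).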
20
Ilhan A, Sekeroglu B, Abiyev R. Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net. Int J Comput Assist Radiol Surg 2022; 17:589-600. [DOI: 10.1007/s11548-022-02566-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 08/18/2021] [Accepted: 01/12/2022] [Indexed: 11/05/2022]
21
Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Med Image Anal 2022; 75:102288. [PMID: 34784540 PMCID: PMC8678366 DOI: 10.1016/j.media.2021.102288] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Received: 12/25/2020] [Revised: 09/02/2021] [Accepted: 10/20/2021] [Indexed: 01/03/2023]
Abstract
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern ≥ 4) and indolent (Gleason Pattern = 3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 who underwent radical prostatectomy and 24 with normal prostate MRI. CorrSigNIA was tested on three independent test sets comprising 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81 ± 0.31 in detecting cancers in both the radical prostatectomy and biopsy cohorts, and lesion-level ROC-AUCs of 0.82 ± 0.31 and 0.86 ± 0.26 in detecting clinically significant cancers in the radical prostatectomy and biopsy cohorts, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used for prostate cancer detection as well as for selective identification of the indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and informing treatment selection and planning.
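The lesion-level ROC-AUCs reported above can be computed directly from prediction scores via the Mann-Whitney statistic, without tracing the ROC curve. A compact NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case, with ties counting half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise formulation makes the chance level of 0.5 explicit: a scorer that ranks positives and negatives at random wins about half of the comparisons.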
22