1
Kaushik S, Ödén J, Sharma DS, Fredriksson A, Toma-Dasu I. Generation and evaluation of anatomy-preserving virtual CT for online adaptive proton therapy. Med Phys 2024; 51:1536-1546. [PMID: 38230803 DOI: 10.1002/mp.16941]
Abstract
BACKGROUND Daily CTs generated by CBCT correction are required for daily replanning in online-adaptive proton therapy (APT) to effectively deal with inter-fractional changes. Among the currently available methods, the suitability of a daily CT generation method for proton dose calculation also depends on the anatomical site. PURPOSE We propose an anatomy-preserving virtual CT (APvCT) method as a hybrid method of CBCT correction, especially suitable for large anatomical deformations. The accuracy of the hybrid method was assessed by comparison with the corrected CBCT (cCBCT) and virtual CT (vCT) methods in the context of online APT. METHODS Seventy-one daily CBCTs of four prostate cancer patients treated with intensity-modulated proton therapy (IMPT) were converted to daily CTs using cCBCT, vCT, and the newly proposed APvCT method. In APvCT, the planning CT (pCT) was mapped to the CBCT geometry using deformable image registration with boundary conditions on controlling regions of interest (ROIs) created with deep-learning segmentation on the cCBCT. The relative frequency distributions (RFDs) of HU, mass density, and stopping power ratio (SPR) values were assessed and compared with the pCT. The ROIs in the APvCT and vCT were compared with the cCBCT in terms of Dice similarity coefficient (DSC) and mean distance-to-agreement (mDTA). For each patient, a robustly optimized IMPT plan was created on the pCT and subsequent daily adaptive plans on the daily CTs. For dose distribution comparison on the same anatomy, the daily adaptive plans on cCBCT and vCT were recalculated on the corresponding APvCT. The dose distributions were compared in terms of isodose volumes and 3D global gamma-index passing rate (GPR) with a γ(2%, 2 mm) criterion. RESULTS For all patients, the RFDs of APvCT, vCT, and pCT agreed closely, whereas the cCBCT showed a noticeable difference. The minimum DSC value was 0.96 for contours in APvCT and 0.39 for contours in vCT.
The average mDTA for APvCT was 0.01 cm for the clinical target volume and ≤0.01 cm for organs at risk, which increased to 0.18 cm and ≤0.52 cm, respectively, for vCT. The mean GPR was 90.9%, 64.5%, and 67.0% for APvCT versus cCBCT, vCT versus cCBCT, and APvCT versus vCT, respectively. When recalculated on APvCT, the adaptive cCBCT and vCT plans resulted in mean GPRs of 89.5 ± 5.1% and 65.9 ± 19.1%, respectively. The mean DSC values for the 80.0%, 90.0%, 95.0%, 98.0%, and 100.0% isodose volumes were 0.97, 0.97, 0.97, 0.95, and 0.91 for recalculated cCBCT plans, and 0.89, 0.88, 0.87, 0.85, and 0.81 for recalculated vCT plans. The Hausdorff distance for the 100.0% isodose volume exceeded 1.00 cm in some recalculated cCBCT plans on APvCT. CONCLUSIONS APvCT contours showed good agreement with the reference contours of cCBCT, which indicates anatomy preservation in APvCT. A vCT with erroneous anatomy can result in an incorrect adaptive plan. Further, the slightly lower GPR between the APvCT- and cCBCT-based adaptive plans can be explained by the difference between the cCBCT's SPR RFD and that of the pCT.
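For readers unfamiliar with the overlap metric quoted throughout this entry: the Dice similarity coefficient (DSC) between two binary contours A and B is 2|A ∩ B| / (|A| + |B|). A minimal illustrative sketch, not the authors' implementation; the toy masks are hypothetical:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are trivially identical
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D "contours": two overlapping 5x5 squares
mask1 = np.zeros((10, 10), dtype=bool); mask1[2:7, 2:7] = True  # 25 voxels
mask2 = np.zeros((10, 10), dtype=bool); mask2[4:9, 4:9] = True  # 25 voxels
print(dice_coefficient(mask1, mask2))  # overlap is 3x3 = 9, so 18/50 = 0.36
```

DSC is 1.0 for identical contours and 0.0 for disjoint ones, which is why values near the 0.96 reported for APvCT indicate near-perfect anatomical agreement.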
Affiliation(s)
- Suryakant Kaushik
- RaySearch Laboratories AB (Publ), Stockholm, Sweden
- Department of Physics, Medical Radiation Physics, Stockholm University, Stockholm, Sweden
- Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
- Jakob Ödén
- RaySearch Laboratories AB (Publ), Stockholm, Sweden
- Iuliana Toma-Dasu
- Department of Physics, Medical Radiation Physics, Stockholm University, Stockholm, Sweden
- Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
2
Mahmoud A, El-Sharkawy YH. Multi-wavelength interference phase imaging for automatic breast cancer detection and delineation using diffuse reflection imaging. Sci Rep 2024; 14:415. [PMID: 38172105 PMCID: PMC10764793 DOI: 10.1038/s41598-023-50475-9]
Abstract
Millions of women globally are impacted by breast cancer (BC), a major health problem. Early detection of BC is critical for successful treatment and improved survival rates. In this study, we present an approach for BC detection using multi-wavelength interference (MWI) phase imaging based on diffuse-reflection hyperspectral (HS) imaging. The proposed findings are based on measuring the interference pattern between the blue (446.6 nm) and red (632 nm) wavelengths. We implement a comprehensive image processing and categorization method based on Fast Fourier (FF) transform analysis of the change in refractive index between tumor and normal tissue. We observed that cancer growth affects tissue organization dramatically, as seen by persistently increased refractive-index variance in tumors compared to normal areas. Depth-resolved data were collected and analyzed from both malignant and normal tissue. To enhance the categorization of ex-vivo BC tissue, we developed and validated a training classifier algorithm specifically designed for categorizing HS cube data. Following signal normalization with the FF transform algorithm, our methodology achieved a specificity (Spec) of 94% and a sensitivity (Sen) of 90.9% for categorization of the 632 nm image, based on preliminary findings from the breast specimens under investigation. Notably, we successfully leveraged unstained tissue samples to create 3D phase-resolved images that effectively highlight the distinctions in diffuse-reflectance features between cancerous and healthy tissue. Preliminary data revealed that our imaging method might assist specialists in safely excising malignant areas and in automatically assessing the tumor bed at different depths following resection.
This preliminary investigation might lead to effective in-vivo disease characterization with optical technology: a standard RGB camera operated at specific wavelengths, combined with our quantitative-phase MWI methodology.
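The sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions, Sen = TP/(TP+FN) and Spec = TN/(TN+FP). A minimal sketch with hypothetical labels, not the study's classifier:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sen = TP/(TP+FN), Spec = TN/(TN+FP) for binary labels (1 = tumor)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and predictions for 10 tissue patches
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sen, spec = sensitivity_specificity(truth, pred)
print(sen, spec)  # 0.8 0.8: one missed tumor (FN), one false alarm (FP)
```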
Affiliation(s)
- Alaaeldin Mahmoud
- Optoelectronics and Automatic Control Systems Department, Military Technical College, Kobry El-Kobba, Cairo, Egypt
- Yasser H El-Sharkawy
- Optoelectronics and Automatic Control Systems Department, Military Technical College, Kobry El-Kobba, Cairo, Egypt
3
Strain JF, Rahmani M, Dierker D, Owen C, Jafri H, Vlassenko AG, Womack K, Fripp J, Tosun D, Benzinger TLS, Weiner M, Masters C, Lee JM, Morris JC, Goyal MS. Accuracy of TrUE-Net in comparison to established white matter hyperintensity segmentation methods: An independent validation study. Neuroimage 2024; 285:120494. [PMID: 38086495 DOI: 10.1016/j.neuroimage.2023.120494]
Abstract
White matter hyperintensities (WMH) are nearly ubiquitous in the aging brain, and their topography and overall burden are associated with cognitive decline. Given their numerosity, accurate methods to automatically segment WMH are needed. Recent developments, including the availability of challenge datasets and improved deep learning algorithms, have led to a promising new deep-learning-based automated segmentation model called TrUE-Net, which has yet to undergo rigorous independent validation. Here, we compare TrUE-Net to six established automated WMH segmentation tools, including a semi-manual method. We evaluated the techniques at both the global and regional levels to compare their ability to detect the established relationship between WMH burden and age. We found that TrUE-Net was highly reliable at identifying WMH regions with low false-positive rates when compared to semi-manual segmentation as the reference standard. TrUE-Net performed similarly or favorably when compared to the other automated techniques. Moreover, TrUE-Net detected relationships between WMH and age to a similar degree as the reference-standard semi-manual segmentation at both the global and regional levels. These results support the use of TrUE-Net for identifying WMH at the global or regional level, including in large, combined datasets.
Affiliation(s)
- Jeremy F Strain
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA; Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Maryam Rahmani
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Donna Dierker
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Christopher Owen
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Hussain Jafri
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Andrei G Vlassenko
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Kyle Womack
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Jurgen Fripp
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, QLD, Australia
- Duygu Tosun
- Division of Radiology and Biomedical Imaging, University of California - San Francisco, San Francisco, CA, USA
- Tammie L S Benzinger
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Knight Alzheimer Disease Research Center, St. Louis, MO, USA
- Michael Weiner
- Division of Radiology and Biomedical Imaging, University of California - San Francisco, San Francisco, CA, USA
- Colin Masters
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Parkville, Victoria, Australia
- Jin-Moo Lee
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- John C Morris
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA; Knight Alzheimer Disease Research Center, St. Louis, MO, USA
- Manu S Goyal
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA; Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
4
Dai J, Liu T, Torigian DA, Tong Y, Han S, Nie P, Zhang J, Li R, Xie F, Udupa JK. GA-Net: A geographical attention neural network for the segmentation of body torso tissue composition. Med Image Anal 2024; 91:102987. [PMID: 37837691 PMCID: PMC10841506 DOI: 10.1016/j.media.2023.102987]
Abstract
PURPOSE Body composition analysis (BCA) of the body torso plays a vital role in the study of physical health and pathology and provides biomarkers that facilitate the diagnosis and treatment of many diseases, such as type 2 diabetes mellitus, cardiovascular disease, obstructive sleep apnea, and osteoarthritis. In this work, we propose a body composition tissue segmentation method that can automatically delineate the key tissues, including subcutaneous adipose tissue, skeleton, skeletal muscle tissue, and visceral adipose tissue, on positron emission tomography/computed tomography scans of the body torso. METHODS First, to provide the deep neural network with appropriate and precise semantic and spatial information strongly related to body composition tissues, we introduce a new concept of the body area and integrate it into our proposed segmentation network, called the Geographical Attention Network (GA-Net). The body areas are defined following anatomical principles such that the whole body torso region is partitioned into three non-overlapping body areas, with each body composition tissue of interest fully contained in exactly one specific minimal body area. Second, the proposed GA-Net has a novel dual-decoder schema composed of a tissue decoder and an area decoder. The tissue decoder segments the body composition tissues, while the area decoder segments the body areas as an auxiliary task. The features of body areas and body composition tissues are fused through a soft attention mechanism to gain geographical attention relevant to the body tissues. Third, we propose a body composition tissue annotation approach that takes the body area labels as the region of interest, which significantly improves the reproducibility, precision, and efficiency of delineating body composition tissues. RESULTS Our evaluations on 50 low-dose unenhanced CT images indicate that GA-Net outperforms other architectures statistically significantly based on the Dice metric.
GA-Net also shows improvements for the 95% Hausdorff Distance metric in most comparisons. Notably, GA-Net exhibits more sensitivity to subtle boundary information and produces more reliable and robust predictions for such structures, which are the most challenging parts to manually mend in practice, with potentially significant time-savings in the post hoc correction of these subtle boundary placement errors. Due to the prior knowledge provided from body areas, GA-Net achieves competitive performance with less training data. Our extension of the dual-decoder schema to TransUNet and 3D U-Net demonstrates that the new schema significantly improves the performance of these classical neural networks as well. Heatmaps obtained from attention gate layers further illustrate the geographical guidance function of body areas for identifying body tissues. CONCLUSIONS (i) Prior anatomic knowledge supplied in the form of appropriately designed anatomic container objects significantly improves the segmentation of bodily tissues. (ii) Of particular note are the improvements achieved in the delineation of subtle boundary features which otherwise would take much effort for manual correction. (iii) The method can be easily extended to existing networks to improve their accuracy for this application.
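The soft-attention fusion described in the abstract can be pictured, very loosely, as an attention map derived from the auxiliary area features gating the tissue features element-wise. A minimal numpy sketch under that assumption; the scalar projection `w`, `b` and all shapes are hypothetical stand-ins, not GA-Net's actual layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def geographical_attention_fuse(tissue_feat, area_feat, w, b):
    """Illustrative soft-attention fusion: an attention map in (0, 1) derived
    from the auxiliary (area) features scales the primary (tissue) features
    element-wise. `w` and `b` stand in for a learned projection."""
    attn = sigmoid(area_feat * w + b)  # attention map in (0, 1)
    return tissue_feat * attn          # gated tissue features

rng = np.random.default_rng(0)
tissue = rng.normal(size=(8, 8))       # toy tissue-decoder feature map
area = rng.normal(size=(8, 8))         # toy area-decoder feature map
fused = geographical_attention_fuse(tissue, area, w=2.0, b=0.0)
```

Because the gate lies strictly between 0 and 1, the fusion can only attenuate tissue features, with the attenuation pattern dictated by the area features; in the real network the gate is learned end-to-end.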
Affiliation(s)
- Jian Dai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
- Shiwei Han
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Pengju Nie
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Jing Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Ran Li
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China
- Fei Xie
- School of AOAIR, Xidian University, Xi'an 710071, Shaanxi, China
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America
5
Soh WK, Rajapakse JC. Hybrid UNet transformer architecture for ischemic stroke segmentation with MRI and CT datasets. Front Neurosci 2023; 17:1298514. [PMID: 38105927 PMCID: PMC10723803 DOI: 10.3389/fnins.2023.1298514]
Abstract
A hybrid UNet and Transformer (HUT) network is introduced to combine the merits of the UNet and Transformer architectures, improving brain lesion segmentation from MRI and CT scans. HUT overcomes the limitations of conventional approaches by utilizing two parallel stages: one based on UNet and the other on Transformers. The Transformer-based stage captures global dependencies and long-range correlations. It uses intermediate feature vectors from the UNet decoder and improves segmentation accuracy by enhancing the attention and relationship modeling between voxel patches derived from the 3D brain volumes. In addition, HUT incorporates self-supervised learning on the transformer network, which lets the transformer learn by maintaining consistency between the classification layers of patches at different resolutions and augmentations, improving both the rate of convergence of training and the overall segmentation capability. Experimental results on benchmark datasets, including ATLAS and ISLES2018, demonstrate HUT's advantage over state-of-the-art methods. HUT achieves higher Dice scores and lower Hausdorff distance scores in single-modality and multi-modality lesion segmentation. HUT outperforms the state-of-the-art network SPiN in single-modality MRI segmentation on the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset by 4.84% in Dice score and by a large margin of 40.7% in Hausdorff distance. HUT also performed well on CT perfusion brain scans in the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset, improving over the recent state-of-the-art network USSLNet by 3.3% in Dice score and 12.5% in Hausdorff distance. With the analysis of both single- and multi-modality datasets (ATLASR12 and ISLES2018), we show that HUT performs and generalizes well across datasets. Code is available at: https://github.com/vicsohntu/HUT_CT.
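The Hausdorff distance used to score boundary agreement above is the largest of all nearest-neighbor distances between two point sets: HD(A, B) = max(sup_a inf_b d(a,b), sup_b inf_a d(a,b)). A minimal sketch for small point sets, illustrative only and not the paper's evaluation code:

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets of shape (N, d), (M, d):
    HD = max( max_a min_b ||a-b||, max_b min_a ||a-b|| )."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: every point of p has a perfect match in q, but q has one
# outlier at (1, 3), so the symmetric distance is driven by that outlier.
p = np.array([[0.0, 0.0], [1.0, 0.0]])
q = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])
print(hausdorff_distance(p, q))  # 3.0: (1, 3) is 3.0 from its nearest p-point
```

Because it is a worst-case measure, a single stray boundary voxel inflates HD even when Dice overlap is high, which is why both metrics are reported together.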
Affiliation(s)
- Jagath C. Rajapakse
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
6
Schell M, Foltyn-Dumitru M, Bendszus M, Vollmuth P. Automated hippocampal segmentation algorithms evaluated in stroke patients. Sci Rep 2023; 13:11712. [PMID: 37474622 PMCID: PMC10359355 DOI: 10.1038/s41598-023-38833-z]
Abstract
Deep learning segmentation algorithms can produce reproducible results in a matter of seconds. However, their applicability to more complex datasets is uncertain, and they may fail in the presence of severe structural abnormalities, such as those commonly seen in stroke patients. In this investigation, six recent deep-learning-based hippocampal segmentation algorithms were tested on 641 stroke patients from the multicentric, open-source dataset ATLAS 2.0. Comparison of the volumes showed that the methods are not interchangeable, with concordance correlation coefficients from 0.266 to 0.816. While the segmentation algorithms demonstrated overall good performance (volumetric similarity [VS] 0.816 to 0.972, Dice score 0.786 to 0.921, and Hausdorff distance [HD] 2.69 to 6.34), no single outperforming algorithm was identified: FastSurfer performed best in VS, QuickNat in Dice and average HD, and Hippodeep in HD. Segmentation performance was significantly lower for ipsilesional segmentation, with performance decreasing as a function of lesion size due to the pathology-based domain shift. Only QuickNat showed a more robust performance in volumetric similarity. Even though many pre-trained segmentation methods exist, it is important to be aware of the possible decrease in performance on the lesion side due to the pathology-based domain shift. The segmentation algorithm should be selected based on the research question and the evaluation parameter needed. More research is needed to improve current hippocampal segmentation methods.
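The concordance correlation coefficient used above to compare volumes is Lin's rho_c = 2·cov(x, y) / (var(x) + var(y) + (mean_x − mean_y)²), which penalizes both poor correlation and systematic offset. A small sketch with hypothetical volumes, not the study's data:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient:
    rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean_x - mean_y)**2),
    using population (ddof=0) moments throughout."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical hippocampal volumes (mL) from two segmentation tools
vols_a = [3.1, 2.8, 3.5, 2.6, 3.0]
vols_b = [3.0, 2.9, 3.4, 2.7, 3.1]
print(round(concordance_ccc(vols_a, vols_b), 3))
```

Unlike Pearson's r, rho_c only reaches 1.0 when the points lie exactly on the identity line, so it directly answers whether two tools' volume estimates are interchangeable.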
Affiliation(s)
- Marianne Schell
- Department of Neuroradiology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Martha Foltyn-Dumitru
- Department of Neuroradiology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
7
Kugler E, Breitenbach EM, MacDonald R. Glia Cell Morphology Analysis Using the Fiji GliaMorph Toolkit. Curr Protoc 2023; 3:e654. [PMID: 36688682 PMCID: PMC10108223 DOI: 10.1002/cpz1.654]
Abstract
Glial cells are the support cells of the nervous system. Glial cells typically have elaborate morphologies that facilitate close contacts with neighboring neurons, synapses, and the vasculature. In the retina, Müller glia (MG) are the principal glial cell type that supports neuronal function by providing a myriad of supportive functions via intricate cell morphologies and precise contacts. Thus, complex glial morphology is critical for glial function, but remains challenging to resolve at a sub-cellular level or reproducibly quantify in complex tissues. To address this issue, we developed GliaMorph as a Fiji-based macro toolkit that allows 3D glial cell morphology analysis in the developing and mature retina. As GliaMorph is implemented in a modular fashion, here we present guides to (a) setup of GliaMorph, (b) data understanding in 3D, including z-axis intensity decay and signal-to-noise ratio, (c) pre-processing data to enhance image quality, (d) performing and examining image segmentation, and (e) 3D quantification of MG features, including apicobasal texture analysis. To allow easier application, GliaMorph tools are supported with graphical user interfaces where appropriate, and example data are publicly available to facilitate adoption. Further, GliaMorph can be modified to meet users' morphological analysis needs for other glial or neuronal shapes. Finally, this article provides users with an in-depth understanding of data requirements and the workflow of GliaMorph. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC. 
Basic Protocol 1: Download and installation of GliaMorph components, including example data
Basic Protocol 2: Understanding data properties and quality in 3D, essential for subsequent analysis and for capturing data property issues early
Basic Protocol 3: Pre-processing AiryScan microscopy data for analysis
Alternate Protocol: Pre-processing confocal microscopy data for analysis
Basic Protocol 4: Segmentation of glial cells
Basic Protocol 5: 3D quantification of glial cell morphology
Affiliation(s)
- Elisabeth Kugler
- Institute of Ophthalmology, University College London, Greater London, UK
- Ryan MacDonald
- Institute of Ophthalmology, University College London, Greater London, UK
8
Cesaria M, Alfinito E, Arima V, Bianco M, Cataldo R. MEED: A novel robust contrast enhancement procedure yielding highly-convergent thresholding of biofilm images. Comput Biol Med 2022; 151:106217. [PMID: 36306585 DOI: 10.1016/j.compbiomed.2022.106217]
Abstract
Morphological and statistical investigation of biofilm images may be even more critical than the image acquisition itself, particularly in the presence of morphologically complex distributions, given the unavoidable impact of the measurement technique itself. Hence, digital image pre-processing is mandatory for reliable feature extraction and enhancement preliminary to segmentation. Pattern recognition in automated deep learning models (both supervised and unsupervised) also often requires effective preliminary contrast enhancement; however, no universal consensus exists on the optimal contrast-enhancement approach. This paper presents and discusses a new general, robust, reproducible, accurate, and easy-to-implement contrast enhancement procedure, briefly named the MEED-procedure, able to work on images with different bacterial coverages and biofilm structures acquired with different imaging instrumentation (here, a stereomicroscope and a transmission microscope). It exploits a proper succession of basic morphological operations (erosion and dilation) with a horizontal line structuring element to minimize the impact on the size and shape of even the finer bacterial features. It systematically enhances the objects of interest without histogram stretching or the undesirable artifacts yielded by common automated methods. The quality of the MEED-procedure is ascertained by segmentation tests, which demonstrate its robustness in terms of threshold determination and convergence of the thresholding algorithm. Extensive validation tests on a rich image database, comparison with the literature, and a comprehensive discussion of the conceptual background support the superiority of the MEED-procedure over existing methods and demonstrate that it is not a routine application of morphological operators.
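As background for the erosion/dilation succession described above: a grey-level opening (erosion followed by dilation) with a horizontal line structuring element suppresses bright features narrower than the element while leaving wider ones intact. A minimal numpy-only sketch of that building block, illustrative only and not the MEED-procedure itself:

```python
import numpy as np

def erode_h(img: np.ndarray, k: int) -> np.ndarray:
    """Grey-level erosion with a 1 x k horizontal line structuring element
    (minimum over each horizontal window; borders handled by edge padding)."""
    pad = k // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, k, axis=1)
    return windows.min(axis=-1)

def dilate_h(img: np.ndarray, k: int) -> np.ndarray:
    """Grey-level dilation with the same horizontal line element (maximum)."""
    pad = k // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, k, axis=1)
    return windows.max(axis=-1)

# Opening removes a 1-pixel bright speck but keeps a 3-pixel bright bar.
img = np.zeros((3, 9), dtype=float)
img[1, 4] = 1.0       # 1-pixel speck, narrower than the element
img[1, 6:9] = 1.0     # 3-pixel bar, as wide as the element
opened = dilate_h(erode_h(img, 3), 3)
```

Chaining such operations with a carefully chosen element length is the kind of succession the MEED-procedure builds on, since the line element minimally distorts the shape of fine bacterial features in the orthogonal direction.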
Affiliation(s)
- Maura Cesaria
- University of Salento, Department of Mathematics and Physics "Ennio De Giorgi", c/o Campus Ecotekne, Lecce, Italy
- Eleonora Alfinito
- University of Salento, Department of Mathematics and Physics "Ennio De Giorgi", c/o Campus Ecotekne, Lecce, Italy
- Valentina Arima
- CNR NANOTEC - Institute of Nanotechnology, c/o Campus Ecotekne, Lecce, Italy
- Monica Bianco
- CNR NANOTEC - Institute of Nanotechnology, c/o Campus Ecotekne, Lecce, Italy
- Rosella Cataldo
- University of Salento, Department of Mathematics and Physics "Ennio De Giorgi", c/o Campus Ecotekne, Lecce, Italy
9
Raj KV, Nabeel PM, Sivaprakasam M, Joseph J. Time-warping for robust automated arterial wall-recognition and tracking from single-scan-line ultrasound signals. Ultrasonics 2022; 126:106828. [PMID: 36031705 DOI: 10.1016/j.ultras.2022.106828]
Abstract
Current ultrasound methods for recognition and motion-tracking of arterial walls are suited for image-based B-mode or M-mode scans but not adequately robust for single-line image-free scans. We introduce a time-warping-based technique to address this need. Its performance was validated through simulations and in-vivo trials on 21 subjects. The method recognized wall locations with 100 % precision for simulated frames (SNR > 10 dB). Clustering detections for multiple frames achieved sensitivity >98 %, while it was ∼90 % without clustering. The absence of arterial walls was predicted with 100 % specificity. In-vivo results corroborated the performance outcomes yielding a sensitivity ≥94 %, precision ≥98 %, and specificity ≥98 % using the clustering scheme. Further, excellent frame-to-frame tracking accuracy (absolute error <3 %, RMSE <2 μm) was demonstrated. Image-free measurements of peak arterial distension agreed with the image-based ones, within an error of 1.08 ± 3.65 % and RMSE of 38 μm. The method discerned the presence of arterial walls in A-mode frames, robustly localized, and tracked them even when they were proximal to hyperechoic regions or slow-moving tissue structures. Unification of delineation techniques with the proposed methods facilitates a complete image-free framework for measuring arterial dynamics and the development of reliable A-mode devices.
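The time-warping idea underlying the method can be illustrated with classic dynamic time warping (DTW), which aligns two sequences by a monotone warping path of minimum cumulative cost. A small sketch, illustrative only; the paper's technique is not necessarily plain DTW:

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic-time-warping distance between two 1D sequences, using the
    standard cumulative-cost recursion
    D[i, j] = |x[i] - y[j]| + min(D[i-1, j], D[i, j-1], D[i-1, j-1])."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A template echo pattern and a time-shifted copy: warping absorbs the shift,
# whereas a rigid sample-by-sample comparison would report a large mismatch.
template = [0, 0, 1, 3, 1, 0, 0]
shifted  = [0, 0, 0, 1, 3, 1, 0]
print(dtw_distance(template, shifted))  # 0.0: warping aligns the peaks exactly
```

This elasticity to shifts and local stretching is what makes warping-based matching attractive for recognizing wall echoes in successive A-mode frames, where wall motion displaces the echo pattern along the scan line.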
Affiliation(s)
- Kiran V Raj
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- P M Nabeel
- Healthcare Technology Innovation Centre, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- Mohanasankar Sivaprakasam
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India; Healthcare Technology Innovation Centre, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- Jayaraj Joseph
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
10
Assistance System for the Teaching of Natural Numbers to Preschool Children with the Use of Artificial Intelligence Algorithms. Future Internet 2022. [DOI: 10.3390/fi14090266]
Abstract
This research aimed to design an image recognition system that can help increase children's interest in learning the natural numbers from 0 to 9. The research method was qualitative and descriptive, observing early-childhood learning in a face-to-face education model, especially the learning of numbers, with additional data from literature studies. The system was developed with the cascade method, consisting of three stages: identification of the population, design of the artificial intelligence architecture, and implementation of the recognition system. The system replicates a game-like mechanic in which the child trains the artificial intelligence algorithm to recognize the numbers that the child draws on a blackboard. The system is expected to increase children's interest in learning numbers and in identifying the meaning of quantities, helping to improve teaching success through a fun and engaging method. The learning implemented in this system is expected to make it easier for children to learn to write, read, and grasp the quantities of numbers, while exploring their potential, creativity, and interest in learning through the use of technology.
11. Fang L, Jiang Y, Ren X. Cerebral hemorrhage segmentation with energy functional based on anatomy theory. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103709]
12. Recent Progress in Epicardial and Pericardial Adipose Tissue Segmentation and Quantification Based on Deep Learning: A Systematic Review. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12105217]
Abstract
Epicardial and pericardial adipose tissues (EAT and PAT), which are located around the heart, have been linked to coronary atherosclerosis, cardiomyopathy, coronary artery disease, and other cardiovascular diseases (CVD). Additionally, the volume and thickness of EAT are good predictors of CVD risk. Manual quantification of these tissues is a tedious and error-prone process. This paper presents a comprehensive and critical overview of research on epicardial and pericardial adipose tissue segmentation and quantification methods, evaluates their effectiveness in terms of segmentation time and accuracy, provides a critical comparison of the methods, and presents ongoing and future challenges in the field. The described methods are classified into pericardial adipose tissue segmentation, direct epicardial adipose tissue segmentation, and epicardial adipose tissue segmentation via pericardium delineation. A comprehensive categorization of the underlying methods is conducted, with insights into their evolution from traditional image processing to recent deep learning-based methods. The paper also reviews research on the clinical significance of epicardial and pericardial adipose tissues, as well as the terminology and definitions used in the medical literature.
13. Xu Y, Souza LF, Silva IC, Marques AG, Silva FH, Nunes VX, Han T, Jia C, de Albuquerque VHC, Filho PPR. A soft computing automatic based in deep learning with use of fine-tuning for pulmonary segmentation in computed tomography images. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107810]
14. Prabhudesai S, Wang NC, Ahluwalia V, Huan X, Bapuraj JR, Banovic N, Rao A. Stratification by Tumor Grade Groups in a Holistic Evaluation of Machine Learning for Brain Tumor Segmentation. Front Neurosci 2021; 15:740353. [PMID: 34690680] [PMCID: PMC8526730] [DOI: 10.3389/fnins.2021.740353]
Abstract
Accurate and consistent segmentation plays an important role in the diagnosis, treatment planning, and monitoring of both High Grade Glioma (HGG), including Glioblastoma Multiforme (GBM), and Low Grade Glioma (LGG). Accuracy of segmentation can be affected by the imaging presentation of glioma, which varies greatly between the two tumor grade groups. In recent years, researchers have used Machine Learning (ML) to segment tumors more rapidly and consistently than manual segmentation. However, existing ML validation relies heavily on computing summary statistics and rarely tests the generalizability of an algorithm on clinically heterogeneous data. In this work, our goal is to investigate how to holistically evaluate the performance of ML algorithms on a brain tumor segmentation task. We address the need for rigorous evaluation of ML algorithms and present four axes of model evaluation: diagnostic performance, model confidence, robustness, and data quality. We perform a comprehensive evaluation of a glioma segmentation ML algorithm by stratifying data by specific tumor grade groups (GBM and LGG) and evaluating the algorithm on each of the four axes. The main takeaways of our work are: (1) ML algorithms need to be evaluated on out-of-distribution data to assess generalizability, reflective of tumor heterogeneity. (2) Segmentation metrics alone are insufficient to evaluate the errors made by ML algorithms and to describe their consequences. (3) Adopting tools from other domains, such as robustness testing (adversarial attacks) and model uncertainty (prediction intervals), leads to a more comprehensive performance evaluation. Such a holistic evaluation framework could shed light on an algorithm's clinical utility and help it evolve into a more clinically valuable tool.
Affiliation(s)
- Snehal Prabhudesai: Computer Science and Engineering, University of Michigan, Ann Arbor, MI, United States
- Nicholas Chandler Wang: Computational Medicine and Bioinformatics, Michigan Medicine, Ann Arbor, MI, United States
- Vinayak Ahluwalia: Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, United States
- Xun Huan: Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
- Nikola Banovic: Computer Science and Engineering, University of Michigan, Ann Arbor, MI, United States
- Arvind Rao: Computational Medicine and Bioinformatics, Michigan Medicine, Ann Arbor, MI, United States; Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States; Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, United States; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
15. Sahli H, Ben Slama A, Mouelhi A, Soayeh N, Rachdi R, Sayadi M. A computer-aided method based on geometrical texture features for a precocious detection of fetal Hydrocephalus in ultrasound images. Technol Health Care 2021; 28:643-664. [PMID: 32200362] [DOI: 10.3233/thc-191752]
Abstract
BACKGROUND Hydrocephalus is the most common anomaly of the fetal head, characterized by an excessive accumulation of fluid in the brain. Diagnosing fetal heads using traditional evaluation techniques is generally time consuming and error prone. Usually, fetal head size is computed from an ultrasound (US) image acquired at around 20-22 weeks of gestational age (GA). Biometrical measurements are extracted and compared with ground-truth charts to identify normal or abnormal growth. METHODS In this paper, an attempt has been made to enhance the Hydrocephalus characterization process by extracting additional geometrical and textural features to design an efficient recognition system. The advantages of this work are its reduced processing time compared with standard automatic approaches for routine examination and its support for early detection of fetal malformations, alerting experts to the existence of an abnormal outcome. The first task is a proposed pre-processing model using standard filtering and a segmentation scheme based on a modified Hough transform (MHT) to detect the region of interest. The obtained clinical parameters are then passed to a principal component analysis (PCA) model to obtain a reduced number of measures, which are employed in the classification stage. RESULTS Thanks to the combination of geometrical and statistical features, the classification process achieved more than 96% accuracy in detecting pathological subjects at premature ages. CONCLUSIONS The experimental results illustrate the success and accuracy of the proposed classification method for a factual diagnosis of fetal head malformation.
Affiliation(s)
- Hanene Sahli: University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Amine Ben Slama: University of Tunis El Manar, ISTMT, LR13ES07, LRBTM, Tunis, Tunisia
- Aymen Mouelhi: University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Nesrine Soayeh: Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Radhouane Rachdi: Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Mounir Sayadi: University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
16. Fang L, Zhang L, Yao Y. Integrating a learned probabilistic model with energy functional for ultrasound image segmentation. Med Biol Eng Comput 2021; 59:1917-1931. [PMID: 34383220] [DOI: 10.1007/s11517-021-02411-0]
Abstract
The segmentation of ultrasound (US) images is steadily growing in popularity, owing to the necessity of computer-aided diagnosis (CAD) systems and the advantages this technique offers, such as safety and efficiency. The objective of this work is to separate the lesion from its background in US images. However, most US images are of poor quality, affected by noise, ambiguous boundaries, and heterogeneity. Moreover, the lesion region may not be salient amid the other normal tissues, which makes its segmentation a challenging problem. In this paper, a US image segmentation algorithm that combines a learned probabilistic model with energy functionals is proposed. Firstly, a learned probabilistic model based on the generalized linear model (GLM) reduces false positives and increases the likelihood energy term of the lesion region. It yields a new probability projection that attracts the energy functional toward the desired region of interest. Then, a boundary indicator and a probability statistical-based energy functional are used to provide a reliable boundary for the lesion. Integrating probabilistic information into the energy functional framework can effectively overcome the impact of poor quality and further improve segmentation accuracy. To verify the performance of the proposed algorithm, 40 images were randomly selected from three databases for evaluation. The values of the Dice coefficient, the Jaccard distance, root-mean-square error, and mean absolute error are 0.96, 0.91, 0.059, and 0.042, respectively. Besides, the initialization of the segmentation algorithm and the influence of noise are also analyzed. The experiments show a significant improvement in performance.
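Several entries in this list report Dice and Jaccard overlap scores. For reference, a minimal sketch of how these metrics are computed from binary masks (illustrative; not taken from any of the cited papers):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index: |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Toy 1D "masks": 3 overlapping voxels out of 4 predicted and 4 true.
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(dice_coefficient(pred, truth))  # 2*3/(4+4) = 0.75
print(jaccard_index(pred, truth))     # 3/5 = 0.6
```

The same functions apply unchanged to 2D or 3D masks, since the reductions operate over all elements.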
Affiliation(s)
- Lingling Fang: Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China; Nanchang Institute of Technology, Nanchang, Jiangxi Province, China
- Lirong Zhang: Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
- Yibo Yao: Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
17. Orbach MR, Servaes SE, Mayer OH, Cahill PJ, Balasubramanian S. Quantifying lung and diaphragm morphology using radiographs in normative pediatric subjects, and predicting CT-derived lung volume. Pediatr Pulmonol 2021; 56:2177-2185. [PMID: 33860632] [DOI: 10.1002/ppul.25429]
Abstract
OBJECTIVE To quantify the effect of age on two-dimensional (2D) radiographic lung and diaphragm morphology and determine whether 2D radiographic lung measurements can be used to estimate computed tomography (CT)-derived lung volume in normative pediatric subjects. MATERIALS AND METHODS Digitally reconstructed radiographs (DRRs) were created using retrospective chest CT scans from 77 pediatric male and female subjects aged birth to 19 years. 2D lung and diaphragm measurements were made on the DRRs using custom MATLAB code, and Spearman correlations and exponential regression equations were used to relate 2D measurements to age. In addition, 3D lung volumes were segmented from the CT scans, and power regression equations were fitted to predict each lung's CT-derived volume from 2D lung measurements. The coefficient of determination (R²) and standard error of the estimate (SEE) were used to assess the precision of the predictive equations, with p < .05 indicating statistical significance. RESULTS All 2D radiographic lung and diaphragm measurements showed statistically significant positive correlations with age (p < .01), including lung major axis (Spearman rho ≥ 0.90). Precise estimates of CT-derived lung volumes can be made using 2D lung measurements (R² ≥ 0.95), including lung major axis (R² ≥ 0.97). INTERPRETATIONS The reported pediatric age-specific reference data on 2D lung and diaphragm morphology and growth rates could be used clinically to identify lung and diaphragm pathologies during chest X-ray evaluations. The simple, precise, and clinically adaptable radiographic method for estimating CT-derived lung volumes may be used when pulmonary function tests are not readily available or are difficult to perform.
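A power regression of the kind described (predicting a volume V from a 2D measurement x as V = a·x^b) reduces to ordinary least squares in log-log space. A minimal sketch on synthetic data follows; the coefficient values and the noise model are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "2D lung measurement" (cm) and volume (mL) following
# V = a * x**b with multiplicative noise; a_true and b_true are made up.
a_true, b_true = 2.5, 2.8
x = rng.uniform(5.0, 25.0, size=77)
volume = a_true * x**b_true * rng.lognormal(sigma=0.05, size=x.size)

# Power-law fit via linear least squares on log-transformed data:
# log V = log a + b * log x
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(volume), deg=1)
a_hat = np.exp(log_a_hat)

# Coefficient of determination computed in log space
resid = np.log(volume) - (log_a_hat + b_hat * np.log(x))
r2 = 1.0 - resid.var() / np.log(volume).var()
print(f"a ≈ {a_hat:.2f}, b ≈ {b_hat:.2f}, R² = {r2:.3f}")
```

Fitting in log space assumes multiplicative errors; with additive errors a nonlinear least-squares fit of V = a·x^b directly would be the more appropriate choice.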
Affiliation(s)
- Mattan R Orbach: School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, Pennsylvania, USA
- Sabah E Servaes: Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Oscar H Mayer: Division of Pulmonary Medicine, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Patrick J Cahill: Division of Orthopaedic Surgery, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Sriram Balasubramanian: School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, Pennsylvania, USA
18. Kuklyte J, Fitzgerald J, Nelissen S, Wei H, Whelan A, Power A, Ahmad A, Miarka M, Gregson M, Maxwell M, Raji R, Lenihan J, Finn-Moloney E, Rafferty M, Cary M, Barale-Thomas E, O’Shea D. Evaluation of the Use of Single- and Multi-Magnification Convolutional Neural Networks for the Determination and Quantitation of Lesions in Nonclinical Pathology Studies. Toxicol Pathol 2021; 49:815-842. [PMID: 33618634] [PMCID: PMC8091423] [DOI: 10.1177/0192623320986423]
Abstract
Digital pathology platforms with integrated artificial intelligence have the potential to increase the efficiency of the nonclinical pathologist's workflow by screening and prioritizing slides with lesions and highlighting areas with specific lesions for review. Herein, we describe the comparison of various single- and multi-magnification convolutional neural network (CNN) architectures to accelerate the detection of lesions in tissues. Different models were evaluated to define performance characteristics and efficiency in accurately identifying lesions in 5 key rat organs (liver, kidney, heart, lung, and brain). Cohorts for liver and kidney were collected from the TG-GATEs open-source repository, and those for heart, lung, and brain from internally selected R&D studies. Annotations were performed, and models were trained on each of the available lesion classes in the available organs. Various class-consolidation approaches were evaluated, from generalized lesion detection to individual lesion detection. The relationship between the amount of annotated lesions and the precision/accuracy of model performance is elucidated. The utility of multi-magnification CNN implementations in specific tissue subtypes is also demonstrated. The use of these CNN-based models offers users the ability to apply generalized lesion detection to whole-slide images, with the potential to generate novel quantitative data that would not be possible with conventional image analysis techniques.
Affiliation(s)
- Haolin Wei: Deciphex, Dublin City University, Dublin, Ireland
- Aoife Whelan: Deciphex, Dublin City University, Dublin, Ireland
- Adam Power: Deciphex, Dublin City University, Dublin, Ireland
- Ajaz Ahmad: Deciphex, Dublin City University, Dublin, Ireland
- Mark Gregson: Deciphex, Dublin City University, Dublin, Ireland
- Ruka Raji: Deciphex, Dublin City University, Dublin, Ireland
- Maurice Cary: Pathology Experts GmbH, Technologie Zentrum Witterswil, Witters, Switzerland
- Donal O’Shea: Deciphex, Dublin City University, Dublin, Ireland
19. Li H, Liu B, Zhang Y, Fu C, Han X, Du L, Gao W, Chen Y, Liu X, Wang Y, Wang T, Ma G, Lei B. 3D IFPN: Improved Feature Pyramid Network for Automatic Segmentation of Gastric Tumor. Front Oncol 2021; 11:618496. [PMID: 34094903] [PMCID: PMC8173118] [DOI: 10.3389/fonc.2021.618496]
Abstract
Automatic segmentation of gastric tumors not only provides image-guided clinical diagnosis but also assists radiologists in reading images, improving diagnostic accuracy. However, due to the inhomogeneous intensity distribution of gastric tumors in CT scans, their ambiguous or missing boundaries, and their highly variable shapes, it is quite challenging to develop an automatic solution. This study designs a novel 3D improved feature pyramid network (3D IFPN) to automatically segment gastric tumors in computed tomography (CT) images. To meet the challenges of this extremely difficult task, the proposed 3D IFPN makes full use of the complementary information within the low and high layers of deep convolutional neural networks and is equipped with three types of feature enhancement modules: a 3D adaptive spatial feature fusion (ASFF) module, a single-level feature refinement (SLFR) module, and a multi-level feature refinement (MLFR) module. The 3D ASFF module adaptively suppresses the feature inconsistency across levels and hence obtains multi-level features with high feature invariance. Then, the SLFR module combines the adaptive features and the previous multi-level features at each level to generate multi-level refined features via skip connections and an attention mechanism. The MLFR module adaptively recalibrates the channel-wise and spatial-wise responses by adding an attention operation, which improves the prediction capability of the network. Furthermore, a stage-wise deep supervision (SDS) mechanism and a hybrid loss function are embedded to enhance the feature learning ability of the network. A dataset of CT volumes collected from three Chinese medical centers was used to evaluate the segmentation performance of the proposed 3D IFPN model. Experimental results indicate that our method outperforms state-of-the-art segmentation networks in gastric tumor segmentation.
Moreover, to explore the generalization for other segmentation tasks, we also extend the proposed network to liver tumor segmentation in CT images of the MICCAI 2017 Liver Tumor Segmentation Challenge.
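The adaptive spatial feature fusion (ASFF) idea mentioned above, weighting multi-level feature maps by spatially varying softmax weights before summing them, can be sketched in plain NumPy. This is a simplified illustration of the fusion step only, not the 3D IFPN implementation; the shapes are arbitrary, and in the real network the weight logits come from learned convolutions.

```python
import numpy as np

def softmax(w, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(w - w.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asff_fuse(features, logits):
    """Fuse L same-shaped feature maps (L, C, H, W) using per-pixel
    softmax weights derived from logits of shape (L, 1, H, W)."""
    weights = softmax(logits, axis=0)        # sums to 1 across the L levels
    return (weights * features).sum(axis=0)  # convex combination -> (C, H, W)

# Three pyramid levels already resized to a common (C=4, H=8, W=8) shape.
rng = np.random.default_rng(0)
features = rng.standard_normal((3, 4, 8, 8))
logits = rng.standard_normal((3, 1, 8, 8))

fused = asff_fuse(features, logits)
print(fused.shape)  # (4, 8, 8)
```

Because the weights form a convex combination at every spatial location, the fused response at each pixel stays within the range spanned by the input levels, which is what suppresses cross-level inconsistency.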
Affiliation(s)
- Haimei Li: Department of Radiology, Fuxing Hospital, Capital Medical University, Beijing, China
- Bing Liu: Department of Radiology, China-Japan Friendship Hospital, Beijing, China; Graduate School of Peking Union Medical College, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yongtao Zhang: School of Biomedical Engineering, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Chao Fu: Department of Radiology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing, China
- Xiaowei Han: Department of Radiology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
- Lei Du: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Wenwen Gao: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yue Chen: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Xiuxiu Liu: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yige Wang: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Tianfu Wang: School of Biomedical Engineering, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Guolin Ma: Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Baiying Lei: School of Biomedical Engineering, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
20. Kim KC, Cho HC, Jang TJ, Choi JM, Seo JK. Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation. Comput Methods Programs Biomed 2021; 200:105833. [PMID: 33250283] [DOI: 10.1016/j.cmpb.2020.105833]
Abstract
For compression fracture detection and evaluation, an automatic X-ray image segmentation technique that combines deep-learning and level-set methods is proposed. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because they contain overlapping shadows of thoracoabdominal structures, including the lungs, bowel gas, and other bony structures such as the ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebra, inter-patient variability, and variations in image contrast. Accordingly, a structured hierarchical segmentation method is presented that combines the advantages of two deep-learning methods. Pose-driven learning is used to selectively identify the five lumbar vertebrae in an accurate and robust manner. With knowledge of the vertebral positions, M-net is employed to segment each individual vertebra. Finally, fine-tuning segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated on 160 lumbar X-ray images, resulting in a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of individual vertebrae.
Affiliation(s)
- Kang Cheol Kim: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Hyun Cheol Cho: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Tae Jun Jang: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Jin Keun Seo: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
21. Liermann J, Syed M, Ben-Josef E, Schubert K, Schlampp I, Sprengel SD, Ristau J, Weykamp F, Röhrich M, Koerber SA, Haberkorn U, Debus J, Herfarth K, Giesel FL, Naumann P. Impact of FAPI-PET/CT on Target Volume Definition in Radiation Therapy of Locally Recurrent Pancreatic Cancer. Cancers (Basel) 2021; 13:796. [PMID: 33672893] [PMCID: PMC7918160] [DOI: 10.3390/cancers13040796]
Abstract
Simple Summary: We demonstrate that manual target definition based on contrast-enhanced computed tomography is highly unreliable and inconsistent. In a second step, we used a novel positron emission tomography tracer, FAPI (68Ga-labeled fibroblast activation protein inhibitor), for target volume definition. FAPI-PET/CT carries biologic information, as it visualizes cancer-associated fibroblasts. The pioneering use of FAPI-PET/CT in radiation treatment planning improved target definition in locally recurrent pancreatic cancer. Abstract: (1) Background: A new radioactive positron emission tomography (PET) tracer uses inhibitors of fibroblast activation protein (FAPI) to visualize FAP-expressing cancer-associated fibroblasts. Significant FAPI uptake has recently been demonstrated in pancreatic cancer patients. Target volume delineation for radiation therapy still relies on often less precise conventional computed tomography (CT) imaging, especially in locally recurrent pancreatic cancer patients. The need for more precise tumor detection and delineation led us to innovatively use the novel FAPI-PET/CT for radiation treatment planning. (2) Methods: Gross tumor volumes (GTVs) of seven locally recurrent pancreatic cancer cases were contoured by six radiation oncologists. In addition, FAPI-PET/CT was used to automatically delineate the tumors. The interobserver variability in target definition was analyzed, and the FAPI-based automatic GTVs were compared with the manually defined GTVs. (3) Results: Target definition differed significantly between radiation oncologists, with mean Dice similarity coefficients (DSCs) between 0.55 and 0.65. There was no significant difference between the volumes of the automatic FAPI-based GTVs (threshold 2.0) and most of the GTVs manually contoured by the radiation oncologists. (4) Conclusion: Due to its high tumor-to-background contrast, FAPI-PET/CT seems to be a superior imaging modality compared with the current gold standard, contrast-enhanced CT, in pancreatic cancer. For the first time, we demonstrate how FAPI-PET/CT could facilitate target definition and increase consistency in radiation oncology for pancreatic cancer.
Affiliation(s)
(Shared Heidelberg institutions, all 69120 Heidelberg, Germany: RadOnc = Department of Radiation Oncology, Heidelberg University Hospital, Im Neuenheimer Feld 400; HIRO = Heidelberg Institute of Radiation Oncology, Im Neuenheimer Feld 400; NCT = National Center for Tumor Diseases, Im Neuenheimer Feld 460; HIT = Heidelberg Ion-Beam Therapy Center, Im Neuenheimer Feld 450; NucMed = Department of Nuclear Medicine, Heidelberg University Hospital, Im Neuenheimer Feld 400; DKFZ-CCU = Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280; DKTK = German Cancer Consortium, Partner Site Heidelberg, DKFZ, Im Neuenheimer Feld 280)
- Jakob Liermann: RadOnc; HIRO; NCT; HIT. Correspondence: Tel.: +49-622-156-8202
- Mustafa Syed: RadOnc; HIRO; NCT; HIT
- Edgar Ben-Josef: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Kai Schubert: RadOnc; HIRO
- Ingmar Schlampp: RadOnc; HIRO; NCT
- Simon David Sprengel: RadOnc; HIRO; NCT
- Jonas Ristau: RadOnc; HIRO; NCT
- Fabian Weykamp: RadOnc; HIRO; NCT
- Manuel Röhrich: NucMed
- Stefan A. Koerber: RadOnc; HIRO; NCT
- Uwe Haberkorn: NucMed
- Juergen Debus: RadOnc; HIRO; NCT; HIT; DKFZ-CCU; DKTK
- Klaus Herfarth: RadOnc; HIRO; NCT; HIT; DKFZ-CCU; DKTK
- Frederik L. Giesel: NucMed
- Patrick Naumann: RadOnc; HIRO; NCT
Collapse
22
Vogin G, Hettal L, Bartau C, Thariat J, Claeys MV, Peyraga G, Retif P, Schick U, Antoni D, Bodgal Z, Dhermain F, Feuvret L. Cranial organs at risk delineation: heterogenous practices in radiotherapy planning. Radiat Oncol 2021; 16:26. [PMID: 33541394] [PMCID: PMC7863275] [DOI: 10.1186/s13014-021-01756-y]
Abstract
BACKGROUND Segmentation is a crucial step in treatment planning that directly impacts dose distribution and optimization. The aim of this study was to evaluate the inter-individual variability of common cranial organ-at-risk (OAR) delineation in neurooncology practice. METHODS Anonymized simulation contrast-enhanced CT and MR scans of one patient with a solitary brain metastasis were used for delineation and analysis. Expert professionals from 16 radiotherapy centers involved in brain structure delineation were asked to segment 9 OAR on their own treatment planning systems. As a reference, two experts in neurooncology produced a single consensual contour set according to guidelines. Overlap ratio, Kappa index (KI), volumetric ratio, Commonly Contoured Volume, and Supplementary Contoured Volume were evaluated using Artiview™ v2.8.2, according to the occupation, seniority and level of expertise of all participants. RESULTS For the most frequently delineated and largest OAR, the mean KI values are often good (0.8 for the parotid and the brainstem); however, for the smaller OAR, KI values degrade (0.3 for the optic chiasm, 0.5 for the cochlea), with a significant discrimination (p < 0.01). The radiation oncologists who are members of the Association des Neuro-Oncologues d'Expression Française performed better on all indicators than non-members (p < 0.01). Our exercise was effective in separating the different participating centers with 3 of the reported indicators (p < 0.01). CONCLUSION Our study illustrates the heterogeneity in normal-structure contouring between professionals. We emphasize the need to harmonize cerebral OAR delineation, which is a major determinant of the therapeutic ratio and of clinical trial evaluation.
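The Kappa index (KI) used in this study is a chance-corrected agreement measure. As a minimal sketch of the idea (not the Artiview™ implementation used by the authors; the masks below are hypothetical toy data), voxel-wise Cohen's kappa between two binary delineations can be computed as:

```python
import numpy as np

def cohen_kappa(mask_a, mask_b):
    """Voxel-wise Cohen's kappa between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool).ravel()
    b = np.asarray(mask_b, dtype=bool).ravel()
    po = np.mean(a == b)  # observed agreement
    # Chance agreement: both say "inside" plus both say "outside"
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return float((po - pe) / (1 - pe))

# Toy 1-D "delineations" that mostly agree
a = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
b = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
print(round(cohen_kappa(a, b), 3))  # → 0.714
```

Values near 1 indicate agreement well beyond chance; small structures such as the chiasm or cochlea tend to score lower because a few discordant voxels weigh heavily.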
Affiliation(s)
- Guillaume Vogin
- Department of Radiation Oncology, Institut de Cancérologie de Lorraine, Vandoeuvre Les Nancy, France
- IMoPA, UMR 7365 CNRS-Université de Lorraine, Vandoeuvre Les Nancy, France
- Centre National de radiothérapie du Grand-Duché de Luxembourg, Centre François Baclesse, Boîte postale 436, 4005 Esch sur Alzette, Luxembourg
- Liza Hettal
- IMoPA, UMR 7365 CNRS-Université de Lorraine, Vandoeuvre Les Nancy, France
- Clarisse Bartau
- Aquilab SAS, Parc Eurasanté - 250 rue Salvador Allende, Loos, France
- Juliette Thariat
- Département de Radiothérapie, Centre François Baclesse/ARCHADE, 3 Av General Harris, Caen, France
- Laboratoire de Physique Corpusculaire IN2P3/ENSICAEN - UMR6534 - Unicaen, Normandie Université, Caen, France
- Guillaume Peyraga
- Service de Radiothérapie, Institut Universitaire du Cancer de Toulouse (Oncopole), Toulouse, France
- Paul Retif
- Service de Radiothérapie, CHR de Metz-Thionville Site Mercy, Metz, France
- Ulrike Schick
- Département de radiothérapie, CHU de Brest, Brest, France
- Delphine Antoni
- Département de radiothérapie, Institut de Cancérologie Strasbourg Europe (ICANS), Strasbourg, France
- Zsuzsa Bodgal
- Centre National de radiothérapie du Grand-Duché de Luxembourg, Centre François Baclesse, Boîte postale 436, 4005 Esch sur Alzette, Luxembourg
- Frederic Dhermain
- Radiation Oncology Department, Gustave Roussy University Hospital, Villejuif, France
- Loic Feuvret
- Department of Radiation Oncology, AP-HP, Hôpitaux Universitaires La Pitié Salpêtrière - Charles Foix, Sorbonne Université, Paris, France
23
Automated ultrasound assessment of amniotic fluid index using deep learning. Med Image Anal 2021; 69:101951. [PMID: 33515982] [DOI: 10.1016/j.media.2020.101951]
Abstract
The estimation of antenatal amniotic fluid (AF) volume (AFV) is important, as it offers crucial information about fetal development, fetal well-being, and perinatal prognosis. However, AFV measurement is cumbersome and patient specific. Moreover, it is heavily sonographer-dependent, with measurement accuracy varying greatly with the sonographer's experience. Therefore, the development of accurate, robust, and adoptable methods to evaluate AFV is highly desirable; automation is expected to reduce user-based variability and the workload of sonographers. However, automating AFV measurement is very challenging, because accurate detection of AF pockets is difficult owing to various confounding factors, such as reverberation artifacts, AF-mimicking regions and floating matter. Furthermore, the AF pocket exhibits an unspecified variety of shapes and sizes, and ultrasound images often show missing or incomplete structural boundaries. To overcome these difficulties, we develop a hierarchical deep-learning-based method that incorporates clinicians' anatomical-knowledge-based approaches. The key step is the segmentation of the AF pocket using our proposed deep learning network, AF-net. AF-net is a variation of U-net combined with three complementary concepts: atrous convolution, a multi-scale side-input layer, and a side-output layer. The experimental results demonstrate that the proposed method provides a measurement of the amniotic fluid index (AFI) that is as robust and precise as the results from clinicians. The proposed method achieved a Dice similarity of 0.877±0.086 for AF segmentation, and a mean absolute error of 2.666±2.986 and mean relative error of 0.018±0.023 for the AFI value. To the best of our knowledge, our method provides, for the first time, an automated measurement of the AFI.
24
Tyyger M, Bhaumik S, Nix M, Currie S, Nallathambi C, Speight R, Al-Qaisieh B, Murray L. Volumetric and dosimetric impact of post-surgical MRI-guided radiotherapy for glioblastoma: A pilot study. BJR Open 2021; 3:20210067. [PMID: 35707751] [PMCID: PMC9185844] [DOI: 10.1259/bjro.20210067]
Abstract
Objectives: Glioblastoma (GBM) radiotherapy (RT) target delineation requires MRI, ideally concurrent with CT simulation (pre-RT MRI). Due to limited MRI availability, a <72 h post-surgery MRI is commonly used instead. Whilst previous investigations have assessed volumetric differences between post-surgical and pre-RT delineations, the dosimetric impact remains unknown. We quantify the volumetric and dosimetric impact of using post-surgical MRI for GBM target delineation. Methods: Gross tumour volumes (GTVs) for five GBM patients receiving chemo-RT with post-surgical and pre-RT MRIs were delineated by three independent observers. Planning target volumes (PTVs) and RT plans were generated for each GTV. Volumetric and dosimetric differences were assessed through absolute volumes, volume-distance histograms and dose-volume histogram statistics. Results: Post-surgical MRI delineations had significantly (p < 0.05) larger GTV and PTV volumes (median 16.7 and 64.4 cm3, respectively). Post-surgical RT plans, applied to pre-RT delineations, had significantly decreased (p < 0.01) median PTV doses (ΔD99% = −8.1 Gy and ΔD95% = −2.0 Gy). Median organ-at-risk (OAR) dose increases (brainstem ΔD5% = +0.8 Gy, normal brain mean dose = +2.9 Gy and normal brain ΔD10% = +5.3 Gy) were observed. Conclusion: Post-surgical MRI delineation significantly impacted RT planning, with larger normal-appearing tissue volumes irradiated and increased OAR doses, despite reduced coverage of the pre-RT defined target. Advances in knowledge: We believe this is the first investigation assessing the dosimetric impact of using post-surgical MRI for GBM target delineation. It highlights the potential for significantly degraded RT plans, showing the clinical need for dedicated MRI for GBM RT.
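The D99% and D95% statistics above are read off the dose-volume histogram. As a rough sketch (toy voxel doses, not the study's planning-system computation), Dx% can be obtained from a sorted dose array:

```python
import numpy as np

def dose_at_volume(dose, volume_pct):
    """D_x%: minimum dose received by the hottest x% of a structure's voxels,
    i.e. the dose level that covers x% of the structure volume."""
    d = np.sort(np.asarray(dose, dtype=float).ravel())[::-1]  # descending
    k = int(np.ceil(volume_pct / 100.0 * d.size)) - 1          # index of the x% cutoff
    return float(d[max(k, 0)])

doses = np.array([60.0, 59.0, 58.0, 57.0, 40.0])  # hypothetical voxel doses in Gy
print(dose_at_volume(doses, 95), dose_at_volume(doses, 80))  # → 40.0 57.0
```

A ΔD99% is then simply the difference of this statistic between two plans evaluated on the same structure.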
Affiliation(s)
- Marcus Tyyger
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Michael Nix
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Stuart Currie
- Department of Neuroradiology, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Richard Speight
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Louise Murray
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Radiotherapy Research Group, University of Leeds, Leeds, UK
25
Kulkarni SS, Shetty NS, Gala KB, Patkar S, Narang A, Polnaya AM, Patil S, Shetty NG, Hota F, Goel M. A Validation Study of Liver Volumetry Estimation by a Semiautomated Software in Patients Undergoing Hepatic Resections. J Clin Interv Radiol ISVIR 2020. [DOI: 10.1055/s-0040-1721534]
Abstract
Purpose: The purpose of this study was to validate the use of a semiautomated software for preoperative liver volumetry by comparing its estimates with the volume of the resected specimen in patients undergoing hepatic resections. Materials and Methods: This is a single-center retrospective study of patients who underwent estimation of the future liver remnant (FLR) before hepatectomy using Myrian XP-Liver, a semiautomated software. The estimated resection volume, which is the sum of the volume of normal liver to be resected and the tumor volume, was compared with the actual specimen weight to calculate the accuracy of the software. The statistical analysis was performed with SPSS software version 24. Results: Data on FLR estimation using the semiautomated software were available for 200 out of 388 patients who underwent formal hepatic resections. The median resected volume of the surgical specimen was 650 mL (interquartile range [IQR] 364–950), while the median estimated volume using the Myrian software was 617 mL (IQR 362–979). There was a significant correlation between the estimated resection volume calculated using the semiautomated method and the actual specimen weight (p < 0.0001), with a Spearman's correlation coefficient of 0.956. Conclusion: The estimated volume of liver to be resected as calculated by the semiautomated software was accurate and correlated significantly with the volume of the resected specimen; hence, the estimated FLR volume is likely to correlate with the true postoperative residual liver volume. In addition, the software-based liver segmentation, FLR estimation, and color-coded three-dimensional images provide a clear road map to the surgeon to facilitate safe resection.
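The Spearman correlation reported here is rank-based, so it tolerates the skewed volume distributions typical of resection specimens. A minimal sketch with hypothetical volume pairs (for distinct values; a statistics package would additionally apply a tie correction):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)  # 0-based ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # 0-based ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

est = [617, 362, 979, 500, 720]  # hypothetical estimated volumes (mL)
act = [650, 364, 950, 540, 700]  # hypothetical resected-specimen volumes (mL)
print(spearman_rho(est, act))  # → 1.0 (identical orderings)
```

Because only orderings matter, a rho of 0.956 as reported can coexist with sizeable absolute volume differences for individual patients.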
Affiliation(s)
- Suyash S. Kulkarni
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Nitin Sudhakar Shetty
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Kunal B. Gala
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Shraddha Patkar
- Homi Bhabha National Institute, Mumbai, India
- Gastrointestinal and HPB Surgery, Department of Surgical Oncology, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Amrita Narang
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Ashwin M. Polnaya
- Department of Radio-Diagnosis and Imaging, A. J. Institute of Medical Science and Research Centre, Mangalore, Karnataka, India
- Sushil Patil
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Neeraj G. Shetty
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Falguni Hota
- Interventional Radiology, Department of Radio-Diagnosis, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
- Homi Bhabha National Institute, Mumbai, India
- Mahesh Goel
- Homi Bhabha National Institute, Mumbai, India
- Gastrointestinal and HPB Surgery, Department of Surgical Oncology, Tata Memorial Hospital, Tata Memorial Centre, Mumbai, Maharashtra, India
26
Automatic adjustment of the pulse-coupled neural network hyperparameters based on differential evolution and cluster validity index for image segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2019.105547]
27
Zeng C, Gu L, Liu Z, Zhao S. Review of Deep Learning Approaches for the Segmentation of Multiple Sclerosis Lesions on Brain MRI. Front Neuroinform 2020; 14:610967. [PMID: 33328949] [PMCID: PMC7714963] [DOI: 10.3389/fninf.2020.610967]
Abstract
In recent years, there have been multiple literature reviews of methods for automatically segmenting multiple sclerosis (MS) lesions. However, no review has systematically and individually examined deep-learning-based MS lesion segmentation methods. Previous reviews that did include deep learning omitted some deep-learning-based methods, and they did not go deep into the specific categories of Convolutional Neural Network (CNN); they reviewed these methods only in a generalized form, such as by supervision strategy or input-data handling strategy. This paper presents a systematic review of the literature on automated multiple sclerosis lesion segmentation based on deep learning. The reviewed deep learning algorithms are classified into two categories by their CNN style, and their strengths and weaknesses are given through our investigation and analysis. We give a quantitative comparison of the reviewed methods using two metrics: the Dice Similarity Coefficient (DSC) and the Positive Predictive Value (PPV). Finally, the future direction of the application of deep learning in MS lesion segmentation is discussed.
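The two comparison metrics named above are straightforward to compute from a predicted and a reference mask; a minimal sketch on toy binary vectors:

```python
import numpy as np

def dsc_ppv(pred, truth):
    """Dice similarity coefficient and positive predictive value for binary masks."""
    p = np.asarray(pred, dtype=bool).ravel()
    t = np.asarray(truth, dtype=bool).ravel()
    tp = np.sum(p & t)    # correctly predicted lesion voxels
    fp = np.sum(p & ~t)   # false alarms
    fn = np.sum(~p & t)   # missed lesion voxels
    return float(2 * tp / (2 * tp + fp + fn)), float(tp / (tp + fp))

pred = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 1, 0]
dsc, ppv = dsc_ppv(pred, truth)
print(dsc, ppv)
```

DSC penalizes both false positives and false negatives, whereas PPV penalizes only false positives, which is why reviews typically report the pair together.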
Affiliation(s)
- Chenyi Zeng
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
- Lin Gu
- RIKEN AIP, Tokyo, Japan
- The University of Tokyo, Tokyo, Japan
- Zhenzhong Liu
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China
- National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Shen Zhao
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
28
Renard F, Guedria S, Palma ND, Vuillerme N. Variability and reproducibility in deep learning for medical image segmentation. Sci Rep 2020; 10:13724. [PMID: 32792540] [PMCID: PMC7426407] [DOI: 10.1038/s41598-020-69920-0]
Abstract
Medical image segmentation is an important tool for current clinical applications. It is the backbone of numerous clinical diagnosis methods, oncological treatments and computer-integrated surgeries. A new class of machine learning algorithms, deep learning, outperforms classical segmentation methods in terms of accuracy. However, these techniques are complex and can exhibit a high range of variability, calling the reproducibility of the results into question. In this article, through a literature review, we propose an original overview of the sources of variability to better understand the challenges and issues of reproducibility related to deep learning for medical image segmentation. Finally, we propose three main recommendations to address these potential issues: (1) an adequate description of the deep learning framework, (2) a suitable analysis of the different sources of variability within that framework, and (3) an efficient system for evaluating the segmentation results.
Affiliation(s)
- Félix Renard
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France
- Univ. Grenoble Alpes, AGEIS, 38000 Grenoble, France
- Soulaimane Guedria
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France
- Univ. Grenoble Alpes, AGEIS, 38000 Grenoble, France
- Noel De Palma
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France
- Nicolas Vuillerme
- Univ. Grenoble Alpes, AGEIS, 38000 Grenoble, France
- Institut Universitaire de France, Paris, France
29
von der Esch E, Kohles AJ, Anger PM, Hoppe R, Niessner R, Elsner M, Ivleva NP. TUM-ParticleTyper: A detection and quantification tool for automated analysis of (Microplastic) particles and fibers. PLoS One 2020; 15:e0234766. [PMID: 32574195] [PMCID: PMC7310837] [DOI: 10.1371/journal.pone.0234766]
Abstract
TUM-ParticleTyper is a novel program for the automated detection, quantification and morphological characterization of fragments, including particles and fibers, in images from optical, fluorescence and electron microscopy (SEM). It can be used to automatically select targets for subsequent chemical analysis, e.g., Raman microscopy, or any other single particle identification method. The program was specifically developed and validated for the analysis of microplastic particles on gold coated polycarbonate filters. Our method development was supported by the design of a filter holder that minimizes filter roughness and facilitates enhanced focusing for better images and Raman measurements. The TUM-ParticleTyper software is tunable to the user's specific sample demands and can extract the morphological characteristics of detected objects (coordinates, Feret's diameter min / max, area and shape). Results are saved in csv-format and contours of detected objects are displayed as an overlay on the original image. Additionally, the program can stitch a set of images to create a full image out of several smaller ones. An additional useful feature is the inclusion of a statistical process to calculate the minimum number of particles that must be chemically identified to be representative of all particles localized on the substrate. The program performance was evaluated on genuine microplastic samples. The TUM-ParticleTyper software localizes particles using an adaptive threshold with results comparable to the "gold standard" method (manual localization by an expert) and surpasses the commonly used Otsu thresholding by doubling the rate of true positive localizations. This enables the analysis of a statistically significant number of particles on the filter selected by random sampling, measured via single point approach. 
This extreme reduction in measurement points was validated by comparison with chemical imaging, applying both procedures to the same area at comparable processing times. The single-point approach was both faster and more accurate, demonstrating the applicability of the presented program.
Affiliation(s)
- Elisabeth von der Esch
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Alexander J. Kohles
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Philipp M. Anger
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Roland Hoppe
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Reinhard Niessner
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Martin Elsner
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
- Natalia P. Ivleva
- Institute of Hydrochemistry, Chair of Analytical Chemistry and Water Chemistry, Technical University of Munich, Munich, Germany
30
Jiao H, Jiang X, Pang Z, Lin X, Huang Y, Li L. Deep Convolutional Neural Networks-Based Automatic Breast Segmentation and Mass Detection in DCE-MRI. Comput Math Methods Med 2020; 2020:2413706. [PMID: 32454879] [PMCID: PMC7232735] [DOI: 10.1155/2020/2413706]
Abstract
Breast segmentation and mass detection in medical images are important for diagnosis and treatment follow-up. Automation of these challenging tasks can assist radiologists by reducing the high manual workload of breast cancer analysis. In this paper, deep convolutional neural networks (DCNN) were employed for breast segmentation and mass detection in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). First, the region of the breasts was segmented from the remaining body parts by building a fully convolutional neural network based on U-Net++. Using deep learning to extract the target area helps to reduce interference from tissue external to the breast. Second, a Faster Region-based Convolutional Neural Network (Faster R-CNN) was used for mass detection on the segmented breast images. The DCE-MRI dataset used in this study was obtained from 75 patients, and a 5-fold cross-validation method was adopted. The statistical analysis of breast region segmentation was carried out by computing the Dice similarity coefficient (DSC), Jaccard coefficient, and segmentation sensitivity. For validation of breast mass detection, the sensitivity against the number of false positives per case was computed and analyzed. The Dice and Jaccard coefficients and the segmentation sensitivity value for breast region segmentation were 0.951, 0.908, and 0.948, respectively, better than those of the original U-Net algorithm, and the average sensitivity for mass detection reached 0.874 with 3.4 false positives per case.
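Detection figures of this kind (overall sensitivity at a given number of false positives per case) can be aggregated from per-case counts; a minimal sketch on hypothetical counts, not the study's data:

```python
def detection_stats(cases):
    """Aggregate per-case (true positives, false positives, lesions present)
    into overall lesion sensitivity and mean false positives per case."""
    tp = sum(c[0] for c in cases)
    fp = sum(c[1] for c in cases)
    lesions = sum(c[2] for c in cases)
    return tp / lesions, fp / len(cases)

# Hypothetical results for four cases: (TP, FP, lesions)
cases = [(2, 3, 2), (1, 4, 2), (1, 3, 1), (3, 4, 3)]
sensitivity, fp_per_case = detection_stats(cases)
print(sensitivity, fp_per_case)  # → 0.875 3.5
```

Sweeping the detector's confidence threshold and recomputing this pair at each point yields the FROC curve commonly used to compare detection models.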
Affiliation(s)
- Han Jiao
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Xinhua Jiang
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Zhiyong Pang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Xiaofeng Lin
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Yihua Huang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Li Li
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
31
Comelli A, Bignardi S, Stefano A, Russo G, Sabini MG, Ippolito M, Yezzi A. Development of a new fully three-dimensional methodology for tumours delineation in functional images. Comput Biol Med 2020; 120:103701. [PMID: 32217282] [PMCID: PMC7237290] [DOI: 10.1016/j.compbiomed.2020.103701]
Abstract
Delineation of tumours in Positron Emission Tomography (PET) plays a crucial role in accurate diagnosis and radiotherapy treatment planning. In this context, it is of utmost importance to devise efficient and operator-independent segmentation algorithms capable of reconstructing the tumour's three-dimensional (3D) shape. In previous work, we proposed a system for 3D tumour delineation on PET data (expressed in terms of the Standardized Uptake Value, SUV), based on a two-step approach. Step 1 identified the slice enclosing the maximum SUV and generated a rough contour surrounding it. Such contour was then used to initialize step 2, where the 3D shape of the tumour was obtained by separately segmenting 2D PET slices, leveraging the slice-by-slice marching approach. Additionally, we combined active contours and machine learning components to improve performance. Despite its success, the slice-marching approach poses unnecessary limitations that are naturally removed by performing the segmentation directly in 3D. In this paper, we migrate our system into 3D. In particular, the segmentation in step 2 is now performed by evolving an active surface directly in the 3D space. The key advantage of this advancement is that it performs the shape reconstruction on the whole stack of slices simultaneously, naturally leveraging cross-slice information that could not be exploited before. Additionally, it does not require any specific stopping condition, as the active surface naturally reaches a stable topology once convergence is achieved. Performance of this fully 3D approach was evaluated on the same dataset discussed in our previous work, which comprises fifty PET scans of lung, head and neck, and brain tumours. The results confirm that a benefit is indeed achieved in practice for all investigated anatomical districts, both quantitatively, through a set of commonly used quality indicators (Dice similarity coefficient > 87.66%, Hausdorff distance < 1.48 voxels and Mahalanobis distance < 0.82 voxels), and qualitatively, in terms of Likert score (> 3 in 54% of the tumours).
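The Hausdorff distance quoted in these quality indicators measures the worst-case disagreement between two surfaces. A minimal brute-force sketch for two small point sets (real toolkits use spatial indexing for large surfaces; the coordinates below are toy data):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (one point per row)."""
    a = np.atleast_2d(np.asarray(a, dtype=float))
    b = np.atleast_2d(np.asarray(b, dtype=float))
    # Pairwise Euclidean distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

A = np.array([[0, 0], [1, 0]])  # toy contour samples (e.g. voxel coordinates)
B = np.array([[0, 0], [1, 1]])
print(hausdorff(A, B))  # → 1.0
```

Because it is a maximum over minimum distances, a single outlying contour point dominates the value, which is why it complements overlap measures such as the Dice coefficient.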
Affiliation(s)
- Albert Comelli
- Ri.MED Foundation, via Bandiera 11, 90133 Palermo, Italy
- Samuel Bignardi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
32
Sbei A, ElBedoui K, Barhoumi W, Maktouf C. Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans. Comput Biol Med 2020; 119:103669. [PMID: 32339115] [DOI: 10.1016/j.compbiomed.2020.103669]
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, when segmenting tumors, several challenges have to be considered, notably heterogeneity and the problem of leakage into surrounding tissues with similar high uptake. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans. The method is mainly based on creating intermediate images. First, an automatic detection technique determines a preliminary Interesting Uptake Region (IUR). To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Then, smart seeds are provided to the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that tend to reduce the heterogeneity encountered in the original images, the tumor map gradient is combined with the gradient image. Lastly, segmentation based on the GCsummax technique is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods, confirming the crucial role of automatically created intermediate images in addressing the problem of wrongly estimated arc weights for heterogeneous targets.
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
- Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia.
- Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
33
Li J, Udupa JK, Tong Y, Wang L, Torigian DA. LinSEM: Linearizing segmentation evaluation metrics for medical images. Med Image Anal 2020; 60:101601. [PMID: 31811980] [PMCID: PMC6980787] [DOI: 10.1016/j.media.2019.101601]
Abstract
Numerous algorithms are available for segmenting medical images. Empirical discrepancy metrics are commonly used in measuring the similarity or difference between segmentations by algorithms and "true" segmentations. However, one issue with the commonly used metrics is that the same metric value often represents different levels of "clinical acceptability" for different objects depending on their size, shape, and complexity of form. An ideal segmentation evaluation metric should be able to reflect degrees of acceptability directly from metric values and be able to show the same acceptability meaning by the same metric value for objects of different shape, size, and form. Intuitively, metrics which have a linear relationship with degree of acceptability will satisfy these conditions of the ideal metric. This issue has not been addressed in the medical image segmentation literature. In this paper, we propose a method called LinSEM for linearizing commonly used segmentation evaluation metrics based on corresponding degrees of acceptability evaluated by an expert in a reader study. LinSEM consists of two main parts: (a) estimating the relationship between metric values and degrees of acceptability separately for each considered metric and object, and (b) linearizing any given metric value corresponding to a given segmentation of an object based on the estimated relationship. Since algorithmic segmentations do not usually cover the full range of variability of acceptability, we create a set (SS) of simulated segmentations for each object that guarantee such coverage by using image transformations applied to a set (ST) of true segmentations of the object. We then conduct a reader study wherein the reader assigns an acceptability score (AS) for each sample in SS, expressing the acceptability of the sample on a 1 to 5 scale. Then the metric-AS relationship is constructed for the object by using an estimation method. 
With the idea that the ideal metric should be linear with respect to acceptability, any metric value of a segmentation sample of the object, drawn from a set (SA) of actual segmentations, can then be mapped to its linearized value by using the constructed metric-acceptability relationship curve. Experiments are conducted involving three metrics - Dice coefficient (DC), Jaccard index (JI), and Hausdorff Distance (HD) - on five objects: skin outer boundary of the head and neck (cervico-thoracic) body region superior to the shoulders, right parotid gland, mandible, cervical esophagus, and heart. Actual segmentations (SA) of these objects are generated via our Automatic Anatomy Recognition (AAR) method. Our results indicate that, generally, JI has a more linear relationship with acceptability before linearization than the other metrics. LinSEM achieves significantly improved uniformity of meaning post-linearization across all tested objects and metrics, except in a few cases where the departure from linearity was insignificant. The improvement is generally largest for DC and HD, reaching 8-25% in many tested cases. Although some objects (such as right parotid gland and esophagus for DC and JI) are close to each other in meaning before linearization while remaining distant from the other objects, they are brought close to those objects after linearization. This underscores the importance of performing linearization over all objects in a body region, and body-wide.
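The metrics and the linearization step lend themselves to a compact sketch. The piecewise-linear estimate of the metric-acceptability curve and the rescaling of the 1-5 acceptability score onto [0, 1] are illustrative assumptions, not the paper's exact estimation method.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient (DC) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard(a, b):
    """Jaccard index (JI) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def linearize(metric_value, metric_samples, acceptability_samples):
    """Map a raw metric value through the estimated metric-acceptability
    curve (here a piecewise-linear interpolant over reader-study samples),
    then rescale the 1-5 acceptability score onto [0, 1]."""
    order = np.argsort(metric_samples)
    m = np.asarray(metric_samples, dtype=float)[order]
    a = np.asarray(acceptability_samples, dtype=float)[order]
    return (np.interp(metric_value, m, a) - 1.0) / 4.0
```

With reader-study samples mapping metric values 0, 0.5, and 1 to acceptability scores 1, 3, and 5, a raw metric of 0.5 linearizes to 0.5; a curved relationship would instead shift the value toward its acceptability-implied position.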
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai 200240, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States.
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai 200240, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
34
Thaha R, Jogi SP, Rajan S, Mahajan V, Venugopal VK, Mehndiratta A, Singh A. Modified radial-search algorithm for segmentation of tibiofemoral cartilage in MR images of patients with subchondral lesion. Int J Comput Assist Radiol Surg 2020; 15:403-413. [DOI: 10.1007/s11548-020-02116-z]
35
Demirer M, Candemir S, Bigelow MT, Yu SM, Gupta V, Prevedello LM, White RD, Yu JS, Grimmer R, Wels M, Wimmer A, Halabi AH, Ihsani A, O'Donnell TP, Erdal BS. A User Interface for Optimizing Radiologist Engagement in Image Data Curation for Artificial Intelligence. Radiol Artif Intell 2019; 1:e180095. [PMID: 33937804] [DOI: 10.1148/ryai.2019180095]
Abstract
Purpose To delineate image data curation needs and describe a locally designed graphical user interface (GUI) to aid radiologists in image annotation for artificial intelligence (AI) applications in medical imaging. Materials and Methods GUI components support image analysis toolboxes, picture archiving and communication system integration, third-party applications, processing of scripting languages, and integration of deep learning libraries. For clinical AI applications, GUI components included two-dimensional segmentation and classification; three-dimensional segmentation and quantification; and three-dimensional segmentation, quantification, and classification. To assess radiologist engagement and performance efficiency associated with GUI-related capabilities, image annotation rate (studies per day) and speed (minutes per case) were evaluated in two clinical scenarios of varying complexity: hip fracture detection and coronary atherosclerotic plaque demarcation and stenosis grading. Results For hip fracture, 1050 radiographs were annotated over 7 days (150 studies per day; median speed: 10 seconds per study [interquartile range, 3-21 seconds per study]). A total of 294 coronary CT angiographic studies with 1843 arteries and branches were annotated for atherosclerotic plaque over 23 days (15.2 studies [80.1 vessels] per day; median speed: 6.08 minutes per study [interquartile range, 2.8-10.6 minutes per study] and 73 seconds per vessel [interquartile range, 20.9-155 seconds per vessel]). Conclusion GUI-component compatibility with common image analysis tools facilitates radiologist engagement in image data curation, including image annotation, supporting AI application development and evolution for medical imaging. When complemented by other GUI elements, the result is a continuous, integrated workflow that supports the formation of an agile deep neural network life cycle. Supplemental material is available for this article. © RSNA, 2019.
Affiliation(s)
- Mutlu Demirer, Sema Candemir, Matthew T Bigelow, Sarah M Yu, Vikash Gupta, Luciano M Prevedello, Richard D White, Joseph S Yu, Rainer Grimmer, Michael Wels, Andreas Wimmer, Abdul H Halabi, Alvin Ihsani, Thomas P O'Donnell, Barbaros S Erdal
- Department of Radiology, Laboratory for Augmented Intelligence in Imaging-Division of Medical Imaging Informatics, Ohio State University College of Medicine, OSU Wexner Medical Center, 395 W 12th Ave, Suite 452, Columbus, OH 43210 (M.D., S.C., M.T.B., S.M.Y., V.G., L.M.P., R.D.W., J.S.Y., B.S.E.); Siemens Healthineers, Erlangen, Germany (R.G., M.W., A.W.); NVIDIA, Santa Clara, Calif (A.H.H., A.I.); and Siemens Healthineers, Malvern, Pa (T.P.O.)
36
Yang S, Yoon HJ, Yazdi SJM, Lee JH. A novel automated lumen segmentation and classification algorithm for detection of irregular protrusion after stents deployment. Int J Med Robot 2019; 16:e2033. [PMID: 31469940] [DOI: 10.1002/rcs.2033]
Abstract
BACKGROUND Clinically, irregular protrusions and blockages after stent deployment can lead to significant adverse outcomes such as thrombotic reocclusion or restenosis. In this study, we propose a novel, fully automated method for irregular lumen segmentation and normal/abnormal lumen classification. METHODS The proposed method consists of lumen segmentation, feature extraction, and lumen classification. In total, 92 features were extracted to classify normal/abnormal lumen. The lumen classification method combines a supervised learning algorithm with feature selection based on a partition-membership filter. RESULTS The proposed lumen segmentation method obtained an average Dice similarity coefficient (DSC) of 97.6%, and the proposed features with a random forest (RF) classifier achieved an accuracy of 98.2% for normal/abnormal lumen classification. CONCLUSIONS The method can therefore lead to a better understanding of overall vascular status and help inform cardiovascular diagnosis.
Affiliation(s)
- Su Yang
- Department of Biomedical Engineering, School of Medicine, Keimyung University, Daegu, South Korea
- Hyuck-Jun Yoon
- Department of Internal Medicine, School of Medicine, Keimyung University, Daegu, South Korea
- Jong-Ha Lee
- Department of Biomedical Engineering, School of Medicine, Keimyung University, Daegu, South Korea
37
Consistent validation of gray-level thresholding image segmentation algorithms based on machine learning classifiers. Stat Pap (Berl) 2019. [DOI: 10.1007/s00362-019-01138-3]
38
Gao K, Niu S, Ji Z, Wu M, Chen Q, Xu R, Yuan S, Fan W, Chen Y, Dong J. Double-branched and area-constraint fully convolutional networks for automated serous retinal detachment segmentation in SD-OCT images. Comput Methods Programs Biomed 2019; 176:69-80. [PMID: 31200913] [DOI: 10.1016/j.cmpb.2019.04.027]
Abstract
BACKGROUND AND OBJECTIVE Quantitative assessment of subretinal fluid in spectral domain optical coherence tomography (SD-OCT) images is crucial for the diagnosis of central serous chorioretinopathy. Traditional methods must segment the retinal layers before segmenting the subretinal fluid, and the layer segmentation strongly influences the fluid segmentation; we therefore aim to develop a deep learning model that segments subretinal fluid automatically, without layer segmentation. METHODS In this paper, we propose a novel image-to-image double-branched and area-constraint fully convolutional network (DA-FCN) for segmenting subretinal fluid in SD-OCT images. First, the dataset is extended by image mirroring, which helps to overcome over-fitting in the training stage. Then, double-branched structures are designed to learn shallow coarse and deep representations from the SD-OCT images. The DA-FCN model is trained directly on the images and the corresponding pixel-level ground truth. Finally, we introduce a novel supervision mechanism that combines the area loss L_A with the softmax loss L_S to learn more representative features. RESULTS The testing dataset, with 52 SD-OCT volumes from 35 eyes of 35 patients, is used to evaluate the proposed algorithm with cross-validation. For the three criteria, true positive volume fraction, Dice similarity coefficient, and positive predictive value, our method obtains (1) 94.3, 95.3, and 96.4 for dataset 1; (2) 97.3, 95.3, and 93.4 for dataset 2; (3) 93.0, 92.8, and 92.8 for dataset 3; and (4) 89.7, 90.1, and 92.6 for dataset 4. CONCLUSION In this work, we propose a novel fully convolutional network for automatic segmentation of subretinal fluid. By constructing the double-branched structure and the area-constraint term, our method achieves higher segmentation accuracy without layer segmentation compared with other methods.
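The joint supervision of a softmax loss L_S plus an area-constraint term L_A can be illustrated with a minimal sketch. The abstract does not give the exact form of L_A, so the absolute-area-difference penalty and the weight `lam` below are assumptions for illustration only.

```python
import numpy as np

def joint_loss(probs, target, lam=0.1):
    """Joint supervision: pixelwise cross-entropy (softmax loss L_S)
    plus an area term L_A penalizing mismatch between predicted and
    true foreground areas.  The form of L_A here is an assumption.
    probs:  (H, W) predicted foreground probabilities in (0, 1)
    target: (H, W) binary ground-truth mask
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    t = np.asarray(target, dtype=float)
    ls = -np.mean(t * np.log(probs) + (1 - t) * np.log(1 - probs))
    la = abs(probs.sum() - t.sum()) / t.size  # normalized area mismatch
    return ls + lam * la
```

The area term adds a global constraint on the predicted lesion size on top of the pixelwise loss, which is the stated motivation for pairing L_A with L_S.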
Affiliation(s)
- Kun Gao
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Sijie Niu
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China.
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Menglin Wu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing 210094, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Rongbin Xu
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Yuehui Chen
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Jiwen Dong
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
39
Hu J, Chen Y, Yi Z. Automated segmentation of macular edema in OCT using deep neural networks. Med Image Anal 2019; 55:216-227. [PMID: 31096135] [DOI: 10.1016/j.media.2019.05.002]
Abstract
Macular edema is an eye disease that can affect visual acuity. Typical disease symptoms include subretinal fluid (SRF) and pigment epithelium detachment (PED). Optical coherence tomography (OCT) has been widely used for diagnosing macular edema because of its non-invasive, high-resolution imaging. Segmentation of macular edema lesions from OCT images plays an important role in clinical diagnosis, and many computer-aided systems have been proposed for it. Most traditional segmentation methods used in these systems are based on low-level hand-crafted features, which require significant domain knowledge and are sensitive to variations in the lesions. To overcome these shortcomings, this paper proposes to use deep neural networks (DNNs) together with atrous spatial pyramid pooling (ASPP) to automatically segment the SRF and PED lesions. Lesion-related features are first extracted by DNNs and then processed by ASPP, which is composed of multiple atrous convolutions with different fields of view to accommodate the various scales of the lesions. Based on ASPP, a novel module called stochastic ASPP (sASPP) is proposed to combat the co-adaptation of the multiple atrous convolutions. A large OCT dataset provided by a competition platform called "AI Challenger" is used to train and evaluate the proposed model. Experimental results demonstrate that the DNNs together with ASPP achieve higher segmentation accuracy than the state-of-the-art method. The stochastic operation added in sASPP is empirically verified as an effective regularization method that alleviates the overfitting problem and significantly reduces the validation error.
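The idea behind sASPP, randomly dropping atrous branches during training in a dropout-like fashion to discourage their co-adaptation, can be sketched as follows. The combination rule (averaging kept branches) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def stochastic_aspp(branches, drop_p=0.3, training=True, rng=None):
    """Combine ASPP branch outputs (feature maps from atrous convolutions
    with different rates).  During training each branch is kept with
    probability 1 - drop_p and the kept branches are averaged, analogous
    to dropout, to discourage co-adaptation; at test time all branches
    are used.  The exact combination rule is an assumption."""
    if rng is None:
        rng = np.random.default_rng()
    branches = [np.asarray(b, dtype=float) for b in branches]
    if not training:
        return sum(branches) / len(branches)
    keep = rng.random(len(branches)) > drop_p
    if not keep.any():                       # always keep at least one branch
        keep[rng.integers(len(branches))] = True
    kept = [b for b, k in zip(branches, keep) if k]
    return sum(kept) / len(kept)
```

Because each forward pass sees a random subset of atrous rates, no single branch can rely on the others being present, which is the regularization effect the abstract attributes to the stochastic operation.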
Affiliation(s)
- Junjie Hu, Yuanyuan Chen, Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
40
Candemir S, Antani S. A review on lung boundary detection in chest X-rays. Int J Comput Assist Radiol Surg 2019; 14:563-576. [PMID: 30730032] [PMCID: PMC6420899] [DOI: 10.1007/s11548-019-01917-1]
Abstract
PURPOSE Chest radiography is the most common imaging modality for pulmonary diseases. Due to its wide usage, there is a rich literature addressing automated detection of cardiopulmonary diseases in digital chest X-rays (CXRs). One of the essential steps for automated analysis of CXRs is localizing the relevant region of interest, i.e., isolating the lung region from other, less relevant parts, for applying decision-making algorithms there. This article provides an overview of the recent literature on lung boundary detection in CXR images. METHODS We review the leading lung segmentation algorithms proposed in the period 2006-2017. First, we present a review of articles for posterior-anterior view CXRs. Then, we mention studies which operate on lateral views. We pay particular attention to works that focus their efforts on deformed lungs and pediatric cases. We also highlight the radiographic measures extracted from the lung boundary and their use in automatically detecting cardiopulmonary abnormalities. Finally, we identify challenges in dataset curation and the expert delineation process, and we list publicly available CXR datasets. RESULTS (1) We classified algorithms into five categories: rule-based, pixel classification-based, model-based, hybrid, and deep learning-based algorithms. Based on the reviewed articles, hybrid methods and deep learning-based methods surpass the algorithms in the other classes and have segmentation performance as good as inter-observer performance. However, they require a long training process and pose high computational complexity. (2) We found that most of the algorithms in the literature are evaluated on posterior-anterior view adult CXRs with a healthy lung anatomy appearance, without considering the challenges in abnormal CXRs. (3) We also found that there are limited studies for pediatric CXRs. The lung appearance in pediatrics, especially in infant cases, deviates from adult lung appearance due to the pediatric development stages.
Moreover, pediatric CXRs are noisier than adult CXRs due to interference by other objects, such as someone holding the child's arms or the child's body, and irregular body pose. Therefore, lung boundary detection algorithms developed on adult CXRs may not perform accurately in pediatric cases and need additional constraints suited to pediatric CXR imaging characteristics. (4) We have also noted that one of the main challenges in medical image analysis is accessing suitable datasets. We list benchmark CXR datasets for developing and evaluating lung boundary algorithms; however, the number of CXR images with reference boundaries is limited due to the cumbersome but necessary process of expert boundary delineation. CONCLUSIONS A reliable computer-aided diagnosis system would need to support a greater variety of lung and background appearances. To our knowledge, algorithms in the literature are evaluated on posterior-anterior view adult CXRs with a healthy lung anatomy appearance, without considering ambiguous lung silhouettes due to pathological deformities, anatomical alterations due to misaligned body positioning, the patient's development stage, and gross background noise such as holding hands, jewelry, or the patient's head and legs in the CXR. Considering all these challenges, which are not well addressed in the literature, developing lung boundary detection algorithms that are robust to such interference remains a challenging task. We believe that a broad review of lung region detection algorithms will be useful for researchers working on automated detection/diagnosis algorithms for lung and heart pathologies in CXRs.
Affiliation(s)
- Sema Candemir, Sameer Antani
- Lister Hill National Center for Biomedical Communications, Communications Engineering Branch, National Library of Medicine, National Institutes of Health, Bethesda, USA
41
Vorontsov E, Cerny M, Régnier P, Di Jorio L, Pal CJ, Lapointe R, Vandenbroucke-Menu F, Turcotte S, Kadoury S, Tang A. Deep Learning for Automated Segmentation of Liver Lesions at CT in Patients with Colorectal Cancer Liver Metastases. Radiol Artif Intell 2019; 1:180014. [PMID: 33937787] [DOI: 10.1148/ryai.2019180014]
Abstract
Purpose To evaluate the performance, agreement, and efficiency of a fully convolutional network (FCN) for liver lesion detection and segmentation at CT examinations in patients with colorectal liver metastases (CLMs). Materials and Methods This retrospective study evaluated an automated method using an FCN that was trained, validated, and tested with 115, 15, and 26 contrast material-enhanced CT examinations containing 261, 22, and 105 lesions, respectively. Manual detection and segmentation by a radiologist was the reference standard. Performance of fully automated and user-corrected segmentations was compared with that of manual segmentations. The interuser agreement and interaction time of manual and user-corrected segmentations were assessed. Analyses included sensitivity and positive predictive value of detection, segmentation accuracy, Cohen κ, Bland-Altman analyses, and analysis of variance. Results In the test cohort, for lesion size smaller than 10 mm (n = 30), 10-20 mm (n = 35), and larger than 20 mm (n = 40), the detection sensitivity of the automated method was 10%, 71%, and 85%; positive predictive value was 25%, 83%, and 94%; Dice similarity coefficient was 0.14, 0.53, and 0.68; maximum symmetric surface distance was 5.2, 6.0, and 10.4 mm; and average symmetric surface distance was 2.7, 1.7, and 2.8 mm, respectively. For manual and user-corrected segmentation, κ values were 0.42 (95% confidence interval: 0.24, 0.63) and 0.52 (95% confidence interval: 0.36, 0.72); normalized interreader agreement for lesion volume was -0.10 ± 0.07 (95% confidence interval) and -0.10 ± 0.08; and mean interaction time was 7.7 minutes ± 2.4 (standard deviation) and 4.8 minutes ± 2.1 (P < .001), respectively. 
Conclusion Automated detection and segmentation of CLM by using deep learning with convolutional neural networks, when manually corrected, improved efficiency but did not substantially change agreement on volumetric measurements. © RSNA, 2019. Supplemental material is available for this article.
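The size-stratified Dice similarity coefficient and the lesion-level detection sensitivity and positive predictive value reported in this entry can be computed from binary masks and lesion counts. A minimal NumPy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def detection_metrics(tp, fp, fn):
    """Lesion-level detection sensitivity and positive predictive value."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv
```

In practice the per-lesion true/false positive counts come from matching connected components between the automated and reference segmentations.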
Affiliation(s)
- Eugene Vorontsov, Milena Cerny, Philippe Régnier, Lisa Di Jorio, Christopher J Pal, Réal Lapointe, Franck Vandenbroucke-Menu, Simon Turcotte, Samuel Kadoury, An Tang
- Department of Radiology (M.C., A.T.) and Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Division (R.L., F.V., S.T.), Centre Hospitalier de l'Université de Montréal (CHUM), 1000 rue Saint-Denis, Montréal, QC, Canada H2X 0C2; Montreal Institute for Learning Algorithms (MILA), Montréal, Canada (E.V., C.J.P.); École Polytechnique, Montréal, Canada (E.V., C.J.P., S.K.); Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Canada (M.C., P.R., S.T., S.K., A.T.); and Imagia Cybernetics, Montréal, Canada (L.D.J.)
42
Liu T, Udupa JK, Miao Q, Tong Y, Torigian DA. Quantification of body-torso-wide tissue composition on low-dose CT images via automatic anatomy recognition. Med Phys 2019; 46:1272-1285. [PMID: 30614020 DOI: 10.1002/mp.13373] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2018] [Revised: 11/19/2018] [Accepted: 12/24/2018] [Indexed: 12/22/2022] Open
Abstract
PURPOSE Quantification of body composition plays an important role in many clinical and research applications. Radiologic imaging techniques such as Dual-energy X-ray absorptiometry (DXA), magnetic resonance imaging (MRI), and computed tomography (CT) imaging make accurate quantification of the body composition possible. However, most current imaging-based methods need human interaction to quantify multiple tissues. When dealing with whole-body images of many subjects, interactive methods become impractical. This paper presents an automated, efficient, accurate, and practical body composition quantification method for low-dose CT images. METHOD Our method, named automatic anatomy recognition body composition analysis (AAR-BCA), aims to quantify four tissue components in body torso (BT) - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), bone tissue, and muscle tissue - from CT images of given whole-body positron emission tomography/computed tomography (PET/CT) acquisitions. AAR-BCA consists of three key steps - modeling BT with its ensemble of key objects from a population of patient images, recognition or localization of these objects in a given patient image I, and delineation and quantification of the four tissue components in I guided by the recognized objects. In the first step, from a given set of patient images and the associated delineated objects, a fuzzy anatomy model of the key object ensemble, including anatomic organs, tissue regions, and tissue interfaces, is built where the objects are organized in a hierarchical order. The second step involves recognizing, or finding roughly the location of, each object in any given whole-body image I of a patient following the object hierarchy and guided by the built model. 
The third step makes use of this fuzzy localization information of the objects and the intensity distributions of the four tissue components, already learned and encoded in the model, to optimally delineate in a fuzzy manner and quantify these components. All parameters in our method are determined from training datasets. RESULTS Thirty-eight low-dose CT images from different subjects are tested in a fivefold cross-validation strategy for evaluating AAR-BCA, with a 23-15 train-test dataset division. For BT, over all objects, AAR-BCA achieves a false-positive volume fraction (FPVF) of 3.7% and a false-negative volume fraction (FNVF) of 3.8%. Notably, SAT achieves both an FPVF and an FNVF under 3%. For bone tissue, it achieves an FPVF and an FNVF both under 3.5%. For VAT, the FNVF of 4.8% is higher than for the other objects, as is the FNVF for muscle (4.7%). The level of accuracy for the four tissue components in individual body subregions mostly remains at the same level as for BT. The processing time required per patient image is under a minute. CONCLUSIONS Motivated by applications in cancer and systemic diseases, our goal in this paper was to seek a practical method for body composition quantification which is automated, accurate, and efficient, and works on BT in low-dose CT. The proposed AAR-BCA method can quantify four tissue components, including SAT, VAT, bone tissue, and muscle tissue, in the body torso with under 5% overall error. All needed parameters can be automatically estimated from the training datasets.
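The FPVF and FNVF figures above are commonly defined relative to the reference (true) volume of each object. A sketch under that assumption (the paper's exact normalization may differ):

```python
import numpy as np

def volume_fraction_errors(pred, ref):
    """False-positive and false-negative volume fractions, expressed relative
    to the reference volume (one common convention; illustrative only)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    ref_volume = ref.sum()
    fpvf = np.logical_and(pred, ~ref).sum() / ref_volume  # spurious voxels
    fnvf = np.logical_and(~pred, ref).sum() / ref_volume  # missed voxels
    return fpvf, fnvf
```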
Affiliation(s)
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei, 066004, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Xidian University, Xi'an, Shaanxi, 710126, China
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Qiguang Miao
- Xidian University, Xi'an, Shaanxi, 710126, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
43
Comelli A, Stefano A, Bignardi S, Russo G, Sabini MG, Ippolito M, Barone S, Yezzi A. Active contour algorithm with discriminant analysis for delineating tumors in positron emission tomography. Artif Intell Med 2019; 94:67-78. [PMID: 30871684 DOI: 10.1016/j.artmed.2019.01.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Revised: 10/18/2018] [Accepted: 01/07/2019] [Indexed: 12/19/2022]
Abstract
In the context of cancer delineation using positron emission tomography datasets, we present an innovative approach whose purpose is to tackle the real-time, three-dimensional segmentation task in a fully, or at least nearly fully, automated way. The approach comprises a preliminary initialization phase where the user highlights a region of interest around the cancer on just one slice of the tomographic dataset. The algorithm takes care of identifying an optimal and user-independent region of interest around the anomalous tissue, located on the slice containing the highest standardized uptake value, so as to start the successive segmentation task. The three-dimensional volume is then reconstructed using a slice-by-slice marching approach until a suitable automatic stop condition is met. On each slice, the segmentation is performed using an enhanced local active contour based on the minimization of a novel energy functional, which combines the information provided by a machine learning component, discriminant analysis in the present study. As a result, the whole algorithm is almost completely automatic and the output segmentation is independent of the input provided by the user. Phantom experiments comprising spheres and zeolites, and clinical cases covering various body districts (lung, brain, and head and neck) and two different radio-tracers (18F-fluoro-2-deoxy-D-glucose and 11C-labeled methionine), were used to assess the algorithm's performance. Phantom experiments with spheres and with zeolites showed a Dice similarity coefficient above 90% and 80%, respectively. Clinical cases showed high agreement with the gold standard (R2 = 0.98). These results indicate that the proposed method can be efficiently applied in the clinical routine, with potential benefit for treatment response assessment and targeting in radiotherapy.
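The agreement with the gold standard (R2 = 0.98) is a coefficient of determination over paired volume measurements. One common way to compute it, sketched here as an assumption rather than the paper's exact procedure:

```python
import numpy as np

def r_squared(estimated, reference):
    """Coefficient of determination of estimated values against a reference,
    computed as 1 - SS_res / SS_tot (illustrative convention)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    ss_res = ((ref - est) ** 2).sum()          # residual sum of squares
    ss_tot = ((ref - ref.mean()) ** 2).sum()   # total sum of squares
    return 1.0 - ss_res / ss_tot
```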
Affiliation(s)
- Albert Comelli
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy; Department of Industrial and Digital Innovation (DIID), University of Palermo, PA, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy
- Samuel Bignardi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
- Stefano Barone
- Department of Industrial and Digital Innovation (DIID), University of Palermo, PA, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
44
Danelakis A, Theoharis T, Verganelakis DA. Survey of automated multiple sclerosis lesion segmentation techniques on magnetic resonance imaging. Comput Med Imaging Graph 2018; 70:83-100. [DOI: 10.1016/j.compmedimag.2018.10.002] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Revised: 09/05/2018] [Accepted: 10/02/2018] [Indexed: 01/18/2023]
45
Levin EA, Morgan RM, Griffin LD, Jones VJ. A Comparison of Thresholding Methods for Forensic Reconstruction Studies Using Fluorescent Powder Proxies for Trace Materials. J Forensic Sci 2018; 64:431-442. [PMID: 30359482 PMCID: PMC6849572 DOI: 10.1111/1556-4029.13938] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Revised: 10/08/2018] [Accepted: 10/08/2018] [Indexed: 12/21/2022]
Abstract
Image segmentation is a fundamental precursor to quantitative image analysis. At present, no standardised methodology exists for segmenting images of fluorescent proxies for trace evidence. Experiments evaluated (i) whether manual segmentation is reproducible within and between examiners (with three participants repeatedly tracing three images), (ii) whether manually defining a threshold level offers accurate and reproducible results (with 20 examiners segmenting 10 images), and (iii) whether a global thresholding algorithm might perform with similar accuracy while offering improved reproducibility and efficiency (16 algorithms tested). Statistically significant differences were seen between examiners' traced outputs. Manual thresholding produced good accuracy on average (within ±1% of the expected values) but poor reproducibility (with multiple outliers). Three algorithms (Yen, MaxEntropy, and RenyiEntropy) offered similar accuracy with improved reproducibility and efficiency. Together, these findings suggest that appropriate algorithms could perform thresholding tasks as part of a robust workflow for reconstruction studies employing fluorescent proxies for trace evidence.
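Of the global thresholding algorithms named above, the MaxEntropy criterion (after Kapur et al.) is representative: it picks the threshold maximizing the summed entropies of the background and foreground histogram classes. A compact NumPy sketch (illustrative; the study's own implementation may differ):

```python
import numpy as np

def max_entropy_threshold(image, nbins=256):
    """Kapur-style maximum-entropy global threshold over the grey-level
    histogram. Returns the bin edge separating background from foreground."""
    hist, bin_edges = np.histogram(np.ravel(image), bins=nbins)
    p = hist.astype(float) / hist.sum()
    c = np.cumsum(p)
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins):
        pb, pf = c[t - 1], 1.0 - c[t - 1]
        if pb <= 0.0 or pf <= 0.0:
            continue
        b = p[:t][p[:t] > 0.0] / pb  # background class distribution
        f = p[t:][p[t:] > 0.0] / pf  # foreground class distribution
        h = -np.sum(b * np.log(b)) - np.sum(f * np.log(f))
        if h > best_h:
            best_h, best_t = h, t
    return bin_edges[best_t]
```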
Affiliation(s)
- Emma A Levin
- Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, U.K.; Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, U.K.; Environmental Change Research Centre, Department of Geography, University College London, Pearson Building, Gower Street, London, WC1E 6BT, U.K.
- Ruth M Morgan
- Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, U.K.; Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, U.K.
- Lewis D Griffin
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, U.K.
- Vivienne J Jones
- Environmental Change Research Centre, Department of Geography, University College London, Pearson Building, Gower Street, London, WC1E 6BT, U.K.
46
Automatic segmentation of cervical region in colposcopic images using K-means. Australas Phys Eng Sci Med 2018; 41:1077-1085. [DOI: 10.1007/s13246-018-0678-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Accepted: 08/20/2018] [Indexed: 01/23/2023]
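No abstract is indexed for this entry, but intensity-based K-means segmentation of the kind its title describes reduces to Lloyd's algorithm on pixel values. A deterministic sketch (the evenly spaced initialization is an illustrative choice, not taken from the paper):

```python
import numpy as np

def kmeans_intensity(pixels, k=2, iters=20):
    """Lloyd's algorithm on scalar intensities with evenly spaced
    deterministic initialization (illustrative, not the paper's code)."""
    x = np.asarray(pixels, dtype=float).ravel()
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = x[mask].mean()
    return labels, centers
```

The returned label map, reshaped to the image grid, gives the segmentation; for colour images the same update runs on feature vectors instead of scalars.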
47
A smart and operator independent system to delineate tumours in Positron Emission Tomography scans. Comput Biol Med 2018; 102:1-15. [PMID: 30219733 DOI: 10.1016/j.compbiomed.2018.09.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 08/20/2018] [Accepted: 09/06/2018] [Indexed: 12/30/2022]
Abstract
Positron Emission Tomography (PET) imaging has enormous potential to improve radiation therapy treatment planning, offering complementary functional information with respect to other anatomical imaging approaches. The aim of this study is to develop an operator-independent, reliable, and clinically feasible system for biological tumour volume delineation from PET images. Under this design hypothesis, we combine several known approaches in an original way to deploy a system with a high level of automation. The proposed system automatically identifies the optimal region of interest around the tumour and performs a slice-by-slice marching local active contour segmentation. It automatically stops when a "cancer-free" slice is identified. User intervention is limited to drawing an initial rough contour around the cancer region. By design, the algorithm performs the segmentation minimizing any dependence on the initial input, so that the final result is extremely repeatable. To assess the performance under different conditions, our system is evaluated on a dataset comprising five synthetic experiments and fifty oncological lesions located in different anatomical regions (i.e. lung, head and neck, and brain) using PET studies with 18F-fluoro-2-deoxy-D-glucose and 11C-labeled methionine radio-tracers. Results on synthetic lesions demonstrate enhanced performance when compared against the most common PET segmentation methods. In clinical cases, the proposed system produces accurate segmentations (average Dice similarity coefficient: 85.36 ± 2.94%, 85.98 ± 3.40%, and 88.02 ± 2.75% in the lung, head and neck, and brain districts, respectively) with high agreement with the gold standard (determination coefficient R2 = 0.98). We believe that the proposed system could be efficiently used in the everyday clinical routine as a medical decision tool, providing clinicians with additional information, derived from PET, that can be of use in radiation therapy treatment and planning.
48
Ogier A, Sdika M, Foure A, Le Troter A, Bendahan D. Individual muscle segmentation in MR images: A 3D propagation through 2D non-linear registration approaches. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2017:317-320. [PMID: 29059874 DOI: 10.1109/embc.2017.8036826] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Manual and automated segmentation of individual muscles in magnetic resonance images has been recognized as challenging, given the high variability of shapes between muscles and subjects and the discontinuity or lack of visible boundaries between muscles. In the present study, we proposed an original algorithm allowing a semi-automatic transversal propagation of manually-drawn masks. Our strategy was based on several ascending and descending non-linear registration steps, similar to estimating a Lagrangian trajectory applied to the manual masks. Using several manually-segmented slices, we evaluated our algorithm on the four muscles of the quadriceps femoris group. We mainly showed that our 3D propagated segmentation was very accurate, with an averaged Dice similarity coefficient value higher than 0.91 for a minimal manual input of only two manually-segmented slices.
49
Skalski A, Jakubowski J, Drewniak T. LEFMIS: locally-oriented evaluation framework for medical image segmentation algorithms. Phys Med Biol 2018; 63:165016. [PMID: 29999495 DOI: 10.1088/1361-6560/aad316] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This article proposes a novel framework for the locally-oriented evaluation of segmentation algorithms (LEFMIS). The presented approach is robust and takes into account local inter-/intra-observer variability and the anisotropy of medical images. What is more, the framework makes it possible to distinguish types of error locally. These features are crucial in the context of cancer image data. The proposed framework is based on the signed anisotropic Euclidean distance transform and the distance projection. It can be used easily in many different applications, with or without additional expert outlines (covering both inter- and intra-observer variability). The performance of the proposed framework is demonstrated using both artificial data and kidney cancer CT data with experts' manual outlines. For the artificial data, the dispersion of the manual outlines is shown to be symmetric with respect to the true border. The effectiveness of the selected segmentation algorithm was analysed in the context of kidney cancer using computed tomography data. For the calculated local inter-observer variability, 80.11% of the surface points generated by the kidney segmentation algorithm are within one standard deviation of the expert outlines and 97.96% are within five. A shift of the error distribution in the direction of the type I error equivalent was also observed. Finally, the significance of the local estimation of error-type differences is presented. The article shows the greater usefulness and flexibility of the proposed framework in comparison to state-of-the-art methods. Exemplary usage of LEFMIS with or without inter-/intra-observer variability is also presented.
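The signed anisotropic Euclidean distance transform that LEFMIS builds on assigns each voxel its physical distance to the mask boundary, negative inside and positive outside, with per-axis voxel spacing carrying the anisotropy. A brute-force sketch for small arrays (the paper presumably uses an efficient transform; this only illustrates the definition, and it assumes both classes are present):

```python
import numpy as np

def signed_distance(mask, spacing):
    """Brute-force signed anisotropic Euclidean distance map: positive outside
    the mask, negative inside; `spacing` holds the voxel size per axis."""
    mask = np.asarray(mask, dtype=bool)
    sp = np.asarray(spacing, dtype=float)
    fg = np.argwhere(mask) * sp    # physical coords of foreground voxels
    bg = np.argwhere(~mask) * sp   # physical coords of background voxels
    out = np.empty(mask.size)
    for i, idx in enumerate(np.argwhere(np.ones_like(mask))):
        p = idx * sp
        if mask.ravel()[i]:
            out[i] = -np.linalg.norm(bg - p, axis=1).min()
        else:
            out[i] = np.linalg.norm(fg - p, axis=1).min()
    return out.reshape(mask.shape)
```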
Affiliation(s)
- Andrzej Skalski
- AGH University of Science and Technology, Department of Measurement and Electronics, al. A. Mickiewicza 30, PL30059, Cracow, Poland
50
Gordaliza PM, Muñoz-Barrutia A, Abella M, Desco M, Sharpe S, Vaquero JJ. Unsupervised CT Lung Image Segmentation of a Mycobacterium Tuberculosis Infection Model. Sci Rep 2018; 8:9802. [PMID: 29955159 PMCID: PMC6023884 DOI: 10.1038/s41598-018-28100-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2017] [Accepted: 06/12/2018] [Indexed: 02/06/2023] Open
Abstract
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis that produces pulmonary damage. Radiological imaging is the preferred technique for assessing the longitudinal course of TB. Computer-assisted identification of biomarkers eases the work of the radiologist by providing a quantitative assessment of disease. Lung segmentation is the step that precedes biomarker extraction. In this study, we present an automatic procedure that enables robust segmentation of damaged lungs that have lesions attached to the parenchyma and are affected by respiratory-movement artifacts, in a Mycobacterium tuberculosis infection model. Its main steps are the extraction of the healthy lung tissue and the airway tree, followed by elimination of the fuzzy boundaries. Its performance was compared with segmentations obtained using (1) a semi-automatic tool and (2) an approach based on fuzzy connectedness. A consensus segmentation resulting from majority voting over three experts' annotations was considered the ground truth. The proposed approach improves the overlap indicators (Dice similarity coefficient, 94% ± 4%) and the surface similarity coefficients (Hausdorff distance, 8.64 mm ± 7.36 mm) in the majority of the most difficult-to-segment slices. The results indicate that the refined lung segmentations generated could facilitate the extraction of meaningful quantitative data on disease burden.
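The Hausdorff distance used here as a surface similarity measure is the symmetric max-min distance between two boundary point sets. A direct NumPy sketch for small point sets (illustrative; quadratic in memory, so real surfaces need a KD-tree or distance-transform implementation):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N x D) and b (M x D)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # worst-case nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```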
Affiliation(s)
- Pedro M Gordaliza
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Arrate Muñoz-Barrutia
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Mónica Abella
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Centro de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
- Manuel Desco
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Centro de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, ES28029, Spain
- Sally Sharpe
- Public Health England, Microbiology Services Division, Porton Down, SP4 0JG, England
- Juan José Vaquero
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain