1
Park J, Rho MJ, Moon MH. Enhanced deep learning model for precise nodule localization and recurrence risk prediction following curative-intent surgery for lung cancer. PLoS One 2024;19:e0300442. [PMID: 38995927; PMCID: PMC11244817; DOI: 10.1371/journal.pone.0300442]
Abstract
PURPOSE Radical surgery is the primary treatment for early-stage resectable lung cancer, yet recurrence after curative surgery is not uncommon. Identifying patients at high risk of recurrence using preoperative computed tomography (CT) images could enable more aggressive surgical approaches, shorter surveillance intervals, and intensified adjuvant treatments. This study aims to analyze lung cancer sites in CT images to predict potential recurrences in high-risk individuals. METHODS We retrieved anonymized imaging and clinical data from an institutional database, focusing on patients who underwent curative pulmonary resections for non-small cell lung cancers. Our study used a deep learning model, the Mask Region-based Convolutional Neural Network (MRCNN), to predict cancer locations and assign recurrence classification scores. To find optimized trained weights for the model, we developed preprocessing Python code, adjusted the learning rate dynamically, and tuned the model's hyperparameters. RESULTS After model training was complete, we performed classification using the validation dataset. The results, including the confusion matrix, demonstrated performance metrics: bounding box (0.390), classification (0.034), mask (0.266), Region Proposal Network (RPN) bounding box (0.341), and RPN classification (0.054). The model successfully identified lung cancer recurrence sites, which were then accurately mapped onto chest CT images to highlight areas of primary concern. CONCLUSION The trained model allows clinicians to focus on lung regions where cancer recurrence is more likely, acting as a significant aid in the detection and diagnosis of lung cancer. Serving as a clinical decision support system, it offers substantial support in managing lung cancer patients.
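As an aside on the dynamic learning-rate adjustment mentioned in the methods: step decay is one common way to adjust the learning rate during training. A hypothetical sketch (the base rate, decay factor, and step size below are illustrative assumptions, not values from the paper):

```python
def step_decay_lr(epoch: int, base_lr: float = 1e-3, gamma: float = 0.5, step: int = 10) -> float:
    """Multiply the learning rate by `gamma` every `step` epochs (hypothetical values)."""
    return base_lr * (gamma ** (epoch // step))

# Learning rate sampled at the start of each decay interval
schedule = [step_decay_lr(e) for e in (0, 10, 20)]
```

Frameworks provide equivalents (e.g., PyTorch's `torch.optim.lr_scheduler.StepLR`); the closed form above makes the decay explicit.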
Affiliation(s)
- Jihwan Park
- College of Liberal Arts, Dankook University, Cheonan-si, Chungcheongnam-do, Republic of Korea
- Mi Jung Rho
- College of Health Science, Dankook University, Cheonan-si, Chungcheongnam-do, Republic of Korea
- Mi Hyoung Moon
- Department of Thoracic and Cardiovascular Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
2
Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024;6:e230138. [PMID: 38568094; PMCID: PMC11294957; DOI: 10.1148/ryai.230138]
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning Systematic review registration link: osf.io/nxaev © RSNA, 2024.
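For reference, the Dice similarity coefficient (DSC) used as the primary outcome above measures volumetric overlap between a predicted and a reference mask. A minimal NumPy sketch (illustrative only, not code from the reviewed studies):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred  = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
# intersection = 1 voxel, |pred| + |truth| = 3, so DSC = 2/3
```

On this scale, the review's expert-level threshold of DSC ≥ 0.86 corresponds to substantial but not pixel-perfect overlap.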
Affiliation(s)
- Mohammad-Kasim Fassia
- Adithya Balasubramanian
- Sungmin Woo
- Hebert Alberto Vargas
- Hedvig Hricak
- Ender Konukoglu
- Anton S. Becker
- From the Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); and Department of Biomedical Imaging, ETH Zurich, Zurich, Switzerland (E.K.)
3
Mikhailov I, Chauveau B, Bourdel N, Bartoli A. A deep learning-based interactive medical image segmentation framework with sequential memory. Comput Methods Programs Biomed 2024;245:108038. [PMID: 38271792; DOI: 10.1016/j.cmpb.2024.108038]
Abstract
BACKGROUND AND OBJECTIVE Image segmentation is an essential component of medical image analysis. The case of 3D images such as MRI is particularly challenging and time-consuming. Interactive or semi-automatic methods are thus highly desirable. However, existing methods do not exploit the typical sequentiality of real user interactions, because the interaction memory used in these systems discards ordering. In contrast, we argue that the order of the user corrections should be used for training and leads to performance improvements. METHODS We contribute to solving this problem by proposing a general multi-class deep learning-based interactive framework for image segmentation, which embeds a base network in a user interaction loop with a user feedback memory. We propose to model the memory explicitly as a sequence of consecutive system states, from which features of the segmentation refinement process can be learned. Training is a major difficulty owing to the network's input being dependent on the previous output. We adapt the network to this loop by introducing a virtual user in the training process, modelled by dynamically simulating the iterative user feedback. RESULTS We evaluated our framework against existing methods on the complex task of multi-class semantic instance female pelvis MRI segmentation with 5 classes, including up to 27 tumour instances, using a segmentation dataset collected in our hospital, and on liver and pancreas CT segmentation, using public datasets. We conducted a user evaluation involving both senior and junior medical personnel in matching and adjacent areas of expertise. We observed a reduction in annotation time, at 5 min 56 s for our framework versus 25 min on average for classical tools. We systematically evaluated the influence of the number of clicks on the segmentation accuracy. After a single interaction round, our framework outperforms existing automatic systems with a comparable setup. We provide an ablation study and show that our framework outperforms existing interactive systems. CONCLUSIONS Our framework largely outperforms existing systems in accuracy, with the largest impact on the smallest, most difficult classes, and drastically reduces the average user segmentation time with fast inference at 47.2±6.2 ms per image.
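The virtual-user training loop described in the methods can be illustrated with a toy simulation (purely illustrative, not the authors' implementation): each round, a simulated user corrects one mispredicted pixel, and the ordered sequence of states forms the interaction memory.

```python
import numpy as np

def simulate_interaction(pred: np.ndarray, truth: np.ndarray, rounds: int = 3):
    """Toy virtual-user loop: each round the 'user' corrects one wrong pixel;
    the ordered sequence of states serves as the sequential memory."""
    memory = [pred.copy()]          # sequence of consecutive system states
    state = pred.copy()
    for _ in range(rounds):
        wrong = np.argwhere(state != truth)
        if len(wrong) == 0:
            break                   # segmentation already correct
        r, c = wrong[0]             # simulated click on the first remaining error
        state[r, c] = truth[r, c]
        memory.append(state.copy())
    return state, memory

truth = np.array([[1, 1], [0, 1]])
pred  = np.array([[1, 0], [0, 0]])  # two mispredicted pixels
final, mem = simulate_interaction(pred, truth)
```

In the actual framework the corrections would condition a network's next prediction; here each "correction" simply overwrites one pixel, which is enough to show how ordered states accumulate.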
Affiliation(s)
- Ivan Mikhailov
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France.
- Benoit Chauveau
- SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Nicolas Bourdel
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Adrien Bartoli
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
4
Tong X, Wang S, Zhang J, Fan Y, Liu Y, Wei W. Automatic Osteoporosis Screening System Using Radiomics and Deep Learning from Low-Dose Chest CT Images. Bioengineering (Basel) 2024;11:50. [PMID: 38247927; PMCID: PMC10813496; DOI: 10.3390/bioengineering11010050]
Abstract
OBJECTIVE To develop two fully automatic osteoporosis screening systems using deep learning (DL) and radiomics (Rad) techniques based on low-dose chest CT (LDCT) images and to evaluate their diagnostic effectiveness. METHODS In total, 434 patients who underwent LDCT and bone mineral density (BMD) examination were retrospectively enrolled and divided into the development set (n = 333) and temporal validation set (n = 101). An automatic thoracic vertebra cancellous bone (TVCB) segmentation model was developed. The Dice similarity coefficient (DSC) was used to evaluate the segmentation performance. Furthermore, three-class Rad and DL models were developed to distinguish osteoporosis, osteopenia, and normal bone mass. The diagnostic performance of these models was evaluated using the receiver operating characteristic (ROC) curve and decision curve analysis (DCA). RESULTS The automatic segmentation model achieved excellent segmentation performance, with a mean DSC of 0.96 ± 0.02 in the temporal validation set. The Rad model identified osteoporosis, osteopenia, and normal BMD in the temporal validation set with area under the ROC curve (AUC) values of 0.943, 0.801, and 0.932, respectively. The DL model achieved higher AUC values of 0.983, 0.906, and 0.969 for the same categories in the same validation set. The Delong test affirmed that both models performed similarly in BMD assessment. However, the accuracy of the DL model was 81.2%, better than the 73.3% accuracy of the Rad model in the temporal validation set. Additionally, DCA indicated that the DL model provided a greater net benefit than the Rad model across the majority of reasonable threshold probabilities. CONCLUSIONS The automated segmentation framework we developed can accurately segment cancellous bone on low-dose chest CT images. These predictive models, based on deep learning and radiomics, provided comparable diagnostic performance in automatic BMD assessment. Nevertheless, the DL model demonstrated higher accuracy and precision than the Rad model.
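For background on the ROC analyses above: in the binary case, the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the normalized Mann-Whitney U statistic); the study's three-class AUCs would extend this, e.g. one-vs-rest. A minimal NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def auc_binary(scores, labels) -> float:
    """AUC as P(score_pos > score_neg), counting ties as 0.5
    (Mann-Whitney U divided by n_pos * n_neg)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 1, 0, 0]   # perfectly ranked scores give AUC = 1.0
```

`sklearn.metrics.roc_auc_score` computes the same quantity in practice; the pairwise form above makes the probabilistic interpretation explicit.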
Affiliation(s)
- Wei Wei
- Department of Radiology, First Affiliated Hospital of Dalian Medical University, Dalian 116014, China (S.W.); (Y.F.)
5
Ma J, Kong D, Wu F, Bao L, Yuan J, Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med 2024;168:107725. [PMID: 38006827; DOI: 10.1016/j.compbiomed.2023.107725]
Abstract
Delineating lesion boundaries plays a central role in diagnosing thyroid and breast cancers, making related therapy plans, and evaluating therapeutic effects. However, manually annotating low-quality ultrasound (US) images is often time-consuming and error-prone, with limited reproducibility, given high speckle noise, heterogeneous appearances, ambiguous boundaries, etc., especially for nodular lesions with large intra-class variance. Accurate lesion segmentation from US images is hence desirable but challenging in clinical practice. In this study, we propose a new densely connected convolutional network (called MDenseNet) architecture to automatically segment nodular lesions from 2D US images, which is first pre-trained on the ImageNet database (called PMDenseNet) and then retrained on the given US image datasets. Moreover, we also designed a deep MDenseNet with a pre-training strategy (PDMDenseNet) for segmentation of thyroid and breast nodules by adding a dense block to increase the depth of our MDenseNet. Extensive experiments demonstrate that the proposed MDenseNet-based method can accurately extract multiple nodular lesions, even with complex shapes, from input thyroid and breast US images. Moreover, additional experiments show that the introduced MDenseNet-based method also outperforms three state-of-the-art convolutional neural networks in terms of accuracy and reproducibility. Meanwhile, promising results in nodular lesion segmentation from thyroid and breast US images illustrate its great potential in many other clinical segmentation tasks.
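The dense connectivity underlying DenseNet-style architectures concatenates every earlier layer's output onto the input of each new layer, so the channel count grows linearly with depth. A toy sketch of that bookkeeping (growth rate and feature-map sizes are illustrative, not values from the paper; the zero arrays stand in for real convolution outputs):

```python
import numpy as np

def dense_block_channels(in_channels: int, num_layers: int, growth_rate: int) -> int:
    """Each layer in a dense block adds `growth_rate` channels on top of
    everything before it, so channels grow as c0 + L * k."""
    return in_channels + num_layers * growth_rate

def toy_dense_forward(x: np.ndarray, num_layers: int = 3, growth_rate: int = 2) -> np.ndarray:
    """Channel-first toy forward pass: concatenate each layer's k new maps."""
    feats = x
    for _ in range(num_layers):
        new = np.zeros((growth_rate,) + x.shape[1:])   # stand-in for a conv layer
        feats = np.concatenate([feats, new], axis=0)   # feature reuse via concat
    return feats

x = np.ones((4, 8, 8))        # 4 input channels, 8x8 feature map
out = toy_dense_forward(x)    # 4 + 3*2 = 10 channels out
```

This channel growth is why adding a dense block (as in the deeper PDMDenseNet variant) deepens the network while still reusing all earlier features.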
Affiliation(s)
- Jinlian Ma
- School of Integrated Circuits, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Fa Wu
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Lingyun Bao
- Department of Ultrasound, Hangzhou First People's Hospital, Zhejiang University, Hangzhou, China
- Jing Yuan
- School of Mathematics and Statistics, Xidian University, China
- Yusheng Liu
- State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
6
Landry G, Kurz C, Traverso A. The role of artificial intelligence in radiotherapy clinical practice. BJR Open 2023;5:20230030. [PMID: 37942500; PMCID: PMC10630974; DOI: 10.1259/bjro.20230030]
Abstract
This review article visits the current state of artificial intelligence (AI) in radiotherapy clinical practice. We will discuss how AI has a place in the modern radiotherapy workflow at the level of automatic segmentation and planning, two applications which have seen real-world implementation. A special emphasis will be placed on the role AI can play in online adaptive radiotherapy, such as that performed at MR-linacs, where online plan adaptation is a procedure that could benefit from automation to reduce on-couch time for patients. Pseudo-CT generation and AI for motion tracking will be introduced in the scope of online adaptive radiotherapy as well. We further discuss the use of AI for decision-making and response assessment, for example for personalized prescription and treatment selection, risk stratification for outcomes and toxicities, and quantitative imaging. Finally, the challenges of generalizability and ethical aspects will be covered. With this, we provide a comprehensive overview of the current and future applications of AI in radiotherapy.
Affiliation(s)
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
7
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023;50:4854-4870. [PMID: 36856092; PMCID: PMC11098147; DOI: 10.1002/mp.16320]
Abstract
BACKGROUND Dose-escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images applied to MR-guided radiation therapy, and to validate the generalizability of constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and accuracy with respect to two raters (on Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS more accurately segmented tumors than other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than that between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
8
Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023;165:110887. [PMID: 37245342; DOI: 10.1016/j.ejrad.2023.110887]
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, the concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences and interobserver variabilities. While efforts have been made to standardize image acquisition and interpretation via the development of systems, such as PI-RADS and PI-QUAL, the scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite its potential, thorough validation is required before the implementation of AI in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
- Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
- Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States.
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
9
Duan B, Lv HY, Huang Y, Xu ZM, Chen WX. Deep learning for the screening of primary ciliary dyskinesia based on cranial computed tomography. Front Physiol 2023;14:1098893. [PMID: 37008008; PMCID: PMC10050729; DOI: 10.3389/fphys.2023.1098893]
Abstract
Objective: To analyze the cranial computed tomography (CT) imaging features of patients with primary ciliary dyskinesia (PCD) who have exudative otitis media (OME) and sinusitis using a deep learning model for early intervention in PCD. Methods: Thirty-two children with PCD diagnosed at the Children’s Hospital of Fudan University, Shanghai, China, between January 2010 and January 2021 who had undergone cranial CT were retrospectively analyzed. Thirty-two children with OME and sinusitis diagnosed using cranial CT formed the control group. Multiple deep learning neural network training models based on PyTorch were built, and the optimal model was trained and selected to observe the differences between the cranial CT images of patients with PCD and those of general patients and to screen patients with PCD. Results: The Swin-Transformer, ConvNeXt, and GoogLeNet training models had optimal results, with an accuracy of approximately 0.94; VGG11, VGG16, VGG19, ResNet 34, and ResNet 50, which are neural network models with fewer layers, achieved relatively strong results; and Transformer and other neural networks with more layers or larger receptive fields exhibited relatively weak performance. A heat map revealed the differences in the sinus, middle ear mastoid, and fourth ventricle between the patients with PCD and the control group. Transfer learning can improve the modeling effect of neural networks. Conclusion: Deep learning-based CT imaging models can accurately screen for PCD and identify differences between the cranial CT images of patients with PCD and those of controls.
10
CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int J Intell Syst 2023. [DOI: 10.1155/2023/2345835]
Abstract
CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach to extract discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random undersampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that distinguish image patches. Each image is subjected to a sliding-window operation to express image patches using the autoencoder's high-level features, which are then fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. After configuration optimization, our proposed method outperformed other models, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). The results of this study demonstrate that our model performs consistently across a variety of validation strategies and is flexible, fast, and generalizable, making it well suited for clinical application.
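The precision, recall, F-score, IoU, and Dice values reported above all derive from the same confusion counts, with the identity Dice = 2·IoU / (1 + IoU); for per-pixel binary classification, Dice coincides with the F-score. A small sketch:

```python
def overlap_metrics(tp: int, fp: int, fn: int):
    """Overlap metrics from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                # intersection over union (Jaccard)
    dice = 2 * tp / (2 * tp + fp + fn)       # equals 2*iou/(1+iou), equals f_score
    return precision, recall, f_score, iou, dice

# Example counts (illustrative, not from the study)
p, r, f, iou, dice = overlap_metrics(tp=80, fp=10, fn=10)
```

Note accuracy additionally needs the true-negative count, which is why it can be much higher (98.9% here) than the overlap metrics when background pixels dominate.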
11
Rodrigues NM, Silva S, Vanneschi L, Papanikolaou N. A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI. Cancers (Basel) 2023;15:1467. [PMID: 36900261; PMCID: PMC10001231; DOI: 10.3390/cancers15051467]
Abstract
Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is significantly high given the very high incidence rate, there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold: First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid in the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, where one is used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce non-significantly different scores, apart from nnU-Net which consistently outperforms others, and that the models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.
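The evaluation protocol above (cross-validation on one public dataset, the other held out entirely as an external test set) can be sketched as an index split; the fold count and seed below are illustrative, not the study's configuration:

```python
import numpy as np

def kfold_indices(n: int, k: int, seed: int = 0):
    """Shuffle n sample indices and split them into k roughly equal folds;
    each fold serves once as the validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(10, 5)
# The external test set is never included in `folds`: it is scored only once,
# by the model selected via cross-validation.
```

Keeping the external set out of the split entirely is what makes its scores a test of generalization rather than of fit.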
Affiliation(s)
- Nuno M. Rodrigues
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Champalimaud Foundation, Centre for the Unknown, 1400-038 Lisbon, Portugal
- Sara Silva
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Leonardo Vanneschi
- NOVA Information Management School (NOVA IMS), Campus de Campolide, Universidade Nova de Lisboa, 1070-312 Lisboa, Portugal
12. Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
13. Li X, Bagher-Ebadian H, Gardner S, Kim J, Elshaikh M, Movsas B, Zhu D, Chetty IJ. An uncertainty-aware deep learning architecture with outlier mitigation for prostate gland segmentation in radiotherapy treatment planning. Med Phys 2023; 50:311-322. [PMID: 36112996] [DOI: 10.1002/mp.15982]
Abstract
PURPOSE Task automation is essential for efficient and consistent image segmentation in radiation oncology. We report on a deep learning architecture comprising a U-Net and a variational autoencoder (VAE) for automatic contouring of the prostate gland that incorporates interobserver variation for radiotherapy treatment planning. The U-Net/VAE generates an ensemble set of segmentations for each CT image slice. A novel outlier mitigation (OM) technique was implemented to enhance the model's segmentation accuracy. METHODS The primary source dataset (source_prim) consisted of 19,200 CT slices (from 300 patient planning CT image datasets) with manually contoured prostate glands. A smaller secondary source dataset (source_sec) comprised 640 CT slices (from 10 patient CT datasets), in which prostate glands were segmented by 5 independent physicians on each dataset to account for interobserver variability. Data augmentation via random rotation (<5 degrees), cropping, and horizontal flipping was applied to each dataset to increase the sample size by a factor of 100. A probabilistic hierarchical U-Net with VAE was implemented and pretrained using the augmented source_prim dataset for 30 epochs. Model parameters of the U-Net/VAE were fine-tuned using the augmented source_sec dataset for 100 epochs. After the first round of training, outlier contours in the training dataset were automatically detected and replaced by the most accurate contours (based on the Dice similarity coefficient, DSC) generated by the model. The U-Net/OM-VAE was then retrained using the revised training dataset. Metrics for comparison included DSC, Hausdorff distance (HD, mm), normalized cross-correlation (NCC) coefficient, and center-of-mass (COM) distance (mm).
RESULTS Results for U-Net/OM-VAE with outliers replaced in the training dataset versus U-Net/VAE without OM were as follows: DSC = 0.82 ± 0.01 versus 0.80 ± 0.02 (p = 0.019), HD = 9.18 ± 1.22 versus 10.18 ± 1.35 mm (p = 0.043), NCC = 0.59 ± 0.07 versus 0.62 ± 0.06, and COM = 3.36 ± 0.81 versus 4.77 ± 0.96 mm over the average of 15 contours. For the average of 15 highest accuracy contours, values were as follows: DSC = 0.90 ± 0.02 versus 0.85 ± 0.02, HD = 5.47 ± 0.02 versus 7.54 ± 1.36 mm, and COM = 1.03 ± 0.58 versus 1.46 ± 0.68 mm (p < 0.03 for all metrics). Results for the U-Net/OM-VAE with outliers removed were as follows: DSC = 0.78 ± 0.01, HD = 10.65 ± 1.95 mm, NCC = 0.46 ± 0.10, COM = 4.17 ± 0.79 mm for the average of 15 contours, and DSC = 0.88 ± 0.02, HD = 7.00 ± 1.17 mm, COM = 1.58 ± 0.63 mm for the average of 15 highest accuracy contours. All metrics for U-Net/VAE trained on the source_prim and source_sec datasets via pretraining, followed by fine-tuning, show statistically significant improvement over that trained on the source_sec dataset only. Finally, all metrics for U-Net/VAE with or without OM showed statistically significant improvement over those for the standard U-Net. CONCLUSIONS A VAE combined with a hierarchical U-Net and an OM strategy (U-Net/OM-VAE) demonstrates promise toward capturing interobserver variability and produces accurate prostate auto-contours for radiotherapy planning. The availability of multiple contours for each CT slice enables clinicians to determine trade-offs in selecting the "best fitting" contour on each CT slice. Mitigation of outlier contours in the training dataset improves prediction accuracy, but one must be wary of reduction in variability in the training dataset.
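The outlier-mitigation step can be illustrated independently of the network: score each training contour by DSC, flag contours far below the mean, and substitute the highest-scoring one. The z-score threshold below is an assumption for illustration; the paper does not state its exact outlier criterion:

```python
def mitigate_outliers(contours, dsc_scores, n_std=1.5):
    """Replace contours whose DSC falls more than n_std standard deviations
    below the mean with the highest-scoring contour (threshold is illustrative)."""
    mean = sum(dsc_scores) / len(dsc_scores)
    var = sum((s - mean) ** 2 for s in dsc_scores) / len(dsc_scores)
    cutoff = mean - n_std * var ** 0.5
    best = contours[max(range(len(dsc_scores)), key=dsc_scores.__getitem__)]
    return [best if s < cutoff else c for c, s in zip(contours, dsc_scores)]
```

As the abstract's conclusion warns, every replacement also removes variability from the training set, so the threshold trades accuracy against diversity.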
Affiliation(s)
- Xin Li
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Hassan Bagher-Ebadian
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Stephen Gardner
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Joshua Kim
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Mohamed Elshaikh
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Benjamin Movsas
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Dongxiao Zhu
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Indrin J Chetty
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
14. A Comprehensive Review on Smart Health Care: Applications, Paradigms, and Challenges with Case Studies. Contrast Media & Molecular Imaging 2022; 2022:4822235. [PMID: 36247859] [PMCID: PMC9536991] [DOI: 10.1155/2022/4822235]
Abstract
Growth and advancement of Deep Learning (DL) and the Internet of Things (IoT) are finding their way into the modern world by integrating various technologies in distinct fields, viz., agriculture, manufacturing, energy, transportation, supply chains, cities, and healthcare. Researchers have identified the feasibility of integrating deep learning, cloud computing, and IoT to enhance overall automation: IoT may extend its application area by utilizing cloud services, and the cloud can likewise extend its applications through data acquired by IoT devices such as sensors, with deep learning applied for disease detection and diagnosis. This study summarizes various techniques utilized in smart healthcare, i.e., deep learning, cloud-based IoT applications in smart healthcare, and fog computing in smart healthcare, together with the challenges and issues faced by smart healthcare. It presents a wider scope, as it is not intended for a particular application such as patient monitoring, disease detection, or diagnosis, and it outlines the technologies used for developing such smart systems. Smart health improves the quality of life; convenient and comfortable living is made possible by the services provided by smart healthcare systems (SHSs). Since healthcare is a massive area with enormous data and a broad spectrum of diseases associated with different organs, immense research can be done to overcome the drawbacks of traditional healthcare methods. Deep learning with IoT can effectively be applied in the healthcare sector to automate the diagnosis and treatment process, even remotely in rural areas. Applications may include disease prevention and diagnosis, fitness and patient monitoring, food monitoring, mobile health, telemedicine, emergency systems, assisted living, and self-management of chronic diseases.
15. Dourthe B, Shaikh N, Pai S A, Fels S, Brown SHM, Wilson DR, Street J, Oxland TR. Automated Segmentation of Spinal Muscles From Upright Open MRI Using a Multiscale Pyramid 2D Convolutional Neural Network. Spine (Phila Pa 1976) 2022; 47:1179-1186. [PMID: 34919072] [DOI: 10.1097/brs.0000000000004308]
Abstract
STUDY DESIGN Randomized trial. OBJECTIVE To implement an algorithm enabling the automated segmentation of spinal muscles from open magnetic resonance images in healthy volunteers and patients with adult spinal deformity (ASD). SUMMARY OF BACKGROUND DATA Understanding spinal muscle anatomy is critical to diagnosing and treating spinal deformity. Muscle boundaries can be extrapolated from medical images using segmentation, which is usually done manually by clinical experts and remains complicated and time-consuming. METHODS Three groups were examined: two healthy volunteer groups (N = 6 for each group) and one ASD group (N = 8 patients) were imaged at the lumbar and thoracic regions of the spine in an upright open magnetic resonance imaging scanner while maintaining different postures (various seated, standing, and supine). For each group and region, a selection of regions of interest (ROIs) was manually segmented. A multiscale pyramid two-dimensional convolutional neural network was implemented to automatically segment all defined ROIs. A five-fold cross-validation method was applied; distinct models were trained for each resulting set and group and evaluated using Dice coefficients calculated between the model output and the manually segmented target. RESULTS Good to excellent results were found across all ROIs for the ASD (Dice coefficient > 0.76) and healthy (Dice coefficient > 0.86) groups. CONCLUSION This study represents a fundamental step toward the development of an automated spinal muscle property extraction pipeline, which will ultimately give clinicians easier access to patient-specific simulations, diagnosis, and treatment.
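The Dice coefficient used for evaluation here (and throughout this listing) has a compact definition, 2|A∩B| / (|A|+|B|). A minimal sketch for flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks.
    Two empty masks are treated as a perfect match (a common convention)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 1.0 if size == 0 else 2.0 * inter / size
```

A 2D mask can be evaluated by flattening it first, e.g. `dice([p for row in m1 for p in row], [p for row in m2 for p in row])`.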
Affiliation(s)
- Benjamin Dourthe
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Noor Shaikh
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Anoosha Pai S
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Fels
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Stephen H M Brown
- Department of Human Health and Nutritional Sciences, University of Guelph, Guelph, ON, Canada
- David R Wilson
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Centre for Hip Health and Mobility, University of British Columbia, Vancouver, BC, Canada
- John Street
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Thomas R Oxland
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
16. Li Y, Wu Y, Huang M, Zhang Y, Bai Z. Automatic prostate and peri-prostatic fat segmentation based on pyramid mechanism fusion network for T2-weighted MRI. Computer Methods and Programs in Biomedicine 2022; 223:106918. [PMID: 35779461] [DOI: 10.1016/j.cmpb.2022.106918]
Abstract
BACKGROUND AND OBJECTIVE Automatic and accurate segmentation of the prostate and peri-prostatic fat in male pelvic MRI images is a critical step in the diagnosis and prognosis of prostate cancer. The boundary of prostate tissue is not clear, which makes automatic segmentation very challenging. The main issues, especially for peri-prostatic fat, which is addressed here for the first time, are hazy boundaries and large shape variation. METHODS We propose a pyramid mechanism fusion network (PMF-Net) to learn global features and more comprehensive context information. In the proposed PMF-Net, we devised two pyramid mechanisms. A pyramid mechanism module, composed of dilated convolutions with varying rates, is inserted before each down-sampling step of the encoder in the basic network architecture. The module is intended to address information loss during feature encoding, particularly the loss of segmentation-object boundary information. In the transition from encoder to decoder, a pyramid fusion module is designed to extract global features. The decoder features not only integrate the up-sampled features of the previous stage and the output of the pyramid mechanism, but also include features passed by skip connections from the encoder at the same scale. RESULTS Segmentation results for the prostate and peri-prostatic fat on numerous diverse male pelvic MRI datasets show that our proposed PMF-Net outperforms existing methods. The average surface distance (ASD) and Dice similarity coefficient (DSC) of the prostate segmentation results reached 10.06 and 90.21%, respectively; the ASD and DSC of the peri-prostatic fat segmentation results reached 50.96 and 82.41%. CONCLUSIONS Our segmentation results are closely consistent with those of expert manual segmentation. Furthermore, peri-prostatic fat segmentation is a new task, and accurate automatic segmentation has substantial clinical implications.
Affiliation(s)
- Yuchun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Yuanyuan Wu
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Mengxing Huang
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Yu Zhang
- School of Computer Science and Technology, Hainan University, Haikou 570288, China
- Zhiming Bai
- Haikou Municipal People's Hospital and Central South University Xiangya Medical College Affiliated Hospital, Haikou 570288, China
17. Suganyadevi S, Seethalakshmi V. CVD-HNet: Classifying Pneumonia and COVID-19 in Chest X-ray Images Using Deep Network. Wireless Personal Communications 2022; 126:3279-3303. [PMID: 35756172] [PMCID: PMC9206838] [DOI: 10.1007/s11277-022-09864-y]
Abstract
The use of computer-assisted analysis to improve image interpretation has been a long-standing challenge in the medical imaging industry. In terms of image comprehension, continuous advances in Artificial Intelligence (AI), predominantly in Deep Learning (DL) techniques, are supporting the classification, detection, and quantification of anomalies in medical images. DL techniques are the most rapidly evolving branch of AI and have recently been successfully applied in a variety of fields, including medicine. This paper provides a classification method for COVID-19-infected X-ray images based on a novel deep CNN model. For COVID-19-specific pneumonia analysis, two new customized CNN architectures, CVD-HNet1 (COVID-HybridNetwork1) and CVD-HNet2 (COVID-HybridNetwork2), have been designed. The suggested method utilizes boundary- and region-based operations, as well as convolution processes, in a systematic manner. In comparison to existing CNNs, the suggested classification method achieves an excellent accuracy of 98 percent, an F-score of 0.99, and an MCC of 0.97. These results indicate impressive classification accuracy on a limited dataset; with more training examples, much better results can be achieved. Overall, our CVD-HNet model could be a useful tool for radiologists in diagnosing and detecting COVID-19 instances early.
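The reported F-score and MCC follow directly from the binary confusion matrix; a small sketch of both formulas (the counts in the usage test are illustrative, not the study's):

```python
def f_score(tp, fp, fn):
    """F1 score from raw counts: 2*TP / (2*TP + FP + FN),
    equivalent to the harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix,
    defined as 0 when any marginal is empty (denominator is zero)."""
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC is the stricter of the two on imbalanced data, which is why abstracts like this one report it alongside accuracy and F-score.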
Affiliation(s)
- S. Suganyadevi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407, India
- V. Seethalakshmi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407, India
18. Ullah F, Moon J, Naeem H, Jabbar S. Explainable artificial intelligence approach in combating real-time surveillance of COVID19 pandemic from CT scan and X-ray images using ensemble model. The Journal of Supercomputing 2022; 78:19246-19271. [PMID: 35754515] [PMCID: PMC9206105] [DOI: 10.1007/s11227-022-04631-z]
Abstract
Population size has made disease monitoring a major concern in the healthcare system, due to which automatic detection has become a top priority. Intelligent disease detection frameworks enable doctors to recognize illnesses, provide stable and accurate results, and lower mortality rates. An acute and severe disease known as Coronavirus (COVID19) has suddenly become a global health crisis. The fastest way to limit the spread of COVID19 is to implement an automated detection approach. In this study, explainable COVID19 detection in CT scan and chest X-ray images is established using a combination of deep learning and machine learning classification algorithms. A Convolutional Neural Network (CNN) collects deep features from the collected images, and these features are then fed into a machine learning ensemble for COVID19 assessment. To identify COVID19 disease from images, an ensemble model is developed that includes Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The overall performance of the proposed method is interpreted using Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). The proposed method is evaluated using two datasets containing 1,646 and 2,481 CT scan images gathered from COVID19 patients. Various performance comparisons with state-of-the-art approaches are also shown. The proposed approach beats existing models, with scores of 98.5% accuracy, 99% precision, and 99% recall. Further, t-SNE and explainable Artificial Intelligence (AI) experiments are conducted to validate the proposed approach.
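One simple way to combine the six base classifiers named above is a per-sample majority vote over their predicted labels. This is a schematic stand-in, since the abstract does not specify the exact combination rule used:

```python
def majority_vote(predictions):
    """Combine base-classifier outputs per sample by majority vote.
    `predictions` is a list of per-classifier label lists; ties go to the
    lowest label (a simplification for the sketch)."""
    combined = []
    for votes in zip(*predictions):  # one tuple of votes per sample
        combined.append(max(sorted(set(votes)), key=votes.count))
    return combined
```

In the paper's pipeline the inputs would be the labels each of GNB, SVM, DT, LR, KNN, and RF assigns to the CNN-extracted features.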
Affiliation(s)
- Farhan Ullah
- School of Software, Northwestern Polytechnical University, Xian, 710072, Shaanxi, People's Republic of China
- Jihoon Moon
- Department of Industrial Security, Chung-Ang University, Seoul, 06974, Korea
- Hamad Naeem
- School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, 466000, Henan, People's Republic of China
- Sohail Jabbar
- Department of Computational Sciences, The University of Faisalabad, Faisalabad, 38000, Pakistan
19. Chen X, Ma X, Yan X, Luo F, Yang S, Wang Z, Wu R, Wang J, Lu N, Bi N, Yi J, Wang S, Li Y, Dai J, Men K. Personalized auto-segmentation for magnetic resonance imaging guided adaptive radiotherapy of prostate cancer. Med Phys 2022; 49:4971-4979. [PMID: 35670079] [DOI: 10.1002/mp.15793]
Abstract
PURPOSE Fast and accurate delineation of organs on treatment-fraction images is critical in magnetic resonance imaging-guided adaptive radiotherapy (MRIgART). This study proposes a personalized auto-segmentation (AS) framework to assist online delineation of prostate cancer using MRIgART. METHODS Image data from 26 patients diagnosed with prostate cancer and treated using hypofractionated MRIgART (5 fractions per patient) were collected retrospectively. Daily pretreatment T2-weighted MRI was performed using a 1.5-T MRI system integrated into a Unity MR-linac. First-fraction image and contour data from 16 patients (80 image sets) were used to train the population AS model, and the remaining 10 patients composed the test set. The proposed personalized AS framework contained two main steps. First, a convolutional neural network was employed to train the population model using the training set. Second, for each test patient, the population model was progressively fine-tuned with manually checked delineations of the patient's current and previous fractions to obtain a personalized model that was applied to the next fraction. RESULTS Compared with the population model, the personalized models substantially improved the mean Dice similarity coefficient from 0.79 to 0.93 for the prostate clinical target volume (CTV), from 0.91 to 0.97 for the bladder, from 0.82 to 0.92 for the rectum, and from 0.91 to 0.93 for the femoral heads. CONCLUSIONS The proposed method can achieve accurate segmentation and potentially shorten the overall online delineation time of MRIgART.
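The progressive personalization can be summarized as a schedule mapping each fraction to the data that adapts the model before that fraction is segmented. A schematic only; the model names are invented for illustration and are not the authors' code:

```python
def personalization_schedule(n_fractions):
    """For each fraction f: fraction 1 is segmented by the population model
    alone; fraction f > 1 is segmented by a model fine-tuned on the manually
    checked contours of fractions 1..f-1."""
    plan = {}
    for f in range(1, n_fractions + 1):
        plan[f] = {
            "model": "population" if f == 1 else f"personalized_v{f - 1}",
            "fine_tune_on": list(range(1, f)),  # fractions whose contours adapt the model
        }
    return plan
```

The point of the schedule is that every fraction's manual correction is reused, so the model accumulates patient-specific anatomy over the 5-fraction course.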
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xuena Yan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Fei Luo
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Siran Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Zekun Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Runye Wu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianyang Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ningning Lu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Nan Bi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
20.
21. Xiao C, Jin J, Yi J, Han C, Zhou Y, Ai Y, Xie C, Jin X. RefineNet-based 2D and 3D automatic segmentations for clinical target volume and organs at risks for patients with cervical cancer in postoperative radiotherapy. J Appl Clin Med Phys 2022; 23:e13631. [PMID: 35533205] [PMCID: PMC9278674] [DOI: 10.1002/acm2.13631]
Abstract
PURPOSE An accurate and reliable target volume delineation is critical for safe and successful radiotherapy. The purpose of this study is to develop new 2D and 3D automatic segmentation models based on RefineNet for the clinical target volume (CTV) and organs at risk (OARs) in postoperative cervical cancer based on computed tomography (CT) images. METHODS A 2D RefineNet and a 3D RefineNetPlus3D were adapted and built to automatically segment CTVs and OARs on a total of 44,222 CT slices of 313 patients with stage I-III cervical cancer. Fully convolutional networks (FCNs), U-Net, context encoder network (CE-Net), UNet3D, and ResUNet3D were also trained and tested with randomly divided training and validation sets. The performance of these automatic segmentation models was evaluated by Dice similarity coefficient (DSC), Jaccard similarity coefficient, and average symmetric surface distance against manual segmentations on the test data. RESULTS The DSC for RefineNet, FCN, U-Net, CE-Net, UNet3D, ResUNet3D, and RefineNet3D was 0.82, 0.80, 0.82, 0.81, 0.80, 0.81, and 0.82, with a mean contouring time of 3.2, 3.4, 8.2, 3.9, 9.8, 11.4, and 6.4 s, respectively. The RefineNetPlus3D demonstrated good performance in the automatic segmentation of the bladder, small intestine, rectum, and right and left femoral heads, with a DSC of 0.97, 0.95, 0.91, 0.98, and 0.98, respectively, and a mean computation time of 6.6 s. CONCLUSIONS The newly adapted RefineNet and developed RefineNetPlus3D are promising automatic segmentation models that produce accurate and clinically acceptable CTV and OAR contours for cervical cancer patients in postoperative radiotherapy.
Affiliation(s)
- Chengjian Xiao
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Juebin Jin
- Department of Medical Engineering, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Jinling Yi
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Ce Han
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Yongqiang Zhou
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Yao Ai
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Congying Xie
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- Department of Radiation and Medical Oncology, Wenzhou Medical University Second Affiliated Hospital, Wenzhou, People's Republic of China
- Xiance Jin
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
- School of Basic Medical Science, Wenzhou Medical University, Wenzhou, People's Republic of China
22. Qin C, Tu P, Chen X, Troccaz J. A novel registration-based algorithm for prostate segmentation via the combination of SSM and CNN. Med Phys 2022; 49:5268-5282. [PMID: 35506596] [DOI: 10.1002/mp.15698]
Abstract
PURPOSE Precise determination of the target is an essential procedure in prostate interventions, such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation can be difficult in some cases due to tissue ambiguity or the lack of a partial anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation, which combines a convolutional neural network (CNN) with a statistical shape model (SSM). METHODS The proposed network consists of two main branches. One branch, SSM-Net, predicts the shape transform matrix, shape control parameters, and a shape fine-tuning vector for the generation of the prostate boundary; from the inferred boundary, a normalized distance map is calculated as the output of SSM-Net. The other branch, ResU-Net, simultaneously predicts a probability label map from the input images. Integrating the output of these two branches, the optimal weighted sum of the distance map and the probability map is taken as the prostate segmentation. RESULTS Two public datasets, PROMISE12 and NCI-ISBI 2013, were utilized to evaluate the performance of the proposed algorithm. The results demonstrate that the segmentation algorithm achieved the best performance with an SSM of 9500 nodes, obtaining a Dice score of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning item on the network's segmentation capability; both factors improved the delineation accuracy, with Dice increased by 10% and 7%, respectively. CONCLUSIONS Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
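The final fusion step described above, a weighted sum of the SSM-Net distance map and the ResU-Net probability map followed by thresholding, can be sketched directly. The weight and threshold values are assumptions for illustration; the paper optimizes the weighting:

```python
def fuse_maps(distance_map, probability_map, weight, threshold=0.5):
    """Blend a (normalized) distance map with a probability map per pixel,
    then threshold the blend into a binary prostate mask."""
    fused = [weight * d + (1 - weight) * p
             for d, p in zip(distance_map, probability_map)]
    return [1 if v >= threshold else 0 for v in fused]
```

Both inputs are assumed flattened to the same pixel order and scaled to [0, 1], so the weighted sum stays directly comparable to the threshold.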
Collapse
Affiliation(s)
- Chunxia Qin
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Puxun Tu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jocelyne Troccaz
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
23
Gunashekar DD, Bielak L, Hägele L, Oerther B, Benndorf M, Grosu AL, Brox T, Zamboglou C, Bock M. Explainable AI for CNN-based prostate tumor segmentation in multi-parametric MRI correlated to whole mount histopathology. Radiat Oncol 2022; 17:65. [PMID: 35366918 PMCID: PMC8976981 DOI: 10.1186/s13014-022-02035-0] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 03/15/2022] [Indexed: 12/15/2022] Open
Abstract
Automatic prostate tumor segmentation is often unable to identify the lesion even if multi-parametric MRI data is used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground truth images. In this work we use an explainable deep learning model to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture which was trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground truth data from whole-mount histopathology images were available in 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions with the radiologist-drawn ground truth, and 0.32 for tumor lesions with the whole-mount histology ground truth. Dice Sørensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, which indicates that the image information in the tumor was essential for the CNN segmentation.
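The Dice Sørensen coefficient reported above is a simple overlap measure between two binary masks, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (the function name and example masks are ours, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Sorensen coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Two overlapping 4x4 masks: |A| = 4, |B| = 4, |A n B| = 2 -> Dice = 2*2/(4+4) = 0.5
a = np.zeros((4, 4), dtype=bool); a[0, :4] = True
b = np.zeros((4, 4), dtype=bool); b[0, 2:] = True; b[1, :2] = True
print(round(dice_coefficient(a, b), 3))  # 0.5
```

A value of 1.0 means perfect overlap; the gap between 0.62 for the gland and roughly 0.3 for the lesions quantifies how much harder the lesion task is.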
Affiliation(s)
- Deepa Darshini Gunashekar
- Department of Radiology, Medical Physics, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Lars Bielak
- Department of Radiology, Medical Physics, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Leonard Hägele
- Department of Radiology, Medical Physics, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Benedict Oerther
- Department of Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf
- Department of Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anca-L Grosu
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Department of Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Thomas Brox
- Department of Computer Science, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Department of Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Michael Bock
- Department of Radiology, Medical Physics, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
24
25
SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. MATHEMATICS 2022. [DOI: 10.3390/math10050796] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Precise vertebrae segmentation is essential for the image-related analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnostic treatment and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is likely to run into difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate the challenges involved, we propose deep learning primitives and stacked Sparse autoencoder-based patch classification modeling for Vertebrae segmentation (SVseg) from Computed Tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way. Furthermore, we employ supervised learning to refine the feature representation to improve the discriminability of learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model. A sigmoid classifier is added to the network to discriminate the vertebrae patches from non-vertebrae patches by selecting the class with the highest probabilities. We validated our proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, our proposed SVseg model achieved impressive performance, with 87.39% in Dice Similarity Coefficient (DSC), 77.60% in Jaccard Similarity Coefficient (JSC), 91.53% in precision (PRE), and 90.88% in sensitivity (SEN). The experimental results demonstrated the method’s efficiency and significant potential for diagnosing and treating clinical spinal diseases.
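The overlapping-patch input stage described above can be sketched with NumPy's sliding windows; the patch size, stride, and function name here are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def extract_patches(image: np.ndarray, patch: int, stride: int) -> np.ndarray:
    """Extract overlapping square patches (patch x patch) with the given stride."""
    windows = sliding_window_view(image, (patch, patch))  # (H-p+1, W-p+1, p, p)
    return windows[::stride, ::stride].reshape(-1, patch, patch)

# A 64x64 "CT slice" split into 16x16 patches that overlap by half a patch.
img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = extract_patches(img, patch=16, stride=8)
print(patches.shape)  # (49, 16, 16): a 7x7 grid of overlapping patches
```

Each such patch would then be labeled vertebra/non-vertebra and fed to the autoencoder-based classifier described in the abstract.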
26
Abstract
Artificial intelligence (AI) has illuminated a clear path towards an evolving health-care system replete with enhanced precision and computing capabilities. Medical imaging analysis can be strengthened by machine learning as the multidimensional data generated by imaging naturally lends itself to hierarchical classification. In this Review, we describe the role of machine intelligence in image-based endocrine cancer diagnostics. We first provide a brief overview of AI and consider its intuitive incorporation into the clinical workflow. We then discuss how AI can be applied for the characterization of adrenal, pancreatic, pituitary and thyroid masses in order to support clinicians in their diagnostic interpretations. This Review also puts forth a number of key evaluation criteria for machine learning in medicine that physicians can use in their appraisals of these algorithms. We identify mitigation strategies to address ongoing challenges around data availability and model interpretability in the context of endocrine cancer diagnosis. Finally, we delve into frontiers in systems integration for AI, discussing automated pipelines and evolving computing platforms that leverage distributed, decentralized and quantum techniques.
Affiliation(s)
- Ihab R Kamel
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Harrison X Bai
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
27
Rouvière O, Moldovan PC, Vlachomitrou A, Gouttard S, Riche B, Groth A, Rabotnikov M, Ruffion A, Colombel M, Crouzet S, Weese J, Rabilloud M. Combined model-based and deep learning-based automated 3D zonal segmentation of the prostate on T2-weighted MR images: clinical evaluation. Eur Radiol 2022; 32:3248-3259. [PMID: 35001157 DOI: 10.1007/s00330-021-08408-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/28/2021] [Accepted: 10/09/2021] [Indexed: 11/04/2022]
Abstract
OBJECTIVE To train and test an existing algorithm, already trained for whole-gland segmentation, for prostate zonal segmentation. METHODS The algorithm, combining model-based and deep learning-based approaches, was trained for zonal segmentation using the NCI-ISBI-2013 dataset and 70 T2-weighted datasets acquired at an academic centre. Test datasets were randomly selected among examinations performed at this centre on one of two scanners (General Electric, 1.5 T; Philips, 3 T) not used for training. Automated segmentations were corrected by two independent radiologists. When segmentation was initiated outside the prostate, images were cropped and segmentation repeated. Factors influencing the algorithm's mean Dice similarity coefficient (DSC) and its precision were assessed using beta regression. RESULTS Eighty-two test datasets were selected; one was excluded. In 13/81 datasets, segmentation started outside the prostate, but zonal segmentation was possible after image cropping. Depending on the radiologist chosen as reference, the algorithm's median DSCs were 96.4/97.4%, 91.8/93.0% and 79.9/89.6% for whole-gland, central gland and anterior fibromuscular stroma (AFMS) segmentations, respectively. DSCs comparing radiologists' delineations were 95.8%, 93.6% and 81.7%, respectively. For all segmentation tasks, the scanner used for imaging significantly influenced the mean DSC and its precision, and the mean DSC was significantly lower in cases with initial segmentation outside the prostate. For central gland segmentation, the mean DSC was also significantly lower in larger prostates. The radiologist chosen as reference had no significant impact, except for AFMS segmentation. CONCLUSIONS The algorithm's performance fell within the range of inter-reader variability but remained significantly impacted by the scanner used for imaging.
KEY POINTS • Median Dice similarity coefficients obtained by the algorithm fell within human inter-reader variability for the three segmentation tasks (whole gland, central gland, anterior fibromuscular stroma). • The scanner used for imaging significantly impacted the performance of the automated segmentation for the three segmentation tasks. • The performance of the automated segmentation of the anterior fibromuscular stroma was highly variable across patients and also showed high variability across the two radiologists.
Affiliation(s)
- Olivier Rouvière
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Université de Lyon, F-69003, Lyon, France
- Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France
- INSERM, LabTau, U1032, Lyon, France
- Paul Cezar Moldovan
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Anna Vlachomitrou
- Philips France, 33 rue de Verdun, CS 60 055, 92156, Suresnes Cedex, France
- Sylvain Gouttard
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Benjamin Riche
- Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France
- Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
- Alexandra Groth
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Alain Ruffion
- Department of Urology, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, F-69310, Pierre-Bénite, France
- Marc Colombel
- Université de Lyon, F-69003, Lyon, France
- Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France
- Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Sébastien Crouzet
- Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Juergen Weese
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Muriel Rabilloud
- Université de Lyon, F-69003, Lyon, France
- Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France
- Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France
- Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
28
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL 2022; 11:19-38. [PMID: 34513553 PMCID: PMC8417661 DOI: 10.1007/s13735-021-00218-1] [Citation(s) in RCA: 75] [Impact Index Per Article: 37.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 08/06/2021] [Accepted: 08/09/2021] [Indexed: 05/02/2023]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-developing field in artificial intelligence and has recently been applied effectively in many areas, including medicine. A brief outline is given of studies carried out by region of application: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental information and state-of-the-art deep learning approaches in the field of medical image processing and analysis. Its primary goals are to present research on medical image processing and to define and address the key guidelines that have been identified.
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
29
Hu R, Peng Z, Zhu X, Gan J, Zhu Y, Ma J, Wu G. Multi-Band Brain Network Analysis for Functional Neuroimaging Biomarker Identification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3843-3855. [PMID: 34310294 PMCID: PMC8931676 DOI: 10.1109/tmi.2021.3099641] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The functional connectomic profile is one of the non-invasive imaging biomarkers in computer-assisted diagnostic systems for many neuro-diseases. However, the diagnostic power of functional connectivity is challenged by mixed frequency-specific neuronal oscillations in the brain, which makes a single Functional Connectivity Network (FCN) often underpowered to capture disease-related functional patterns. To address this challenge, we propose a novel functional connectivity analysis framework to conduct joint feature learning and personalized disease diagnosis in a semi-supervised manner, aiming to identify putative multi-band functional connectivity biomarkers from functional neuroimaging data. Specifically, we first decompose the Blood Oxygenation Level Dependent (BOLD) signals into multiple frequency bands by the discrete wavelet transform, and then cast the alignment of all fully-connected FCNs derived from the multiple frequency bands into a parameter-free multi-band fusion model. The proposed model fuses all fully-connected FCNs to obtain a sparsely-connected FCN (sparse FCN for short) for each individual subject, while encouraging each sparse FCN to be close to its neighboring sparse FCNs and far from its most distant ones. Furthermore, we employ the l1-SVM to conduct joint brain region selection and disease diagnosis. Finally, we evaluate the effectiveness of our proposed framework on various neuro-diseases, i.e., Fronto-Temporal Dementia (FTD), Obsessive-Compulsive Disorder (OCD), and Alzheimer's Disease (AD); the experimental results demonstrate that our framework gives more reasonable results than state-of-the-art methods in terms of classification performance and the selected brain regions. The source code is available at https://github.com/reynard-hu/mbbna.
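The first step above, splitting BOLD signals into frequency bands, can be illustrated with a recursively applied one-level Haar discrete wavelet transform. This is a minimal NumPy sketch of the general technique, not the authors' code:

```python
import numpy as np

def haar_dwt(signal):
    """One Haar DWT level: low-band approximation and high-band detail coefficients."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def multi_band(signal, levels):
    """Split a signal into `levels` detail bands plus the final approximation band."""
    bands, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        bands.append(detail)  # high-frequency band at this scale
    bands.append(approx)      # remaining low-frequency band
    return bands

x = np.array([4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0])
bands = multi_band(x, levels=2)
print([len(b) for b in bands])  # [4, 2, 2]
```

Each detail band (and the final approximation) would then yield its own fully-connected FCN before the fusion step. Because the Haar transform is orthonormal, the signal's energy is exactly preserved across the bands.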
30
Seo H, Yu L, Ren H, Li X, Shen L, Xing L. Deep Neural Network With Consistency Regularization of Multi-Output Channels for Improved Tumor Detection and Delineation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3369-3378. [PMID: 34048339 PMCID: PMC8692166 DOI: 10.1109/tmi.2021.3084748] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Deep learning is becoming an indispensable tool for imaging applications, such as image segmentation, classification, and detection. In this work, we reformulate a standard deep learning problem into a new neural network architecture with multi-output channels, which reflects different facets of the objective, and apply the deep neural network to improve the performance of image segmentation. By adding one or more interrelated auxiliary-output channels, we impose an effective consistency regularization for the main task of pixelated classification (i.e., image segmentation). Specifically, multi-output-channel consistency regularization is realized by residual learning via additive paths that connect main-output channel and auxiliary-output channels in the network. The method is evaluated on the detection and delineation of lung and liver tumors with public data. The results clearly show that multi-output-channel consistency implemented by residual learning improves the standard deep neural network. The proposed framework is quite broad and should find widespread applications in various deep learning problems.
31
An Optimized Approach for Prostate Image Segmentation Using K-Means Clustering Algorithm with Elbow Method. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:4553832. [PMID: 34819951 PMCID: PMC8608531 DOI: 10.1155/2021/4553832] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Accepted: 10/26/2021] [Indexed: 01/05/2023]
Abstract
Prostate cancer is one of the common cancer types that damage the prostate in men all over the world. Prostate-specific membrane antigen (PSMA), a type-II membrane protein, is an extremely attractive target for imaging-based diagnosis of prostate cancer. Clinically, photodynamic therapy (PDT) is used as a noninvasive therapy in the treatment of several cancers and some other diseases. This paper aims to segment (cluster) and analyze the pixels of histological and near-infrared (NIR) prostate cancer images acquired with PSMA-targeting, low-molecular-weight PDT agents. Such agents can provide image guidance for resection of prostate tumors and permit subsequent PDT to remove remaining or non-eradicable cancer cells. The color prostate image segmentation is accomplished using an optimized image segmentation approach that combines the k-means clustering algorithm with the elbow method, which can yield a better clustering of pixels by automatically determining the best number of clusters. Cluster statistics and pixel-ratio results for the segmented images show the applicability of the proposed approach for determining the optimum number of clusters for prostate cancer analysis and diagnosis.
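The k-means-plus-elbow idea above can be sketched in plain NumPy (illustrative only; the function names, the deterministic farthest-first seeding, and the 20% drop threshold are our assumptions): run k-means for increasing k, record the within-cluster sum of squares (inertia), and choose the k at which the relative decrease levels off.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lloyd's k-means with deterministic farthest-first seeding."""
    centers = [X[0]]
    for _ in range(1, k):
        # next seed: the point farthest from all current seeds
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(dist.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, float(((X - centers[labels]) ** 2).sum())

def elbow_k(X, k_max=6, drop=0.20):
    """Smallest k after which the relative inertia decrease falls below `drop`."""
    inertia = [kmeans(X, k)[1] for k in range(1, k_max + 1)]
    for k in range(1, k_max):
        if (inertia[k - 1] - inertia[k]) / inertia[k - 1] < drop:
            return k
    return k_max

# Three well-separated 2D clusters; the elbow lands at k = 3.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.2, size=(40, 2)) for c in ((0, 0), (5, 5), (0, 5))])
print("chosen k:", elbow_k(X))
```

On real histological or NIR images, each pixel's color vector would play the role of the 2D points here, and the chosen k sets the number of segmented tissue classes.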
32
Gassenmaier S, Küstner T, Nickel D, Herrmann J, Hoffmann R, Almansour H, Afat S, Nikolaou K, Othman AE. Deep Learning Applications in Magnetic Resonance Imaging: Has the Future Become Present? Diagnostics (Basel) 2021; 11:2181. [PMID: 34943418 PMCID: PMC8700442 DOI: 10.3390/diagnostics11122181] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 11/18/2021] [Accepted: 11/22/2021] [Indexed: 12/11/2022] Open
Abstract
Deep learning technologies and applications demonstrate one of the most important upcoming developments in radiology. The impact and influence of these technologies on image acquisition and reporting might change daily clinical practice. The aim of this review was to present current deep learning technologies, with a focus on magnetic resonance image reconstruction. The first part of this manuscript concentrates on the basic technical principles that are necessary for deep learning image reconstruction. The second part highlights the translation of these techniques into clinical practice. The third part outlines the different aspects of image reconstruction techniques, and presents a review of the current literature regarding image reconstruction and image post-processing in MRI. The promising results of the most recent studies indicate that deep learning will be a major player in radiology in the upcoming years. Apart from decision and diagnosis support, the major advantages of deep learning magnetic resonance imaging reconstruction techniques are related to acquisition time reduction and the improvement of image quality. The implementation of these techniques may be the solution for the alleviation of limited scanner availability via workflow acceleration. It can be assumed that this disruptive technology will change daily routines and workflows permanently.
Affiliation(s)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Thomas Küstner
- Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis (MIDAS.lab), Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Rüdiger Hoffmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Department of Neuroradiology, University Medical Center, 55131 Mainz, Germany
33
Wang R, Liu X, Shao H, Li Q, Zhong D. 3D Dense Volumetric Network for Accurate Automated Pancreas Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3553-3556. [PMID: 34892006 DOI: 10.1109/embc46164.2021.9630789] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Pancreatic cancer poses a great threat to our health, with an overall five-year survival rate of 8%. Automatic and accurate segmentation of the pancreas plays an important and prerequisite role in computer-assisted diagnosis and treatment. Due to ambiguous pancreas borders and intertwined surrounding tissues, it is a challenging task. In this paper, we propose a novel 3D Dense Volumetric Network (3D2VNet) to improve the segmentation accuracy of the pancreas. First, a 3D fully convolutional architecture is applied to effectively incorporate 3D pancreas and geometric cues for volume-to-volume segmentation. Then, dense connectivity is introduced to preserve the maximum information flow between layers and reduce overfitting on limited training data. In addition, an auxiliary side path is constructed to help gradient propagation and stabilize the training process. Extensive experiments are conducted on a challenging pancreas dataset from the Medical Segmentation Decathlon challenge. The results demonstrate that our method can outperform other comparison methods on the task of automated pancreas segmentation using limited data. Clinical relevance: This paper proposes an accurate automated pancreas segmentation method, which can provide assistance to clinicians in the diagnosis and treatment of pancreatic cancer.
34
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
35
Kaur J, Kaur P. Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2021; 29:2351-2382. [PMID: 34690493 PMCID: PMC8525064 DOI: 10.1007/s11831-021-09667-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 10/01/2021] [Indexed: 06/13/2023]
Abstract
Since December 2019, the outbreak of coronavirus disease (COVID-19) has caused many deaths and affected every aspect of individual health. COVID-19 was designated a pandemic by the World Health Organization. The circumstances placed serious strain on every country worldwide, particularly on health systems and their time-critical responses. The number of positive COVID-19 cases increased globally every day, while the quantity of accessible diagnostic kits remained restricted because of complications in detecting the illness. Fast and correct diagnosis of COVID-19 is a timely requirement for preventing and controlling the pandemic through suitable isolation and medicinal treatment. The significance of the present work is to outline deep learning techniques with medical imaging for outbreak prediction, indicators of virus transmission, detection and treatment aspects, and vaccine availability and remedy research. Abundant medical imaging resources, such as X-rays, computed tomography scans, and magnetic resonance imaging, enable high-quality deep learning methods to fight against the COVID-19 pandemic. The review presents a comprehensive view of deep learning and its related applications in healthcare over the past decade. Finally, some issues and challenges in controlling the health crisis and outbreaks are introduced, and the problems faced by radiologists during medical imaging and by deep learning approaches for diagnosing COVID-19 infections are also discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
36
Niyas S, Chethana Vaisali S, Show I, Chandrika T, Vinayagamani S, Kesavadas C, Rajan J. Segmentation of focal cortical dysplasia lesions from magnetic resonance images using 3D convolutional neural networks. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
37
Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2:86-94. [DOI: 10.35711/aimi.v2.i4.86] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 06/24/2021] [Accepted: 08/17/2021] [Indexed: 02/06/2023] Open
Abstract
Abdominal magnetic resonance imaging (MRI) and computed tomography (CT) are commonly used for disease screening, diagnosis, and treatment guidance. However, abdominal MRI has disadvantages, including slow acquisition and vulnerability to motion, while CT involves radiation exposure. It has been reported that deep learning reconstruction can solve these problems while maintaining good image quality. Recently, deep learning-based image reconstruction has become a hot topic in the field of medical imaging. This study reviews the latest research on deep learning reconstruction in abdominal imaging, including the widely used convolutional neural network, generative adversarial network, and recurrent neural network.
Affiliation(s)
- Guang-Yuan Li
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
- Cheng-Yan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
38
Yan C, Lu JJ, Chen K, Wang L, Lu H, Yu L, Sun M, Xu J. Scale- and Slice-aware Net (S²aNet) for 3D segmentation of organs and musculoskeletal structures in pelvic MRI. Magn Reson Med 2021; 87:431-445. [PMID: 34337773 DOI: 10.1002/mrm.28939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 06/11/2021] [Accepted: 07/04/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE MRI of organs and musculoskeletal structures in the female pelvis presents a unique display of pelvic anatomy. Automated segmentation of pelvic structures plays an important role in the personalized diagnosis and treatment of pelvic disease. Pelvic organ systems are very complicated, and 3D segmentation of the numerous pelvic structures on MRI is a challenging task. METHODS A new Scale- and Slice-aware Net (S²aNet) is presented for 3D dense segmentation of 54 organs and musculoskeletal structures in female pelvic MR images. A Scale-aware module is designed to capture the spatial and semantic information of structures at different scales. A Slice-aware module is introduced to model the similar spatial relationships of consecutive slices in 3D data. Moreover, S²aNet leverages a weight-adaptive loss optimization strategy to reinforce supervision with more discriminative capability on hard samples and categories. RESULTS Experiments were performed on a pelvic MRI cohort of 27 MR images from 27 patient cases. Across the cohort and the 54 manually delineated categories of organs and musculoskeletal structures, S²aNet was shown to outperform the UNet framework and other state-of-the-art fully convolutional networks in terms of sensitivity, Dice similarity coefficient, and relative volume difference. CONCLUSION The experimental results on the pelvic 3D MR dataset show that the proposed S²aNet achieves excellent segmentation results compared to other state-of-the-art models. To our knowledge, S²aNet is the first model to achieve 3D dense segmentation of 54 musculoskeletal structures on pelvic MRI; it will be extended to clinical application with the support of more cases in the future.
Affiliation(s)
- Chaoyang Yan
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Jing-Jing Lu
- Department of Radiology, Beijing United Family Hospital, Beijing, China; Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Kang Chen
- Eight-Year Program of Clinical Medicine, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Lei Wang
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Haoda Lu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Li Yu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Mengyan Sun
- Department of Radiology, Beijing Chest Hospital, Capital Medical University, Beijing, China; Beijing Tuberculosis and Thoracic Tumor Institute, Beijing, China
- Jun Xu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
39
He K, Lian C, Zhang B, Zhang X, Cao X, Nie D, Gao Y, Zhang J, Shen D. HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation in CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2118-2128. [PMID: 33848243 DOI: 10.1109/tmi.2021.3072956] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) the first stage quickly localizes the prostate, and 2) the second stage accurately segments it. To precisely segment the prostate in the second stage, we formulate segmentation as a multi-task learning framework, which includes a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. Here, the second task provides additional guidance for the unclear prostate boundary in CT images. Besides, conventional multi-task deep networks typically share most of their parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of different tasks is inevitably ignored. By contrast, we address this with a hierarchically-fused U-Net structure, namely HF-UNet. HF-UNet has two complementary branches for the two tasks, with a novel attention-based task consistency learning block that lets the two decoding branches communicate at each level. Therefore, HF-UNet can hierarchically learn representations shared across tasks while simultaneously preserving the specificity of the representations learned for each task. We performed extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
40
Shao L, Liu Z, Yan Y, Liu J, Ye X, Xia H, Zhu X, Zhang Y, Zhang Z, Chen H, He W, Liu C, Lu M, Huang Y, Sun K, Zhou X, Yang G, Lu J, Tian J. Patient-level Prediction of Multi-classification Task at Prostate MRI based on End-to-End Framework learning from Diagnostic Logic of Radiologists. IEEE Trans Biomed Eng 2021; 68:3690-3700. [PMID: 34014820 DOI: 10.1109/tbme.2021.3082176] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The grade groups (GGs) of the Gleason score (GS) are the most critical indicator in the clinical diagnosis and treatment of prostate cancer. End-to-end methods for stratifying the patient-level pathological appearance of prostate cancer (PCa) in magnetic resonance imaging (MRI) are in high demand for clinical decision-making. Existing methods typically employ a statistical method to integrate slice-level results into a patient-level result, which ignores the asymmetric use of ground truth (GT) and overall optimization. Therefore, more domain knowledge (e.g., the diagnostic logic of radiologists) needs to be incorporated into the design of the framework. The patient-level GT must be logically assigned to each slice of an MRI scan to achieve joint optimization between slice-level analysis and patient-level decision-making. In this paper, we propose a framework (PCa-GGNet-v2) that learns from radiologists to capture signs in separate two-dimensional (2-D) MRI slices and further associate them for the overall decision, where all steps are jointly optimized in an end-to-end trainable way. In the training phase, patient-level prediction is transferred from weak supervision to supervision with GT. An association route records the attended slices for reweighting the loss of MRI slices and for interpretability. We evaluate our method on an in-house multi-center dataset (N=570) and PROSTATEx (N=204), which yields five-class accuracy over 80% and a patient-level AUC of 0.804, respectively. Our method achieves state-of-the-art performance on the patient-level multi-classification task for personalized medicine.
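The attention-weighted slice-to-patient aggregation this abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' code: the slice logits, attention scores, and class count below are invented toy values standing in for the paper's learned association route.

```python
import numpy as np

def patient_level_logits(slice_logits, slice_attention):
    """Aggregate per-slice class logits into one patient-level prediction,
    weighting each slice by a softmax-normalized attention score (a
    simplified stand-in for a learned association route)."""
    a = np.exp(slice_attention - slice_attention.max())  # stable softmax
    a /= a.sum()
    return (slice_logits * a[:, None]).sum(axis=0)

# toy example: 6 slices, 5 grade-group classes; slices 2-3 carry the signs
logits = np.zeros((6, 5))
logits[2:4, 3] = 4.0                                  # strong class-3 evidence
attn = np.array([0.1, 0.2, 3.0, 3.0, 0.2, 0.1])       # attention on slices 2-3
pred = int(np.argmax(patient_level_logits(logits, attn)))
print(pred)  # 3
```

The softmax pooling lets the slices a radiologist would attend to dominate the patient-level decision while remaining differentiable end-to-end.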
41
Cusumano D, Boldrini L, Dhont J, Fiorino C, Green O, Güngör G, Jornet N, Klüter S, Landry G, Mattiucci GC, Placidi L, Reynaert N, Ruggieri R, Tanadini-Lang S, Thorwarth D, Yadav P, Yang Y, Valentini V, Verellen D, Indovina L. Artificial Intelligence in magnetic Resonance guided Radiotherapy: Medical and physical considerations on state of art and future perspectives. Phys Med 2021; 85:175-191. [PMID: 34022660 DOI: 10.1016/j.ejmp.2021.05.010] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 04/15/2021] [Accepted: 05/04/2021] [Indexed: 12/14/2022] Open
Abstract
Over recent years, technological innovation in radiotherapy (RT) has led to the introduction of Magnetic Resonance-guided RT (MRgRT) systems. Owing to their higher soft-tissue contrast compared to on-board CT-based systems, MRgRT is expected to significantly improve treatment in many situations. MRgRT systems may extend the management of inter- and intra-fraction anatomical changes, offering the possibility of online adaptation of the dose distribution according to the daily patient anatomy and of directly monitoring tumor motion during treatment delivery by means of continuous cine MR acquisition. Online adaptive treatments require a multidisciplinary and well-trained team, able to perform a series of operations in a safe, precise and fast manner while the patient is waiting on the treatment couch. Artificial Intelligence (AI) is expected to rapidly contribute to MRgRT, primarily by safely and efficiently automating the various manual operations that characterize online adaptive treatments. Furthermore, AI is finding relevant applications in MRgRT in the fields of image segmentation, synthetic CT reconstruction, automatic (online) planning and the development of predictive models based on daily MRI. This review provides a comprehensive overview of current AI integration in MRgRT from a medical physicist's perspective. Medical physicists are expected to be major actors in solving new tasks and taking on new responsibilities: their traditional role as guardians of new technology implementation will change, with increasing emphasis on managing AI tools, processes and advanced systems for imaging and data analysis, gradually replacing many repetitive manual tasks.
Affiliation(s)
- Davide Cusumano
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Luca Boldrini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Claudio Fiorino
- Medical Physics, San Raffaele Scientific Institute, Milan, Italy
- Olga Green
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Görkem Güngör
- Acıbadem MAA University, School of Medicine, Department of Radiation Oncology, Maslak Istanbul, Turkey
- Núria Jornet
- Servei de Radiofísica i Radioprotecció, Hospital de la Santa Creu i Sant Pau, Spain
- Sebastian Klüter
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Munich, Germany
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Nick Reynaert
- Department of Medical Physics, Institut Jules Bordet, Belgium
- Ruggero Ruggieri
- Dipartimento di Radioterapia Oncologica Avanzata, IRCCS "Sacro cuore - don Calabria", Negrar di Valpolicella (VR), Italy
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen, Germany
- Poonam Yadav
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, USA
- Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, USA
- Vincenzo Valentini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Dirk Verellen
- Department of Medical Physics, Iridium Cancer Network, Belgium; Faculty of Medicine and Health Sciences, Antwerp University, Antwerp, Belgium
- Luca Indovina
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
42
Xue J, He K, Nie D, Adeli E, Shi Z, Lee SW, Zheng Y, Liu X, Li D, Shen D. Cascaded MultiTask 3-D Fully Convolutional Networks for Pancreas Segmentation. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:2153-2165. [PMID: 31869812 DOI: 10.1109/tcyb.2019.2955178] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Automatic pancreas segmentation is crucial to the diagnostic assessment of diabetes or pancreatic cancer. However, the relatively small size of the pancreas in the upper body, as well as large variations of its location and shape in the retroperitoneum, make the segmentation task challenging. To alleviate these challenges, in this article, we propose a cascaded multitask 3-D fully convolutional network (FCN) to automatically segment the pancreas. Our cascaded network is composed of two parts. The first part focuses on quickly locating the region of the pancreas, and the second part uses a multitask FCN with dense connections to refine the segmentation map for fine voxel-wise segmentation. In particular, our multitask FCN with dense connections simultaneously performs voxel-wise segmentation and skeleton extraction from the pancreas. These two tasks are complementary: the extracted skeleton provides rich information about the shape and size of the pancreas in the retroperitoneum, which can boost the segmentation of the pancreas. The multitask FCN is also designed to share low- and mid-level features across the tasks. A feature consistency module is further introduced to enhance the connection and fusion of different levels of feature maps. Evaluations on two pancreas datasets demonstrate the robustness of our proposed method in correctly segmenting the pancreas in various settings. Our experimental results outperform both baseline and state-of-the-art methods. Moreover, the ablation study shows that our proposed parts/modules are critical for effective multitask learning.
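The localize-then-refine cascade described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: the coarse mask, margin value, and volume below are invented, and only the generic coarse-to-fine cropping step is shown.

```python
import numpy as np

def roi_from_coarse_mask(mask, margin=2):
    """Margin-padded bounding box of the foreground in a coarse first-stage
    mask, clipped to the volume bounds. Returns slices for cropping the
    region that a second stage would segment at full resolution."""
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

# toy 3D volume: the coarse stage flags a small blob near the centre
vol = np.random.rand(32, 32, 32)
coarse = np.zeros_like(vol, dtype=np.uint8)
coarse[12:18, 14:20, 10:16] = 1
roi = roi_from_coarse_mask(coarse, margin=2)
patch = vol[roi]        # this crop is what stage two would refine voxel-wise
print(patch.shape)      # (10, 10, 10)
```

Cropping to the located region is what lets the second-stage network spend its capacity on fine voxel-wise detail instead of on the mostly empty surrounding anatomy.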
43
Meyer A, Chlebus G, Rak M, Schindele D, Schostak M, van Ginneken B, Schenk A, Meine H, Hahn HK, Schreiber A, Hansen C. Anisotropic 3D Multi-Stream CNN for Accurate Prostate Segmentation from Multi-Planar MRI. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105821. [PMID: 33218704 DOI: 10.1016/j.cmpb.2020.105821] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Accepted: 10/26/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate and reliable segmentation of the prostate gland in MR images can support the clinical assessment of prostate cancer, as well as the planning and monitoring of focal and loco-regional therapeutic interventions. Despite the availability of multi-planar MR scans due to standardized protocols, the majority of segmentation approaches presented in the literature consider axial scans only. In this work, we investigate whether a neural network processing anisotropic multi-planar images could work in the context of a semantic segmentation task, and if so, how this additional information improves segmentation quality. METHODS We propose an anisotropic 3D multi-stream CNN architecture, which processes additional scan directions to produce a high-resolution isotropic prostate segmentation. We investigate two variants of our architecture, which work on two (dual-plane) and three (triple-plane) image orientations, respectively. The influence of the additional information used by these models is evaluated by comparing them with a single-plane baseline that processes only axial images. To realize a fair comparison, we employ a hyperparameter optimization strategy to select optimal configurations for the individual approaches. RESULTS Training and evaluation on two datasets spanning multiple sites show statistically significant improvement over plain axial segmentation (p<0.05 on the Dice similarity coefficient). The improvement can be observed especially at the base (0.898 single-plane vs. 0.906 triple-plane) and apex (0.888 single-plane vs. 0.901 dual-plane). CONCLUSION This study indicates that models employing two or three scan directions are superior to plain axial segmentation. Knowledge of the precise boundaries of the prostate is crucial for sparing risk structures. Thus, the proposed models have the potential to improve the outcome of prostate cancer diagnosis and therapies.
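The dual-/triple-plane idea can be approximated with a simple late-fusion sketch. This is an assumption-laden stand-in for the paper's learned multi-stream fusion: the per-orientation probability maps below are synthetic, and plain averaging replaces the trained fusion layers.

```python
import numpy as np

def fuse_multiplanar(prob_axial, prob_sagittal, prob_coronal, threshold=0.5):
    """Late fusion of per-orientation probability maps into one isotropic
    binary segmentation: average the three streams, then threshold."""
    fused = (prob_axial + prob_sagittal + prob_coronal) / 3.0
    return (fused >= threshold).astype(np.uint8)

rng = np.random.default_rng(0)
shape = (16, 16, 16)
truth = np.zeros(shape)
truth[4:12, 4:12, 4:12] = 1.0          # toy "prostate" block
# each orientation sees the same object corrupted by independent noise
views = [np.clip(truth + rng.normal(0, 0.3, shape), 0, 1) for _ in range(3)]
seg = fuse_multiplanar(*views)
```

Averaging three noisy views suppresses orientation-specific errors, which is the intuition behind combining axial, sagittal, and coronal streams before producing the final isotropic mask.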
Affiliation(s)
- Anneke Meyer
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Grzegorz Chlebus
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Radboud University Medical Center, Nijmegen, The Netherlands
- Marko Rak
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Daniel Schindele
- Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Martin Schostak
- Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Bram van Ginneken
- Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Andrea Schenk
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Hans Meine
- University of Bremen, Medical Image Computing Group, Bremen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Horst K Hahn
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Christian Hansen
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
44
Tao L, Ma L, Xie M, Liu X, Tian Z, Fei B. Automatic Segmentation of the Prostate on MR Images based on Anatomy and Deep Learning. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11598:115981N. [PMID: 35755404 PMCID: PMC9232192 DOI: 10.1117/12.2581893] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Accurate segmentation of the prostate has many applications in the detection, diagnosis and treatment of prostate cancer. Automatic segmentation can be a challenging task because of the inhomogeneous intensity distributions on MR images. In this paper, we propose an automatic segmentation method for the prostate on MR images based on anatomy. We use the 3D U-Net guided by anatomy knowledge, including the location and shape prior knowledge of the prostate on MR images, to constrain the segmentation of the gland. The proposed method has been evaluated on the public dataset PROMISE2012. Experimental results show that the proposed method achieves a mean Dice similarity coefficient of 91.6% as compared to the manual segmentation. The experimental results indicate that the proposed method based on anatomy knowledge can achieve satisfactory segmentation performance for prostate MRI.
Affiliation(s)
- Lei Tao
- College of Software, Nankai University, Tianjin, China
- Ling Ma (corresponding author)
- College of Software, Nankai University, Tianjin, China
- Maoqiang Xie
- College of Software, Nankai University, Tianjin, China
- Xiabi Liu
- School of Computer Science, Beijing Institute of Technology, Beijing, China
- Zhiqiang Tian
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas; Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
45
Bhattacharya S, Reddy Maddikunta PK, Pham QV, Gadekallu TR, Krishnan S SR, Chowdhary CL, Alazab M, Jalil Piran M. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. SUSTAINABLE CITIES AND SOCIETY 2021; 65:102589. [PMID: 33169099 PMCID: PMC7642729 DOI: 10.1016/j.scs.2020.102589] [Citation(s) in RCA: 109] [Impact Index Per Article: 36.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Since December 2019, the coronavirus disease (COVID-19) outbreak has caused many deaths and affected all sectors of human life. As time progressed, COVID-19 was declared an outbreak by the World Health Organization (WHO), imposing a heavy burden on almost all countries, especially those with weaker health systems or slow responses. In healthcare, deep learning has been implemented in many applications, e.g., diabetic retinopathy detection, lung nodule classification, fetal localization, and thyroid diagnosis. The numerous sources of medical images (e.g., X-ray, CT, and MRI) make deep learning a strong technique for combating the COVID-19 outbreak. Motivated by this fact, a large number of research works were proposed and developed in the initial months of 2020. In this paper, we first summarize the state-of-the-art research on deep learning applications for COVID-19 medical image processing. Then, we provide an overview of deep learning and its applications to healthcare over the last decade. Next, three use cases in China, Korea, and Canada are presented to show deep learning applications for COVID-19 medical image processing. Finally, we discuss several challenges and issues related to deep learning implementations for COVID-19 medical image processing, which are expected to drive further studies in controlling the outbreak and managing the crisis, leading to smart, healthy cities.
Affiliation(s)
- Sweta Bhattacharya
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Quoc-Viet Pham
- Research Institute of Computer, Information and Communication, Pusan National University, Busan 46241, Republic of Korea
- Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Siva Rama Krishnan S
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Chiranji Lal Chowdhary
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Mamoun Alazab
- College of Engineering, IT & Environment, Charles Darwin University, Australia
- Md Jalil Piran
- Department of Computer Science and Engineering, Sejong University, 05006, Seoul, Republic of Korea
46
Furtado P. Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems. J Imaging 2021; 7:16. [PMID: 34460615 PMCID: PMC8321275 DOI: 10.3390/jimaging7020016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/16/2021] [Accepted: 01/22/2021] [Indexed: 12/15/2022] Open
Abstract
Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersection-over-union (IoU), and dice. Which should be used? Is it useful to consider simple variations, such as modifying formula coefficients? How do the characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), of the liver in computed tomography images (CT), and of diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and their variations, as well as segmentation scores for different targets. We first describe the limitations of metrics, since a loss is a metric, and then describe and test alternatives. Experimentally, we observed that DeepLabV3 outperforms UNet and the fully convolutional network (FCN) on all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy across all datasets; IoU improved scores by 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false positive vs. false negative weights improved MRI by 12 pp, and assigning zero weight to the background improved EFI by 6 pp. Multiclass segmentation scored 8 pp higher than n-uniclass segmentation on MRI. EFI lesions score low compared to more constant structures (e.g., the optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, and that it is worth assigning zero weight to the background class and testing different weights on false positives and false negatives.
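For concreteness, here is a minimal NumPy sketch of the dice and weighted cross-entropy losses the study compares. This is toy code, not the paper's implementation; the `class_weights` parameters are my stand-in for the coefficient variations it tests (e.g. zero weight on the background class, or asymmetric false-positive/false-negative weighting).

```python
import numpy as np

def soft_dice_loss(pred, target, class_weights=None, eps=1e-7):
    """Soft Dice loss: pred and target are (N, C) arrays of class
    probabilities and one-hot labels. Returns 1 - weighted mean Dice."""
    intersect = (pred * target).sum(axis=0)
    denom = pred.sum(axis=0) + target.sum(axis=0)
    dice = (2.0 * intersect + eps) / (denom + eps)
    w = np.ones_like(dice) if class_weights is None else np.asarray(class_weights, float)
    return 1.0 - (w * dice).sum() / w.sum()

def weighted_cross_entropy(pred, target, class_weights=None, eps=1e-7):
    """Pixel-wise cross entropy with per-class weights, so a class
    (e.g. background) can be down-weighted or zeroed out."""
    w = np.ones(pred.shape[1]) if class_weights is None else np.asarray(class_weights, float)
    ce = -(target * np.log(pred + eps) * w).sum(axis=1)
    return ce.mean()

# toy 4-pixel, 2-class example (class 0 = background)
pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.4, 0.6]])
target = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
print(soft_dice_loss(pred, target))
print(weighted_cross_entropy(pred, target, class_weights=[0.0, 1.0]))
```

Setting `class_weights=[0.0, 1.0]` reproduces the "zero weight to background" variation: the loss then only penalizes errors on the foreground class.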
Affiliation(s)
- Pedro Furtado
- Dei/FCT/CISUC, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal
48
Zhang H, Lu G, Zhan M, Zhang B. Semi-Supervised Classification of Graph Convolutional Networks with Laplacian Rank Constraints. Neural Process Lett 2021. [DOI: 10.1007/s11063-020-10404-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
49
Artificial intelligence in oncology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00018-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
50
A 3D-2D Hybrid U-Net Convolutional Neural Network Approach to Prostate Organ Segmentation of Multiparametric MRI. AJR Am J Roentgenol 2020; 216:111-116. [PMID: 32812797 DOI: 10.2214/ajr.19.22168] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
OBJECTIVE Prostate cancer is the most commonly diagnosed cancer in men in the United States with more than 200,000 new cases in 2018. Multiparametric MRI (mpMRI) is increasingly used for prostate cancer evaluation. Prostate organ segmentation is an essential step of surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant method of machine learning for medical image recognition. In this study, we describe a deep learning approach, a subset of artificial intelligence, for automatic localization and segmentation of prostates from mpMRI. MATERIALS AND METHODS This retrospective study included patients who underwent prostate MRI and ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists, which served as ground truth. These manually segmented images were used for training on a customized hybrid 3D-2D U-Net CNN architecture in a fivefold cross-validation paradigm for neural network training and validation. The Dice score, a measure of overlap between manually segmented and automatically derived segmentations, and Pearson linear correlation coefficient of prostate volume were used for statistical evaluation. RESULTS The CNN was trained on 299 MRI examinations (total number of MR images = 7774) of 287 patients. The customized hybrid 3D-2D U-Net had a mean Dice score of 0.898 (range, 0.890-0.908) and a Pearson correlation coefficient for prostate volume of 0.974. CONCLUSION A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine developing pattern recognition for lesion localization and quantification.
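The two evaluation metrics used here, the Dice score (overlap between manual and automatic segmentations) and the Pearson correlation of prostate volumes, are easy to reproduce. The following sketch uses toy 2D masks and invented volume lists, not the study's data:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

def pearson_r(x, y):
    """Pearson linear correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# toy 2D masks standing in for 3D prostate segmentations
manual = np.zeros((8, 8), int)
manual[2:6, 2:6] = 1                    # 16 "voxels"
auto = np.zeros((8, 8), int)
auto[2:6, 3:7] = 1                      # same size, shifted by one column
print(dice_score(manual, auto))         # 2*12 / (16+16) = 0.75

# invented per-patient prostate volumes (mL): manual vs. automatic
vols_manual = [40.1, 55.0, 32.3, 61.8]
vols_auto = [41.0, 54.2, 33.5, 60.9]
print(pearson_r(vols_manual, vols_auto))
```

A Dice of 0.898 with a volume correlation of 0.974, as reported above, means the automatic contours both overlap the manual ones well and track per-patient volume differences closely.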