1
Hasanabadi S, Aghamiri SMR, Abin AA, Abdollahi H, Arabi H, Zaidi H. Enhancing Lymphoma Diagnosis, Treatment, and Follow-Up Using 18F-FDG PET/CT Imaging: Contribution of Artificial Intelligence and Radiomics Analysis. Cancers (Basel) 2024; 16:3511. [PMID: 39456604 PMCID: PMC11505665 DOI: 10.3390/cancers16203511]
Abstract
Lymphoma, encompassing a wide spectrum of immune system malignancies, presents significant complexities in early detection, management, and prognosis assessment, since it can mimic post-infectious and inflammatory diseases. The heterogeneous nature of lymphoma makes it challenging to pinpoint reliable biomarkers for predicting tumor biology and selecting the most effective treatment strategies. Although molecular imaging modalities such as positron emission tomography/computed tomography (PET/CT), specifically 18F-FDG PET/CT, play a central role in lymphoma diagnosis, prognostication, and assessment of treatment response, they still face substantial challenges. Over the past few years, radiomics and artificial intelligence (AI) have emerged as valuable tools for detecting subtle features within medical images that are not easily discerned by visual assessment. The rapid expansion of AI and its application to medicine and radiomics is opening up new opportunities in nuclear medicine. Radiomics and AI hold promise across a variety of clinical scenarios related to lymphoma. Nevertheless, more extensive prospective trials are needed to substantiate their reliability and standardize their applications. This review provides a comprehensive perspective on the current literature regarding the application of AI and radiomics to 18F-FDG PET/CT in the management of lymphoma patients.
Affiliation(s)
- Setareh Hasanabadi: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran 1983969411, Iran
- Seyed Mahmud Reza Aghamiri: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran 1983969411, Iran
- Ahmad Ali Abin: Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran 1983969411, Iran
- Hamid Abdollahi: Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 500 Odense, Denmark; University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
2
Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024; 97:103223. [PMID: 38861770 DOI: 10.1016/j.media.2024.103223]
Abstract
The integration of machine learning models into clinical practice remains limited, notwithstanding the proliferation of high-performing solutions reported in the literature. A predominant factor hindering widespread adoption is insufficient evidence of the reliability of these models. Recently, uncertainty quantification methods have been proposed to quantify the reliability of machine learning models and thus increase the interpretability and acceptability of their results. In this review, we offer a comprehensive overview of the prevailing methods for quantifying the uncertainty inherent in machine learning models developed for various medical image tasks. Contrary to earlier reviews that focused exclusively on probabilistic methods, this review also explores non-probabilistic approaches, furnishing a more holistic survey of research on uncertainty quantification for machine learning models. We summarize and discuss medical applications and the corresponding uncertainty evaluation protocols, focusing on the specific challenges of uncertainty in medical image analysis, and we highlight potential directions for future research. Overall, this review aims to allow researchers from both clinical and technical backgrounds to gain a quick yet in-depth understanding of research on uncertainty quantification for machine learning models in medical image analysis.
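The probabilistic methods this review surveys typically approximate a predictive distribution by repeated stochastic sampling (e.g., Monte Carlo dropout or deep ensembles) and summarize disagreement with an entropy-style score. A minimal, framework-agnostic sketch of that final aggregation step, assuming the stacked softmax outputs are already given (the sampling model itself is omitted):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Uncertainty from T stochastic forward passes.

    mc_probs: array of shape (T, n_classes) -- softmax outputs from
    T Monte Carlo samples (e.g., dropout kept active at test time).
    Returns the entropy (in nats) of the mean predictive distribution;
    higher values indicate a less reliable prediction.
    """
    mean_p = np.asarray(mc_probs).mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())

# Samples that agree -> low entropy; samples that disagree -> high entropy.
confident = [[0.95, 0.05], [0.97, 0.03], [0.96, 0.04]]
uncertain = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
```

In practice this scalar is computed per pixel or per case and thresholded to flag predictions for human review.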
Affiliation(s)
- Ling Huang: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan: Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng: Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
3
Hassan R, Mondal MRH, Ahamed SI. UDBRNet: A novel uncertainty driven boundary refined network for organ at risk segmentation. PLoS One 2024; 19:e0304771. [PMID: 38885241 PMCID: PMC11182520 DOI: 10.1371/journal.pone.0304771]
Abstract
Organ segmentation has become a preliminary task for computer-aided intervention, diagnosis, radiation therapy, and critical robotic surgery. Automatic organ segmentation from medical images is challenging due to the inconsistent shapes and sizes of different organs, and low contrast at organ edges caused by similar tissue types further confuses a network's ability to delineate organ contours properly. In this paper, we propose a novel convolutional neural network-based uncertainty-driven boundary-refined segmentation network (UDBRNet) that segments organs from CT images. The CT images are first segmented, producing multiple segmentation masks from a multi-line segmentation decoder; uncertain regions are then identified from these masks, and the organ boundaries are refined based on the uncertainty information. Our method achieves remarkable performance, with Dice scores of 0.80, 0.95, 0.92, and 0.94 for the esophagus, heart, trachea, and aorta, respectively, on the SegThor dataset, and 0.71, 0.89, 0.85, 0.97, and 0.97 for the esophagus, spinal cord, heart, left lung, and right lung, respectively, on the LCTSC dataset. These results demonstrate the superiority of our uncertainty-driven boundary refinement technique over state-of-the-art segmentation networks such as UNet, Attention UNet, FC-DenseNet, BASNet, UNet++, R2UNet, TransUNet, and DS-TransUNet. UDBRNet is a promising network for more precise organ segmentation, particularly in challenging, uncertain conditions. The source code of our proposed method will be available at https://github.com/riadhassan/UDBRNet.
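The paper's central idea, flagging regions where decoder outputs disagree, can be illustrated with a toy voxelwise computation. The arrays below are hypothetical stand-ins for the multiple decoder masks; UDBRNet's actual refinement module is considerably more involved:

```python
import numpy as np

def uncertain_region(masks):
    """Voxels on which multiple binary segmentation masks disagree.

    masks: array of shape (n_masks, H, W) with 0/1 entries.
    Returns a boolean map that is True wherever the masks are not
    unanimous -- a crude stand-in for an uncertainty-driven
    boundary-refinement trigger.
    """
    masks = np.asarray(masks)
    return ~((masks == masks[0]).all(axis=0))

m1 = np.array([[1, 1, 0],
               [1, 1, 0]])
m2 = np.array([[1, 0, 0],
               [1, 1, 0]])
disagree = uncertain_region([m1, m2])  # True only where m1 and m2 differ
```

Disagreement maps like this concentrate along organ boundaries, which is exactly where the refinement stage focuses its effort.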
Affiliation(s)
- Riad Hassan: Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Palashi, Dhaka, Bangladesh
- M. Rubaiyat Hossain Mondal: Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Palashi, Dhaka, Bangladesh
- Sheikh Iqbal Ahamed: Department of Computer Science, Marquette University, Milwaukee, Wisconsin, United States of America
4
Lambert B, Forbes F, Doyle S, Dehaene H, Dojat M. Trustworthy clinical AI solutions: A unified review of uncertainty quantification in Deep Learning models for medical image analysis. Artif Intell Med 2024; 150:102830. [PMID: 38553168 DOI: 10.1016/j.artmed.2024.102830]
Abstract
Acceptance of Deep Learning (DL) models in the clinical field remains low relative to the quantity of high-performing solutions reported in the literature. End users are particularly reluctant to rely on the opaque predictions of DL models. Uncertainty quantification methods have been proposed as a potential solution to reduce the black-box effect of DL models and increase the interpretability and acceptability of results for the end user. In this review, we provide an overview of existing methods for quantifying the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-world clinical routine. Moreover, we discuss the concept of structural uncertainty, a corpus of methods for aligning segmentation uncertainty estimates with clinical attention. We then discuss evaluation protocols for validating the relevance of uncertainty estimates. Finally, we highlight open challenges for uncertainty quantification in the medical field.
Affiliation(s)
- Benjamin Lambert: Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France; Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Florence Forbes: Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, 38000, France
- Senan Doyle: Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Harmonie Dehaene: Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Michel Dojat: Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France
5
Wang M, Jiang H. PST-Radiomics: a PET/CT lymphoma classification method based on pseudo spatial-temporal radiomic features and structured atrous recurrent convolutional neural network. Phys Med Biol 2023; 68:235014. [PMID: 37956448 DOI: 10.1088/1361-6560/ad0c0f]
Abstract
Objective. Existing radiomic methods tend to treat each isolated tumor as an inseparable whole when extracting radiomic features, and may therefore discard critical intra-tumor metabolic heterogeneity (ITMH) information that contributes to distinguishing tumor subtypes. To improve lymphoma classification performance, we propose a pseudo spatial-temporal radiomic method (PST-Radiomics) based on positron emission tomography/computed tomography (PET/CT). Approach. Specifically, to enable exploitation of ITMH, we first present a multi-threshold gross tumor volume sequence (GTVS). Next, we extract 1D radiomic features based on PET images and each volume in the GTVS, creating a pseudo spatial-temporal feature sequence (PSTFS) tightly interwoven with ITMH. We then reshape the PSTFS into 2D pseudo spatial-temporal feature maps (PSTFM), whose columns are elements of the PSTFS. Finally, to learn from the PSTFM in an end-to-end manner, we build a lightweight pseudo spatial-temporal radiomic network (PSTR-Net), in which a structured atrous recurrent convolutional neural network serves as the PET branch to better exploit the strong local dependencies in the PSTFM, and a residual convolutional neural network serves as the CT branch to exploit conventional radiomic features extracted from CT volumes. Main results. We validate PST-Radiomics on a PET/CT lymphoma subtype classification task. Experimental results quantitatively demonstrate the superiority of PST-Radiomics compared to existing radiomic methods. Significance. Feature map visualization shows that our method performs complex feature selection while extracting hierarchical feature maps, qualitatively demonstrating its superiority.
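The multi-threshold gross tumor volume sequence (GTVS) idea can be sketched as follows: the PET volume is thresholded at increasing fractions of its SUV maximum, yielding nested sub-volumes from which per-volume features are extracted. The 2D toy array and the threshold fractions below are illustrative assumptions; the paper's feature set and network are far richer:

```python
import numpy as np

def gtv_sequence(suv, fractions=(0.2, 0.4, 0.6, 0.8)):
    """Nested tumor sub-volumes from multi-threshold segmentation.

    suv: array of SUV values; fractions: thresholds as fractions of
    SUVmax. Returns one binary mask per threshold -- the 'sequence'
    whose per-volume features encode intra-tumor metabolic
    heterogeneity (ITMH).
    """
    suv = np.asarray(suv, dtype=float)
    peak = suv.max()
    return [suv >= f * peak for f in fractions]

def volume_feature_sequence(suv):
    """One toy 1D feature (voxel count) per thresholded volume."""
    return [int(m.sum()) for m in gtv_sequence(suv)]

toy_pet = np.array([[1.0, 2.0, 8.0],
                    [0.5, 4.0, 10.0]])
```

Because higher thresholds select subsets of lower ones, the volumes shrink monotonically along the sequence, and the rate of shrinkage itself carries heterogeneity information.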
Affiliation(s)
- Meng Wang: Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang: Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
6
Seoni S, Jahmunah V, Salvi M, Barua PD, Molinari F, Acharya UR. Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013-2023). Comput Biol Med 2023; 165:107441. [PMID: 37683529 DOI: 10.1016/j.compbiomed.2023.107441]
Abstract
Uncertainty estimation in healthcare involves quantifying and understanding the inherent uncertainty or variability associated with medical predictions, diagnoses, and treatment outcomes. In this era of Artificial Intelligence (AI) models, uncertainty estimation becomes vital to ensure safe decision-making in the medical field. Therefore, this review focuses on the application of uncertainty techniques to machine and deep learning models in healthcare. A systematic literature review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our analysis revealed that Bayesian methods were the predominant technique for uncertainty quantification in machine learning models, with Fuzzy systems being the second most used approach. Regarding deep learning models, Bayesian methods emerged as the most prevalent approach, finding application in nearly all aspects of medical imaging. Most of the studies reported in this paper focused on medical images, highlighting the prevalent application of uncertainty quantification techniques using deep learning models compared to machine learning models. Interestingly, we observed a scarcity of studies applying uncertainty quantification to physiological signals. Thus, future research on uncertainty quantification should prioritize investigating the application of these techniques to physiological signals. Overall, our review highlights the significance of integrating uncertainty techniques in healthcare applications of machine learning and deep learning models. This can provide valuable insights and practical solutions to manage uncertainty in real-world medical data, ultimately improving the accuracy and reliability of medical diagnoses and treatment recommendations.
Affiliation(s)
- Silvia Seoni: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, QLD, 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
7
Park S, Cha YK, Park S, Chung MJ, Kim K. Automated precision localization of peripherally inserted central catheter tip through model-agnostic multi-stage networks. Artif Intell Med 2023; 144:102643. [PMID: 37783538 DOI: 10.1016/j.artmed.2023.102643]
Abstract
BACKGROUND Peripherally inserted central catheters (PICCs) are widely used as a representative type of central venous line (CVC) due to their long-term intravascular access and low infectivity. However, PICCs have the serious drawback of frequent tip malposition, increasing the risk of puncture, embolism, and complications such as cardiac arrhythmias. Various attempts have been made to detect malposition automatically and precisely using the latest deep learning (DL) technologies. Even with these approaches, it remains practically difficult to determine the tip location, because the multiple fragments phenomenon (MFP) occurs when predicting and extracting the PICC line, a step required before predicting the tip. OBJECTIVE This study aimed to develop a system, applicable to existing models, that restores the PICC line more exactly by removing the multiple fragments (MFs) from the model output, thereby precisely localizing the actual tip position for detecting malposition. METHODS We propose a multi-stage DL-based framework that post-processes the PICC line extraction results of existing models. Our method consists of three stages: 1. an existing PICC line segmentation network as a baseline; 2. a patch-based PICC line refinement network; 3. a PICC line reconnection network. The second- and third-stage models address MFs caused by the sparseness of the PICC line and line disconnections due to confusion with anatomical structures, respectively, thereby enhancing tip detection. RESULTS To verify the objective performance of the proposed MFCN, internal and external validation were conducted. For internal validation, training (130 samples) and testing (150 samples) were performed on 280 chest X-ray (CXR) images containing PICCs taken at our institution. External validation was conducted using the public Royal Australian and New Zealand College of Radiologists (RANZCR) dataset, with training (130 samples) and testing (150 samples) performed on 280 CXR images containing PICCs, the same numbers as for internal validation. Performance was compared by root mean squared error (RMSE) and the ratio of single-fragment images (RatioSFI; i.e., the rate at which a model predicts the PICC as multiple sub-lines), with and without MFCN applied, for seven conventional models (FCDN, UNET, AUNET, TUNET, FCDN-HT, UNET-ELL, and UNET-RPN). In internal validation, applying MFCN to an existing single model improved MFP by an average of 45%, and the RMSE improved by over 63%, from an average of 27.54 mm (17.16 to 35.80 mm) to 9.77 mm (9.11 to 10.98 mm). In external validation, applying MFCN decreased the MFP incidence rate by an average of 32% and the RMSE by an average of 65%. Applying the proposed MFCN therefore yields consistent improvements in PICC tip detection over existing models. CONCLUSION We applied the proposed technique to existing techniques and demonstrated that it provides high tip-detection performance, proving its versatility and superiority. We therefore believe that, in countries and regions where radiologists are scarce, the proposed DL approach will be able to effectively detect PICC malposition on behalf of radiologists.
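The multiple fragments phenomenon (MFP) that the pipeline corrects can be quantified by counting connected components in the predicted line mask, which is essentially what a RatioSFI-style measure needs. A toy pure-Python component counter on a small hypothetical mask (the paper's networks operate on full CXR masks):

```python
import numpy as np
from collections import deque

def count_fragments(mask):
    """Number of 8-connected components in a binary line mask.

    A clean PICC prediction has one component; a count > 1 signals
    the multiple-fragments phenomenon (MFP).
    """
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    n = 0
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        n += 1                      # new component found: flood-fill it
        q = deque([(r, c)])
        seen[r, c] = True
        while q:
            y, x = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if (0 <= yy < mask.shape[0] and 0 <= xx < mask.shape[1]
                            and mask[yy, xx] and not seen[yy, xx]):
                        seen[yy, xx] = True
                        q.append((yy, xx))
    return n

broken_line = np.array([[1, 1, 0, 0, 1],
                        [0, 0, 0, 0, 1]])  # two disjoint fragments
```

Averaging `count_fragments(...) == 1` over a test set gives a single-fragment ratio in the spirit of RatioSFI.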
Affiliation(s)
- Subin Park: Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Yoon Ki Cha: Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Soyoung Park: Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Myung Jin Chung: Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea; Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea
- Kyungsu Kim: Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
8
A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation. Comput Biol Med 2023; 157:106726. [PMID: 36924732 DOI: 10.1016/j.compbiomed.2023.106726]
Abstract
Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation, and have proven quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition due to the small variation between lesions or the wide range of lesion types involved. Several studies have recently explored deep learning-based algorithms for the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, covering multiple-lesion recognition in diverse body areas as well as recognition of whole-body multiple diseases. We discuss the challenges that persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline open problems and potential future research directions, in the hope that this review will help researchers develop approaches that drive further advances.
9
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414 DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker: Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
10
Xie H, Liu Y, Lei H, Song T, Yue G, Du Y, Wang T, Zhang G, Lei B. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection. Med Image Anal 2023; 84:102725. [PMID: 36527770 DOI: 10.1016/j.media.2022.102725]
Abstract
Aggressive Posterior Retinopathy of Prematurity (AP-ROP) is the major cause of blindness in premature infants, and automatic diagnosis has become an important tool for detecting it. However, most existing automatic diagnosis methods are computationally heavy, which hinders the development of detection devices. Hence, a small network (student network) with a high imitation ability is needed, one that can mimic a large network (teacher network) with promising diagnostic performance. However, if the gap between the teacher and student networks grows too large because the student network is too small, diagnostic performance drops. To tackle these issues, we propose a novel adversarial learning-based multi-level dense knowledge distillation method for detecting AP-ROP. Specifically, a pre-trained teacher network is used to train multiple intermediate-size networks (i.e., teacher-assistant networks) and one student network in a dense transmission mode, where knowledge from all upper-level networks is transmitted to the current lower-level network. To ensure that two adjacent networks can distill abundant knowledge, an adversarial learning module enforces the lower-level network to generate features similar to those of the upper-level network. Extensive experiments demonstrate that our proposed method realizes effective knowledge distillation from the teacher to the student networks. We achieve promising knowledge distillation performance on a private dataset and a public dataset, providing new insight for devising lightweight detection systems for fundus diseases in practical use.
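The knowledge transfer between adjacent networks in this hierarchy rests on standard distillation: the lower-level network is trained to match the softened outputs of the upper-level one. A minimal numpy rendition of that temperature-scaled KL objective (the logit values are illustrative; the paper adds dense transmission and an adversarial feature-matching term on top):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=3.0):
    """KL(teacher_soft || student_soft) with temperature T.

    Softening (T > 1) exposes the teacher's 'dark knowledge' --
    the relative probabilities of the wrong classes -- to the student.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * np.log(p / q)).sum())

teacher = [5.0, 2.0, -1.0]
good_student = [4.8, 2.1, -0.9]   # mimics the teacher closely
bad_student = [-1.0, 2.0, 5.0]    # ranks the classes in reverse
```

In the paper's dense-transmission scheme, every lower-level network would receive such a loss term from each of its upper-level networks, not just its immediate neighbor.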
Affiliation(s)
- Hai Xie: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yaling Liu: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Haijun Lei: Guangdong Province Key Laboratory of Popular High-performance Computers, School of Computer and Software Engineering, Shenzhen University, Shenzhen, China
- Tiancheng Song: Shenzhen Silan Zhichuang Technology Co., Ltd., Shenzhen, China
- Guanghui Yue: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yueshanyi Du: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Tianfu Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Guoming Zhang: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Baiying Lei: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
11
Kotsyfakis S, Iliaki-Giannakoudaki E, Anagnostopoulos A, Papadokostaki E, Giannakoudakis K, Goumenakis M, Kotsyfakis M. The application of machine learning to imaging in hematological oncology: A scoping review. Front Oncol 2022; 12:1080988. [PMID: 36605438 PMCID: PMC9808781 DOI: 10.3389/fonc.2022.1080988]
Abstract
Background Here, we conducted a scoping review to (i) establish which machine learning (ML) methods have been applied to hematological malignancy imaging; (ii) establish how ML is being applied to hematological cancer radiology; and (iii) identify addressable research gaps. Methods The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines. The inclusion criteria were (i) pediatric and adult patients with suspected or confirmed hematological malignancy undergoing imaging (population); (ii) any study using ML techniques to derive models using radiological images to apply to the clinical management of these patients (concept); and (iii) original research articles conducted in any setting globally (context). Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess diagnostic and segmentation studies, while the Newcastle-Ottawa scale was used to assess the quality of observational studies. Results Of 53 eligible studies, 33 applied diverse ML techniques to diagnose hematological malignancies or to differentiate them from other diseases, especially discriminating gliomas from primary central nervous system lymphomas (n=18); 11 applied ML to segmentation tasks, while 9 applied ML to prognostication or predicting therapeutic responses, especially for diffuse large B-cell lymphoma. All studies reported discrimination statistics, but no study calculated calibration statistics. Every diagnostic/segmentation study had a high risk of bias due to its case-control design; many studies failed to provide adequate details of the reference standard; and only a few studies used independent validation.
Conclusion To deliver validated ML-based models to radiologists managing hematological malignancies, future studies should (i) adhere to standardized, high-quality reporting guidelines such as the Checklist for Artificial Intelligence in Medical Imaging; (ii) validate models in independent cohorts; (iii) standardize volume segmentation methods for segmentation tasks; (iv) establish comprehensive prospective studies that include different tumor grades, comparisons with radiologists, optimal imaging modalities, sequences, and planes; (v) include side-by-side comparisons of different methods; and (vi) include low- and middle-income countries in multicentric studies to enhance generalizability and reduce inequity.
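The review's headline gap, that every study reported discrimination statistics while none reported calibration statistics, is easy to make concrete. The sketch below (illustrative toy data, not drawn from any of the reviewed studies) computes a discrimination metric (AUC) alongside a calibration-sensitive metric (the Brier score) in plain NumPy:

```python
import numpy as np

def auc_score(y_true, y_prob):
    """Discrimination: probability that a random positive case is
    ranked above a random negative case (ties count half)."""
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def brier_score(y_true, y_prob):
    """Calibration-sensitive score: mean squared error of the
    predicted probabilities against the observed outcomes."""
    return float(np.mean((y_prob - y_true) ** 2))

y_true = np.array([0, 0, 1, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_score(y_true, y_prob))   # 0.75
print(brier_score(y_true, y_prob))
```

A model can rank cases well (high AUC) while its predicted probabilities are systematically off; only a calibration-sensitive metric exposes the latter, which is the omission the review flags.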
Affiliation(s)
- Michail Kotsyfakis
- Biology Center of the Czech Academy of Sciences, Budweis (Ceske Budejovice), Czechia

12
Wang M, Jiang H, Shi T, Wang Z, Guo J, Lu G, Wang Y, Yao YD. PSR-Nets: Deep neural networks with prior shift regularization for PET/CT based automatic, accurate, and calibrated whole-body lymphoma segmentation. Comput Biol Med 2022; 151:106215. [PMID: 36306584 DOI: 10.1016/j.compbiomed.2022.106215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 10/04/2022] [Accepted: 10/15/2022] [Indexed: 12/27/2022]
Abstract
Lymphoma is a cancer originating in lymphatic tissue. Automatic and accurate lymphoma segmentation is critical for its diagnosis and prognosis, yet challenging due to severe class imbalance. Deep neural networks trained with class-observation-frequency-based re-weighting loss functions are generally used to address this problem; however, data overlap can cause such losses to under-weight the majority class, and the resulting models tend to be poorly calibrated. To resolve these issues, we propose a neural network with prior-shift regularization (PSR-Net), which comprises a UNet-like backbone with re-weighting loss functions and a prior-shift regularization (PSR) module including a prior-shift layer (PSL), a regularizer generation layer (RGL), and an expected prediction confidence updating layer (EPCUL). We first propose a trainable expected prediction confidence (EPC) for each class. Periodically, PSL shifts a prior training dataset to a more informative dataset based on EPCs; RGL presents a generalized informative-voxel-aware (GIVA) loss with EPCs and calculates it on the informative dataset for model fine-tuning in back-propagation; and EPCUL updates EPCs to refresh PSL and RGL in the next forward-propagation. PSR-Net is trained in a two-stage manner: the backbone is first trained with re-weighting loss functions; we then reload the best saved backbone model and continue training it with the weighted sum of the re-weighting loss functions, the GIVA regularizer, and the L2 loss function of EPCs for regularization fine-tuning. Extensive experiments are performed on PET/CT volumes with advanced-stage lymphomas. Our PSR-Net achieves 95.12% sensitivity and an 87.18% Dice coefficient, demonstrating its effectiveness when compared to the baselines and the state of the art.
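As background to PSR-Net's starting point, here is a minimal sketch of class-observation-frequency-based re-weighting for an imbalanced binary voxel-labeling problem. It is illustrative only: the function names are ours, and it is not the paper's GIVA loss or PSR module.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its observation frequency, so
    the rare tumor class is up-weighted relative to background."""
    classes, counts = np.unique(labels, return_counts=True)
    inv = counts.sum() / counts
    inv /= inv.sum()                         # normalize to sum to 1
    return dict(zip(classes.tolist(), inv.tolist()))

def weighted_cross_entropy(probs, labels, weights, eps=1e-7):
    """Binary cross-entropy where each voxel's term is scaled by
    the weight of its class."""
    p = np.clip(np.where(labels == 1, probs, 1.0 - probs), eps, 1.0)
    w = np.where(labels == 1, weights[1], weights[0])
    return float(np.mean(-w * np.log(p)))

labels = np.array([0] * 90 + [1] * 10)   # 9:1 background/tumor imbalance
probs = np.full(100, 0.5)                # an uninformative predictor
w = inverse_frequency_weights(labels)    # {0: 0.1, 1: 0.9}
loss = weighted_cross_entropy(probs, labels, w)
```

With these weights the 10 tumor voxels carry as much total loss as the 90 background voxels, which is exactly the behavior the abstract says can misfire when classes overlap.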
Affiliation(s)
- Meng Wang
- Department of Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Department of Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China.
- Tianyu Shi
- Department of Software College, Northeastern University, Shenyang 110819, China
- Zhiguo Wang
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Jia Guo
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Guoxiu Lu
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Youchao Wang
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA

13
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [DOI: 10.1016/j.jbi.2022.104227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/22/2022] [Accepted: 10/07/2022] [Indexed: 10/31/2022]
14
Wang X, Wang L, Sheng Y, Zhu C, Jiang N, Bai C, Xia M, Shao Z, Gu Z, Huang X, Zhao R, Liu Z. Automatic and accurate segmentation of peripherally inserted central catheter (PICC) from chest X-rays using multi-stage attention-guided learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
15
Xun S, Li D, Zhu H, Chen M, Wang J, Li J, Chen M, Wu B, Zhang H, Chai X, Jiang Z, Zhang Y, Huang P. Generative adversarial networks in medical image segmentation: A review. Comput Biol Med 2022; 140:105063. [PMID: 34864584 DOI: 10.1016/j.compbiomed.2021.105063] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 11/14/2021] [Accepted: 11/20/2021] [Indexed: 12/13/2022]
Abstract
PURPOSE Since the Generative Adversarial Network (GAN) was introduced into the field of deep learning in 2014, it has received extensive attention from academia and industry, and many high-quality papers have been published. GANs can improve the accuracy of medical image segmentation because of their strong generative ability and capacity to capture the data distribution. This paper introduces the origin, working principle, and extended variants of the GAN, and reviews the latest developments in GAN-based medical image segmentation methods. METHOD To find the papers, we searched Google Scholar and PubMed with keywords such as "segmentation", "medical image", and "GAN (or generative adversarial network)". Additional searches were performed on Semantic Scholar, Springer, arXiv, and the top conferences in computer science with the above GAN-related keywords. RESULTS We reviewed more than 120 GAN-based architectures for medical image segmentation published before September 2021. We categorized and summarized these papers according to the segmentation regions, imaging modality, and classification methods, and discussed the advantages, challenges, and future research directions of GANs in medical image segmentation. CONCLUSIONS We discussed in detail the recent papers on medical image segmentation using GANs. The application of the GAN and its extended variants has effectively improved the accuracy of medical image segmentation. Gaining the acceptance of clinicians and patients and overcoming the instability, low repeatability, and poor interpretability of GANs will be important research directions in the future.
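The adversarial training the review surveys reduces to two coupled binary cross-entropy objectives. A toy sketch with hypothetical discriminator scores (no actual networks; values are ours, for illustration only):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between scores in (0, 1) and 0/1 labels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

# Hypothetical discriminator scores D(mask) in (0, 1); high = "looks real".
d_real = np.array([0.9, 0.8])   # scores on expert-drawn masks
d_fake = np.array([0.3, 0.1])   # scores on masks from the segmenter

# The discriminator learns to label expert masks 1 and generated masks 0 ...
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# ... while the generator (the segmenter) is rewarded when D scores its
# masks as real (the non-saturating generator objective).
g_loss = bce(d_fake, np.ones_like(d_fake))
```

In a segmentation GAN the "generator" is the segmentation network itself, so the adversarial term pushes its masks toward the statistics of expert annotations on top of the usual per-voxel loss.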
Affiliation(s)
- Siyi Xun
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China.
- Hui Zhu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Min Chen
- The Department of Medicine, The Second Hospital of Shandong University, Shandong University, Jinan, China
- Jianbo Wang
- Department of Radiation Oncology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, 250012, China
- Jie Li
- Department of Infectious Disease, Shandong Provincial Hospital Affiliated to Shandong University, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Meirong Chen
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Bing Wu
- Laibo Biotechnology Co., Ltd., Jinan, Shandong, China
- Hua Zhang
- LinkingMed Technology Co., Ltd., Beijing, China
- Xiangfei Chai
- Huiying Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Yan Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China.

16
Abstract
Mistrust is a major barrier to implementing deep learning in healthcare settings. Trust could be earned by conveying model certainty, or the probability that a given model output is accurate, but the use of uncertainty estimation for deep learning entrustment is largely unexplored, and there is no consensus regarding optimal methods for quantifying uncertainty. Our purpose is to critically evaluate methods for quantifying uncertainty in deep learning for healthcare applications and propose a conceptual framework for specifying certainty of deep learning predictions. We searched Embase, MEDLINE, and PubMed databases for articles relevant to study objectives, complying with PRISMA guidelines, rated study quality using validated tools, and extracted data according to modified CHARMS criteria. Among 30 included studies, 24 described medical imaging applications. All imaging model architectures used convolutional neural networks or a variation thereof. The predominant method for quantifying uncertainty was Monte Carlo dropout, producing predictions from multiple networks for which different neurons have dropped out and measuring variance across the distribution of resulting predictions. Conformal prediction offered similar strong performance in estimating uncertainty, along with ease of interpretation and application not only to deep learning but also to other machine learning approaches. Among the six articles describing non-imaging applications, model architectures and uncertainty estimation methods were heterogeneous, but predictive performance was generally strong, and uncertainty estimation was effective in comparing modeling methods. Overall, the use of model learning curves to quantify epistemic uncertainty (attributable to model parameters) was sparse. Heterogeneity in reporting methods precluded a meta-analysis.
Uncertainty estimation methods have the potential to identify rare but important misclassifications made by deep learning models and compare modeling methods, which could build patient and clinician trust in deep learning applications in healthcare. Efficient maturation of this field will require standardized guidelines for reporting performance and uncertainty metrics.
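A minimal sketch of the predominant method described above, Monte Carlo dropout, on a toy one-layer sigmoid model (all names and values are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, p_drop=0.5, n_passes=100):
    """Monte Carlo dropout: keep dropout active at inference time,
    run many stochastic forward passes, and read uncertainty off
    the spread of the resulting outputs."""
    preds = []
    for _ in range(n_passes):
        keep = rng.random(w.shape) >= p_drop          # drop weights at random
        logits = x @ (w * keep) / (1.0 - p_drop)      # inverted-dropout rescaling
        preds.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid output
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)      # prediction, uncertainty

x = np.array([[1.0, -0.5, 2.0]])   # one input sample, three features
w = np.array([0.4, 1.2, -0.3])     # toy "trained" weights
mean, std = mc_dropout_predict(x, w)
```

The standard deviation across passes is the per-sample uncertainty estimate; in the reviewed imaging studies the same idea is applied to full convolutional networks rather than a single weight vector.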
17
Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021; 66. [PMID: 34555816 DOI: 10.1088/1361-6560/ac299a] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/23/2021] [Indexed: 11/11/2022]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thereby obtain more accurate results. At present, PET-CT segmentation methods based on fully convolutional networks (FCNs) mainly adopt image fusion and feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes considerable computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Through the proposed evidence loss, the network outputs a PET result and a CT result that carry uncertainty; these are used as PET evidence and CT evidence. We then use evidence fusion to reduce the uncertainty of the single-modal evidence, and the final segmentation result is obtained from the fusion of PET evidence and CT evidence. EFNet uses a basic 3D U-Net as its backbone and only simple unidirectional feature fusion. In addition, EFNet can separately train and predict PET evidence and CT evidence, without the need for parallel training of two branch networks. We conduct experiments on the soft-tissue-sarcoma and lymphoma datasets. Compared with the 3D U-Net, our proposed method improves the Dice score by 8% and 5%, respectively; compared with a complex feature fusion method, it improves the Dice score by 7% and 2%, respectively. Our results show that in FCN-based PET-CT segmentation, outputting uncertainty evidence and fusing it allows the network to be simplified while improving the segmentation results.
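EFNet's evidence fusion builds on Dempster-Shafer theory. As a simplified illustration, not the paper's exact formulation, Dempster's rule on a binary foreground/background frame shows how combining two uncertain sources yields a fused belief whose uncertainty is lower than either input's:

```python
def dempster_fuse(m1, m2):
    """Dempster's rule of combination on the binary frame {fg, bg}.
    Each mass function is a tuple (m_fg, m_bg, m_unc) summing to 1,
    where m_unc is the mass assigned to the whole frame, i.e. that
    modality's uncertainty."""
    f1, b1, u1 = m1
    f2, b2, u2 = m2
    conflict = f1 * b2 + b1 * f2            # mass on contradictory pairs
    norm = 1.0 - conflict                   # renormalization constant
    fg = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    bg = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    unc = (u1 * u2) / norm
    return fg, bg, unc

pet = (0.6, 0.1, 0.3)   # PET evidence for one voxel: leans foreground
ct  = (0.5, 0.2, 0.3)   # CT evidence: weaker foreground support
fg, bg, unc = dempster_fuse(pet, ct)   # fused uncertainty drops below 0.3
```

Because the residual uncertainty of the fused mass is the product of the two input uncertainties (renormalized), agreement between PET and CT evidence sharpens the voxel's belief, which is the intuition behind the abstract's claim that evidence fusion reduces single-modal uncertainty.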
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken NJ 07030, United States of America
- Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China

18
Brosch-Lenz J, Yousefirizi F, Zukotynski K, Beauregard JM, Gaudet V, Saboury B, Rahmim A, Uribe C. Role of Artificial Intelligence in Theranostics: Toward Routine Personalized Radiopharmaceutical Therapies. PET Clin 2021; 16:627-641. [PMID: 34537133 DOI: 10.1016/j.cpet.2021.06.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
We highlight emerging uses of artificial intelligence (AI) in the field of theranostics, focusing on its significant potential to enable routine and reliable personalization of radiopharmaceutical therapies (RPTs). Personalized RPTs require patient-specific dosimetry calculations to accompany therapy. Additionally, we discuss the potential to exploit biological information from diagnostic and therapeutic molecular images to derive biomarkers for absorbed dose and outcome prediction, toward the personalization of therapies. We aim to motivate the nuclear medicine community to expand and align efforts to make routine and reliable personalization of RPTs a reality.
Affiliation(s)
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Katherine Zukotynski
- Department of Medicine and Radiology, McMaster University, 1200 Main Street West, Hamilton, Ontario L9G 4X5, Canada
- Jean-Mathieu Beauregard
- Department of Radiology and Nuclear Medicine, Cancer Research Centre, Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada; Department of Medical Imaging, Research Center (Oncology Axis), CHU de Québec - Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada
- Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Physics, University of British Columbia, 325 - 6224 Agricultural Road, Vancouver, British Columbia V6T 1Z1, Canada
- Carlos Uribe
- Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Functional Imaging, BC Cancer, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.

19
Wang M, Jiang H, Shi T, Yao YD. HD-RDS-UNet: Leveraging Spatial-Temporal Correlation between the Decoder Feature Maps for Lymphoma Segmentation. IEEE J Biomed Health Inform 2021; 26:1116-1127. [PMID: 34351864 DOI: 10.1109/jbhi.2021.3102612] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Lymphoma is a group of malignant tumors originating in the lymphatic system. Automatic and accurate lymphoma segmentation in PET/CT volumes is critical yet challenging in clinical practice. Recently, UNet-like architectures have been widely used for medical image segmentation. Pure UNet-like architectures model the spatial correlation between the feature maps very well but discard the critical temporal correlation. Some prior work combines UNet with recurrent neural networks (RNNs) to utilize the spatial and temporal correlation simultaneously. However, it is inconvenient to incorporate advanced UNet techniques into RNNs, which hampers further improvement. In this paper, we propose a recurrent dense siamese decoder architecture, which simulates RNNs and can densely utilize the spatial-temporal correlation between the decoder feature maps in a UNet fashion. We combine it with a modified hyper dense encoder, so the proposed model is a UNet with a hyper dense encoder and a recurrent dense siamese decoder (HD-RDS-UNet). To stabilize the training process, we propose a weighted Dice loss with stable gradients and self-adaptive parameters. We perform patient-independent five-fold cross-validation on 3D volumes collected from whole-body PET/CT scans of patients with lymphoma. The experimental results show that the volume-wise average Dice score and sensitivity are 85.58% and 94.63%, respectively, and the patient-wise average Dice score and sensitivity are 85.85% and 95.01%, respectively. The different configurations of HD-RDS-UNet consistently show superiority in the performance comparison. Besides, a trained HD-RDS-UNet can be easily pruned, resulting in significantly reduced inference time and memory usage while maintaining very good segmentation performance.
20
Yuan C, Zhang M, Huang X, Xie W, Lin X, Zhao W, Li B, Qian D. Diffuse large B-cell lymphoma segmentation in PET-CT images via hybrid learning for feature fusion. Med Phys 2021; 48:3665-3678. [PMID: 33735451 DOI: 10.1002/mp.14847] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 02/09/2021] [Accepted: 03/10/2021] [Indexed: 12/27/2022] Open
Abstract
PURPOSE Diffuse large B-cell lymphoma (DLBCL) is an aggressive type of lymphoma with high mortality and poor prognosis, and a particularly high incidence in Asia. Accurate segmentation of DLBCL lesions is crucial for clinical radiation therapy, but manual delineation of DLBCL lesions is tedious and time-consuming. Automatic segmentation provides an alternative solution but is difficult for diffuse lesions without sufficient utilization of multimodality information. Our work is the first study focusing on positron emission tomography and computed tomography (PET-CT) feature fusion for the DLBCL segmentation problem. We aim to improve the fusion of the complementary information contained in PET-CT imaging with a hybrid learning module in a supervised convolutional neural network. METHODS First, two encoder branches extract single-modality features. Next, the hybrid learning component uses them to generate spatial fusion maps that quantify the contribution of the complementary information. These feature fusion maps are then concatenated with the modality-specific (i.e., PET and CT) feature maps to obtain the final fused feature maps at different scales. Finally, the reconstruction part of our network creates a prediction map of DLBCL lesions by integrating and up-sampling the final fused feature maps from the encoder blocks at different scales. RESULTS Our method was evaluated on its ability to detect foreground and segment lesions in three independent body regions (nasopharynx, chest, and abdomen) of a set of 45 PET-CT scans. Extensive ablation experiments compared our method to four baseline multimodality fusion techniques: input-level (IL) fusion, the multichannel (MC) strategy, the multibranch (MB) strategy, and quantitative weighting (QW) fusion.
The results showed that our method achieved high detection accuracy (99.63% in the nasopharynx, 99.51% in the chest, and 99.21% in the abdomen) and superior segmentation performance, with a mean Dice similarity coefficient (DSC) of 73.03% and a modified Hausdorff distance (MHD) of 4.39 mm, compared with the baselines (DSC: IL: 53.08%, MC: 63.59%, MB: 69.98%, QW: 72.19%; MHD: IL: 12.16 mm, MC: 6.46 mm, MB: 4.83 mm, QW: 4.89 mm). CONCLUSIONS A promising segmentation method has been proposed for the challenging DLBCL lesions in PET-CT images, which improves the exploitation of complementary information through feature fusion and may guide clinical radiotherapy. Statistical analysis indicated significant differences between our method and the baselines (most metrics: P < 0.05). This is preliminary research with a small sample size, and we will continue to collect data for a larger verification study.
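The two reported metrics, DSC and MHD, can be computed directly. A small NumPy illustration on toy 3×3 masks (the exact MHD variant used in the paper may differ; this sketch uses the Dubuisson-Jain definition):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def modified_hausdorff(pts_a, pts_b):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994): the larger
    of the two mean directed nearest-neighbour distances."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

a = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)
b = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 0]], dtype=bool)
pts_a = np.argwhere(a).astype(float)   # foreground coordinates of each mask
pts_b = np.argwhere(b).astype(float)
print(dice(a, b))                        # 0.75
print(modified_hausdorff(pts_a, pts_b))  # 0.25
```

DSC rewards overlap while MHD penalizes boundary displacement, which is why papers such as this one report both: a segmentation can score well on one and poorly on the other.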
Affiliation(s)
- Cheng Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China
- Miao Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xinyun Huang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Wei Xie
- Department of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xiaozhu Lin
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weili Zhao
- Department of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China

21
Li S, Jiang H, Li H, Yao YD. AW-SDRLSE: Adaptive Weighting and Scalable Distance Regularized Level Set Evolution for Lymphoma Segmentation on PET Images. IEEE J Biomed Health Inform 2021; 25:1173-1184. [PMID: 32841130 DOI: 10.1109/jbhi.2020.3017546] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Accurate lymphoma segmentation on positron emission tomography (PET) images is of great importance for medical diagnosis, such as distinguishing benign from malignant lesions. To this end, this paper proposes an adaptive weighting and scalable distance regularized level set evolution (AW-SDRLSE) method for delineating lymphoma boundaries on 2D PET slices. AW-SDRLSE has three important characteristics: 1) a scalable distance regularization term is proposed, in which a parameter q can in theory control the contour's convergence rate and precision; 2) a novel dynamic annular mask is proposed to calculate the mean intensities of the local interior and exterior regions and thereby define the region energy term; 3) as the level set method is sensitive to parameters, we propose an adaptive weighting strategy for the length and area energy terms using local region intensity and boundary direction information. AW-SDRLSE is evaluated on 90 cases of real PET data, achieving a mean Dice coefficient of 0.8796. Comparative results demonstrate the accuracy and robustness of AW-SDRLSE as well as its performance advantages over related level set methods. In addition, experimental results indicate that AW-SDRLSE can serve as a fine-segmentation step that significantly improves the lymphoma segmentation results obtained by deep learning (DL) methods.
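Characteristic 2), the dynamic annular mask, computes mean intensities of local interior and exterior regions around the contour. A pure-NumPy sketch of that idea on a binary region (function names are ours; the paper's mask is dynamic and level-set-based, and this toy version assumes the region does not touch the image border):

```python
import numpy as np

def dilate(mask):
    """4-connected binary dilation via array shifts (no SciPy needed)."""
    p = np.pad(mask, 1)
    return p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]

def annular_means(image, region, width=1):
    """Mean intensity of the local interior and exterior bands inside
    an annulus of `width` pixels around the contour of `region`."""
    outer, inner = region.copy(), region.copy()
    for _ in range(width):
        outer = dilate(outer)
        inner = ~dilate(~inner)          # erosion as the dual of dilation
    annulus = outer & ~inner
    c_in = image[annulus & region].mean()    # local interior mean
    c_out = image[annulus & ~region].mean()  # local exterior mean
    return c_in, c_out

img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0          # bright lesion on a dark background
region = img > 0.5           # current contour interior
c_in, c_out = annular_means(img, region)   # 1.0, 0.0
```

Restricting the means to a narrow band around the contour is what makes the region energy *local*: distant bright structures (e.g. other lesions) cannot pull the contour the way they would with global region means.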
22
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models automatically extract high-level features from the input data to learn the relationship between matching datasets. Their implementation thus offers an advantage over common ML methods, which often require the practitioner to have some domain knowledge of the input data in order to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Considering this positive impact on the medical imaging field, this article reviews the key concepts associated with the evolution and implementation of DL. The sections of this review summarize the milestones in the development of the DL field, followed by a description of the elements of deep neural networks and an overview of their application within medical imaging. Subsequently, the key steps necessary to implement a supervised DL application are defined, and the associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
- Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA

23
Development and validation of an 18F-FDG PET radiomic model for prognosis prediction in patients with nasal-type extranodal natural killer/T cell lymphoma. Eur Radiol 2020; 30:5578-5587. [PMID: 32435928 DOI: 10.1007/s00330-020-06943-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2020] [Revised: 04/02/2020] [Accepted: 05/07/2020] [Indexed: 02/05/2023]
Abstract
OBJECTIVES To identify an 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) radiomics-based model for predicting progression-free survival (PFS) and overall survival (OS) of nasal-type extranodal natural killer/T cell lymphoma (ENKTL). METHODS In this retrospective study, a total of 110 ENKTL patients were divided into a training cohort (n = 82) and a validation cohort (n = 28). Forty-one features were extracted from pretreatment PET images of the patients. Least absolute shrinkage and selection operator (LASSO) regression was used to develop the radiomic signatures (R-signatures). A radiomics-based model was built and validated in the two cohorts and compared with a metabolism-based model. RESULTS The R-signatures were constructed with moderate predictive ability in the training and validation cohorts (R-signaturePFS: AUC = 0.788 and 0.473; R-signatureOS: AUC = 0.637 and 0.730). For PFS, the radiomics-based model showed better discrimination than the metabolism-based model in the training cohort (C-index = 0.811 vs. 0.751) but poorer discrimination in the validation cohort (C-index = 0.588 vs. 0.693). The calibration of the radiomics-based model was poorer than that of the metabolism-based model (training cohort: p = 0.415 vs. 0.428, validation cohort: p = 0.228 vs. 0.652). For OS, the performance of the radiomics-based model was poorer (training cohort: C-index = 0.818 vs. 0.828, p = 0.853 vs. 0.885; validation cohort: C-index = 0.628 vs. 0.753, p < 0.05 vs. 0.913). CONCLUSIONS Radiomic features derived from PET images can predict the outcomes of patients with ENKTL, but the performance of the radiomics-based model was inferior to that of the metabolism-based model. KEY POINTS • The R-signatures calculated by using 18F-FDG PET radiomic features can predict the survival of patients with ENKTL. • The radiomics-based models integrating the R-signatures and clinical factors achieved good predictive values. 
• The performance of the radiomics-based model was inferior to that of the metabolism-based model in the two cohorts.
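The LASSO step used to build the R-signatures can be sketched with iterative soft-thresholding (ISTA). This is a generic illustration of how the L1 penalty zeroes out uninformative radiomic features, not the authors' actual pipeline; the data below are synthetic:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """LASSO via iterative soft-thresholding (ISTA): the L1 penalty
    shrinks uninformative coefficients exactly to zero, which is the
    feature-selection behavior exploited in radiomics pipelines."""
    n = len(y)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n       # gradient of 0.5 * mean squared error
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy "radiomic" design: 8 candidate features, only 2 carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))
true_w = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ true_w + 0.05 * rng.standard_normal(60)
w = lasso_ista(X, y, lam=0.05)
```

The surviving nonzero coefficients define the signature, and their weighted sum of feature values is the per-patient R-signature score that then enters the survival model.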