1. Li T, Guo Y, Zhao Z, Chen M, Lin Q, Hu X, Yao Z, Hu B. Automated Diagnosis of Major Depressive Disorder With Multi-Modal MRIs Based on Contrastive Learning: A Few-Shot Study. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1566-1576. [PMID: 38512734] [DOI: 10.1109/tnsre.2024.3380357]
Abstract
Depression ranks among the most prevalent mood-related psychiatric disorders. Existing clinical diagnostic approaches that rely on scale-based interviews are susceptible to individual and environmental variation. In contrast, the integration of neuroimaging techniques and computer science has provided compelling evidence for the quantitative assessment of major depressive disorder (MDD). However, a major challenge in computer-aided diagnosis of MDD is to automatically and effectively mine complementary cross-modal information from limited datasets. In this study, we proposed a few-shot learning framework that integrates multi-modal MRI data based on contrastive learning. The upstream task is designed to extract knowledge from heterogeneous data; the downstream task transfers the acquired knowledge to the target dataset, where a hierarchical fusion paradigm integrates features both within and across modalities. The proposed model was evaluated on a set of multi-modal clinical data, achieving average scores of 73.52% for accuracy and 73.09% for AUC. Our findings also reveal that brain regions within the default mode network and cerebellum play a crucial role in the diagnosis, which provides further direction for exploring reproducible biomarkers of MDD.
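Although the abstract does not reproduce the paper's exact loss, contrastive pre-training of the kind it describes is typically built on an InfoNCE-style objective. A minimal NumPy sketch (the function name, temperature, and pairing scheme are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are two views of the same subject (e.g. two MRI
    modalities); every other pairing is treated as a negative.
    """
    # L2-normalise so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matched (positive) pairs sit on the diagonal
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls matched cross-modal pairs together in the embedding space and pushes mismatched pairs apart, which is what lets the upstream task extract transferable knowledge from heterogeneous data.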
2. Meng Y, Yang Y, Hu M, Zhang Z, Zhou X. Artificial intelligence-based radiomics in bone tumors: Technical advances and clinical application. Semin Cancer Biol 2023; 95:75-87. [PMID: 37499847] [DOI: 10.1016/j.semcancer.2023.07.003]
Abstract
Radiomics is the extraction of predefined mathematical features from medical images for predicting variables of clinical interest. Recent research has demonstrated that radiomic features can be processed by artificial intelligence algorithms to reveal complex patterns and trends for diagnosis and for predicting prognosis and treatment response in various types of cancer. Artificial intelligence tools can utilize radiological images to solve next-generation problems in clinical decision making. Bone tumors are classified as primary or secondary (metastatic); osteosarcoma, Ewing sarcoma, and chondrosarcoma are the predominant primary bone tumors. The development of bone tumor model systems, related research, and the assessment of novel treatment methods are ongoing efforts to improve clinical outcomes, notably for patients with metastases. Artificial intelligence and radiomics have been applied across almost the full spectrum of clinical care for bone tumors. Radiomics models have achieved excellent performance in the diagnosis and grading of bone tumors, and they can also predict overall survival, metastasis, and recurrence. Radiomics features have shown promise in assisting therapeutic planning and evaluation, especially for neoadjuvant chemotherapy. This review provides an overview of the evolution of, and opportunities for, artificial intelligence in imaging, with a focus on hand-crafted features and deep learning-based radiomics approaches. We summarize the current applications of artificial intelligence-based radiomics in both primary and metastatic bone tumors, and discuss the limitations and future opportunities in this field. In the era of personalized medicine, a deeper understanding of emerging artificial intelligence-based radiomics approaches may bring innovative solutions for bone tumors into clinical application.
Affiliation(s)
- Yichen Meng: Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Yue Yang: Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Miao Hu: Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Zheng Zhang: Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Xuhui Zhou: Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
3. Chen YY, Yu PN, Lai YC, Hsieh TC, Cheng DC. Bone Metastases Lesion Segmentation on Breast Cancer Bone Scan Images with Negative Sample Training. Diagnostics (Basel) 2023; 13:3042. [PMID: 37835785] [PMCID: PMC10572884] [DOI: 10.3390/diagnostics13193042]
Abstract
The use of deep learning methods for the automatic detection and quantification of bone metastases in bone scan images holds significant clinical value, as a fast and accurate automated system for segmenting metastatic lesions can assist clinical physicians in diagnosis. In this study, a small internal dataset comprising 100 breast cancer patients (90 with bone metastasis and 10 without) and 100 prostate cancer patients (50 with bone metastasis and 50 without) was used for model training. All image labels were initially binary; the Otsu thresholding method or negative mining was used to generate a non-metastasis mask, transforming the labels into three classes. We adopted Double U-Net as the baseline model and changed its output activation function to softmax to accommodate multi-class segmentation. Several methods were used to enhance performance: background pre-processing to remove background information, adding negative samples to improve precision, and transfer learning to leverage features shared between the two datasets. Performance was assessed via 10-fold cross-validation and computed at the pixel level. The best model achieved a precision of 69.96%, a sensitivity of 63.55%, and an F1-score of 66.60%, improvements over the baseline of 8.40%, 0.56%, and 4.33%, respectively. The developed system has the potential to provide pre-diagnostic reports to support physicians' final decisions and, combined with bone skeleton segmentation, to compute the bone scan index (BSI).
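The pixel-level precision, sensitivity, and F1-score reported above follow the standard definitions; a generic NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-level precision, sensitivity (recall), and F1-score
    for a binary lesion mask versus its ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # lesion pixels found
    fp = np.logical_and(pred, ~truth).sum()   # false alarms
    fn = np.logical_and(~pred, truth).sum()   # lesion pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return precision, sensitivity, f1
```

In the study these counts would be accumulated over all pixels of each cross-validation fold before averaging.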
Affiliation(s)
- Yi-You Chen: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Po-Nien Yu: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
- Yung-Chi Lai: Department of Nuclear Medicine, Feng Yuan Hospital, Ministry of Health and Welfare, Taichung 420, Taiwan
- Te-Chun Hsieh: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan; Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung 404, Taiwan
- Da-Chuan Cheng: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
4. Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023; 15:1750. [PMID: 36980636] [PMCID: PMC10046265] [DOI: 10.3390/cancers15061750]
Abstract
Purpose: To investigate segmentation approaches for bone metastases, both for differentiating benign from malignant bone lesions and for characterizing malignant lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MedLine, and Web of Science electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Most of the 77 original studies used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of segmentation output, which largely explains the absence of clinical translation studies; only 19 studies (24.67%) specifically addressed the feasibility of their proposed methods for clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activity is encouraging, even though no segmentation method is yet optimal for all applications or able to compensate for all the difficulties imposed by data limitations.
5. Huo T, Xie Y, Fang Y, Wang Z, Liu P, Duan Y, Zhang J, Wang H, Xue M, Liu S, Ye Z. Deep learning-based algorithm improves radiologists' performance in lung cancer bone metastases detection on computed tomography. Front Oncol 2023; 13:1125637. [PMID: 36845701] [PMCID: PMC9946454] [DOI: 10.3389/fonc.2023.1125637]
Abstract
Purpose: To develop and assess a deep convolutional neural network (DCNN) model for the automatic detection of bone metastases from lung cancer on computed tomography (CT). Methods: In this retrospective study, CT scans acquired at a single institution from June 2012 to May 2022 were included. In total, 126 patients were assigned to a training cohort (n = 76), a validation cohort (n = 12), and a testing cohort (n = 38). We trained a DCNN model on positive scans with bone metastases and negative scans without them to detect and segment lung cancer bone metastases on CT, and we evaluated its clinical efficacy in an observer study with five board-certified radiologists and three junior radiologists. The receiver operating characteristic curve was used to assess sensitivity and false positives in detection; intersection-over-union and the Dice coefficient were used to evaluate segmentation of the predicted bone metastases. Results: The DCNN model achieved a detection sensitivity of 0.894 with 5.24 average false positives per case and a segmentation Dice coefficient of 0.856 in the testing cohort. Through radiologist-DCNN collaboration, the detection accuracy of the three junior radiologists improved from 0.617 to 0.879 and their sensitivity from 0.680 to 0.902, while their mean interpretation time per case was reduced by 228 s (p = 0.045). Conclusions: The proposed DCNN model for automatic detection of lung cancer bone metastases can improve diagnostic efficiency and reduce the diagnosis time and workload of junior radiologists.
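The intersection-over-union and Dice coefficient used for the segmentation evaluation are standard overlap measures between a predicted mask and its ground truth; a minimal NumPy sketch (illustrative, not the study's code):

```python
import numpy as np

def iou_and_dice(pred, truth):
    """Intersection-over-union and Dice coefficient for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # empty-vs-empty masks are treated as a perfect match
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is worth remembering when comparing scores across papers.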
Affiliation(s)
- Tongtong Huo: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Research Institute of Imaging, National Key Laboratory of Multi-Spectral Information Processing Technology, Huazhong University of Science and Technology, Wuhan, China
- Yi Xie: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ying Fang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ziyi Wang: Research Institute of Imaging, National Key Laboratory of Multi-Spectral Information Processing Technology, Huazhong University of Science and Technology, Wuhan, China
- Pengran Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yuyu Duan: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jiayao Zhang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Honglin Wang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mingdi Xue: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Songxiang Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China (*Correspondence)
- Zhewei Ye: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China (*Correspondence)
6. Kao YS, Huang CP, Tsai WW, Yang J. A systematic review for using deep learning in bone scan classification. Clin Transl Imaging 2023. [DOI: 10.1007/s40336-023-00539-7]
7. Integrating Transfer Learning and Feature Aggregation into Self-defined Convolutional Neural Network for Automated Detection of Lung Cancer Bone Metastasis. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00770-z]
8. Lin Q, Gao R, Luo M, Wang H, Cao Y, Man Z, Wang R. Semi-supervised segmentation of metastasis lesions in bone scan images. Front Mol Biosci 2022; 9:956720. [DOI: 10.3389/fmolb.2022.956720]
Abstract
To develop a deep image segmentation model that automatically identifies and delineates lesions of skeletal metastasis in bone scan images, facilitating the clinical diagnosis of lung cancer-caused bone metastasis by nuclear medicine physicians, a semi-supervised segmentation model is proposed comprising a feature extraction subtask and a pixel classification subtask. During feature extraction, cascaded layers that include dilated residual convolution, inception connections, and feature aggregation learn hierarchical representations of low-resolution bone scan images. During pixel classification, each pixel is first classified into a category in a semi-supervised manner, and the boundary of the pixels belonging to an individual lesion is then delineated with a closed curve. Experimental evaluation on 2,280 augmented samples (112 original images) demonstrates that the proposed model performs well for automated segmentation of metastatic lesions, achieving a Dice similarity coefficient (DSC) of 0.692 when trained on 37% of the labeled samples. The model can thus serve as an automated clinical tool to detect and delineate metastatic lesions in bone scan images using only a few manually labeled samples, allowing nuclear medicine physicians to attend only to the segmented lesions rather than the whole low-resolution image. Images from more patients across multiple centers are needed to further improve the scalability and performance of the model by mitigating variability in the size, shape, and intensity of bone metastasis lesions.
9. Ibrahim A, Mohamed HK, Maher A, Zhang B. A Survey on Human Cancer Categorization Based on Deep Learning. Front Artif Intell 2022; 5:884749. [PMID: 35832207] [PMCID: PMC9271903] [DOI: 10.3389/frai.2022.884749]
Abstract
In recent years, we have witnessed the fast growth of deep learning, which involves deep neural networks, alongside the increase in computing capability driven by advances in graphics processing units (GPUs). Deep learning can successfully categorize histopathological images, an image classification task, and various research teams have applied it to medical diagnosis, especially of cancer. Convolutional neural networks (CNNs) detect the characteristic visual features of diseases such as lung, skin, brain, prostate, and breast cancer, and they provide an effective procedure for analyzing medical images. This study reviews the main deep learning concepts relevant to medical image analysis and surveys several contributions to the field. In addition, it covers the main categories of imaging procedures in medicine. The survey comprises the use of deep learning for object detection, classification, and human cancer categorization, and it introduces the most common cancer types. The article discusses vision-based deep learning systems among the different kinds of data mining techniques and networks, then introduces the most widely used category of deep learning network, the convolutional neural network (CNN), and investigates how CNN architectures have evolved, starting with AlexNet and progressing through GoogLeNet and the VGG networks. Finally, it discusses open challenges and trends for future research.
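The convolution at the heart of every CNN architecture the survey discusses reduces to a sliding dot product between a small kernel and image patches; a minimal valid-mode 2D cross-correlation in NumPy (illustrative only, omitting the channels, strides, and padding of real CNN layers):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product between the kernel and the patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

With a kernel such as [[-1, 1]] this responds only at vertical intensity edges, which is the kind of low-level feature the early layers of AlexNet- or VGG-style networks learn automatically.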
Affiliation(s)
- Ahmad Ibrahim: Department of Computer Science, October 6 University, Cairo, Egypt (*Correspondence)
- Hoda K. Mohamed: Department of Computer Engineering, Ain Shams University, Cairo, Egypt
- Ali Maher: Department of Computer Science, October 6 University, Cairo, Egypt
- Baochang Zhang: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China