1
Li S, Zhang M, Xue M, Zhu Q. Real-time breast lesion classification combining diffuse optical tomography frequency domain data and BI-RADS assessment. Journal of Biophotonics 2024; 17:e202300483. [PMID: 38430216] [PMCID: PMC11065578] [DOI: 10.1002/jbio.202300483]
Abstract
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential for breast cancer diagnosis, where real-time or near real-time diagnosis with high accuracy is desired. However, DOT's relatively slow data processing and image reconstruction have hindered real-time diagnosis. Here, we propose a real-time classification scheme that combines US Breast Imaging Reporting and Data System (BI-RADS) readings with DOT frequency-domain measurements. A convolutional neural network is trained to generate malignancy probability scores from DOT measurements. These scores are then integrated with BI-RADS assessments by a support vector machine classifier, which provides the final diagnosis. An area under the receiver operating characteristic curve of 0.978 is achieved in distinguishing between benign and malignant breast lesions in patient data, without image reconstruction.
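The fusion step this abstract describes (network-derived malignancy scores combined with BI-RADS categories by an SVM) can be sketched as follows. Everything here is a synthetic stand-in, not the paper's data or model: the CNN scores and BI-RADS readings are simulated, and the classifier is an off-the-shelf scikit-learn SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical stand-ins: 0 = benign, 1 = malignant.
labels = rng.integers(0, 2, n)
# Simulated CNN malignancy scores in [0, 1] (benign ~0.25, malignant ~0.75).
cnn_score = np.clip(labels * 0.5 + rng.normal(0.25, 0.15, n), 0, 1)
# Simulated BI-RADS categories: benign in {2, 3}, malignant in {4, 5}.
birads = np.clip(labels * 2 + rng.integers(2, 4, n), 2, 5)

# Late fusion: an SVM takes the score and the BI-RADS reading as features.
X = np.column_stack([cnn_score, birads])
clf = SVC(probability=True).fit(X, labels)
acc = clf.score(X, labels)
```

The design point is that the SVM operates on two low-dimensional summaries, so no image reconstruction is needed at inference time.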
Affiliation(s)
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, 63130 St. Louis, USA
- Menghao Zhang
- Department of Electrical & Systems Engineering, Washington University in St. Louis, 63130 St. Louis, USA
- Minghao Xue
- Department of Biomedical Engineering, Washington University in St. Louis, 63130 St. Louis, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, 63130 St. Louis, USA
- Department of Electrical & Systems Engineering, Washington University in St. Louis, 63130 St. Louis, USA
- Department of Radiology, Washington University School of Medicine, 63110 St. Louis, USA
2
Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. [PMID: 38667493] [PMCID: PMC11048882] [DOI: 10.3390/diagnostics14080848]
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff
- Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy; (A.C.); (E.V.); (P.B.); (M.A.)
3
Liu Z, Jia J, Bai F, Ding Y, Han L, Bai G. Predicting rectal cancer tumor budding grading based on MRI and CT with multimodal deep transfer learning: A dual-center study. Heliyon 2024; 10:e28769. [PMID: 38590908] [PMCID: PMC11000007] [DOI: 10.1016/j.heliyon.2024.e28769]
Abstract
Objective: To investigate the effectiveness of a multimodal deep learning model in predicting tumor budding (TB) grading in rectal cancer (RC) patients. Materials and methods: A retrospective analysis was conducted on 355 patients with rectal adenocarcinoma from two hospitals. Among them, 289 patients from our institution were randomly divided into an internal training cohort (n = 202) and an internal validation cohort (n = 87) in a 7:3 ratio, while an additional 66 patients from another hospital constituted an external validation cohort. Various deep learning models were constructed and compared using T1CE and contrast-enhanced CT images, and the best-performing models were selected to build a multimodal fusion model. Based on univariable and multivariable logistic regression, clinical N staging and fecal occult blood were identified as independent risk factors and used to construct a clinical model. Decision-level fusion was employed to integrate these two models into an ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), DeLong's test, calibration curves, and decision curve analysis (DCA). Gradient-weighted Class Activation Mapping (Grad-CAM) was performed for model interpretation. Results: The multimodal fusion model outperformed the single-modal models, with AUC values of 0.869 (95% CI: 0.761-0.976) in the internal validation cohort and 0.848 (95% CI: 0.721-0.975) in the external validation cohort. The final ensemble model exhibited the best performance, with AUC values of 0.898 (95% CI: 0.820-0.975) in the internal validation cohort and 0.868 (95% CI: 0.768-0.968) in the external validation cohort.
Conclusion Multimodal deep learning models can effectively and non-invasively provide individualized predictions for TB grading in RC patients, offering valuable guidance for treatment selection and prognosis assessment.
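The decision-level fusion described above (an imaging branch and a clinical branch combined at the probability level) can be sketched as follows. The features, cohort, and the equal-weight average are synthetic stand-ins; the paper's actual networks and fusion rule are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)  # 0 = low TB grade, 1 = high TB grade (stand-in)
# Hypothetical inputs for the two branches:
img_feat = y[:, None] * 1.5 + rng.normal(size=(n, 4))        # deep imaging features
clin_feat = np.column_stack([y + rng.integers(0, 2, n),      # noisy clinical N stage
                             y * rng.integers(0, 2, n)])     # fecal occult blood flag

# Each branch is trained independently on its own modality.
img_model = LogisticRegression().fit(img_feat, y)
clin_model = LogisticRegression().fit(clin_feat, y)

# Decision-level fusion: average the two branches' predicted probabilities,
# then threshold the fused score.
p_fused = 0.5 * (img_model.predict_proba(img_feat)[:, 1]
                 + clin_model.predict_proba(clin_feat)[:, 1])
acc = ((p_fused > 0.5) == y).mean()
```

Fusing at the probability level (rather than concatenating raw features) lets each branch keep its own preprocessing and lets a weak branch be down-weighted without retraining the other.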
Affiliation(s)
- Ziyan Liu
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Jianye Jia
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Fan Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Yuxin Ding
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Lei Han
- Department of Medical Imaging, Huaian Hospital Affiliated to Xuzhou Medical University, Huaian, Jiangsu, China
- Genji Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
4
Yi H, Yang R, Wang Y, Wang Y, Guo H, Cao X, Zhu S, He X. Enhanced model iteration algorithm with graph neural network for diffuse optical tomography. Biomedical Optics Express 2024; 15:1910-1925. [PMID: 38495688] [PMCID: PMC10942675] [DOI: 10.1364/boe.509775]
Abstract
Diffuse optical tomography (DOT) employs near-infrared light to reveal the optical parameters of biological tissues. Because photons scatter strongly in tissue and surface measurements are limited, DOT reconstruction is severely ill-posed. The Levenberg-Marquardt (LM) algorithm is a popular iterative method for DOT; however, it is computationally expensive and its reconstruction accuracy needs improvement. In this study, we propose a neural-model-based iteration algorithm that combines a graph neural network with Levenberg-Marquardt (GNNLM), using a graph data structure to represent the finite element mesh. To evaluate the graph neural network, two GNN variants, a graph convolutional network (GCN) and a graph attention network (GAT), were employed in the experiments. The results showed that GCNLM performs best in simulation experiments within the training data distribution, whereas GATLM exhibits superior performance in simulation experiments outside the training data distribution and in real experiments with breast-like phantoms. This demonstrates that GATLM trained with simulation data can generalize well to situations outside the training data distribution without transfer training, offering the possibility of more accurate absorption coefficient distributions in clinical practice.
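For context, the classical LM update that GNNLM builds on is the damped Gauss-Newton step. The toy problem below (fitting y = a·exp(b·x) by least squares) is purely illustrative and is not the paper's DOT forward model; it only shows the baseline iteration that the GNN variants replace or augment.

```python
import numpy as np

def lm_step(params, x, y, lam):
    """One Levenberg-Marquardt update for the model y = a * exp(b * x)."""
    a, b = params
    pred = a * np.exp(b * x)
    r = y - pred
    # Jacobian of the prediction w.r.t. (a, b).
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    # Damped normal equations: (J^T J + lam I) delta = J^T r.
    H = J.T @ J + lam * np.eye(2)
    return params + np.linalg.solve(H, J.T @ r)

# Noise-free data generated with a = 2.0, b = 1.5.
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x)
p = np.array([1.5, 1.2])          # starting guess
for _ in range(50):
    p = lm_step(p, x, y, lam=1e-3)
```

The damping term `lam` interpolates between gradient descent (large `lam`) and pure Gauss-Newton (small `lam`); in DOT the Jacobian is assembled over the finite element mesh, which is exactly the structure the paper encodes as a graph.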
Affiliation(s)
- Huangjian Yi
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Ruigang Yang
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Yishuo Wang
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Yihan Wang
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Hongbo Guo
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Xu Cao
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Shouping Zhu
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Xiaowei He
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
5
Oyelade ON, Irunokhai EA, Wang H. A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification. Sci Rep 2024; 14:692. [PMID: 38184742] [PMCID: PMC10771515] [DOI: 10.1038/s41598-024-51329-8]
Abstract
Deep learning has been widely applied to unimodal medical image analysis with significant classification accuracy. However, real-world diagnosis of some chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy represent a few of the multimodal visual streams physicians consider when isolating cases of breast cancer. Unfortunately, most studies applying deep learning to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable given the challenging nature of multimodal image abnormality classification, where the fused high-dimensional heterogeneous features must be projected into a common representation space. This paper presents a novel deep learning approach, a dual/twin convolutional neural network (TwinCNN) framework, to address breast cancer image classification from multiple modalities. First, modality-based feature learning extracts both low- and high-level features using the networks embedded in TwinCNN. Second, to address the notorious problem of high dimensionality in the extracted features, a binary optimization method is adapted to eliminate non-discriminant features from the search space. Furthermore, a novel feature fusion method leverages the ground-truth and predicted labels for each sample to enable multimodal classification. The proposed method was evaluated on digital mammography images and digital histopathology breast biopsy samples from the benchmark MIAS and BreakHis datasets, respectively.
Experimental results showed classification accuracy and area under the curve (AUC) of 0.755 and 0.862 for histology, and 0.791 and 0.638 for mammography, for the single modalities. With the fused-feature method, classification accuracy reached 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirm that multimodal image classification based on combining image features and predicted labels improves performance. In addition, the study shows that feature dimensionality reduction based on a binary optimizer eliminates non-discriminant features that would otherwise bottleneck the classifier.
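Binary feature selection of the kind the abstract describes treats each candidate feature subset as a bit mask and searches for the mask that maximizes a classifier's validation score. The sketch below uses plain random search as the optimizer and synthetic data; the paper's hybrid metaheuristic and TwinCNN features are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, d = 200, 12
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0   # only the first 3 features are informative

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to masked features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LogisticRegression(), X[:, mask], y, cv=3).mean()

# Toy binary optimizer: random search over feature masks (a stand-in for
# the paper's hybrid binary metaheuristic).
best_mask, best_fit = np.zeros(d, dtype=bool), -1.0
for _ in range(60):
    mask = rng.random(d) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
```

The surviving mask (`best_mask`) is what would be handed to the downstream fusion stage; the point is that the search discards noise dimensions before any fusion happens.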
Affiliation(s)
- Olaide N Oyelade
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK
- Hui Wang
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK
6
Xue M, Zhang M, Li S, Zou Y, Zhu Q. Automated pipeline for breast cancer diagnosis using US assisted diffuse optical tomography. Biomedical Optics Express 2023; 14:6072-6087. [PMID: 38021111] [PMCID: PMC10659805] [DOI: 10.1364/boe.502244]
Abstract
Ultrasound (US)-guided diffuse optical tomography (DOT) is a portable, non-invasive imaging modality for breast cancer diagnosis and treatment response monitoring. However, DOT data pre-processing and image reconstruction often require labor-intensive manual processing, which hampers real-time diagnosis. In this study, we aim to provide an automated US-assisted DOT pre-processing, imaging, and diagnosis pipeline to achieve near real-time diagnosis. We have developed an automated DOT pre-processing method including motion detection, mismatch classification using a deep learning approach, and outlier removal. US lesion information needed for DOT reconstruction was extracted by a semi-automated lesion segmentation approach combined with a US reading algorithm. A deep learning model was used to evaluate the quality of the reconstructed DOT images, and a two-step deep learning model developed earlier provides the final diagnosis based on US imaging features together with DOT measurements and imaging results. The presented US-assisted DOT pipeline accurately processed the DOT measurements and reconstructions and reduced the procedure time to 2-3 minutes while maintaining classification results comparable to those of the manually processed dataset.
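One generic way to implement the pipeline's outlier-removal stage is robust rejection based on the median absolute deviation (MAD). The snippet below is a stand-in: the paper's actual rejection criteria for optical measurements are not reproduced, and the data are simulated.

```python
import numpy as np

def remove_outliers(samples, k=3.5):
    """Drop samples whose MAD-based robust z-score exceeds k."""
    med = np.median(samples)
    mad = np.median(np.abs(samples - med)) or 1e-12  # guard against zero MAD
    score = 0.6745 * (samples - med) / mad           # ~z-score for normal data
    return samples[np.abs(score) < k]

# 100 well-behaved repeated measurements plus two gross outliers.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(1.0, 0.05, 100), [5.0, -4.0]])
clean = remove_outliers(data)
```

MAD-based screening is preferred over mean/standard-deviation screening here because a single gross outlier inflates the standard deviation and can mask itself, while the median statistics stay stable.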
Affiliation(s)
- Minghao Xue
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Menghao Zhang
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Yun Zou
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
7
Nouizi F, Kwong TC, Turong B, Nikkhah D, Sampathkumaran U, Gulsen G. Fast ICCD-based temperature modulated fluorescence tomography. Applied Optics 2023; 62:7420-7430. [PMID: 37855510] [DOI: 10.1364/ao.499281]
Abstract
Fluorescence tomography (FT) has become a powerful preclinical imaging modality with great potential for several clinical applications. Although it has superior sensitivity and utilizes low-cost instrumentation, the highly scattering nature of biological tissue makes FT in thick samples challenging, resulting in poor resolution and low quantitative accuracy. To overcome these limitations, we previously introduced temperature modulated fluorescence tomography (TMFT), which is based on two key elements: (1) temperature-sensitive fluorescent agents (ThermoDots) and (2) high-intensity focused ultrasound (HIFU). The fluorescence emission of ThermoDots increases up to a hundredfold with a temperature elevation of only a few degrees. This exceptional and reversible response enables their modulation, which in turn allows their localization using the HIFU. Their localization is then used as functional a priori information during FT image reconstruction to resolve their distribution with higher spatial resolution. The previous version of the TMFT system was based on a cooled CCD camera operating in a step-and-shoot mode, which required a long total imaging time even for a small region of interest (ROI). In this paper, we present the latest version of our TMFT technology, which uses a much faster continuous HIFU scanning mode based on an intensified CCD (ICCD) camera. To the best of our knowledge, this new version can capture the whole 50 × 30 mm² field of view (FOV) at once and reduces the total imaging time to 30 min, while preserving the same high resolution (∼1.3 mm) and superior quantitative accuracy (<7% error) as the previous versions. This new method is therefore an important step toward utilizing TMFT for preclinical imaging.
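The benefit of a localization prior in a linear reconstruction can be sketched with a toy Tikhonov example: relaxing the regularization penalty inside the (here, HIFU-derived) prior region pulls the solution toward the true support. This is purely illustrative with a random sensitivity matrix, not the paper's reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_vox = 40, 100
A = rng.normal(size=(n_meas, n_vox))       # toy sensitivity matrix
x_true = np.zeros(n_vox)
x_true[45:50] = 1.0                        # fluorophore occupies 5 voxels
y = A @ x_true                             # noise-free measurements

def solve(reg_diag, lam=1.0):
    """Weighted Tikhonov: minimize ||Ax - y||^2 + lam * sum_i w_i x_i^2."""
    return np.linalg.solve(A.T @ A + lam * np.diag(reg_diag), A.T @ y)

x_plain = solve(np.ones(n_vox))            # uniform penalty everywhere
prior = np.ones(n_vox)
prior[45:50] = 0.01                        # low penalty inside the prior ROI
x_prior = solve(prior)

err_plain = np.linalg.norm(x_plain - x_true)
err_prior = np.linalg.norm(x_prior - x_true)
```

Because the problem is underdetermined (40 measurements, 100 voxels), the uniform penalty smears the solution, while the spatially weighted penalty concentrates it in the region the prior flags.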
8
Zhang M, Li S, Xue M, Zhu Q. Two-stage classification strategy for breast cancer diagnosis using ultrasound-guided diffuse optical tomography and deep learning. Journal of Biomedical Optics 2023; 28:086002. [PMID: 37638108] [PMCID: PMC10457211] [DOI: 10.1117/1.jbo.28.8.086002]
Abstract
Significance: Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated great potential for breast cancer diagnosis, where real-time or near real-time diagnosis with high accuracy is desired. Aim: We aim to use US-guided DOT to achieve automated, fast, and accurate classification of breast lesions. Approach: We propose a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to identify benign lesions. The remaining suspicious lesions are passed to the second stage, which combines US image features, DOT histogram features, and 3D DOT reconstructed images for the final diagnosis. Results: The first stage alone identified 73.0% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage approach achieved an area under the receiver operating characteristic curve of 0.946, outperforming all single-modality models and a single-stage classification model that combines all US image, DOT histogram, and imaging features. Conclusions: The proposed two-stage classification strategy achieves better classification accuracy than single-modality models and a single-stage model combining all features, and can potentially distinguish breast cancers from benign lesions in near real-time.
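The cascade pattern this abstract describes can be sketched as follows: a cheap first-stage model screens out clearly benign cases at a conservative threshold, and only the remainder reaches a second, richer model. The data, features, and thresholds are synthetic stand-ins, not the paper's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
y = rng.integers(0, 2, n)                       # 0 = benign, 1 = malignant
X1 = y[:, None] + rng.normal(0, 0.8, (n, 3))    # cheap stage-1 features
X2 = y[:, None] + rng.normal(0, 0.4, (n, 6))    # richer stage-2 features

m1 = LogisticRegression().fit(X1, y)
m2 = LogisticRegression().fit(X2, y)

# Stage 1: call "benign" only when the malignancy probability is very low,
# so that few malignant cases are screened out by the cheap model.
p1 = m1.predict_proba(X1)[:, 1]
screened_benign = p1 < 0.1
# Stage 2: the rest get the richer model's decision.
final = np.where(screened_benign, 0, m2.predict_proba(X2)[:, 1] > 0.5)
missed = np.mean(final[y == 1] == 0)            # malignant cases called benign
```

The conservative stage-1 threshold trades a modest screening rate for high sensitivity, which is what allows the cascade to skip the expensive second stage (in the paper, image reconstruction) for a large fraction of benign cases.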
Affiliation(s)
- Menghao Zhang
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Shuying Li
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Minghao Xue
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Quing Zhu
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Washington University School of Medicine, Department of Radiology, St. Louis, Missouri, United States