1
Guo J, Chen B, Cao H, Dai Q, Qin L, Zhang J, Zhang Y, Zhang H, Sui Y, Chen T, Yang D, Gong X, Li D. Cross-modal deep learning model for predicting pathologic complete response to neoadjuvant chemotherapy in breast cancer. NPJ Precis Oncol 2024; 8:189. PMID: 39237596; PMCID: PMC11377584; DOI: 10.1038/s41698-024-00678-8. Received 01/24/2024; accepted 08/26/2024.
Abstract
Pathological complete response (pCR) serves as a critical measure of the success of neoadjuvant chemotherapy (NAC) in breast cancer, directly influencing subsequent therapeutic decisions. With the continuous advancement of artificial intelligence, methods for early and accurate prediction of pCR are being extensively explored. In this study, we propose a cross-modal multi-pathway automated prediction model that integrates temporal and spatial information. This model fuses digital pathology images from biopsy specimens and multi-temporal ultrasound (US) images to predict pCR status early in NAC. The model demonstrates exceptional predictive efficacy. Our findings lay the foundation for developing personalized treatment paradigms based on individual responses. This approach has the potential to become a critical auxiliary tool for the early prediction of NAC response in breast cancer patients.
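As a rough illustration of the multi-pathway idea, the sketch below fuses per-modality embeddings before a shared prediction head. Everything here is a hypothetical stand-in (random-projection "encoders", feature dimensions, synthetic inputs), not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoders": fixed random projections mapping each modality to a
# shared 32-dim embedding (a real model would use trained CNN/transformer backbones).
def make_encoder(in_dim, out_dim=32):
    W = rng.normal(size=(in_dim, out_dim)) / np.sqrt(in_dim)
    return lambda x: np.tanh(x @ W)

encode_pathology = make_encoder(in_dim=512)  # biopsy pathology-image features
encode_us_t0 = make_encoder(in_dim=256)      # pre-treatment ultrasound features
encode_us_t1 = make_encoder(in_dim=256)      # mid-treatment ultrasound features

# One synthetic patient: pretend feature vectors from each modality/time point.
pathology = rng.normal(size=512)
us_t0 = rng.normal(size=256)
us_t1 = rng.normal(size=256)

# Multi-pathway fusion: concatenate per-pathway embeddings, then a linear head.
fused = np.concatenate([encode_pathology(pathology),
                        encode_us_t0(us_t0),
                        encode_us_t1(us_t1)])      # shape (96,)
w, b = rng.normal(size=fused.shape[0]) * 0.1, 0.0
p_pcr = 1.0 / (1.0 + np.exp(-(fused @ w + b)))    # predicted pCR probability
print(round(float(p_pcr), 3))
```

The temporal component enters simply through separate pathways for each ultrasound time point; a trained model would learn cross-modal interactions rather than a fixed linear head.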
Affiliation(s)
- Jianming Guo
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Baihui Chen
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Hongda Cao
- School of Computer, Beihang University, 100191, Beijing, China
- Quan Dai
- Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, 610041, Chengdu, China
- Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, 610041, Chengdu, China
- Ling Qin
- Department of Pathology, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Jinfeng Zhang
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Youxue Zhang
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Huanyu Zhang
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Yuan Sui
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Tianyu Chen
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Dongxu Yang
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Xue Gong
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
- Dalin Li
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, 150000, Harbin, China
2
Sun J, Shen X, Zhang N, Zhang Q, Xing K, Liu Y. Combination of conventional ultrasound with quantitative and qualitative analyses of CEUS for the differentiation of benign and malignant breast solid lesions: A modified breast cancer model. Asian J Surg 2024:S1015-9584(24)01844-X. PMID: 39214812; DOI: 10.1016/j.asjsur.2024.08.104. Received 05/27/2024; revised 08/05/2024; accepted 08/15/2024.
Abstract
OBJECTIVE: Breast cancer has become one of the main diseases threatening women's health and lives. Ultrasound (US) is the first diagnostic option for many patients because it is radiation-free, convenient, and low-cost. Conventional US combined with contrast-enhanced US (CEUS) has improved diagnostic accuracy, but because so many parameters are involved, no international consensus on diagnostic criteria has been reached. A reliable diagnostic model is therefore needed that uses few parameters while increasing diagnostic accuracy.
METHODS: Data from 265 patients, including conventional US, CEUS, and postoperative pathological results, were collected. Twenty-one parameters from conventional US and from both qualitative and quantitative CEUS analyses were assessed by univariate and multivariate logistic regression, and the parameters that were independent influential factors were identified. A nomogram was then developed to visualize the contribution and linear weighting of each parameter. The new model was evaluated with calibration curves and the Hosmer-Lemeshow goodness-of-fit test.
RESULTS: Six independent influential factors for malignant breast tumors were identified: homogeneous echo, lesion vascularity, enhancement mode, enhancement shape, nourishing vessels, and slope. The area under the curve (AUC) was 0.933 in the training dataset and 0.860 in the test dataset. The modified model showed satisfactory diagnostic accuracy and operability.
CONCLUSION: Despite incorporating fewer parameters, the modified model maintained diagnostic accuracy. It is a convenient, effective, and easily deployable model for diagnosing malignant breast nodules.
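The nomogram construction described in the methods can be sketched as follows: fit a multivariate logistic regression, then rescale each coefficient's contribution onto a common points axis. The data, effect sizes, and training details below are simulated stand-ins, not the study's; only the six parameter names come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the six retained US/CEUS parameters.
names = ["homogeneous_echo", "lesion_vascularity", "enhancement_mode",
         "enhancement_shape", "nourishing_vessels", "slope"]
n = 265
X = rng.normal(size=(n, len(names)))
true_w = np.array([0.8, 1.2, 0.9, 0.7, 1.1, 0.6])   # hypothetical effects
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

# Fit multivariate logistic regression by plain gradient descent.
w = np.zeros(len(names))
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted malignancy probability
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

# A nomogram visualizes this linear predictor: each coefficient's
# contribution is mapped onto a shared 0-100 "points" axis.
points = 100 * np.abs(w) / np.abs(w).max()
for name, pts in zip(names, points):
    print(f"{name}: {pts:.0f} points")
```

The total points for a patient, summed across axes, map monotonically back onto the predicted probability of malignancy.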
Affiliation(s)
- Jingjing Sun
- Department of Ultrasound, Handan Central Hospital, Handan, China
- Xianghui Shen
- Department of Ultrasound, Handan Central Hospital, Handan, China
- Ning Zhang
- Department of Ultrasound, Handan Central Hospital, Handan, China
- Qiang Zhang
- Department of Ultrasound, Handan Central Hospital, Handan, China
- Kai Xing
- Department of Ultrasound, Handan Central Hospital, Handan, China
- Yanchao Liu
- Department of Ultrasound, Handan Central Hospital, Handan, China
3
Li Z, Wang D, Zhu X. Unveiling the functions of five recently characterized lncRNAs in cancer progression. Clin Transl Oncol 2024:10.1007/s12094-024-03619-w. PMID: 39066874; DOI: 10.1007/s12094-024-03619-w. Received 04/29/2024; accepted 07/11/2024.
Abstract
Numerous studies over the past few decades have shown that RNAs are multifaceted, multifunctional regulators of most cellular processes, contrary to the initial belief that they act only as intermediates in translating DNA into proteins. LncRNAs, transcripts longer than 200 nt that lack protein-coding ability, have recently been identified as central regulators of a variety of biochemical and cellular processes, particularly in cancer. When abnormally expressed, they are closely associated with tumor occurrence, metastasis, and tumor staging. Through searches of Google Scholar, PubMed, and CNKI, we identified five recently characterized lncRNAs (Lnc-SLC2A12-10:1, lncRNA BCRT1, lncRNA IGFBP4-1, lncRNA PCNAP1, and lncRNA CDC6) that have been linked to the promotion of cancer cell proliferation, invasion, and metastasis. This review encapsulates the existing research on, and molecular underpinnings of, these five newly identified lncRNAs across various types of cancer. It suggests that these novel lncRNAs hold potential as independent biomarkers for clinical diagnosis and prognosis and as candidates for therapeutic intervention. In parallel, we discuss the challenges inherent in research on these five lncRNAs and look forward to avenues for future exploration in this field.
Affiliation(s)
- Zhicheng Li
- Department of Urology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, 010050, Inner Mongolia, China
- Dan Wang
- Department of Urology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, 010050, Inner Mongolia, China
- Xiaojun Zhu
- Department of Urology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, 010050, Inner Mongolia, China
4
Lu G, Tian R, Yang W, Liu R, Liu D, Xiang Z, Zhang G. Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours. Front Med (Lausanne) 2024; 11:1402967. PMID: 39036101; PMCID: PMC11257849; DOI: 10.3389/fmed.2024.1402967. Received 03/18/2024; accepted 06/14/2024.
Abstract
Objectives: This study aimed to develop a deep learning radiomics model using multimodal imaging to differentiate benign and malignant breast tumours.
Methods: Multimodal imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on the multimodal imaging, the experiment comprised three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers (SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and LR) on the different feature models. Through feature fusion using ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours.
Results: For traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity: 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy combining US, MG, and MRI achieved the highest sensitivity, 0.952 [0.887-1.000]. For deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity: 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies US + MRI and US + MG achieved the highest specificity, 0.954 [0.867-1.000]. For feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy, 0.968; stacking also achieved the highest AUC and specificity, 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The combined traditional radiomic and deep features of US + MG + MRI achieved the highest sensitivity, 1.000 [0.999-1.000], under the early fusion strategy.
Conclusion: This study demonstrated the potential of integrating deep learning and radiomic features from multimodal images. As a single modality, MRI based on radiomic features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-modality or radiomic models. The combined traditional radiomic and deep features of US + MG + MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provided more valuable information for differentiating benign from malignant breast tumours.
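The stacking fusion strategy can be illustrated with a minimal scikit-learn sketch: base classifiers' out-of-fold predictions feed a meta-learner. The synthetic features standing in for fused multimodal radiomic/deep features, and the particular base learners chosen here, are illustrative assumptions rather than the study's configuration:

```python
# Minimal stacking sketch on simulated "fused multimodal" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 322 "patients", 60 synthetic features standing in for fused US/MG/MRI features.
X, y = make_classification(n_samples=322, n_features=60, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two base learners (from the family of classifiers the study tried);
# a logistic-regression meta-learner combines their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"stacked accuracy: {acc:.3f}")
```

Early fusion corresponds to concatenating modality features before any classifier; late fusion, as above, combines per-model predictions instead.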
Affiliation(s)
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, Liaoning, China
- Ruibo Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Dongmei Liu
- Department of Ultrasound, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zijie Xiang
- Biomedical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Guoxu Zhang
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
5
Abd El-Khalek AA, Balaha HM, Alghamdi NS, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A concentrated machine learning-based classification system for age-related macular degeneration (AMD) diagnosis using fundus images. Sci Rep 2024; 14:2434. PMID: 38287062; PMCID: PMC10825213; DOI: 10.1038/s41598-024-52131-2. Received 10/30/2023; accepted 01/14/2024.
Abstract
The increase in eye disorders among older individuals has raised concerns, necessitating early detection through regular eye examinations. Age-related macular degeneration (AMD), a prevalent condition in individuals over 45, is a leading cause of vision impairment in the elderly. This paper presents a comprehensive computer-aided diagnosis (CAD) framework to categorize fundus images into geographic atrophy (GA), intermediate AMD, normal, and wet AMD categories. This is crucial for early detection and precise diagnosis of AMD, enabling timely intervention and personalized treatment strategies. We developed a novel system that extracts both local and global appearance markers from fundus images; these markers are obtained from the entire retina and from iso-regions aligned with the optic disc. Applying weighted majority voting to the best classifiers improves performance, yielding an accuracy of 96.85%, sensitivity of 93.72%, specificity of 97.89%, precision of 93.86%, F1 score of 93.72%, ROC AUC of 95.85%, balanced accuracy of 95.81%, and weighted sum of 95.38%. The system not only achieves high accuracy but also provides a detailed assessment of the severity of each retinal region. This approach ensures that the final diagnosis aligns with the physician's understanding of AMD, aiding ongoing treatment and follow-up for AMD patients.
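Weighted majority voting itself is straightforward: each classifier casts a vote for a class, weighted (for example) by its validation accuracy, and the class with the largest weighted total wins. A minimal sketch, with hypothetical classifier outputs and weights:

```python
from collections import defaultdict

# The four AMD categories from the abstract.
CLASSES = ["GA", "intermediate AMD", "normal", "wet AMD"]

def weighted_majority_vote(predictions, weights):
    """Sum each classifier's weight onto its predicted class; return the argmax.

    predictions: one class label per classifier; weights: one weight each.
    """
    score = defaultdict(float)
    for label, w in zip(predictions, weights):
        score[label] += w
    return max(score, key=score.get)

# Three hypothetical classifiers, weighted by (assumed) validation accuracies.
preds = ["wet AMD", "intermediate AMD", "wet AMD"]
weights = [0.97, 0.94, 0.90]
print(weighted_majority_vote(preds, weights))  # -> wet AMD (0.97 + 0.90 > 0.94)
```

With unequal weights, a single highly reliable classifier can also outvote two weaker ones that agree, which is the point of weighting over plain majority voting.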
Affiliation(s)
- Aya A Abd El-Khalek
- Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
- Hossam Magdy Balaha
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Abeer T Khalil
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mohy Eldin A Abo-Elsoud
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA
6
Farahat IS, Sharafeldeen A, Ghazal M, Alghamdi NS, Mahmoud A, Connelly J, van Bogaert E, Zia H, Tahtouh T, Aladrousy W, Tolba AE, Elmougy S, El-Baz A. An AI-based novel system for predicting respiratory support in COVID-19 patients through CT imaging analysis. Sci Rep 2024; 14:851. PMID: 38191606; PMCID: PMC10774502; DOI: 10.1038/s41598-023-51053-9. Received 10/09/2023; accepted 12/29/2023.
Abstract
The proposed AI-based diagnostic system predicts the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support the patients received. Computed tomography (CT) imaging is used to analyze three levels of respiratory support: Level 0 (minimal support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system first segments the COVID-19 lesions from the CT images and builds an appearance model for each lesion using a 2D, rotation-invariant Markov-Gibbs random field (MGRF) model. Three MGRF-based models are created, one for each level of respiratory support, allowing the system to differentiate between levels of severity in COVID-19 patients. For each patient, the system reaches a decision through a neural-network-based fusion system that combines the Gibbs energy estimates from the three MGRF-based models. The proposed system was evaluated on 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of predictive accuracy.
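The fusion step can be sketched as a small classifier over the three Gibbs-energy estimates, one per respiratory-support level, where a lower energy indicates a better fit to that level's MGRF appearance model. The energies below are simulated, and a single softmax layer stands in for the paper's neural-network fusion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Gibbs-energy estimates from the three MGRF appearance models.
# Each row: [E_level0, E_level1, E_level2]; the true level's model fits best
# (lower energy). These numbers are illustrative, not from the paper.
def gibbs_energies(n):
    levels = rng.integers(0, 3, size=n)
    E = rng.normal(loc=2.0, scale=0.3, size=(n, 3))
    E[np.arange(n), levels] -= 1.0
    return E, levels

E_train, y_train = gibbs_energies(240)
Y = np.eye(3)[y_train]                    # one-hot targets

# A tiny softmax "fusion network" trained on the three energies.
W = np.zeros((3, 3))
b = np.zeros(3)
for _ in range(500):
    logits = E_train @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.1 * E_train.T @ (P - Y) / len(y_train)
    b -= 0.1 * (P - Y).mean(axis=0)

E_test, y_test = gibbs_energies(67)
pred = np.argmax(E_test @ W + b, axis=1)
acc = (pred == y_test).mean()
print(f"fused accuracy: {acc:.2f}")
```

A real fusion network would have hidden layers and learn from actual per-model energies; the point here is only that the three scalar energies form the feature vector that the fusion stage classifies.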
Affiliation(s)
- Ibrahim Shawky Farahat
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Ali Mahmoud
- Department of Bioengineering, University of Louisville, Louisville, USA
- James Connelly
- Department of Radiology, University of Louisville, Louisville, USA
- Eric van Bogaert
- Department of Radiology, University of Louisville, Louisville, USA
- Huma Zia
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Tania Tahtouh
- College of Health Sciences, Abu Dhabi University, Abu Dhabi, UAE
- Waleed Aladrousy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ahmed Elsaid Tolba
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, Kafr El Sheikh, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, USA