1
Csore J, Roy TL, Wright G, Karmonik C. Unsupervised classification of multi-contrast magnetic resonance histology of peripheral arterial disease lesions using a convolutional variational autoencoder with a Gaussian mixture model in latent space: A technical feasibility study. Comput Med Imaging Graph 2024; 115:102372. [PMID: 38581959] [DOI: 10.1016/j.compmedimag.2024.102372] [Received: 11/20/2023] [Revised: 02/09/2024] [Accepted: 03/18/2024] [Indexed: 04/08/2024]
Abstract
PURPOSE To investigate the feasibility of a deep learning algorithm combining a variational autoencoder (VAE) and two-dimensional (2D) convolutional neural networks (CNNs) for automatically quantifying hard tissue presence and morphology in multi-contrast magnetic resonance (MR) images of peripheral arterial disease (PAD) occlusive lesions. METHODS Multi-contrast MR images (T2-weighted and ultrashort echo time) were acquired from lesions harvested from six amputated legs with high isotropic spatial resolution (0.078 mm and 0.156 mm, respectively) at 9.4 T. A total of 4014 pseudo-color combined images were generated, with 75% used to train a VAE employing custom 2D CNN layers. A Gaussian mixture model (GMM) was employed to classify the latent space data into four tissue classes: I) concentric calcified (c), II) eccentric calcified (e), III) occluded with hard tissue (h) and IV) occluded with soft tissue (s). Probabilities of test images, encoded by the trained VAE, were used to evaluate model performance. RESULTS GMM component classification probabilities ranged from 0.92 to 0.97 for class (c), 1.00 for class (e), 0.82-0.95 for class (h) and 0.56-0.93 for the remaining class (s). Due to the complexity of soft-tissue lesions, reflected in the heterogeneity of the pseudo-color images, more GMM components (n=17) were attributed to class (s) than to the other three classes (c, e and h) (n=6). CONCLUSION The combination of a 2D CNN VAE and a GMM achieves high classification probabilities for hard tissue-containing lesions. Automatic recognition of these classes may aid therapeutic decision-making and help identify uncrossable lesions prior to endovascular intervention.
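The pipeline described in this abstract, encoding images with a trained VAE and then fitting a Gaussian mixture model in latent space, can be sketched with scikit-learn. The latent vectors below are synthetic stand-ins (in the study they would come from the trained 2D CNN encoder), and all dimensions and counts here are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for VAE latent vectors of training images: four synthetic,
# well-separated clusters in an 8-dimensional latent space.
latents_train = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(100, 8)) for c in (-3.0, -1.0, 1.0, 3.0)
])

# Fit a GMM in latent space. The paper attributes more components to the
# heterogeneous soft-tissue class (n=17) than to the others; here a single
# component per class keeps the sketch minimal.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(latents_train)

# Classify a held-out latent vector: predict_proba yields the per-component
# membership probabilities used to score test images.
latent_test = rng.normal(loc=3.0, scale=0.3, size=(1, 8))
probs = gmm.predict_proba(latent_test)
print(probs.round(2))
```

Component-to-class assignment (mapping each fitted component to one of the four tissue classes) would follow as a separate labeling step, as the abstract implies.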
Affiliation(s)
- Judit Csore
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA; Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, Budapest 1122, Hungary.
- Trisha L Roy
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
- Graham Wright
- Sunnybrook Research Institute, 2075 Bayview Avenue, Toronto, Ontario M4N 3M5, Canada
- Christof Karmonik
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
2
Ramamoorthy P, Ramakantha Reddy BR, Askar SS, Abouhawwash M. Histopathology-based breast cancer prediction using deep learning methods for healthcare applications. Front Oncol 2024; 14:1300997. [PMID: 38894870] [PMCID: PMC11184215] [DOI: 10.3389/fonc.2024.1300997] [Received: 10/01/2023] [Accepted: 04/12/2024] [Indexed: 06/21/2024]
Abstract
Breast cancer (BC) is the leading cause of female cancer mortality and a major threat to women's health. Deep learning methods have recently been used extensively in many medical domains, especially in detection and classification applications. Studying histological images for the automatic diagnosis of BC is important for patients and their prognosis. Owing to the complexity and variety of histology images, manual examination can be difficult and error-prone, and thus requires experienced pathologists. Therefore, two publicly accessible datasets, BreakHis and invasive ductal carcinoma (IDC), are used in this study to analyze histopathological images of BC. The gathered images from BreakHis and IDC are first pre-processed using super-resolution generative adversarial networks (SRGANs), which create high-resolution images from low-quality ones, to provide useful inputs for the prediction stage. The SRGAN concept combines components of conventional generative adversarial network (GAN) loss functions with efficient sub-pixel nets. Next, the high-quality images are sent to the data augmentation stage, where new data points are created through small adjustments to the dataset: rotation, random cropping, mirroring, and color-shifting. Patch-based feature extraction using Inception V3 and ResNet-50 (PFE-INC-RES) is then employed to extract features from the augmented images. Finally, the extracted features are processed with transductive long short-term memory (TLSTM) to improve classification accuracy by decreasing the number of false positives. The suggested PFE-INC-RES is evaluated against existing methods on the BreakHis dataset, achieving accuracy of 99.84%, specificity of 99.71%, sensitivity of 99.78%, and F1-score of 99.80%; on the IDC dataset it achieves F1-score of 99.08%, accuracy of 99.79%, specificity of 98.97%, and sensitivity of 99.17%.
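The "efficient sub-pixel nets" this abstract mentions upscale an image by rearranging channels into spatial positions (depth-to-space, often called pixel shuffle). A minimal NumPy sketch of that rearrangement, written for illustration and independent of the authors' SRGAN implementation:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Depth-to-space: (H, W, C*r*r) -> (H*r, W*r, C).

    The core operation of efficient sub-pixel upsampling: each group of
    r*r channels is rearranged into an r x r spatial block.
    """
    h, w, crr = x.shape
    c = crr // (r * r)
    # Split channels into (r, r, C), then interleave into the spatial dims.
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)  # (H, r, W, r, C)
    return x.reshape(h * r, w * r, c)

# A 4x4 feature map with 4 channels becomes an 8x8 single-channel image.
feat = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
sr = pixel_shuffle(feat, r=2)
print(sr.shape)  # (8, 8, 1)
```

In an SRGAN generator this layer sits at the end of the network, so the convolutional work happens at low resolution and only the final rearrangement produces the high-resolution output.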
Affiliation(s)
- Prabhu Ramamoorthy
- Department of Electronics and Communication Engineering, Gnanamani College of Technology, Namakkal, India
- S. S. Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
3
Zadeh Shirazi A, Tofighi M, Gharavi A, Gomez GA. The Application of Artificial Intelligence to Cancer Research: A Comprehensive Guide. Technol Cancer Res Treat 2024; 23:15330338241250324. [PMID: 38775067] [PMCID: PMC11113055] [DOI: 10.1177/15330338241250324] [Received: 10/29/2023] [Revised: 03/28/2024] [Accepted: 04/08/2024] [Indexed: 05/25/2024]
Abstract
Advancements in AI have notably changed cancer research, improving patient care by enhancing detection, survival prediction, and treatment efficacy. This review covers the role of Machine Learning, Soft Computing, and Deep Learning in oncology, explaining key concepts and algorithms (like SVM, Naïve Bayes, and CNN) in a clear, accessible manner. It aims to make AI advancements understandable to a broad audience, focusing on their application in diagnosing, classifying, and predicting various cancer types, thereby underlining AI's potential to better patient outcomes. Moreover, we present a tabular summary of the most significant advances from the literature, offering a time-saving resource for readers to grasp each study's main contributions. The remarkable benefits of AI-powered algorithms in cancer care underscore their potential for advancing cancer research and clinical practice. This review is a valuable resource for researchers and clinicians interested in the transformative implications of AI in cancer care.
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
- Morteza Tofighi
- Department of Electrical Engineering, Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran
- Alireza Gharavi
- Department of Computer Science, Azad University, Mashhad Branch, Mashhad, Iran
- Guillermo A. Gomez
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
4
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183] [DOI: 10.1016/j.semcancer.2023.09.001] [Received: 05/01/2023] [Revised: 08/03/2023] [Accepted: 09/05/2023] [Indexed: 09/15/2023]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in segmentation, diagnosis, and prognosis of breast cancer. In this review, we overview the recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
5
Balasubramaniam S, Velmurugan Y, Jaganathan D, Dhanasekaran S. A Modified LeNet CNN for Breast Cancer Diagnosis in Ultrasound Images. Diagnostics (Basel) 2023; 13:2746. [PMID: 37685284] [PMCID: PMC10486538] [DOI: 10.3390/diagnostics13172746] [Received: 04/19/2023] [Revised: 07/06/2023] [Accepted: 07/11/2023] [Indexed: 09/10/2023]
Abstract
Convolutional neural networks (CNNs) have been extensively utilized in medical image processing to automatically extract meaningful features and classify various medical conditions, enabling faster and more accurate diagnoses. In this paper, LeNet, a classic CNN architecture, has been successfully applied to breast cancer data analysis. It demonstrates its ability to extract discriminative features and classify malignant and benign tumors with high accuracy, thereby supporting early detection and diagnosis of breast cancer. LeNet with a corrected Rectified Linear Unit (ReLU), a modification of the traditional ReLU activation function, has been found to improve performance in breast cancer data analysis tasks by addressing the "dying ReLU" problem and enhancing the discriminative power of the extracted features. This has led to more accurate, reliable breast cancer detection and diagnosis and improved patient outcomes. Batch normalization improves the performance and training stability of small and shallow CNN architectures like LeNet: it mitigates the effects of internal covariate shift, the change in the distribution of network activations during training. This classifier lessens the overfitting problem and reduces the running time. The designed classifier is evaluated against benchmark deep learning models, demonstrating a higher recognition rate. The accuracy of the breast image recognition rate is 89.91%. This model achieves better performance in segmentation, feature extraction, classification, and breast cancer tumor detection.
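The two modifications highlighted above can be sketched in a few lines of NumPy: a ReLU variant that keeps a small slope for negative inputs so units cannot permanently "die" (shown here as a leaky ReLU, one common remedy; the paper's exact "corrected ReLU" formulation may differ), and a batch-normalization forward pass that standardizes activations per feature. All names and constants below are illustrative:

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """ReLU variant that passes a small slope for x < 0, so units whose
    pre-activations go negative still receive gradient (a 'dying ReLU' fix)."""
    return np.where(x > 0, x, alpha * x)

def batch_norm(x: np.ndarray, gamma: float = 1.0, beta: float = 0.0,
               eps: float = 1e-5) -> np.ndarray:
    """Standardize a batch of activations per feature, then rescale/shift.
    Keeps activation distributions comparable across mini-batches,
    mitigating internal covariate shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A toy batch of 3 samples with 2 features each.
batch = np.array([[1.0, -2.0], [3.0, 0.0], [5.0, 2.0]])
normed = batch_norm(batch)
activated = leaky_relu(normed)
print(normed.mean(axis=0).round(6), normed.std(axis=0).round(3))
```

In a trained network, `gamma` and `beta` are learned per channel, and running statistics replace the batch statistics at inference time.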
Affiliation(s)
- Yuvarajan Velmurugan
- Computer Science and Engineering, Sona College of Technology, Salem 636005, India
- Dhayanithi Jaganathan
- Computer Science and Engineering, Sona College of Technology, Salem 636005, India
6
Csore J, Karmonik C, Wilhoit K, Buckner L, Roy TL. Automatic Classification of Magnetic Resonance Histology of Peripheral Arterial Chronic Total Occlusions Using a Variational Autoencoder: A Feasibility Study. Diagnostics (Basel) 2023; 13:1925. [PMID: 37296778] [DOI: 10.3390/diagnostics13111925] [Received: 04/10/2023] [Revised: 05/18/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
The novel approach of our study consists in adapting and evaluating a custom-made variational autoencoder (VAE) using two-dimensional (2D) convolutional neural networks (CNNs) on magnetic resonance imaging (MRI) images to differentiate soft vs. hard plaque components in peripheral arterial disease (PAD). Five amputated lower extremities were imaged on a clinical ultra-high-field 7 Tesla MRI scanner. Ultrashort echo time (UTE), T1-weighted (T1w) and T2-weighted (T2w) datasets were acquired. Multiplanar reconstruction (MPR) images were obtained from one lesion per limb. Images were aligned to each other and pseudo-color red-green-blue images were created. Four areas in latent space were defined corresponding to the sorted images reconstructed by the VAE. Images were classified from their position in latent space and scored using a tissue score (TS) as follows: (1) lumen patent, TS: 0; (2) partially patent, TS: 1; (3) mostly occluded with soft tissue, TS: 3; (4) mostly occluded with hard tissue, TS: 5. The average and relative percentage of TS were calculated per lesion, with the average defined as the sum of the tissue scores for all images divided by the total number of images. In total, 2390 MPR reconstructed images were included in the analysis. The relative percentage of the average tissue score varied from fully patent (lesion #1) to the presence of all four classes. Lesions #2, #3 and #5 were classified as containing all tissue types except "mostly occluded with hard tissue", while lesion #4 contained all four (ranges (I): 0.2-100%, (II): 46.3-75.9%, (III): 18-33.5%, (IV): 20%). Training the VAE was successful, as images with soft and hard tissues in PAD lesions were satisfactorily separated in latent space. Using a VAE may assist in the rapid classification of MRI histology images acquired in a clinical setup, facilitating endovascular procedures.
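The per-lesion scoring described in this abstract (an average tissue score computed as the sum of per-image scores over the image count, plus the relative share of each class) reduces to a few lines. The image labels below are invented for illustration and do not correspond to any lesion in the study:

```python
from collections import Counter

# Tissue score per class, as defined in the study: lumen patent = 0,
# partially patent = 1, mostly occluded soft = 3, mostly occluded hard = 5.
TISSUE_SCORE = {"patent": 0, "partial": 1, "soft": 3, "hard": 5}

def lesion_summary(image_classes):
    """Average tissue score and relative class percentages for one lesion."""
    n = len(image_classes)
    avg_ts = sum(TISSUE_SCORE[c] for c in image_classes) / n
    counts = Counter(image_classes)
    percentages = {c: 100.0 * counts[c] / n for c in counts}
    return avg_ts, percentages

# Hypothetical lesion with 10 classified MPR images.
images = ["patent"] * 2 + ["partial"] * 3 + ["soft"] * 3 + ["hard"] * 2
avg, pct = lesion_summary(images)
print(avg, pct)  # 2.2 and per-class percentages summing to 100
```

The non-uniform scores (0, 1, 3, 5) weight the average toward hard-tissue occlusion, so a high per-lesion score flags the lesions most likely to be difficult to cross.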
Affiliation(s)
- Judit Csore
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
- Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, 1122 Budapest, Hungary
- Christof Karmonik
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Kayla Wilhoit
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Lily Buckner
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Trisha L Roy
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA