1. Sowmiya S, Umapathy S, Alhajlah O, Almutairi F, Aslam S, Ahalya RK. F-Net: Follicles Net, an efficient tool for the diagnosis of polycystic ovarian syndrome using deep learning techniques. PLoS One 2024; 19:e0307571. PMID: 39146307; PMCID: PMC11326594; DOI: 10.1371/journal.pone.0307571.
Abstract
The study's primary objectives were: (i) to implement object detection of ovarian follicles using You Only Look Once (YOLO)v8 and subsequently segment the identified follicles with a hybrid fuzzy c-means-based active contour technique; and (ii) to extract statistical features and evaluate the effectiveness of both machine learning (ML) and deep learning (DL) classifiers in detecting polycystic ovary syndrome (PCOS). The research involved two datasets: dataset 1 comprised normal (N = 50) and PCOS (N = 50) subjects, and dataset 2 consisted of 100 normal and 100 PCOS-affected subjects. The YOLOv8 method was employed for follicle detection, and statistical features were derived using gray-level co-occurrence matrices (GLCM). For PCOS classification, ML models such as Random Forest (RF), K-star, and stochastic gradient descent (SGD) were employed. Additionally, pre-trained models such as MobileNet, ResNet152V2, DenseNet121, and a Vision Transformer were applied to categorize PCOS and healthy controls. Furthermore, a custom model named Follicles Net (F-Net) was developed to enhance performance and accuracy in PCOS classification. Remarkably, the F-Net model outperformed all ML and DL classifiers, achieving a classification accuracy of 95% on dataset 1 and 97.5% on dataset 2 in detecting PCOS. Consequently, the custom F-Net model holds significant potential as an effective automated diagnostic tool for distinguishing between normal and PCOS subjects.
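The abstract does not list which GLCM statistics were used. As an illustration only, a minimal pure-Python sketch of a single-offset gray-level co-occurrence matrix with two common texture features (contrast and homogeneity) could look like the following; the toy image and the pixel offset are hypothetical, not taken from the paper:

```python
from collections import defaultdict

def glcm(image, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    counts = defaultdict(int)
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def contrast(p):
    # sum over (i, j) of (i - j)^2 * p(i, j)
    return sum((i - j) ** 2 * v for (i, j), v in p.items())

def homogeneity(p):
    # sum over (i, j) of p(i, j) / (1 + |i - j|)
    return sum(v / (1 + abs(i - j)) for (i, j), v in p.items())

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]  # toy 2-level "ultrasound patch"
p = glcm(img)
print(round(contrast(p), 4), round(homogeneity(p), 4))  # → 0.3333 0.8333
```

In practice such features would be computed per segmented follicle ROI, typically over several offsets and gray-level quantizations.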
Affiliation(s)
- Sowmiya S
- Biomedical Engineering Department, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Snekhalatha Umapathy
- Biomedical Engineering Department, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Omar Alhajlah
- Department of Applied Computer Sciences, Applied Computer Science College, King Saud University, Riyadh, Saudi Arabia
- Fadiyah Almutairi
- Department of Information System, College of Computer and Information Sciences (CCIS), Majmaah University, Al Majmaah, Saudi Arabia
- Shabnam Aslam
- Department of Information Technology, College of Computer and Information Sciences (CCIS), Majmaah University, Al Majmaah, Saudi Arabia
- Ahalya R K
- Department of Biomedical Engineering, Easwari Engineering College, Chennai, Tamil Nadu, India
2. Chowa SS, Azam S, Montaha S, Bhuiyan MRI, Jonkman M. Improving the Automated Diagnosis of Breast Cancer with Mesh Reconstruction of Ultrasound Images Incorporating 3D Mesh Features and a Graph Attention Network. Journal of Imaging Informatics in Medicine 2024; 37:1067-1085. PMID: 38361007; DOI: 10.1007/s10278-024-00983-5.
Abstract
This study proposes a novel approach for classifying breast tumors in ultrasound images as benign or malignant by converting the region of interest (ROI) of a 2D ultrasound image into a 3D representation using the Point-E system, allowing in-depth analysis of underlying characteristics. Instead of relying solely on 2D imaging features, this method extracts 3D mesh features that describe tumor patterns more precisely. Ten informative and medically relevant mesh features are extracted and assessed with two feature selection techniques, and a feature pattern analysis is conducted to determine each feature's significance. A feature table with dimensions of 445 × 12 is generated and a graph is constructed, with the rows as nodes and the relationships among the nodes as edges. The Spearman correlation coefficient is employed to identify edges between strongly connected nodes (with a correlation score greater than or equal to 0.7), resulting in a graph containing 56,054 edges and 445 nodes. A graph attention network (GAT) is proposed for the classification task and the model is optimized with an ablation study, resulting in a highest accuracy of 99.34%. The performance of the proposed model is compared with ten machine learning (ML) models and a one-dimensional convolutional neural network, whose test accuracies range from 73% to 91%. Our novel 3D mesh-based approach, coupled with the GAT, yields promising performance for breast tumor classification, outperforms traditional models, and has the potential to reduce the time and effort of radiologists by providing a reliable diagnostic system.
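The graph construction step described above (rows of the feature table as nodes, an edge wherever Spearman's rank correlation is at least 0.7) can be sketched in plain Python. This is an illustrative reconstruction under stated assumptions, not the authors' code, and the toy feature rows are made up:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def build_edges(feature_rows, threshold=0.7):
    """Connect node pairs whose feature vectors correlate strongly."""
    edges = []
    for i in range(len(feature_rows)):
        for j in range(i + 1, len(feature_rows)):
            if spearman(feature_rows[i], feature_rows[j]) >= threshold:
                edges.append((i, j))
    return edges

rows = [[1, 2, 3, 4], [2, 3, 4, 5], [4, 3, 2, 1]]  # toy feature table, one row per image
print(build_edges(rows))  # → [(0, 1)]
```

With 445 nodes and a 0.7 threshold the same procedure would yield the dense 56,054-edge graph reported in the abstract.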
Affiliation(s)
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Rahad Islam Bhuiyan
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
3. Ayana G, Barki H, Choe SW. Pathological Insights: Enhanced Vision Transformers for the Early Detection of Colorectal Cancer. Cancers (Basel) 2024; 16:1441. PMID: 38611117; PMCID: PMC11010958; DOI: 10.3390/cancers16071441.
Abstract
Endoscopic pathological findings of the gastrointestinal tract are crucial for the early diagnosis of colorectal cancer (CRC). Previous deep learning work aimed at improving CRC detection performance and reducing subjective analysis errors has been limited to polyp segmentation: pathological findings were not considered, and only convolutional neural networks (CNNs), which cannot handle global image feature information, were utilized. This work introduces a novel vision transformer (ViT)-based approach for early CRC detection. The core components of the proposed approach are ViTCol, a boosted vision transformer for classifying endoscopic pathological findings, and PUTS, a vision transformer-based model for polyp segmentation. Results demonstrate the superiority of this vision transformer-based CRC detection method over existing CNN and vision transformer models. ViTCol exhibited outstanding performance in classifying pathological findings, with an area under the receiver operating characteristic curve (AUC) of 0.9999 ± 0.001 on the Kvasir dataset. PUTS provided outstanding results in segmenting polyp images, with a mean intersection over union (mIoU) of 0.8673 and 0.9092 on the Kvasir-SEG and CVC-Clinic datasets, respectively. This work underscores the value of spatial transformers in localizing input images; they can be seamlessly integrated into the main vision transformer network, enhancing the automated identification of critical image features for early CRC detection.
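The segmentation metric reported above, mean intersection over union, has a direct definition: per-class IoU averaged over classes. A minimal sketch over flattened masks (the toy masks below are hypothetical, not from the paper) might be:

```python
def miou(pred, target, num_classes):
    """Mean IoU over classes that appear in either mask."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# flattened 4x2 toy masks: 1 = polyp, 0 = background
pred   = [0, 0, 1, 1, 1, 0, 0, 0]
target = [0, 0, 1, 1, 0, 0, 0, 0]
print(round(miou(pred, target, 2), 4))  # → 0.75
```

Real evaluation would average this over all images in Kvasir-SEG or CVC-Clinic rather than a single toy mask.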
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Emerging Pathogens Institute, University of Florida, Gainesville, FL 32608, USA
4. Chowa SS, Azam S, Montaha S, Payel IJ, Bhuiyan MRI, Hasan MZ, Jonkman M. Graph neural network-based breast cancer diagnosis using ultrasound images with optimized graph construction integrating the medically significant features. J Cancer Res Clin Oncol 2023; 149:18039-18064. PMID: 37982829; PMCID: PMC10725367; DOI: 10.1007/s00432-023-05464-w.
Abstract
PURPOSE: An automated computerized approach can aid radiologists in the early diagnosis of breast cancer. In this study, a novel method is proposed for classifying breast tumors in ultrasound images as benign or malignant through a graph neural network (GNN) model utilizing clinically significant features.
METHOD: Ten informative features are extracted from the region of interest (ROI), based on the radiologists' diagnostic markers. The significance of the features is evaluated using density plots and t-test statistical analysis. A feature table is generated in which each row represents an individual image, considered as a node; the edges between nodes are determined by calculating the Spearman correlation coefficient. A graph dataset is generated and fed into the GNN model, which is configured through an ablation study and Bayesian optimization. The optimized model is then evaluated with different correlation thresholds to obtain the highest performance with a shallow graph, and performance consistency is validated with k-fold cross-validation. The impact of utilizing ROIs and handcrafted features for breast tumor classification is evaluated by comparing the model's performance with Histogram of Oriented Gradients (HOG) descriptor features from the entire ultrasound image. Lastly, a clustering-based analysis is performed to generate a new filtered graph, considering weak and strong relationships of the nodes based on their similarities.
RESULTS: With a threshold value of 0.95, the GNN model achieves the highest test accuracy of 99.48%, precision and recall of 100%, and an F1 score of 99.28%, reducing the number of edges by 85.5%. The GNN model's performance is 86.91% for the graph generated from HOG descriptor features with no threshold value. Different threshold values for the Spearman correlation score are experimented with and the performance is compared. No significant differences are observed between the previous graph and the filtered graph.
CONCLUSION: The proposed approach may aid radiologists in effective diagnosis and in learning the tumor patterns of breast cancer.
Affiliation(s)
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Israt Jahan Payel
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Md Rahad Islam Bhuiyan
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
5. Ayana G, Dese K, Dereje Y, Kebede Y, Barki H, Amdissa D, Husen N, Mulugeta F, Habtamu B, Choe SW. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics (Basel) 2023; 13:178. PMID: 36672988; PMCID: PMC9857963; DOI: 10.3390/diagnostics13020178.
Abstract
Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to address this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed as a technique to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, outperforming the CNN-based transfer learning models and vision transformer models trained from scratch. The technique can hence be applied in a clinical setting to improve the early diagnosis of breast cancer.
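The AUC reported above equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch, with made-up classifier scores, could be:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs where the positive scores higher (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]  # hypothetical classifier outputs
print(roc_auc(labels, scores))  # → 1.0 (perfectly separated classes)
```

An AUC of 1 ± 0, as reported for the proposed model, corresponds to every positive case being ranked above every negative one across all runs.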
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Yisak Dereje
- Department of Information Engineering, Marche Polytechnic University, 60121 Ancona, Italy
- Yonas Kebede
- Biomedical Engineering Unit, Black Lion Hospital, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Dechassa Amdissa
- Department of Basic and Applied Science for Engineering, Sapienza University of Rome, 00161 Roma, Italy
- Nahimiya Husen
- Department of Bioengineering and Robotics, Campus Bio-Medico University of Rome, 00128 Roma, Italy
- Fikadu Mulugeta
- Center of Biomedical Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
6. Ayana G, Choe SW. BUViTNet: Breast Ultrasound Detection via Vision Transformers. Diagnostics (Basel) 2022; 12:2654. PMID: 36359497; PMCID: PMC9689470; DOI: 10.3390/diagnostics12112654.
Abstract
Convolutional neural networks (CNNs) have enhanced ultrasound image-based early breast cancer detection. Vision transformers (ViTs) have recently surpassed CNNs as the most effective method for natural image analysis. ViTs have proven their capability to incorporate more global information than CNNs at lower layers, and their skip connections are more powerful than those of CNNs, which endows ViTs with superior performance. However, the effectiveness of ViTs in breast ultrasound imaging has not yet been investigated. Here, we present BUViTNet, breast ultrasound detection via ViTs, in which ViT-based multistage transfer learning is performed using ImageNet and cancer cell image datasets prior to transfer learning for classifying breast ultrasound images. We utilized two publicly available breast ultrasound image datasets, Mendeley and breast ultrasound images (BUSI), to train and evaluate our algorithm. The proposed method achieved the highest area under the receiver operating characteristic curve (AUC) of 1 ± 0, Matthews correlation coefficient (MCC) of 1 ± 0, and kappa score of 1 ± 0 on the Mendeley dataset. Furthermore, BUViTNet achieved the highest AUC of 0.968 ± 0.02, MCC of 0.961 ± 0.01, and kappa score of 0.959 ± 0.02 on the BUSI dataset. BUViTNet outperformed a ViT trained from scratch, ViT-based conventional transfer learning, and CNN-based transfer learning in classifying breast ultrasound images (p < 0.01 in all cases). Our findings indicate that improved transformers are effective in analyzing breast images and can provide an improved diagnosis if used in clinical settings. Future work will consider the use of a wider range of datasets and parameters for optimized performance.
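The Matthews correlation coefficient reported above is computed directly from the binary confusion matrix. A minimal sketch (the labels and predictions below are made up for illustration) could be:

```python
import math

def mcc(labels, preds):
    """Matthews correlation coefficient from binary labels/predictions:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

labels = [1, 1, 1, 0, 0, 0]
preds  = [1, 1, 0, 0, 0, 1]  # hypothetical predictions: one miss per class
print(round(mcc(labels, preds), 4))  # → 0.3333
```

An MCC of 1 ± 0, as on the Mendeley dataset, means every prediction matched its label in every run; MCC is often preferred over accuracy here because it stays informative under class imbalance.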
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
7. Ayana G, Ryu J, Choe SW. Ultrasound-Responsive Nanocarriers for Breast Cancer Chemotherapy. Micromachines 2022; 13:1508. PMID: 36144131; PMCID: PMC9503784; DOI: 10.3390/mi13091508.
Abstract
Breast cancer is the most common type of cancer; it is treated with surgical intervention, radiotherapy, chemotherapy, or a combination of these regimens. Despite chemotherapy's wide use, it has limitations such as poor bioavailability, adverse side effects, high-dose requirements, low therapeutic indices, development of multiple drug resistance, and non-specific targeting. Drug delivery vehicles or carriers, of which nanocarriers are prominent, have been introduced to overcome these limitations. Nanocarriers have been preferentially used in breast cancer chemotherapy because of their role in protecting therapeutic agents from degradation, enabling efficient drug concentration in target cells or tissues, and overcoming drug resistance, and because of their relatively small size. However, nanocarriers are affected by physiological barriers, the bioavailability of transported drugs, and other factors. To resolve these issues, external stimuli such as ultrasound, infrared light, thermal stimulation, microwaves, and X-rays have been introduced. Recently, ultrasound-responsive nanocarriers have become popular because they are cost-effective, non-invasive, specific, tissue-penetrating, and deliver high drug concentrations to their target. In this paper, we review recent developments in ultrasound-guided nanocarriers for breast cancer chemotherapy, discuss the relevant challenges, and provide insights into future directions.
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Jaemyung Ryu
- Department of Optical Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: (J.R.); (S.-w.C.); Tel.: +82-54-478-7781 (S.-w.C.); Fax: +82-54-462-1049 (S.-w.C.)
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: (J.R.); (S.-w.C.); Tel.: +82-54-478-7781 (S.-w.C.); Fax: +82-54-462-1049 (S.-w.C.)