1
Lu Z, Tang K, Wu Y, Zhang X, An Z, Zhu X, Feng Q, Zhao Y. BreasTDLUSeg: A coarse-to-fine framework for segmentation of breast terminal duct lobular units on histopathological whole-slide images. Comput Med Imaging Graph 2024; 118:102432. [PMID: 39461144] [DOI: 10.1016/j.compmedimag.2024.102432]
Abstract
Automatic segmentation of breast terminal duct lobular units (TDLUs) on histopathological whole-slide images (WSIs) is crucial for the quantitative evaluation of TDLUs in the diagnostic and prognostic analysis of breast cancer. However, TDLU segmentation remains a great challenge due to the highly heterogeneous sizes, structures, and morphologies of TDLUs, as well as the small areas they occupy on WSIs. In this study, we propose BreasTDLUSeg, an efficient coarse-to-fine two-stage framework based on multi-scale attention to achieve localization and precise segmentation of TDLUs on hematoxylin and eosin (H&E)-stained WSIs. BreasTDLUSeg consists of two networks: a superpatch-based patch-level classification network (SPPC-Net) and a patch-based pixel-level segmentation network (PPS-Net). SPPC-Net takes a superpatch as input and adopts a sub-region classification head to classify each patch within the superpatch as TDLU-positive or TDLU-negative. PPS-Net takes the TDLU-positive patches derived from SPPC-Net as input, deploying a multi-scale CNN-Transformer encoder to learn enhanced multi-scale morphological representations and an upsampler to generate pixel-wise segmentation masks for those patches. We also constructed two breast cancer TDLU datasets, containing a total of 530 superpatch images with patch-level annotations and 2322 patch images with pixel-level annotations, to enable the development of TDLU segmentation methods. Experiments on the two datasets demonstrate that BreasTDLUSeg outperforms other state-of-the-art methods, with the highest Dice similarity coefficients of 79.97% and 92.93%, respectively. The proposed method shows great potential to assist pathologists in the pathological analysis of breast cancer. An open-source implementation of our approach can be found at https://github.com/Dian-kai/BreasTDLUSeg.
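The coarse-to-fine idea in this abstract (classify patches within a superpatch first, then run pixel-level segmentation only on the positive ones) can be sketched as a simple skeleton. This is an illustration of the two-stage pattern, not the authors' implementation; `classify_patch` and `segment_patch` are hypothetical stand-ins for SPPC-Net and PPS-Net.

```python
import numpy as np

def coarse_to_fine_segment(superpatch, patch_size, classify_patch, segment_patch):
    """Coarse stage: classify each non-overlapping patch; fine stage: run the
    pixel-level segmenter only on patches flagged positive."""
    H, W = superpatch.shape[:2]
    mask = np.zeros((H, W), dtype=np.uint8)
    for y in range(0, H - patch_size + 1, patch_size):
        for x in range(0, W - patch_size + 1, patch_size):
            patch = superpatch[y:y + patch_size, x:x + patch_size]
            if classify_patch(patch):  # stand-in for the SPPC-Net decision
                # stand-in for the PPS-Net pixel-wise mask
                mask[y:y + patch_size, x:x + patch_size] = segment_patch(patch)
    return mask
```

Running the expensive pixel-level segmenter only where the cheap coarse classifier fires is what makes a two-stage design tractable on gigapixel WSIs.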
Affiliation(s)
- Zixiao Lu
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong, China
- Kai Tang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
- Yi Wu
- Wormpex AI Research, Bellevue, WA 98004, USA
- Xiaoxuan Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
- Ziqi An
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
- Xiongfeng Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
- Yinghua Zhao
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong, China
2
Alshemaimri BK. Novel Deep CNNs Explore Regions, Boundaries, and Residual Learning for COVID-19 Infection Analysis in Lung CT. Tomography 2024; 10:1205-1221. [PMID: 39195726] [PMCID: PMC11359787] [DOI: 10.3390/tomography10080091]
Abstract
COVID-19 poses a global health crisis, necessitating precise diagnostic methods for timely containment. However, accurately delineating COVID-19-affected regions in lung CT scans is challenging due to contrast variations and significant texture diversity. This study therefore introduces a novel two-stage classification and segmentation CNN approach for COVID-19 lung radiological pattern analysis. In the classification stage, a novel Residual-BRNet is developed to integrate boundary and regional operations with residual learning, capturing key COVID-19 radiological homogeneous regions, texture variations, and structural contrast patterns. In the second stage, infectious CT images undergo lesion segmentation using the newly proposed RESeg segmentation CNN. RESeg leverages both average- and max-pooling implementations to simultaneously learn region homogeneity and boundary-related patterns. Furthermore, novel pixel attention (PA) blocks are integrated into RESeg to effectively handle mildly COVID-19-infected regions. The evaluation of the proposed Residual-BRNet CNN in the classification stage demonstrates promising performance, achieving an accuracy of 97.97%, an F1-score of 98.01%, a sensitivity of 98.42%, and an MCC of 96.81%. Meanwhile, PA-RESeg in the segmentation phase achieves strong segmentation performance, with an IoU score of 98.43% and a Dice similarity score of 95.96% for the lesion region. The framework's effectiveness in detecting and segmenting COVID-19 lesions highlights its potential for clinical applications.
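Two of the segmentation ingredients named above — parallel average/max pooling and pixel attention — can be sketched in plain NumPy. This is a hedged illustration of the general mechanisms, not the RESeg or PA block definitions from the paper; the weight `w` and bias `b` are hypothetical learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(feat, w, b):
    """Per-pixel gate: a 1x1-convolution-style channel dot product produces one
    attention score per pixel, and the feature map is rescaled by it."""
    # feat: (C, H, W); w: (C,); b: scalar
    score = sigmoid(np.tensordot(w, feat, axes=(0, 0)) + b)  # (H, W)
    return feat * score[None, :, :]

def dual_pool(feat, k):
    """Parallel average- and max-pooling over k x k windows, summed, so both
    region homogeneity (mean) and boundary responses (max) are retained."""
    C, H, W = feat.shape
    out = np.zeros((C, H // k, W // k))
    for i in range(H // k):
        for j in range(W // k):
            win = feat[:, i * k:(i + 1) * k, j * k:(j + 1) * k]
            out[:, i, j] = win.mean(axis=(1, 2)) + win.max(axis=(1, 2))
    return out
```

The attention gate lets weakly activated (mildly infected) pixels be amplified or suppressed independently of their neighbors, which is the motivation the abstract gives for the PA blocks.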
Affiliation(s)
- Bader Khalid Alshemaimri
- Software Engineering Department, College of Computing and Information Sciences, King Saud University, Riyadh 11671, Saudi Arabia
3
Hong S, Wu J, Zhu L, Chen W. Brain tumor classification in VIT-B/16 based on relative position encoding and residual MLP. PLoS One 2024; 19:e0298102. [PMID: 38954731] [PMCID: PMC11218980] [DOI: 10.1371/journal.pone.0298102]
Abstract
Brain tumors pose a significant threat to health, and their early detection and classification are crucial. Currently, diagnosis relies heavily on pathologists conducting time-consuming morphological examinations of brain images, leading to subjective outcomes and potential misdiagnoses. In response to these challenges, this study proposes an improved Vision Transformer-based algorithm for human brain tumor classification. To overcome the limitations of small existing datasets, Homomorphic Filtering, Channels Contrast Limited Adaptive Histogram Equalization, and Unsharp Masking techniques are applied to enrich dataset images, enhancing information and improving model generalization. To address the limitation of the Vision Transformer's self-attention structure in capturing the order of input token sequences, a novel relative position encoding method is employed to enhance the overall predictive capability of the model. Furthermore, the introduction of residual structures in the Multi-Layer Perceptron tackles convergence degradation during training, leading to faster convergence and higher accuracy. Finally, this study comprehensively analyzes the model's performance on validation sets in terms of accuracy, precision, and recall. Experimental results demonstrate that the proposed model achieves a classification accuracy of 91.36% on an augmented open-source brain tumor dataset, surpassing the original ViT-B/16 accuracy by 5.54%. This validates the effectiveness of the proposed approach in brain tumor classification and offers a potential reference for clinical diagnosis by medical practitioners.
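The residual-MLP modification described here amounts to adding a skip connection around the transformer block's feed-forward sub-layer, so the identity path bypasses the nonlinearity and eases optimization. Below is a minimal NumPy sketch under assumed shapes, not the paper's exact layer; `w1`, `b1`, `w2`, `b2` are the usual two-layer MLP parameters.

```python
import numpy as np

def gelu(x):
    """tanh approximation of the GELU activation used in ViT MLPs."""
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def residual_mlp(x, w1, b1, w2, b2):
    """Feed-forward sub-layer with a skip connection:
    y = x + W2 . gelu(W1 . x + b1) + b2, so gradients flow through identity."""
    h = gelu(x @ w1 + b1)
    return x + h @ w2 + b2
```

With zero-initialized weights the block reduces to the identity map, which is exactly the property that makes residual structures easy to train from scratch.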
Affiliation(s)
- Shuang Hong
- School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Jin Wu
- School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Lei Zhu
- School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Weijie Chen
- School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
4
Zahoor MM, Khan SH, Alahmadi TJ, Alsahfi T, Mazroa ASA, Sakr HA, Alqahtani S, Albanyan A, Alshemaimri BK. Brain Tumor MRI Classification Using a Novel Deep Residual and Regional CNN. Biomedicines 2024; 12:1395. [PMID: 39061969] [PMCID: PMC11274019] [DOI: 10.3390/biomedicines12071395]
Abstract
Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, and pituitary tumors, as well as healthy images. Res-BRNet outperformed standard CNN models, achieving an accuracy of 98.22%, a sensitivity of 98.11%, an F1-score of 98.41%, and a precision of 98.22%. Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.
Affiliation(s)
- Mirza Mumtaz Zahoor
- Faculty of Computer Sciences, Ibadat International University, Islamabad 44000, Pakistan;
- Saddam Hussain Khan
- Department of Computer System Engineering, University of Engineering and Applied Science (UEAS), Swat 19060, Pakistan
- Tahani Jaser Alahmadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tariq Alsahfi
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia
- Alanoud S. Al Mazroa
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hesham A. Sakr
- Nile Higher Institute for Engineering and Technology, Mansoura 35511, Dakahlia, Egypt
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Abdullah Albanyan
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
5
Aljrees T. Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning. PLoS One 2024; 19:e0295632. [PMID: 38170713] [PMCID: PMC10763959] [DOI: 10.1371/journal.pone.0295632]
Abstract
Cervical cancer is a leading cause of mortality in women, emphasizing the need for early diagnosis and effective treatment. In line with the imperative of early intervention, automated identification of cervical cancer has emerged as a promising avenue, leveraging machine learning techniques to enhance both the speed and accuracy of diagnosis. However, an inherent challenge in the development of these automated systems is the presence of missing values in the datasets commonly used for cervical cancer detection. Missing data can significantly degrade the performance of machine learning models, potentially leading to inaccurate or unreliable results. This study addresses that critical challenge: handling missing data in automated cervical cancer identification. It presents a novel approach that combines three machine learning models into a stacked ensemble voting classifier, complemented by a KNN imputer to manage missing values. The proposed model achieves remarkable results, with an accuracy of 0.9941, a precision of 0.98, a recall of 0.96, and an F1 score of 0.97. The study examines three distinct scenarios: one involving the deletion of missing values, another utilizing KNN imputation, and a third employing PCA for imputing missing values. This research has significant implications for the medical field, offering medical experts a powerful tool for more accurate cervical cancer therapy and enhancing the overall effectiveness of testing procedures. By addressing missing-data challenges and achieving high accuracy, this work represents a valuable contribution to cervical cancer detection, ultimately aiming to reduce the impact of this disease on women's health and healthcare systems.
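The pipeline described — KNN imputation feeding a multi-model voting ensemble — maps directly onto standard scikit-learn components. The abstract does not name the three base models, so the ones below (logistic regression, decision tree, random forest) and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy data with missing values (stand-in for the cervical cancer dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan  # knock out ~10% of entries

model = make_pipeline(
    KNNImputer(n_neighbors=5),  # fill each missing value from the 5 nearest rows
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ],
        voting="soft",  # average the predicted class probabilities
    ),
)
model.fit(X, y)
acc = model.score(X, y)  # training accuracy of the imputation + ensemble pipeline
```

Putting the imputer inside the pipeline ensures imputation is re-fit per training fold, avoiding leakage when the model is cross-validated.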
Affiliation(s)
- Turki Aljrees
- College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin, Saudi Arabia
6
Khan SH, Alahmadi TJ, Alsahfi T, Alsadhan AA, Mazroa AA, Alkahtani HK, Albanyan A, Sakr HA. COVID-19 infection analysis framework using novel boosted CNNs and radiological images. Sci Rep 2023; 13:21837. [PMID: 38071373] [PMCID: PMC10710448] [DOI: 10.1038/s41598-023-49218-7]
Abstract
COVID-19, a novel pathogen that emerged in late 2019, can cause pneumonia with unique variants upon infection. Hence, the development of efficient diagnostic systems is crucial for accurately identifying infected patients and effectively mitigating the spread of the disease. However, building such a system poses several challenges because of the limited availability of labeled data, distortion and complexity in image representation, and variations in contrast and texture. Therefore, a novel two-phase analysis framework has been developed to scrutinize the subtle irregularities associated with COVID-19 contamination. In the first phase, a new convolutional neural network, STM-BRNet, is developed, which integrates Split-Transform-Merge (STM) blocks and feature map enrichment (FME) techniques. The STM block captures boundary- and region-specific features essential for detecting COVID-19-infectious CT slices. By incorporating the FME and transfer learning (TL) concepts into the STM blocks, multiple enhanced channels are generated to effectively capture minute variations in illumination and texture specific to COVID-19-infected images. In addition, residual multipath learning improves the learning capacity of STM-BRNet and progressively increases the feature representation through high-level boosting with TL. In the second phase, the COVID-19 CT scans are processed by the newly developed SA-CB-BRSeg segmentation CNN to accurately delineate infection in the images. SA-CB-BRSeg combines smooth and heterogeneous operations in both the encoder and decoder, structured to capture COVID-19 patterns including region homogeneity, texture variation, and borders. Furthermore, the SA-CB-BRSeg model incorporates the novel concept of CB in the decoder, where additional channels are combined using TL to enhance the learning of low-contrast regions. The developed STM-BRNet and SA-CB-BRSeg models achieve impressive results, with an accuracy of 98.01%, a recall of 98.12%, an F-score of 98.11%, a Dice similarity of 96.396%, and an IoU of 98.85%. The proposed framework will alleviate the workload and enhance the radiologist's decision-making capacity in identifying COVID-19-infected regions and evaluating the severity stages of the disease.
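The Split-Transform-Merge idea underlying the STM blocks (split the channel axis into groups, transform each group in its own branch, merge the results) can be sketched generically. The transforms below are arbitrary stand-ins for the paper's convolutional branches, shown only to make the control flow concrete.

```python
import numpy as np

def split_transform_merge(x, transforms):
    """Split the channel axis into one group per transform, apply each
    transform to its group, then merge the branch outputs by concatenation."""
    # x: (C, H, W); transforms: list of callables mapping (c, H, W) -> (c, H, W)
    groups = np.array_split(x, len(transforms), axis=0)
    return np.concatenate([t(g) for t, g in zip(groups and transforms, groups)], axis=0)
```

Because each branch sees only a slice of the channels, the per-branch transforms stay cheap while the merged output preserves the full channel width.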
Affiliation(s)
- Saddam Hussain Khan
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat, 19060, Pakistan
- Tahani Jaser Alahmadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Tariq Alsahfi
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Abeer Abdullah Alsadhan
- Computer Science Department, Applied College, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Alanoud Al Mazroa
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Hend Khalid Alkahtani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Abdullah Albanyan
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hesham A Sakr
- Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
7
Alrowais F, Alotaibi FA, Hassan AQA, Marzouk R, Alnfiai MM, Sayed A. Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification on Breast Histopathology Images. Biomimetics (Basel) 2023; 8:538. [PMID: 37999179] [PMCID: PMC10669319] [DOI: 10.3390/biomimetics8070538]
Abstract
Breast cancer (BC) is a prevalent disease worldwide, and accurate diagnosis is vital for successful treatment. Histopathological (HI) inspection, particularly the detection of mitotic nuclei, plays a pivotal role in the prognosis and diagnosis of BC. It involves the detection and classification of mitotic nuclei within breast tissue samples. Conventionally, the detection of mitotic nuclei has been a subjective task that is time-consuming for pathologists to perform manually. Automatic classification using computer algorithms, especially deep learning (DL) algorithms, has been developed as a beneficial alternative. DL models, and CNNs in particular, have shown outstanding performance in different image classification tasks, including mitotic nuclei classification: CNNs can learn intricate hierarchical features from HI images, making them suitable for detecting subtle patterns related to mitotic nuclei. In this article, we present an Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification (EPOADL-MNC) technique for breast HI. The EPOADL-MNC system examines histopathology images to classify mitotic and non-mitotic cells. In this technique, the ShuffleNet model is employed for feature extraction, and the EPOA is used to tune the hyperparameters of the ShuffleNet model. Finally, an adaptive neuro-fuzzy inference system (ANFIS) performs the classification and detection of mitotic cell nuclei on histopathology images. A series of simulations validates the improved detection performance of the EPOADL-MNC technique, which outperforms existing DL techniques with a maximum accuracy of 97.83%.
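Metaheuristic hyperparameter tuning of the kind EPOA performs can be illustrated with a generic population-based search loop. This is a stand-in sketch, not the pelican optimization algorithm itself, and the quadratic objective below is a toy in place of validation accuracy.

```python
import numpy as np

def tune(objective, bounds, pop=8, iters=20, seed=0):
    """Generic population-based hyperparameter search: candidates move toward
    the current best with shrinking random steps; only improvements are kept."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    scores = np.array([objective(x) for x in X])
    for t in range(iters):
        best = X[scores.argmax()]
        step = (hi - lo) * (1 - t / iters)  # shrink exploration over time
        cand = np.clip(
            X + rng.uniform(-0.5, 0.5, X.shape) * step + 0.5 * (best - X),
            lo, hi,
        )
        cand_scores = np.array([objective(x) for x in cand])
        improved = cand_scores > scores
        X[improved], scores[improved] = cand[improved], cand_scores[improved]
    return X[scores.argmax()], scores.max()

# Maximize a toy objective with optimum at (0.3, 0.7) over the unit square.
best, score = tune(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2),
                   [(0, 1), (0, 1)])
```

In the paper's setting the objective would be validation accuracy of the ShuffleNet-plus-ANFIS pipeline and the bounds would cover learning rate, batch size, and similar hyperparameters.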
Affiliation(s)
- Fadwa Alrowais
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Faiz Abdullah Alotaibi
- Department of Information Science, College of Humanities and Social Sciences, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
- Abdulkhaleq Q. A. Hassan
- Department of English, College of Science and Arts at Mahayil, King Khalid University, Abha 62529, Saudi Arabia
- Radwa Marzouk
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mrim M. Alnfiai
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif P.O. Box 11099, Taif 21944, Saudi Arabia
- Ahmed Sayed
- Research Center, Future University in Egypt, New Cairo 11835, Egypt
8
Pramanik P, Pramanik R, Schwenker F, Sarkar R. DBU-Net: Dual branch U-Net for tumor segmentation in breast ultrasound images. PLoS One 2023; 18:e0293615. [PMID: 37930947] [PMCID: PMC10627442] [DOI: 10.1371/journal.pone.0293615]
Abstract
Breast ultrasound images often have low imaging quality and unclear target boundaries. These issues make it challenging for physicians to accurately identify and outline tumors when diagnosing patients. Since precise segmentation is crucial for diagnosis, there is a strong need for an automated method that enhances segmentation accuracy and can serve as a technical aid in diagnosis. Recently, the U-Net and its variants have shown great success in medical image segmentation. In this study, drawing inspiration from the U-Net concept, we propose a new variant of the U-Net architecture, called DBU-Net, for tumor segmentation in breast ultrasound images. To enhance the feature extraction capabilities of the encoder, we introduce a novel approach involving two distinct encoding paths: the first uses the original image, while the second uses an image created with the Roberts edge filter, in which edges are highlighted. This dual-branch encoding strategy helps extract semantically rich information through a mutually informative learning process. At each level of the encoder, both branches independently undergo two convolutional layers followed by a pooling layer. To facilitate cross-learning between the branches, a weighted addition scheme is implemented, with the weights learned dynamically from the gradient of the loss function. We evaluate the proposed DBU-Net model on two datasets, BUSI and UDIAT, and our experimental results demonstrate superior performance compared to state-of-the-art models.
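The Roberts cross operator that feeds the second encoder branch is simple enough to write out, together with the weighted addition that fuses the two branches. In this sketch the fusion weight `alpha` is a fixed parameter, whereas DBU-Net learns it from the loss gradient; the functions are illustrative, not the paper's code.

```python
import numpy as np

def roberts_edges(img):
    """Roberts cross operator: gradient magnitude from two 2x2 diagonal
    difference kernels, [[1,0],[0,-1]] and [[0,1],[-1,0]]."""
    gx = img[:-1, :-1] - img[1:, 1:]   # response of kernel [[1,0],[0,-1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # response of kernel [[0,1],[-1,0]]
    return np.sqrt(gx ** 2 + gy ** 2)  # output is one pixel smaller per axis

def fuse_branches(feat_img, feat_edge, alpha):
    """Weighted addition of the image branch and the edge branch."""
    return alpha * feat_img + (1 - alpha) * feat_edge
```

Feeding an explicitly edge-enhanced view alongside the raw image gives the encoder a boundary prior without it having to learn edge detection from scratch, which is the stated motivation for the dual-branch design.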
Affiliation(s)
- Payel Pramanik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Rishav Pramanik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India