1
Cao W, Guo J, You X, Liu Y, Li L, Cui W, Cao Y, Chen X, Zheng J. NeighborNet: Learning Intra- and Inter-Image Pixel Neighbor Representation for Breast Lesion Segmentation. IEEE J Biomed Health Inform 2024;28:4761-4771. PMID: 38743530. DOI: 10.1109/jbhi.2024.3400802.
Abstract
Breast lesion segmentation from ultrasound images is essential in computer-aided breast cancer diagnosis. To alleviate the problems of blurry lesion boundaries and irregular morphologies, common practice combines CNNs and attention to integrate global and local information. However, previous methods use two independent modules to extract global and local features separately; such feature-wise, inflexible integration ignores the semantic gap between them, resulting in representation redundancy or insufficiency and undesirable restrictions in clinical practice. Moreover, medical images are highly similar to each other because of the imaging methods and human tissues involved, yet the global information captured by transformer-based methods in the medical domain is limited to individual images, and the semantic relations and common knowledge across images are largely ignored. To alleviate these problems, this paper takes a neighbor-centric view and develops a pixel neighbor representation learning method (NeighborNet) that flexibly integrates global and local context within and across images for lesion morphology and boundary modeling. Concretely, two neighbor layers are designed to investigate two properties of neighbors: their number and their distribution. The neighbor number for each pixel is not fixed but determined by the pixel itself, and the neighbor distribution is extended from one image to all images in the dataset. With these two properties, for each pixel at each feature level, NeighborNet can evolve into a transformer or degenerate into a CNN for adaptive context representation learning, coping with irregular lesion morphologies and blurry boundaries. State-of-the-art performance on three ultrasound datasets demonstrates the effectiveness of the proposed NeighborNet.
2
Li W, Ye X, Chen X, Jiang X, Yang Y. A deep learning-based method for the detection and segmentation of breast masses in ultrasound images. Phys Med Biol 2024;69:155027. PMID: 38986480. DOI: 10.1088/1361-6560/ad61b6.
Abstract
Objective. Automated detection and segmentation of breast masses in ultrasound images are critical for breast cancer diagnosis but remain challenging because of limited image quality and complex breast tissue. This study aims to develop a deep learning-based method for accurate breast mass detection and segmentation in ultrasound images. Approach. A novel convolutional neural network-based framework combining the You Only Look Once (YOLO) v5 network and the Global-Local (GOLO) strategy was developed. First, YOLOv5 was applied to locate the mass regions of interest (ROIs). Second, a Global Local-Connected Multi-Scale Selection (GOLO-CMSS) network was developed to segment the masses. GOLO-CMSS operated both on the entire images globally and on the mass ROIs locally, and then integrated the two branches for the final segmentation output. In particular, in the global branch, CMSS applied Multi-Scale Selection (MSS) modules to automatically adjust the receptive fields and Multi-Input (MLI) modules to fuse shallow and deep features at different resolutions. The USTC dataset, containing 28,477 breast ultrasound images, was collected for training and testing. The proposed method was also tested on three public datasets: UDIAT, BUSI, and TUH. The segmentation performance of GOLO-CMSS was compared with that of other networks and three experienced radiologists. Main results. YOLOv5 outperformed other detection models, with average precisions of 99.41%, 95.15%, 93.69%, and 96.42% on the USTC, UDIAT, BUSI, and TUH datasets, respectively. The proposed GOLO-CMSS showed superior segmentation performance over other state-of-the-art networks, with Dice similarity coefficients (DSCs) of 93.19%, 88.56%, 87.58%, and 90.37% on the USTC, UDIAT, BUSI, and TUH datasets, respectively. The mean DSC between GOLO-CMSS and each radiologist was significantly better than that between radiologists (p < 0.001). Significance. Our proposed method can accurately detect and segment breast masses with performance comparable to that of radiologists, highlighting its great potential for clinical implementation in breast ultrasound examination.
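A minimal sketch of the global-local idea summarized above, assuming a detector (such as YOLOv5) has already produced an ROI box: a global branch segments the whole image, a local branch segments the ROI crop, and the two predictions are fused on the full-image canvas. The dummy networks, fixed crop size, and simple averaging fusion are illustrative assumptions, not the actual GOLO-CMSS design.

```python
import torch
import torch.nn.functional as F

def fuse_global_local(global_net, local_net, image, roi_box):
    """image: (1, C, H, W); roi_box: (x1, y1, x2, y2) from a detector such as YOLOv5."""
    x1, y1, x2, y2 = roi_box
    # Global branch: coarse segmentation of the whole image.
    global_logits = global_net(image)                         # (1, 1, H, W)
    # Local branch: fine segmentation of the ROI crop, resized to a fixed size.
    crop = image[:, :, y1:y2, x1:x2]
    crop = F.interpolate(crop, size=(256, 256), mode="bilinear", align_corners=False)
    local_logits = local_net(crop)                             # (1, 1, 256, 256)
    # Paste the local prediction back onto a full-size canvas.
    canvas = torch.zeros_like(global_logits)
    canvas[:, :, y1:y2, x1:x2] = F.interpolate(
        local_logits, size=(y2 - y1, x2 - x1), mode="bilinear", align_corners=False)
    # Simple fusion: average logits inside the ROI, keep the global logits elsewhere.
    mask = torch.zeros_like(global_logits)
    mask[:, :, y1:y2, x1:x2] = 1.0
    fused = mask * 0.5 * (global_logits + canvas) + (1 - mask) * global_logits
    return fused.sigmoid()

# Dummy single-conv "networks" just to make the sketch executable end to end.
g_net = torch.nn.Conv2d(1, 1, 3, padding=1)
l_net = torch.nn.Conv2d(1, 1, 3, padding=1)
img = torch.randn(1, 1, 512, 512)
print(fuse_global_local(g_net, l_net, img, (100, 120, 260, 300)).shape)
```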
Affiliation(s)
- Wanqing Li
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Xianjun Ye
- Department of Ultrasound Medicine, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 230001, People's Republic of China
- Xuemin Chen
- Health Management Center, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 230001, People's Republic of China
- Xianxian Jiang
- Graduate School of Bengbu Medical College, Bengbu, Anhui 233030, People's Republic of China
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Ion Medical Research Institute, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 230001, People's Republic of China
3
Nisar KS, Anjum MW, Raja MAZ, Shoaib M. Design of a novel intelligent computing framework for predictive solutions of malaria propagation model. PLoS One 2024;19:e0298451. PMID: 38635576. PMCID: PMC11025872. DOI: 10.1371/journal.pone.0298451.
Abstract
The paper presents an innovative computational framework for predictive solutions simulating the spread of malaria. The framework incorporates sophisticated computing methods to improve the reliability of predicting malaria outbreaks, and the study strives to provide a robust and effective tool for forecasting malaria propagation through an AI-based recurrent neural network (RNN). The model population is divided into two groups, humans and mosquitoes. To develop the model, the traditional Ross-Macdonald model is extended, allowing a more comprehensive analysis of the intricate dynamics at play. To gain a deeper understanding of the extended Ross model, an RNN is employed, treating the model as an initial value problem involving a system of first-order ordinary differential equations, one for each of the seven profiles. This approach yields valuable insights and elucidates the complexities inherent in malaria propagation. The human dynamics comprise susceptible, exposed, infectious, and recovering individuals, while the mosquito population is divided into three categories: susceptible, exposed, and infected. For the RNN, inputs covered 0 to 300 days with an interval length of 3 days. The precision and accuracy of the methodology are evaluated by superimposing the estimated solution onto the numerical solution. In addition, the outcomes obtained from the RNN are examined through regression analysis, error autocorrelation, time series response plots, mean square error, error histograms, and absolute error. A reduced mean square error signifies that the model's estimates are more accurate, and an approximate absolute error close to zero reveals the efficacy of the suggested strategy. This research presents a novel approach to solving the malaria propagation model using recurrent neural networks and examines the behavior of the various profiles under varying initial conditions of the model, which consists of a system of ordinary differential equations.
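A minimal numerical sketch of the kind of extended Ross-Macdonald system described above: seven profiles (susceptible, exposed, infectious, and recovered humans; susceptible, exposed, and infected mosquitoes) integrated as an initial value problem over 0 to 300 days at 3-day intervals. All rate parameters and initial conditions are hypothetical placeholders; the paper's RNN is trained to reproduce such a reference solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates: b = biting rate, beta_h/beta_m = transmission probabilities,
# sigma = incubation rates, gamma = human recovery rate, mu_m = mosquito mortality.
b, beta_h, beta_m = 0.3, 0.3, 0.2
sigma_h, sigma_m, gamma, mu_m = 1 / 10, 1 / 12, 1 / 14, 1 / 20

def malaria_rhs(t, y):
    Sh, Eh, Ih, Rh, Sm, Em, Im = y
    Nh, Nm = Sh + Eh + Ih + Rh, Sm + Em + Im
    lam_h = b * beta_h * Im / Nh              # force of infection on humans
    lam_m = b * beta_m * Ih / Nh              # force of infection on mosquitoes
    return [
        -lam_h * Sh,                          # dSh/dt
        lam_h * Sh - sigma_h * Eh,            # dEh/dt
        sigma_h * Eh - gamma * Ih,            # dIh/dt
        gamma * Ih,                           # dRh/dt
        mu_m * Nm - lam_m * Sm - mu_m * Sm,   # dSm/dt (births balance deaths)
        lam_m * Sm - (sigma_m + mu_m) * Em,   # dEm/dt
        sigma_m * Em - mu_m * Im,             # dIm/dt
    ]

y0 = [990, 5, 5, 0, 4000, 50, 50]             # hypothetical initial profiles
t_eval = np.arange(0, 301, 3)                 # 0-300 days at 3-day intervals
sol = solve_ivp(malaria_rhs, (0, 300), y0, t_eval=t_eval, rtol=1e-8)
print(sol.y.shape)                            # (7, 101): seven profiles over time
```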
Affiliation(s)
- Kottakkaran Sooppy Nisar
- Department of Mathematics, College of Science and Humanities in Alkharj, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Muhammad Asif Zahoor Raja
- Future Technology Research Center, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C
4
Chen J, Shen X, Zhao Y, Qian W, Ma H, Sang L. Attention gate and dilation U-shaped network (GDUNet): an efficient breast ultrasound image segmentation network with multiscale information extraction. Quant Imaging Med Surg 2024;14:2034-2048. PMID: 38415149. PMCID: PMC10895089. DOI: 10.21037/qims-23-947.
Abstract
Background: In recent years, computer-aided diagnosis (CAD) systems have played an important role in breast cancer screening and diagnosis, and image segmentation is the key step in a CAD system for rapid identification of lesions. An efficient breast image segmentation network is therefore necessary for improving diagnostic accuracy in breast cancer screening. However, because breast ultrasound images exhibit blurred boundaries, low contrast, and speckle noise, breast lesion segmentation is challenging. In addition, many of the proposed breast tumor segmentation networks are too complex to be applied in practice. Methods: We developed the attention gate and dilation U-shaped network (GDUNet), a lightweight breast lesion segmentation model. This model improves the inverted bottleneck, integrating it with a tokenized multilayer perceptron (MLP) to construct the encoder. Additionally, we introduce a lightweight attention gate (AG) within the skip connection, which effectively filters noise in low-level semantic information across the spatial and channel dimensions, attenuating irrelevant features. To further improve performance, we designed the AG dilation (AGDT) block and embedded it between the encoder and decoder to capture critical multiscale contextual information. Results: We conducted experiments on two breast cancer datasets. The experimental results show that, compared with UNet, GDUNet reduces the number of parameters by a factor of 10 and the computational complexity by a factor of 58 while doubling the inference speed. Moreover, GDUNet achieved better segmentation performance than state-of-the-art medical image segmentation architectures. Conclusions: The proposed GDUNet can achieve advanced segmentation performance on different breast ultrasound image datasets with high efficiency.
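A minimal sketch of an additive attention gate on a skip connection, in the spirit of the lightweight AG described above. It follows the standard Attention U-Net formulation; the exact GDUNet gate (and its AGDT dilation block) may differ, so the layer sizes and structure are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)     # gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)           # attention map
        self.act = nn.ReLU(inplace=True)

    def forward(self, skip, gate):
        # The gate is assumed to be upsampled to the skip's spatial size beforehand.
        att = torch.sigmoid(self.psi(self.act(self.theta(skip) + self.phi(gate))))
        return skip * att   # suppress irrelevant low-level responses

x = torch.randn(1, 64, 128, 128)    # skip-connection features
g = torch.randn(1, 128, 128, 128)   # decoder gating features (already upsampled)
print(AttentionGate(64, 128, 32)(x, g).shape)   # torch.Size([1, 64, 128, 128])
```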
Affiliation(s)
- Jiadong Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyan Shen
- School of Life and Health Technology, Dongguan University of Technology, Dongguan, China
- Yu Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Liang Sang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, China
5
Hossain S, Azam S, Montaha S, Karim A, Chowa SS, Mondol C, Zahid Hasan M, Jonkman M. Automated breast tumor ultrasound image segmentation with hybrid UNet and classification using fine-tuned CNN model. Heliyon 2023;9:e21369. PMID: 37885728. PMCID: PMC10598544. DOI: 10.1016/j.heliyon.2023.e21369.
Abstract
Introduction: Breast cancer stands as the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly mitigate mortality rates. Purpose: The study aims to classify breast ultrasound images into benign and malignant tumors. This approach involves segmenting the breast's region of interest (ROI) employing an optimized UNet architecture and classifying the ROIs through an optimized shallow CNN model utilizing an ablation study. Method: Several image processing techniques are utilized to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is performed to verify that the enhanced image quality is satisfactory. With the processed dataset, segmentation of the breast tumor ROI is carried out, optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumors into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer with mammogram images. Result: The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. The performance of the proposed model is further compared with five transfer learning models for both pre-segmented and post-segmented datasets. K-fold cross-validation is performed to assess the performance stability of the proposed RKONet-13 model. Furthermore, the performance of the proposed model is compared with previous literature, where it outperforms existing methods, demonstrating its effectiveness in breast cancer diagnosis. Lastly, the model demonstrates its robustness for breast cancer classification, delivering an exceptional performance of 96.21% on a mammogram dataset. Conclusion: The efficacy of this study relies on image pre-processing, segmentation with a hybrid attention UNet, and classification with a fine-tuned, robust CNN model. This comprehensive approach aims to determine an effective technique for detecting breast cancer within ultrasound images.
Affiliation(s)
- Shahed Hossain
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Asif Karim
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sadia Sultana Chowa
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Chaity Mondol
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
6
Deb SD, Jha RK. Breast UltraSound Image classification using fuzzy-rank-based ensemble network. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104871.
7
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA), 2023. DOI: 10.1109/imsa58542.2023.10217585.
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
8
Alam T, Shia WC, Hsu FR, Hassan T. Improving Breast Cancer Detection and Diagnosis through Semantic Segmentation Using the Unet3+ Deep Learning Framework. Biomedicines 2023;11:1536. PMID: 37371631. DOI: 10.3390/biomedicines11061536.
Abstract
We present an analysis and evaluation of breast cancer detection and diagnosis using segmentation models. We used an advanced semantic segmentation method and a deep convolutional neural network to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound images. To improve the segmentation results, we used six models to analyse images from 309 patients, comprising 151 benign and 158 malignant tumour images. We compared the Unet3+ architecture with several other models, namely FCN, Unet, SegNet, DeeplabV3+ and pspNet. The Unet3+ model is a state-of-the-art semantic segmentation architecture and showed optimal performance, with an average accuracy of 82.53% and an average intersection over union (IU) of 52.57%. The weighted IU was found to be 89.14%, with a global accuracy of 90.99%. The application of these types of segmentation models to the detection and diagnosis of breast cancer provides remarkable results. Our proposed method has the potential to provide a more accurate and objective diagnosis of breast cancer, leading to improved patient outcomes.
Affiliation(s)
- Taukir Alam
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Wei-Chung Shia
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Taimoor Hassan
- Institute of Translational Medicine and New Drug Development, China Medical University, Taichung 404333, Taiwan
9
Chen G, Li L, Dai Y, Zhang J, Yap MH. AAU-Net: An Adaptive Attention U-Net for Breast Lesions Segmentation in Ultrasound Images. IEEE Trans Med Imaging 2023;42:1289-1300. PMID: 36455083. DOI: 10.1109/tmi.2022.3226268.
Abstract
Various deep learning methods have been proposed to segment breast lesions from ultrasound images. However, similar intensity distributions, variable tumor morphologies, and blurred boundaries present challenges for breast lesion segmentation, especially for malignant tumors with irregular shapes. Considering the complexity of ultrasound images, we develop an adaptive attention U-net (AAU-net) to segment breast lesions automatically and stably from ultrasound images. Specifically, we introduce a hybrid adaptive attention module (HAAM), which mainly consists of a channel self-attention block and a spatial self-attention block, to replace the traditional convolution operation. Compared with the conventional convolution operation, the hybrid adaptive attention module helps capture more features under different receptive fields. Different from existing attention mechanisms, the HAAM module can guide the network to adaptively select more robust representations in the channel and spatial dimensions to cope with more complex breast lesion segmentation. Extensive experiments with several state-of-the-art deep learning segmentation methods on three public breast ultrasound datasets show that our method has better performance on breast lesion segmentation. Furthermore, robustness analysis and external experiments demonstrate that the proposed AAU-net has better generalization performance in breast lesion segmentation. Moreover, the HAAM module can be flexibly applied to existing network frameworks. The source code is available at https://github.com/CGPxy/AAU-net.
10
Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023;8:163. PMID: 37092415. PMCID: PMC10123690. DOI: 10.3390/biomimetics8020163.
Abstract
According to the American Cancer Society, breast cancer is the second largest cause of mortality among women, after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Because manual breast cancer diagnosis takes a long time, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The proposed methodology consists of the following stages: data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using the GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of the dipper throated and particle swarm optimization algorithms, and classification of the selected features using a CNN optimized with the proposed algorithm. To prove the effectiveness of the proposed approach, a set of experiments was conducted on a breast cancer dataset freely available on Kaggle to evaluate the performance of the proposed feature selection method and of the optimized CNN. In addition, statistical tests were performed to study the stability of the proposed approach and its difference from state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach, with a classification accuracy of 98.1%, which is better than the other approaches considered in the conducted experiments.
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek
- Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
11
You H, Yu L, Tian S, Cai W. A stereo spatial decoupling network for medical image classification. Complex Intell Syst 2023;9:1-10. PMID: 37361963. PMCID: PMC10107597. DOI: 10.1007/s40747-023-01049-9.
Abstract
Deep convolutional neural networks (CNNs) have made great progress in medical image classification. However, they struggle to establish effective spatial associations and often extract similar low-level features, resulting in information redundancy. To address these limitations, we propose a stereo spatial decoupling network (TSDNets), which can leverage the multi-dimensional spatial details of medical images. We use an attention mechanism to progressively extract the most discriminative features from three directions: horizontal, vertical, and depth. Moreover, a cross feature screening strategy is used to divide the original feature maps into three levels: important, secondary, and redundant. Specifically, we design a cross feature screening module (CFSM) and a semantic guided decoupling module (SGDM) to model multi-dimensional spatial relationships, thereby enhancing the feature representation capabilities. Extensive experiments conducted on multiple open-source baseline datasets demonstrate that TSDNets outperforms previous state-of-the-art models.
Affiliation(s)
- Hongfeng You
- School of Information Science and Engineering, Xinjiang University, Urumqi, 830000 China
- Long Yu
- Network Center, Xinjiang University, Urumqi, 830000 China
- Shengwei Tian
- Software College, Xinjiang University, Urumqi, 830000 China
- Weiwei Cai
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122 China
12
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023;136:2127-2172. PMID: 37152661. PMCID: PMC7614504. DOI: 10.32604/cmes.2023.025484.
Abstract
Problems: For people all over the world, cancer is one of the most feared diseases. It is one of the major obstacles to improving life expectancy and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women, and the data show that female breast cancer has become one of the most common cancers. Aims: A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment outcomes and survival. Based on this situation, there are many diagnostic methods for breast cancer, such as computer-aided diagnosis (CAD). Methods: We provide a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN) after reviewing a large number of recent papers. First, we introduce several different imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. Then, we divide the diagnosis of breast cancer into three different tasks: (1) classification, (2) detection, and (3) segmentation. Conclusion: Although CNN-based diagnosis has achieved great success, there are still some limitations. (i) There are too few good datasets; a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the dataset is very large, a CNN-based model needs a great deal of computation and time to complete the diagnosis. (iii) It is easy to cause overfitting when using small datasets.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
13
Zhang M, Huang A, Yang D, Xu R. Boundary-oriented Network for Automatic Breast Tumor Segmentation in Ultrasound Images. Ultrason Imaging 2023;45:62-73. PMID: 36951101. DOI: 10.1177/01617346231162925.
Abstract
Breast cancer is considered the most prevalent cancer, and ultrasound imaging is an important clinical method for locating breast tumors. However, accurate segmentation of breast tumors remains an open problem due to ultrasound artifacts, low contrast, and complicated tumor shapes in ultrasound images. To address this issue, we propose a boundary-oriented network (BO-Net) for boosting breast tumor segmentation in ultrasound images. BO-Net boosts tumor segmentation performance from two perspectives. First, a boundary-oriented module (BOM) is designed to capture the weak boundaries of breast tumors by learning additional breast tumor boundary maps. Second, we focus on enhanced feature extraction, which takes advantage of the Atrous Spatial Pyramid Pooling (ASPP) module and the Squeeze-and-Excitation (SE) block to obtain multi-scale and efficient feature information. We evaluate our network on two public datasets, Dataset B and BUSI. For Dataset B, our network achieves 0.8685 in Dice, 0.7846 in Jaccard, 0.8604 in Precision, 0.9078 in Recall, and 0.9928 in Specificity. For the BUSI dataset, our network achieves 0.7954 in Dice, 0.7033 in Jaccard, 0.8275 in Precision, 0.8251 in Recall, and 0.9814 in Specificity. Experimental results show that BO-Net outperforms state-of-the-art segmentation methods for breast tumor segmentation in ultrasound images, demonstrating that focusing on boundary and feature enhancement creates more efficient and robust breast tumor segmentation.
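For reference, a minimal sketch of the Squeeze-and-Excitation (SE) block named above as part of BO-Net's enhanced feature extraction, in its standard formulation; how BO-Net combines it with ASPP and the boundary-oriented module is specific to the paper and not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(w).view(b, c, 1, 1)  # excite: per-channel weights in (0, 1)
        return x * w                     # recalibrate the feature maps

feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)           # torch.Size([2, 64, 32, 32])
```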
Affiliation(s)
- Mengmeng Zhang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Aibin Huang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Debiao Yang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Rui Xu
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
14
Sasikala S, Arun Kumar S, Ezhilarasi M. Improved breast cancer detection using fusion of bimodal sonographic features through binary firefly algorithm. Imaging Sci J 2023. DOI: 10.1080/13682199.2023.2164944.
Affiliation(s)
- S. Sasikala
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- S. Arun Kumar
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- M. Ezhilarasi
- Department of Electronics & Instrumentation Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
15
Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network. Comput Intell Neurosci 2023;2023:7717712. PMID: 36909966. PMCID: PMC9998154. DOI: 10.1155/2023/7717712.
Abstract
Medical image analysis places a significant focus on breast cancer, which poses a serious threat to women's health and contributes to many fatalities. Early and precise diagnosis of breast cancer through digital mammograms can significantly improve the accuracy of disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification to assist radiologists in accurately detecting breast lesions. However, early-stage cancer detection in mammography is difficult. The deep convolutional neural network has demonstrated exceptional results and is considered a highly effective tool in the field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To classify the INbreast dataset into benign and malignant categories, the framework uses transfer learning from a ResNet-50 CNN pretrained on ImageNet. The results revealed that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This approach facilitates early diagnosis and classification of malignant and benign breast cancer, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results in various mammograms and can enhance medical tools by reducing the error rate in screening mammograms.
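A minimal sketch of the transfer-learning setup described above: a ResNet-50 pretrained on ImageNet (torchvision >= 0.13 weights API) with its final layer replaced by a two-class benign/malignant head. The data loading, augmentation, and training schedule used on INbreast are omitted and would have to be supplied; the dummy batch below only illustrates one training step.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs. malignant head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of mammogram crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```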
16
Qasmieh IA, Alquran H, Zyout A, Al-Issa Y, Mustafa WA, Alsalatie M. Automated Detection of Corneal Ulcer Using Combination Image Processing and Deep Learning. Diagnostics (Basel) 2022;12:3204. PMID: 36553211. PMCID: PMC9777193. DOI: 10.3390/diagnostics12123204.
Abstract
Corneal ulcers are among the most common eye diseases. They arise from various infections, caused by bacteria, viruses, or parasites, and may lead to ocular morbidity and visual disability. Early detection can therefore reduce the probability of progression to visual impairment. One of the most common techniques exploited for corneal ulcer screening is slit-lamp imaging. This paper proposes two highly accurate automated systems to localize the corneal ulcer region: an image processing technique using the Hough transform and a deep learning approach. The two methods are validated and tested on the publicly available SUSTech-SYSU database, and their accuracy is evaluated and compared. Both systems achieve an accuracy of more than 90%. The deep learning approach is more accurate than the traditional image processing technique, reaching 98.9% accuracy and a Dice similarity of 99.3%; however, the first method does not require an explicit training model or parameter optimization. Both approaches can perform well in the medical field. Moreover, the first model has an advantage over the deep learning model because the latter needs a large training dataset to build reliable software for clinics. Both proposed methods help physicians in corneal ulcer level assessment and improve treatment efficiency.
Affiliation(s)
- Isam Abu Qasmieh
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Hiam Alquran
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Ala’a Zyout
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Yazan Al-Issa
- Department of Computer Engineering, Yarmouk University, Irbid 21163, Jordan
- Wan Azani Mustafa
- Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
- Advanced Computing (AdvComp), Centre of Excellence (CoE), Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
- Mohammed Alsalatie
- The Institute of Biomedical Technology, King Hussein Medical Center, Royal Jordanian Medical Service, Amman 11855, Jordan
17
Mújica-Vargas D, Matuz-Cruz M, García-Aquino C, Ramos-Palencia C. Efficient System for Delimitation of Benign and Malignant Breast Masses. Entropy (Basel) 2022;24:1775. PMID: 36554180. PMCID: PMC9777637. DOI: 10.3390/e24121775.
Abstract
In this study, a high-performing scheme is introduced to delimit benign and malignant masses in breast ultrasound images. The proposal is built upon the Nonlocal Means filter for image quality improvement, an Intuitionistic Fuzzy C-Means local clustering algorithm for superpixel generation with high adherence to the edges, and the DBSCAN algorithm for the global clustering of those superpixels in order to delimit mass regions. The empirical study was performed using two datasets, both with benign and malignant breast tumors. The quantitative results with respect to the BUSI dataset were JSC≥0.907, DM≥0.913, HD≥7.025, and MCR≤6.431 for benign masses and JSC≥0.897, DM≥0.900, HD≥8.666, and MCR≤8.016 for malignant ones, while the MID dataset resulted in JSC≥0.890, DM≥0.905, HD≥8.370, and MCR≤7.241 along with JSC≥0.881, DM≥0.898, HD≥8.865, and MCR≤7.808 for benign and malignant masses, respectively. These numerical results revealed that our proposal outperformed all the evaluated comparative state-of-the-art methods in mass delimitation, which is confirmed by the visual results, since the segmented regions had better edge delimitation.
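A minimal sketch of the final clustering stage described above: superpixels summarized by simple feature vectors and grouped globally with DBSCAN. SLIC (scikit-image >= 0.19) is used here only as a readily available stand-in for the paper's intuitionistic fuzzy C-means superpixel step, and the feature vector and eps/min_samples values are illustrative assumptions.

```python
import numpy as np
from skimage.data import camera
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

image = camera().astype(float) / 255.0                       # placeholder grayscale image
labels = slic(image, n_segments=300, channel_axis=None)      # superpixel map

feats = []
for sp in np.unique(labels):
    ys, xs = np.nonzero(labels == sp)
    vals = image[ys, xs]
    # Feature vector per superpixel: mean intensity and normalized centroid.
    feats.append([vals.mean(), ys.mean() / image.shape[0], xs.mean() / image.shape[1]])

clusters = DBSCAN(eps=0.08, min_samples=3).fit_predict(np.array(feats))
print(len(np.unique(clusters)), "global clusters (including the noise label -1)")
```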
Affiliation(s)
- Dante Mújica-Vargas
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Manuel Matuz-Cruz
- Tecnológico Nacional de México, Instituto Tecnológico de Tapachula, Tapachula 30700, Chiapas, Mexico
- Christian García-Aquino
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Celia Ramos-Palencia
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
18
Breast Cancer Classification by Using Multi-Headed Convolutional Neural Network Modeling. Healthcare (Basel) 2022;10:2367. PMID: 36553891. PMCID: PMC9777990. DOI: 10.3390/healthcare10122367.
Abstract
Breast cancer is one of the most widely recognized diseases after skin cancer. Though it can occur in all kinds of people, it is undeniably more common in women. Several analytical techniques, such as breast MRI, X-ray, thermography, mammography, and ultrasound, are utilized to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: Benign, Malignant, and Normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed for quantitative outcomes using the performance measures of each procedure. The proposed framework was found to be effective, with raw image evaluation alone giving 78.97% test accuracy and masked image evaluation giving 81.02%, which could decrease human error in the diagnostic cycle. Additionally, the described framework accomplishes higher accuracy when a multi-headed CNN is used with the two processed datasets based on masked and original images, where the accuracy rose to 92.31% (±2) with a mean squared error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data inputs. Finally, a web interface has been made to make this model usable for non-technical personnel.
19
Research on Classification Method of Medical Ultrasound Image Processing Based on Neural Network. Comput Intell Neurosci 2022. DOI: 10.1155/2022/8912566.
Abstract
In clinical applications, the classification of ultrasound images must be performed as an aid to diagnosis. Based on this, a hybrid model of cascaded deep convolutional neural networks consisting of two different CNNs, together with a new classification method, is designed and evaluated for its feasibility and effectiveness in ultrasound image classification. A total of 1000 pathological slides of patients with thyroid nodular lesions kept in the Department of Pathology of the First Affiliated Hospital of Lanzhou University, China, were retrospectively collected. After image acquisition, the images were randomly divided into training, validation, and test sets in a 4:3:3 ratio. Three convolutional neural network models (VGG19, Inception V3, and DenseNet161) with pretrained parameters were trained on the training set, the models were combined to construct an integrated learning model, and the performance of the models in recognizing pathological images was evaluated on the test set. The experimental results show that the VGG19 model is less effective in classification, with an accuracy of 88.20%, which is lower than that of the Inception V3 and DenseNet161 models (92.87% and 92.95%). The Inception V3 and DenseNet161 models have significant advantages in terms of accuracy, number of parameters, and training efficiency; the DenseNet161 model has faster convergence and better generalization performance but occupies more GPU memory during operation, and the DenseNet161 operation time (1986.48 s) and response time (16 s) are better than those of the other two models. In addition, the integrated learning of Inception V3 and DenseNet161 can improve on the recognition of pathological images by a single model. Compared with other methods, the performance of the cascaded CNNs proposed in this study is significantly improved, and the multiview strategy can further improve the performance of cascaded CNNs. The experimental results demonstrate the potential clinical application of cascaded CNNs, which can provide physicians with an objective second opinion and reduce their heavy workload, in addition to making the diagnosis of thyroid nodules easy and reproducible for people without medical expertise.
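A minimal sketch of the ensemble idea described above: Inception V3 and DenseNet161 predictions combined by averaging their softmax outputs at inference time. The two-class head, the untrained weights, and the unweighted average are illustrative assumptions; the paper's integrated-learning scheme and class count may differ, and in practice the models would be fine-tuned before ensembling.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                  # placeholder for the pathology classes
inception = models.inception_v3(weights=None)    # pretrained weights would be loaded here
inception.fc = nn.Linear(inception.fc.in_features, num_classes)
densenet = models.densenet161(weights=None)
densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

@torch.no_grad()
def ensemble_predict(x):
    inception.eval(), densenet.eval()            # eval mode: Inception returns plain logits
    probs = (torch.softmax(inception(x), dim=1) + torch.softmax(densenet(x), dim=1)) / 2
    return probs.argmax(dim=1)

x = torch.randn(2, 3, 299, 299)                  # Inception V3 expects 299x299 inputs
print(ensemble_predict(x))
```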
20
Mohamed EA, Gaber T, Karam O, Rashed EA. A Novel CNN pooling layer for breast cancer segmentation and classification from thermograms. PLoS One 2022;17:e0276523. PMID: 36269756. PMCID: PMC9586394. DOI: 10.1371/journal.pone.0276523.
Abstract
Breast cancer is the second most frequent cancer worldwide after lung cancer, the fifth leading cause of cancer death, and a major cause of cancer death among women. In recent years, convolutional neural networks (CNNs) have been successfully applied to the diagnosis of breast cancer using different imaging modalities. Pooling is a main data processing step in CNNs that decreases the dimensionality of the feature maps without losing major patterns; however, the effect of the pooling layer has not been studied thoroughly in the literature. In this paper, we propose a novel design for the pooling layer called the vector pooling block (VPB) for CNNs. The proposed VPB consists of two data pathways, which focus on extracting features along the horizontal and vertical orientations. The VPB enables CNNs to collect both global and local features by including long and narrow pooling kernels, unlike the traditional pooling layer, which gathers features with a fixed square kernel. Based on the novel VPB, we propose a new pooling module called AVG-MAX VPB, which collects informative features by using two types of pooling, maximum and average pooling. The VPB and the AVG-MAX VPB are plugged into backbone CNNs, such as U-Net, AlexNet, ResNet18, and GoogleNet, to show their advantages in the segmentation and classification tasks associated with breast cancer diagnosis from thermograms. The proposed pooling layer was evaluated using a benchmark thermogram database (DMR-IR), and its results were compared with those of U-Net, which served as the baseline. The U-Net results were as follows: global accuracy = 96.6%, mean accuracy = 96.5%, mean IoU = 92.07%, and mean BF score = 78.34%. The VPB-based results were as follows: global accuracy = 98.3%, mean accuracy = 97.9%, mean IoU = 95.87%, and mean BF score = 88.68%, while the AVG-MAX VPB-based results were as follows: global accuracy = 99.2%, mean accuracy = 98.97%, mean IoU = 98.03%, and mean BF score = 94.29%. Other network architectures also demonstrated superior improvement when the VPB and AVG-MAX VPB were used.
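A minimal sketch of the vector pooling idea described above: one pathway pools with a long, narrow horizontal kernel and the other with a vertical one, and the AVG-MAX variant combines average and max pooling. The kernel length, stride, and the way the pathways are merged here are illustrative assumptions, not the paper's exact VPB/AVG-MAX VPB design.

```python
import torch
import torch.nn.functional as F

def avg_max_vector_pool(x, k=7):
    # Horizontal pathway: 1 x k kernels; vertical pathway: k x 1 kernels.
    h_avg = F.avg_pool2d(x, kernel_size=(1, k), stride=2, padding=(0, k // 2))
    h_max = F.max_pool2d(x, kernel_size=(1, k), stride=2, padding=(0, k // 2))
    v_avg = F.avg_pool2d(x, kernel_size=(k, 1), stride=2, padding=(k // 2, 0))
    v_max = F.max_pool2d(x, kernel_size=(k, 1), stride=2, padding=(k // 2, 0))
    # Merge the four pooled maps; simple averaging is an illustrative fusion choice.
    return (h_avg + h_max + v_avg + v_max) / 4.0

x = torch.randn(1, 32, 64, 64)
print(avg_max_vector_pool(x).shape)   # torch.Size([1, 32, 32, 32])
```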
Affiliation(s)
- Esraa A. Mohamed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Tarek Gaber
- Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt
- School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Omar Karam
- Faculty of Informatics and Computer Science, British University in Egypt (BUE), Cairo, Egypt
- Essam A. Rashed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Graduate School of Information Science, University of Hyogo, Kobe, Japan
21
Deep Learning Based Semantic Image Segmentation Methods for Classification of Web Page Imagery. Future Internet 2022. DOI: 10.3390/fi14100277.
Abstract
Semantic segmentation is the task of clustering together parts of an image that belong to the same object class. Semantic segmentation of webpages is important for inferring contextual information from the webpage. This study examines and compares deep learning methods for classifying webpages based on imagery that is obscured by semantic segmentation. Fully convolutional neural network architectures (UNet and FCN-8) with defined hyperparameters and loss functions are used to demonstrate how they can support an efficient method for this type of classification on custom-prepared webpage imagery that is labeled with multi-class, semantically segmented masks derived from HTML elements such as paragraph text, images, logos, and menus. The proposed Seg-UNet model achieved the best accuracy, 95%. A comparison of various optimizer functions demonstrates the overall efficacy of the proposed semantic segmentation approach.
22
Rautela K, Kumar D, Kumar V. Dual-modality synthetic mammogram construction for breast lesion detection using U-DARTS. Biocybern Biomed Eng 2022. DOI: 10.1016/j.bbe.2022.08.002.
23
Shia WC, Hsu FR, Dai ST, Guo SL, Chen DR. Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3. Sensors (Basel) 2022;22:5352. PMID: 35891030. PMCID: PMC9323504. DOI: 10.3390/s22145352.
Abstract
In this study, an advanced semantic segmentation method and a deep convolutional neural network were applied to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon on breast ultrasound images, thereby facilitating image interpretation and diagnosis by providing radiologists with an objective second opinion. A total of 684 images (380 benign and 308 malignant tumours) from 343 patients (190 benign and 153 malignant breast tumour patients) were analysed in this study. Six malignancy-related standardised BI-RADS features were selected after analysis. The DeepLab v3+ architecture and four decoder networks were used, and their semantic segmentation performance was evaluated and compared. DeepLab v3+ with the ResNet-50 decoder showed the best performance in semantic segmentation, with a mean accuracy and mean intersection over union (IU) of 44.04% and 34.92%, respectively, and a weighted IU of 84.36%. For diagnostic performance, the area under the curve was 83.32%. This study aimed to automate identification of the malignant BI-RADS lexicon on breast ultrasound images to facilitate diagnosis and improve its quality. The evaluation showed that DeepLab v3+ with the ResNet-50 decoder was suitable for solving this problem, offering a better balance of performance and computational resource usage than a fully connected network and the other decoders.
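A small worked example of the per-class intersection over union (IU), mean IU, and frequency-weighted IU metrics reported above, computed from a confusion matrix; the tiny label arrays are synthetic placeholders.

```python
import numpy as np

def iou_metrics(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    union = cm.sum(1) + cm.sum(0) - tp            # per-class union
    iu = tp / np.maximum(union, 1)
    freq = cm.sum(1) / cm.sum()                   # class frequencies
    return iu, iu.mean(), (freq * iu).sum()       # per-class IU, mean IU, weighted IU

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
per_class, mean_iu, weighted_iu = iou_metrics(y_true, y_pred, 3)
print(per_class, mean_iu, weighted_iu)            # [0.33 0.67 0.5] 0.5 0.5
```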
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Seng-Tong Dai
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Shih-Lin Guo
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan
- Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
24
Yang T, Yu X, Ma N, Zhang Y, Li H. Deep representation-based transfer learning for deep neural networks. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109526.
25
CloudRCNN: A Framework Based on Deep Neural Networks for Semantic Segmentation of Satellite Cloud Images. Appl Sci (Basel) 2022. DOI: 10.3390/app12115370.
Abstract
Shallow cumulus clouds are widely distributed globally. They carry critical information for analyzing environmental and climate change, and they can shape the energy and water cycles of the global ecosystem at multiple scales by impacting solar radiation transfer and precipitation. Satellite images are an important source of cloud data, and the accurate detection and segmentation of clouds is of great significance for climate and environmental monitoring. In this paper, we propose an improved MaskRCNN framework for the semantic segmentation of satellite images. We also explore two deep neural network architectures using auxiliary loss and feature fusion functions. We conduct comparative experiments on the dataset "Understanding Clouds from Satellite Images", sourced from the Kaggle competition. Compared to the baseline MaskRCNN model, the mIoU of the CloudRCNN (auxiliary loss) model improves by 15.24%, and that of the CloudRCNN (feature fusion) model improves by 12.77%. More importantly, the two neural network architectures proposed in this paper can be widely applied to various semantic segmentation neural network models to improve the distinction between foreground and background.
26
A High-Precision Classification Method of Mammary Cancer Based on Improved DenseNet Driven by an Attention Mechanism. Comput Math Methods Med 2022;2022:8585036. PMID: 35607649. PMCID: PMC9124075. DOI: 10.1155/2022/8585036.
Abstract
Cancer is one of the major causes of human disease and death worldwide, and mammary cancer is one of the most common cancer types among women today. In this paper, we used a deep learning method to conduct a preliminary experiment on the Breast Cancer Histopathological Database (BreakHis), an open dataset. We propose a high-precision classification method for mammary cancer based on an improved convolutional neural network evaluated on the BreakHis dataset. We propose three different MFSCNet models according to the insertion positions and the number of SE modules: MFSCNet A, MFSCNet B, and MFSCNet C. Through experimental comparison on the BreakHis dataset, the MFSCNet A network model obtained the best performance in the high-precision classification of mammary cancer, with a binary classification accuracy of 99.05% to 99.89% and a multiclass classification accuracy ranging from 94.36% to approximately 98.41%. It is therefore proved that MFSCNet can accurately classify mammary histological images and has great application prospects in predicting the degree of the tumor. Code will be made available at http://github.com/xiaoan-maker/MFSCNet.
27
TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. Appl Sci (Basel) 2022. DOI: 10.3390/app12073273.
Abstract
Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection accuracy. Medical imagery must be detected, segmented, and classified so that computer-aided diagnosis (CAD) systems can help radiologists accurately diagnose breast lesions. Therefore, an accurate breast cancer detection and classification approach is proposed for mammogram screening. In this paper, we present a deep learning system that can identify breast cancer in mammogram screening images using an "end-to-end" training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach implements a modified contrast enhancement method to refine the detail of edges in the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance classification performance, and an energy layer is integrated to extract texture features from the convolutional layer. The proposed approach consists of only three convolutional layers and one energy layer, rather than a pooling layer. In the third stage, we analyze the performance of TTCNN based on deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101); the deep features are extracted by determining the best layers, which enhances the classification accuracy. In the fourth stage, all the extracted feature vectors are fused using the convolutional sparse image decomposition approach, and, finally, the best features are selected using the entropy-controlled firefly method. The proposed approach, employed on the DDSM, INbreast, and MIAS datasets, attained an average accuracy of 97.49%. Our proposed transferable texture CNN-based method for classifying screening mammograms has outperformed prior methods. These findings demonstrate that automatic deep learning algorithms can be trained to achieve high accuracy in diverse mammography images and offer great potential to improve clinical tools and minimize false positive and false negative screening mammography results.
Collapse
|
28
|
Deep learning model for fully automated breast cancer detection system from thermograms. PLoS One 2022; 17:e0262349. [PMID: 35030211 PMCID: PMC8759675 DOI: 10.1371/journal.pone.0262349] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 12/22/2021] [Indexed: 11/19/2022] Open
Abstract
Breast cancer is one of the most common diseases among women worldwide and one of the leading causes of death among them. Therefore, early detection is necessary to save lives. Thermography is an effective diagnostic technique that uses infrared imaging for breast cancer detection. In this paper, we propose a fully automatic breast cancer detection system. First, a U-Net is used to automatically extract and isolate the breast area from the rest of the body, which otherwise acts as noise for the breast cancer detection model. Second, we propose a two-class deep learning model, trained from scratch, for classifying normal and abnormal breast tissue in thermal images; it is also used to extract further characteristics from the dataset that help train the network and improve the efficiency of the classification process. The proposed system is evaluated on real data (the benchmark DMR-IR database) and achieves an accuracy of 99.33%, a sensitivity of 100%, and a specificity of 98.67%. The proposed system is expected to be a helpful tool for physicians in clinical use.
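As a rough sketch of the two-stage design described above (segment the breast region first, then classify the masked thermogram), the following PyTorch fragment wires a segmentation output into a classifier. The tiny stand-in networks are placeholders for illustration only, not the authors' U-Net or their from-scratch classifier.

```python
import torch
import torch.nn as nn

# Stand-ins for the trained networks (placeholders, not the published architectures):
unet = nn.Conv2d(1, 1, 3, padding=1)                        # pretend segmentation head
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, 2))

def two_stage_predict(thermogram: torch.Tensor) -> torch.Tensor:
    """Stage 1: segment the breast region; Stage 2: classify the masked image."""
    with torch.no_grad():
        mask = (torch.sigmoid(unet(thermogram)) > 0.5).float()  # binary breast mask
        roi = thermogram * mask                                  # suppress background "noise"
        return classifier(roi).softmax(dim=1)                    # normal vs. abnormal

probs = two_stage_predict(torch.randn(2, 1, 224, 224))
print(probs.shape)  # torch.Size([2, 2])
```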
Collapse
|
29
|
An Optimized Framework for Breast Cancer Classification Using Machine Learning. BIOMED RESEARCH INTERNATIONAL 2022; 2022:8482022. [PMID: 35224101 PMCID: PMC8881122 DOI: 10.1155/2022/8482022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 01/17/2022] [Indexed: 11/29/2022]
Abstract
Patients with breast cancer have a better chance of survival if the disease is diagnosed and treated early. Many studies have shown that the number of ultrasound images generated every day far exceeds what the limited pool of radiologists can analyze, which often leads to misclassification of breast lesions and a high false-positive rate. In this article, we propose a computer-aided diagnosis (CAD) system that automatically constructs an optimized classification algorithm. To train the machine learning models, we employ 13 of the 185 available features. Five machine learning classifiers were used to classify malignant versus benign tumors, with hyperparameters tuned by Bayesian optimization using a tree-structured Parzen estimator and performance assessed by 10-fold cross-validation. The LightGBM classifier performs better than the other four classifiers, achieving 99.86% accuracy, 100.0% precision, 99.60% recall, and a 99.80% F1 score.
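The hyperparameter search described above (Bayesian optimization with a tree-structured Parzen estimator, scored by 10-fold cross-validation) can be sketched with Optuna, whose default sampler is TPE. The library choice, the search space, and the scikit-learn breast cancer dataset used as stand-in data are assumptions for illustration, not details taken from the paper.

```python
import optuna
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)      # placeholder for the 13 selected features

def objective(trial: optuna.Trial) -> float:
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 500),
        "num_leaves": trial.suggest_int("num_leaves", 8, 128),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    }
    model = lgb.LGBMClassifier(**params)
    # 10-fold cross-validated accuracy is the quantity being maximized
    return cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")   # TPE sampler is Optuna's default
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```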
Collapse
|
30
|
A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation. J Pers Med 2022; 12:jpm12020309. [PMID: 35207796 PMCID: PMC8880720 DOI: 10.3390/jpm12020309] [Citation(s) in RCA: 31] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 02/14/2022] [Accepted: 02/15/2022] [Indexed: 11/18/2022] Open
Abstract
Currently, most mask extraction techniques are based on convolutional neural networks (CNNs). However, mask extraction still faces numerous unsolved problems, so more advanced ways of deploying artificial intelligence (AI) techniques are needed. The use of cooperative agents in mask extraction increases the efficiency of automatic image segmentation. Hence, we introduce a new mask extraction method based on multi-agent deep reinforcement learning (DRL) to reduce long-term reliance on manual mask extraction and to enhance medical image segmentation frameworks. The method utilizes a modified version of the Deep Q-Network to enable the mask detector to select masks from the image under study. Using COVID-19 computed tomography (CT) images, we applied DRL-based mask extraction to capture visual features of COVID-19-infected areas and support an accurate clinical diagnosis while optimizing the pathogenic diagnostic test and saving time. We collected CT images of different cases (normal chest CT, pneumonia, typical viral cases, and COVID-19 cases). Experimental validation achieved a precision of 97.12%, a Dice of 80.81%, a sensitivity of 79.97%, a specificity of 99.48%, a precision of 85.21%, an F1 score of 83.01%, a structural metric of 84.38%, and a mean absolute error of 0.86%. Additionally, the visual segmentation results closely reflected the ground truth. These results provide a proof of principle for using DRL to extract CT masks for an effective diagnosis of COVID-19.
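A much-simplified sketch of a DQN-style mask selector in PyTorch is shown below: a Q-network scores a fixed set of candidate masks for a CT slice, and an epsilon-greedy rule picks one. The state encoding, the network layers, and the action space are illustrative assumptions; the authors' modified Deep Q-Network and multi-agent setup are not specified here.

```python
import random
import torch
import torch.nn as nn

class MaskQNet(nn.Module):
    """Toy Q-network: state = (CT slice, current mask), actions = candidate masks."""
    def __init__(self, n_candidates: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_candidates)     # one Q-value per candidate mask

    def forward(self, ct_slice: torch.Tensor, current_mask: torch.Tensor) -> torch.Tensor:
        state = torch.cat([ct_slice, current_mask], dim=1)   # (B, 2, H, W)
        return self.head(self.backbone(state))

def select_action(q_net: MaskQNet, state, epsilon: float = 0.1) -> int:
    ct_slice, current_mask = state
    if random.random() < epsilon:                            # exploration
        return random.randrange(q_net.head.out_features)
    with torch.no_grad():                                    # exploitation: highest Q-value
        return int(q_net(ct_slice, current_mask).argmax(dim=1))

q_net = MaskQNet(n_candidates=8)
action = select_action(q_net, (torch.randn(1, 1, 128, 128), torch.zeros(1, 1, 128, 128)))
print(action)
```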
Collapse
|
31
|
Explainable Ensemble Machine Learning for Breast Cancer Diagnosis Based on Ultrasound Image Texture Features. FORECASTING 2022. [DOI: 10.3390/forecast4010015] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Image classification is widely used to build predictive models for breast cancer diagnosis. Most existing approaches overwhelmingly rely on deep convolutional networks to build such diagnosis pipelines. These model architectures, although remarkable in performance, are black-box systems that provide minimal insight into the inner logic behind their predictions. This is a major drawback as the explainability of prediction is vital for applications such as cancer diagnosis. In this paper, we address this issue by proposing an explainable machine learning pipeline for breast cancer diagnosis based on ultrasound images. We extract first- and second-order texture features of the ultrasound images and use them to build a probabilistic ensemble of decision tree classifiers. Each decision tree learns to classify the input ultrasound image by learning a set of robust decision thresholds for texture features of the image. The decision path of the model predictions can then be interpreted by decomposing the learned decision trees. Our results show that our proposed framework achieves high predictive performance while being explainable.
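A compact sketch of this kind of pipeline: first- and second-order (GLCM) texture features are extracted per image and fed to an ensemble of decision trees. The particular feature set, the use of scikit-image's graycomatrix, and the random forest standing in for the paper's probabilistic tree ensemble are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def texture_features(img: np.ndarray) -> np.ndarray:
    """First-order statistics plus GLCM (second-order) descriptors for one grayscale image."""
    img = img.astype(np.uint8)
    first_order = [img.mean(), img.std()]
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    second_order = [graycoprops(glcm, p)[0, 0]
                    for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + second_order)

# Toy data standing in for benign/malignant ultrasound patches
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.stack([texture_features(im) for im in images])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)   # ensemble of decision trees
print(clf.predict_proba(X[:3]))
```

Because each tree splits on explicit texture thresholds, a prediction can be traced back through the decision paths, which is the explainability property the abstract emphasizes.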
Collapse
|
32
|
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. SENSORS 2022; 22:s22030807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807] [Citation(s) in RCA: 56] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 12/11/2022]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
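Step (iii) above, extracting deep features from the global average pooling layer of a fine-tuned backbone, and the serial (concatenation-style) fusion of two feature sets can be sketched as follows. A torchvision ResNet-50 stands in for DarkNet-53 (which torchvision does not provide), and the RDE/RGW selection and probability weighting are omitted, so this is an illustrative assumption rather than the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# torchvision >= 0.13 weights API; pass weights=None to skip the pretrained download.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()            # expose the 2048-d global-average-pooled features

def extract_features(batch: torch.Tensor) -> torch.Tensor:
    backbone.eval()
    with torch.no_grad():
        return backbone(batch)         # (B, 2048) deep feature vectors

feats_a = extract_features(torch.randn(4, 3, 224, 224))   # e.g. original images
feats_b = extract_features(torch.randn(4, 3, 224, 224))   # e.g. augmented images
fused = torch.cat([feats_a, feats_b], dim=1)               # serial (concatenation) fusion
print(fused.shape)                                          # torch.Size([4, 4096])
```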
Collapse
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
| | - Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
| | - Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia;
| | - Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia;
| | - Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK;
| | - Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
| | - Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania;
| | - Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania;
- Correspondence:
| |
Collapse
|
33
|
Vijayakumar K, Rajinikanth V, Kirubakaran MK. Automatic detection of breast cancer in ultrasound images using Mayfly algorithm optimized handcrafted features. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:751-766. [PMID: 35527619 DOI: 10.3233/xst-221136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
BACKGROUND: The incidence rate of breast cancer among women is progressively rising, and early diagnosis is necessary to detect and cure the disease. OBJECTIVE: To develop a novel automated disease detection framework to examine breast ultrasound images (BUI). METHODS: The scheme includes the following stages: (i) image acquisition and resizing, (ii) Gaussian filter-based pre-processing, (iii) handcrafted feature extraction, (iv) optimal feature selection with the Mayfly Algorithm (MA), and (v) binary classification and validation. The dataset includes BUI extracted from 133 normal, 445 benign, and 210 malignant cases. Each BUI is resized to 256×256×1 pixels, and the resized BUIs are used to develop and test the new scheme. Handcrafted feature-based cancer detection is employed, with entropies, Local Binary Patterns (LBP), and Hu moments as the features considered. To avoid over-fitting, a feature reduction procedure is also implemented with MA, and the reduced feature subset is used to train and validate the classifiers developed in this research. RESULTS: Experiments were performed to classify BUIs between (i) normal and benign, (ii) normal and malignant, and (iii) benign and malignant cases. The results show that the developed framework achieves a classification accuracy of >94%, a precision of >92%, a sensitivity of >92%, and a specificity of >90%. CONCLUSION: In this work, a machine learning scheme is employed to detect and classify the disease from BUIs and achieves promising results. In future work, we will test the feasibility of adding deep learning methods to this framework to further improve detection accuracy.
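A small sketch of the handcrafted features named in the METHODS (entropy, LBP, and Hu moments), computed with scikit-image. The LBP settings and the resulting 18-dimensional vector are illustrative choices, and the Mayfly-based feature selection step is not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.measure import moments_central, moments_normalized, moments_hu, shannon_entropy

def handcrafted_features(img: np.ndarray, lbp_points: int = 8, lbp_radius: int = 1) -> np.ndarray:
    """Entropy, uniform-LBP histogram, and Hu moments for one grayscale ultrasound image."""
    ent = shannon_entropy(img)
    lbp = local_binary_pattern(img, lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    hu = moments_hu(moments_normalized(moments_central(img.astype(float))))
    return np.concatenate([[ent], lbp_hist, hu])      # 1 + 10 + 7 = 18 values

img = np.random.randint(0, 256, size=(256, 256)).astype(np.uint8)   # stand-in BUI
print(handcrafted_features(img).shape)                               # (18,)
```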
Collapse
Affiliation(s)
- K Vijayakumar
- Department of Computer Science and Engineering, St. Joseph's Institute of Technology, Chennai, Tamilnadu, India
| | - V Rajinikanth
- Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, Chennai, Tamilnadu, India
| | - M K Kirubakaran
- Department of Information Technology, St. Joseph's Institute of Technology, Chennai, Tamilnadu, India
| |
Collapse
|
34
|
Chavan T, Prajapati K, JV KR. InvUNET: Involuted UNET for Breast Tumor Segmentation from Ultrasound. Artif Intell Med 2022. [DOI: 10.1007/978-3-031-09342-5_27] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
35
|
Breast Cancer Detection Using Mammogram Images with Improved Multi-Fractal Dimension Approach and Feature Fusion. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112412122] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Breast cancer detection using mammogram images at an early stage is an important step in disease diagnostics. We propose a new method for the classification of benign or malignant breast cancer from mammogram images. Hybrid thresholding and the machine learning method are used to derive the region of interest (ROI). The derived ROI is then separated into five different blocks. The wavelet transform is applied to suppress noise from each produced block based on BayesShrink soft thresholding by capturing high and low frequencies within different sub-bands. An improved fractal dimension (FD) approach, called multi-FD (M-FD), is proposed to extract multiple features from each denoised block. The number of features extracted is then reduced by a genetic algorithm. Five classifiers are trained and used with the artificial neural network (ANN) to classify the extracted features from each block. Lastly, the fusion process is performed on the results of five blocks to obtain the final decision. The proposed approach is tested and evaluated on four benchmark mammogram image datasets (MIAS, DDSM, INbreast, and BCDR). We present the results of single- and double-dataset evaluations. Only one dataset is used for training and testing in the single-dataset evaluation, whereas two datasets (one for training, and one for testing) are used in the double-dataset evaluation. The experiment results show that the proposed method yields better results on the INbreast dataset in the single-dataset evaluation, whilst better results are obtained on the remaining datasets in the double-dataset evaluation. The proposed approach outperforms other state-of-the-art models on the Mini-MIAS dataset.
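For orientation, the classical box-counting estimate of fractal dimension on a binary image block can be sketched as below. This is the standard FD, not the paper's multi-FD (M-FD) extension, and the toy block merely stands in for a denoised mammogram ROI block.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Box-counting fractal dimension of a binary image (assumes some foreground at every scale)."""
    mask = mask > 0
    sizes = 2 ** np.arange(1, int(np.log2(min(mask.shape))))   # box side lengths 2, 4, 8, ...
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        # count boxes of side s containing at least one foreground pixel
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
block = rng.random((128, 128)) > 0.7        # toy binary block
print(box_counting_dimension(block))
```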
Collapse
|
36
|
Khan MA, Alhaisoni M, Tariq U, Hussain N, Majid A, Damaševičius R, Maskeliūnas R. COVID-19 Case Recognition from Chest CT Images by Deep Learning, Entropy-Controlled Firefly Optimization, and Parallel Feature Fusion. SENSORS (BASEL, SWITZERLAND) 2021; 21:7286. [PMID: 34770595 PMCID: PMC8588229 DOI: 10.3390/s21217286] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 10/28/2021] [Accepted: 10/29/2021] [Indexed: 12/12/2022]
Abstract
In healthcare, a multitude of data is collected from medical sensors and devices such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), which can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach, parallel positive correlation. Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out on the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis shows the improved performance of the proposed scheme.
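The contrast-enhancement step above combines top-hat filtering with Wiener denoising. One plausible reading of that combination is sketched below with scikit-image and SciPy; the structuring-element size, the add-bright/subtract-dark formulation, and the Wiener window are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import wiener
from skimage.morphology import white_tophat, black_tophat, disk

def enhance_ct(img: np.ndarray, selem_radius: int = 15) -> np.ndarray:
    """Add bright top-hat detail, subtract dark bottom-hat detail, then Wiener denoise."""
    footprint = disk(selem_radius)
    img = img.astype(float)
    enhanced = img + white_tophat(img, footprint) - black_tophat(img, footprint)
    return wiener(enhanced, mysize=5)        # local adaptive Wiener smoothing

slice_ = np.random.rand(256, 256)            # stand-in for a normalized CT slice
print(enhance_ct(slice_).shape)              # (256, 256)
```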
Collapse
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia;
| | - Usman Tariq
- Information Systems Department, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Khraj 11942, Saudi Arabia;
| | - Nazar Hussain
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Abdul Majid
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| | - Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania;
| |
Collapse
|
37
|
Automated Segmentation of Median Nerve in Dynamic Sonography Using Deep Learning: Evaluation of Model Performance. Diagnostics (Basel) 2021; 11:diagnostics11101893. [PMID: 34679591 PMCID: PMC8534332 DOI: 10.3390/diagnostics11101893] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 10/01/2021] [Accepted: 10/10/2021] [Indexed: 11/21/2022] Open
Abstract
There is an emerging trend to employ dynamic sonography in the diagnosis of entrapment neuropathy, which exhibits aberrant spatiotemporal characteristics of the entrapped nerve when adjacent tissues move. However, manually tracking the entrapped nerve across consecutive images demands substantial human labor and limits the clinical adoption of this technique. Here we evaluated the performance of automated median nerve segmentation in dynamic sonography using a variety of deep learning models pretrained with ImageNet, including DeepLabV3+, U-Net, FPN, and Mask R-CNN. Dynamic ultrasound images of the median nerve at the wrist level were acquired from 52 subjects diagnosed with carpal tunnel syndrome while they moved their fingers. The videos of 16 subjects exhibiting diverse appearances were used for model testing, and those of the remaining 36 subjects were used for training. The centroid, circularity, perimeter, and cross-sectional area of the median nerve in each frame were automatically determined from the inferred nerve. Model performance was evaluated by the intersection over union (IoU) between the annotated and model-predicted data. We found that both DeepLabV3+ and Mask R-CNN predicted the median nerve best, with averaged IoU scores close to 0.83, which indicates the feasibility of automated median nerve segmentation in dynamic sonography using deep learning.
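The IoU score used to evaluate the models above is straightforward to compute from a predicted and an annotated binary mask, as in this small NumPy sketch (the toy masks are illustrative).

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between a predicted and an annotated binary mask."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0   # two empty masks count as agreement

pred = np.zeros((128, 128), dtype=bool); pred[30:90, 40:100] = True
gt = np.zeros((128, 128), dtype=bool);   gt[35:95, 45:105] = True
print(round(iou_score(pred, gt), 3))                 # ~0.725 for these toy masks
```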
Collapse
|