1
Masumoto R, Eguchi Y, Takeuchi H, Inage K, Narita M, Shiga Y, Inoue M, Toshi N, Tokeshi S, Okuyama K, Ohyama S, Suzuki N, Maki S, Furuya T, Ohtori S, Orita S. Automatic generation of diffusion tensor imaging for the lumbar nerve using convolutional neural networks. Magn Reson Imaging 2024; 114:110237. PMID: 39278577. DOI: 10.1016/j.mri.2024.110237.
Abstract
PURPOSE: Diffusion tensor imaging (DTI) with tractography is useful for the functional diagnosis of degenerative lumbar disorders. However, it is not widely used in clinical settings because of the time and health care provider costs of performing it manually on hospital workstations. The purpose of this study was to construct a system that extracts the lumbar nerve and generates tractography automatically using deep learning semantic segmentation. METHODS: We acquired 839 axial diffusion-weighted images (DWI) from the DTI data of 90 patients with degenerative lumbar disorders and segmented the lumbar nerve roots using U-Net, a semantic segmentation model. The accuracy of lumbar nerve root segmentation was evaluated for five architectural models using the Dice coefficient. We also created automation scripts for three commercially available software tools (MRIcroGL for medical image viewing, Diffusion Toolkit for reconstruction of the DWI data, and TrackVis for creation of the tractography), compared the time required to create the tractography, and evaluated the quality of the automated tractography. RESULTS: Among the five models, ResNet34 performed best, with a Dice coefficient of 0.780. The creation time for automatic lumbar nerve tractography was 191 s, which was 235 s shorter than the manual time of 426 s (p < 0.05). Furthermore, the agreement between manual and automated tractography was 3.67 ± 1.53 (satisfactory). CONCLUSIONS: Using deep learning semantic segmentation, we constructed a system that automatically extracts the lumbar nerve and generates lumbar nerve tractography. This technology makes it possible to analyze lumbar nerve DTI and create tractography automatically, and it is expected to advance the clinical applications of DTI for assessment of the lumbar nerve.
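The Dice coefficient used to score the nerve-root segmentations is straightforward to compute from binary masks; a minimal NumPy sketch (the toy masks below are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 4x4 masks: 3 px predicted, 3 px true, 2 px overlap -> Dice = 4/6 ≈ 0.667
pred = np.zeros((4, 4)); pred[0, 0:3] = 1
truth = np.zeros((4, 4)); truth[0, 1:4] = 1
score = dice_coefficient(pred, truth)
```

A perfect match yields a Dice of 1.0, so the reported 0.780 indicates substantial but imperfect overlap with the manual masks.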
Affiliation(s)
- Rira Masumoto
- Department of Medical Engineering, Faculty of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan.
- Yawara Eguchi
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan; Department of Orthopaedic Surgery, Shimoshizu National Hospital, 934-5, Shikawatashi, Yotsukaido, Chiba 284-0003, Japan.
- Hidenari Takeuchi
- Department of Medical Engineering, Faculty of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
- Kazuhide Inage
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Miyako Narita
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Yasuhiro Shiga
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Masahiro Inoue
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Noriyasu Toshi
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Soichiro Tokeshi
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Kohei Okuyama
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Shuhei Ohyama
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan
- Noritaka Suzuki
- Department of Medical Engineering, Faculty of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
- Satoshi Maki
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan; Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
- Takeo Furuya
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan.
- Seiji Ohtori
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan.
- Sumihisa Orita
- Department of Medical Engineering, Faculty of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan; Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba 260-8670, Japan; Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan.
2
Oza P, Oza U, Oza R, Sharma P, Patel S, Kumar P, Gohel B. Digital mammography dataset for breast cancer diagnosis research (DMID) with breast mass segmentation analysis. Biomed Eng Lett 2024; 14:317-330. PMID: 38374902. PMCID: PMC10874363. DOI: 10.1007/s13534-023-00339-y.
Abstract
Purpose: In the last two decades, computer-aided detection and diagnosis (CAD) systems have been created to help radiologists discover and diagnose lesions observed on breast imaging tests, serving as a second-opinion tool for the radiologist. However, developing algorithms for identifying and diagnosing breast lesions relies heavily on mammographic datasets, and many existing databases do not cover all research needs, such as mammographic masks, radiology reports, and breast composition. This paper introduces and describes a new mammographic database. Methods: The proposed dataset comprises mammograms with several lesion types, such as masses, calcifications, architectural distortions, and asymmetries. In addition, a radiologist's report is provided for each mammogram, describing details of the breast such as breast density, the abnormality present, and the condition of the skin, nipple, and pectoral muscles. Results: We present results of a commonly used segmentation framework trained on the proposed dataset. We used the class of abnormality (benign or malignant) and the breast tissue density provided with each mammogram to analyze the segmentation model's performance with respect to these parameters. Conclusion: The presented dataset provides diverse mammogram images for developing and training models for breast cancer diagnosis applications.
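Analyzing a segmentation model's performance with respect to metadata such as density class, as described above, amounts to grouping per-image scores by label; a small self-contained sketch (the group labels and Dice values are invented for illustration):

```python
from collections import defaultdict

def mean_dice_by_group(records):
    """records: iterable of (group_label, dice) pairs, e.g. one pair per
    test image, where group_label is a density or pathology class."""
    sums = defaultdict(lambda: [0.0, 0])
    for group, dice in records:
        sums[group][0] += dice
        sums[group][1] += 1
    return {g: s / n for g, (s, n) in sums.items()}

# Hypothetical per-image Dice scores tagged by breast composition.
records = [("dense", 0.70), ("dense", 0.74), ("fatty", 0.82), ("fatty", 0.86)]
means = mean_dice_by_group(records)
```

Stratifying scores this way makes it easy to see whether, for example, dense breasts are systematically harder to segment than fatty ones.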
Affiliation(s)
- Urvi Oza
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
- Rajiv Oza
- Rad Imaging, X-Ray and Sonography Clinic, Ahmedabad, India
- Paawan Sharma
- Pandit Deendayal Energy University, Gandhinagar, India
- Samir Patel
- Pandit Deendayal Energy University, Gandhinagar, India
- Bakul Gohel
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
3
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. PMID: 38410114. PMCID: PMC10894909. DOI: 10.3389/fonc.2024.1281922.
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in sensitivity and specificity. With the rapid advancement of deep learning techniques, it is possible to tailor mammography to each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges of implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
4
Liang H, Li Z, Lin W, Xie Y, Zhang S, Li Z, Luo H, Li T, Han S. Enhancing Gastrointestinal Stromal Tumor (GIST) Diagnosis: An Improved YOLOv8 Deep Learning Approach for Precise Mitotic Detection. IEEE Access 2024; 12:116829-116840. DOI: 10.1109/ACCESS.2024.3446613.
Affiliation(s)
- Haoxin Liang
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zhichun Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, SAR, China
- Weijie Lin
- The Second Clinical College, Southern Medical University, Guangzhou, Guangdong, China
- Yuheng Xie
- The Second Clinical College, Southern Medical University, Guangzhou, Guangdong, China
- Shuo Zhang
- School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Zhou Li
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Hongyu Luo
- Department of General Surgery, The Sixth People’s Hospital of Huizhou City, Huizhou, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, SAR, China
- Shuai Han
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
5
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. PMID: 37704183. DOI: 10.1016/j.semcancer.2023.09.001.
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we give an overview of recent advancements in AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes such as metastasis, treatment response, and survival by integrating multi-omics data. We then summarize large-scale databases available for training robust, generalizable, and reproducible deep learning models, and discuss the challenges AI faces in real-world applications, including data curation, model interpretability, and practice regulations. We expect that clinical implementation of AI will provide important guidance for patient-tailored management.
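Point 1) above, improving image quality by data augmentation, is often realized with simple geometric transforms; a minimal NumPy sketch of dihedral (flip/rotation) augmentation, not tied to any specific pipeline in the review:

```python
import numpy as np

def augment_flips_rotations(image: np.ndarray):
    """Return the 8 dihedral variants of a 2D image:
    4 right-angle rotations, each with and without a horizontal flip."""
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

img = np.arange(9).reshape(3, 3)   # stand-in for a mammogram patch
augmented = augment_flips_rotations(img)
```

These label-preserving transforms multiply the effective training-set size by eight at essentially no cost, which is one reason they are a standard first step before more elaborate generative augmentation.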
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
6
Balaji K. Image Augmentation based on Variational Autoencoder for Breast Tumor Segmentation. Acad Radiol 2023; 30 Suppl 2:S172-S183. PMID: 36804294. DOI: 10.1016/j.acra.2022.12.035.
Abstract
RATIONALE AND OBJECTIVES: Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging is a significant step for quantitative radiomics analysis of breast cancer. Manual tumor annotation is time-consuming, requires medical expertise, and is subjective, error-prone, and subject to inter-user discrepancy. A number of recent studies have demonstrated the capability of deep learning models in image segmentation. MATERIALS AND METHODS: We describe a 3D Connected-UNets model for tumor segmentation from 3D magnetic resonance images, based on an encoder-decoder architecture. Because of the restricted training dataset size, a variational autoencoder branch is added to reconstruct the input image itself, in order to regularize the shared decoder and impose additional constraints on its layers. On top of the initial Connected-UNets segmentation, a fully connected 3D conditional random field is used to enhance the segmentation outcomes by exploiting 2D neighboring areas and 3D volume statistics. Moreover, 3D connected-component analysis is used to retain large components and reduce segmentation noise. RESULTS: The proposed method was assessed on two publicly available datasets, namely INbreast and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography, and was also evaluated on a private dataset. CONCLUSION: The experimental results show that the proposed model outperforms state-of-the-art methods for breast tumor segmentation.
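The 3D connected-component cleanup step described above can be sketched with a plain breadth-first search over the 6-neighborhood; this is a simplified stand-in for the paper's post-processing, run here on toy data:

```python
import numpy as np
from collections import deque

def filter_small_components_3d(mask: np.ndarray, min_size: int) -> np.ndarray:
    """Keep only 6-connected foreground components with >= min_size voxels,
    discarding small speckle that is likely segmentation noise."""
    visited = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if visited[start]:
            continue
        queue, component = deque([start]), [start]
        visited[start] = True
        while queue:                      # BFS over one connected component
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbors:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
                    component.append(n)
        if len(component) >= min_size:    # keep only large components
            for voxel in component:
                out[voxel] = True
    return out

# One 4-voxel blob and one isolated speck; min_size=2 removes the speck.
vol = np.zeros((3, 4, 4), dtype=bool)
vol[1, 1, 0:4] = True   # 4-voxel line (kept)
vol[0, 0, 0] = True     # isolated voxel (removed)
cleaned = filter_small_components_3d(vol, min_size=2)
```

In practice a library routine such as `scipy.ndimage.label` would be used instead of hand-rolled BFS; the sketch only shows the principle.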
Affiliation(s)
- K Balaji
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014 India.
7
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272. PMCID: PMC10377683. DOI: 10.3390/cancers15143608.
Abstract
(1) Background: Applying deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Because cancer diagnosis requires very high accuracy and timeliness, and because of the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological imaging modalities, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), together with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The applications of deep learning technology in medical image-based cancer analysis are sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained models based on deep neural networks can still be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
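Of the overfitting prevention methods listed, dropout is the easiest to show in isolation; a NumPy sketch of inverted dropout (the scaling convention most frameworks use), purely illustrative:

```python
import numpy as np

def inverted_dropout(activations: np.ndarray, drop_rate: float,
                     rng=None, train=True) -> np.ndarray:
    """Randomly zero a fraction of activations during training and rescale
    the survivors by 1/keep, so the expected activation is unchanged and
    no rescaling is needed at inference time."""
    if not train or drop_rate == 0.0:
        return activations          # inference path: identity
    rng = rng or np.random.default_rng(0)
    keep = 1.0 - drop_rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

x = np.ones(1000)
dropped = inverted_dropout(x, drop_rate=0.5)  # ~half zeroed, rest scaled to 2.0
```

Because the surviving activations are rescaled, the layer's expected output matches the no-dropout case, which is what makes the train/inference behavior consistent.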
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
8
Zhang XY, Wei Q, Wu GG, Tang Q, Pan XF, Chen GQ, Zhang D, Dietrich CF, Cui XW. Artificial intelligence-based ultrasound elastography for disease evaluation - a narrative review. Front Oncol 2023; 13:1197447. PMID: 37333814. PMCID: PMC10272784. DOI: 10.3389/fonc.2023.1197447.
Abstract
Ultrasound elastography (USE) provides information on tissue stiffness and elasticity complementary to conventional ultrasound imaging. It is noninvasive and radiation-free, and has become a valuable tool for improving diagnostic performance alongside conventional ultrasound imaging. However, diagnostic accuracy can be reduced by high operator dependence and by intra- and inter-observer variability in radiologists' visual assessments. Artificial intelligence (AI) has great potential to perform automatic medical image analysis tasks and provide a more objective, accurate, and intelligent diagnosis. More recently, the enhanced diagnostic performance of AI applied to USE has been demonstrated for various disease evaluations. This review provides an overview of the basic concepts of USE and AI techniques for clinical radiologists, and then introduces applications of AI in USE imaging focusing on the following anatomical sites: the liver, breast, thyroid, and other organs, covering lesion detection and segmentation, machine learning (ML)-assisted classification, and prognosis prediction. In addition, the existing challenges and future trends of AI in USE are discussed.
Affiliation(s)
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Tang
- Department of Ultrasonography, The First Hospital of Changsha, Changsha, China
- Xiao-Fang Pan
- Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Gong-Quan Chen
- Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Di Zhang
- Department of Medical Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
9
Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023; 23:1229. PMID: 36772268. PMCID: PMC9921468. DOI: 10.3390/s23031229.
Abstract
Convolutional Neural Networks (CNNs) have received a large share of research attention in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers have emerged as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study comparing the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of Vision Transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
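One practical difference between the two families studied here is the ViT tokenization step, whose token count grows quadratically with image resolution; a minimal NumPy sketch of non-overlapping patch extraction (the sizes are illustrative, not the paper's settings):

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int) -> np.ndarray:
    """Split an HxW image into non-overlapping patch x patch tiles and
    flatten each to a token vector of length patch*patch, as in the
    ViT embedding stage (before the linear projection)."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    tiles = image.reshape(h // patch, patch, w // patch, patch)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return tiles

img = np.arange(64).reshape(8, 8)   # toy 8x8 "mammogram"
tokens = extract_patches(img, patch=4)  # (8/4)*(8/4) = 4 tokens of length 16
```

Doubling the input resolution quadruples the token count, which is why transformer cost rises steeply in the multi-resolution comparison described above, whereas CNN cost scales with pixel count alone.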
Affiliation(s)
- Marco Cantone
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Francesco Tortorella
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, SA, Italy
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
10
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. PMID: 36685963. PMCID: PMC9846574. DOI: 10.3389/fgene.2022.1097207.
Abstract
Introduction: Breast cancer (BC) is the most common cancer affecting women globally, and of the cancers that afflict women it has the second-highest mortality rate. Breast tumors are of two types: benign (less harmful and unlikely to become breast cancer) and malignant (very dangerous, with aberrant cells that could result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, competent and trained radiologists typically examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer, and aims to compare the performance of a proposed shallow convolutional neural network architecture, in several configurations, against pre-trained deep convolutional neural network architectures on mammography images. In the first approach, mammogram images are pre-processed and then fed to three shallow convolutional neural networks with different representational capacities. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments on two datasets, the accuracies are 80.4% and 89.2% on CBIS-DDSM and 87.8% and 95.1% on INbreast, respectively. Discussion: The experimental findings show that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques on both datasets.
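The fine-tuning approach above amounts to training a small classification head on top of pretrained (frozen) features; a framework-free sketch of that idea using a logistic-regression head over synthetic stand-in "embeddings" (all data and hyperparameters here are toy values, not the paper's setup):

```python
import numpy as np

def train_classifier_head(features, labels, lr=0.5, steps=200, seed=0):
    """Transfer-learning sketch: the 'backbone' features are frozen; only
    a logistic-regression head (w, b) is trained by gradient descent on
    the binary cross-entropy loss."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(steps):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid predictions
        grad = p - labels                      # dL/dz for cross-entropy
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Two well-separated clusters standing in for benign/malignant embeddings.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
w, b = train_classifier_head(feats, labels)
accuracy = float(((feats @ w + b > 0).astype(int) == labels).mean())
```

In real fine-tuning the backbone's later layers are often unfrozen as well; this sketch shows only the head-training stage.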
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
11
Breast Cancer Classification by Using Multi-Headed Convolutional Neural Network Modeling. Healthcare (Basel) 2022; 10:2367. PMID: 36553891. PMCID: PMC9777990. DOI: 10.3390/healthcare10122367.
Abstract
Breast cancer is one of the most widely recognized diseases after skin cancer. Though it can occur in anyone, it is far more common in women. Several imaging techniques, such as breast MRI, X-ray, thermography, mammography, and ultrasound, are used to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: benign, malignant, and normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed using quantitative performance measures for each procedure. The proposed framework proved effective: evaluation on raw images alone gave 78.97% test accuracy and evaluation on masked images gave 81.02% test accuracy, which could reduce human error in the diagnostic cycle. Additionally, the described framework achieves higher accuracy with a multi-headed CNN over the two processed datasets of masked and original images, where accuracy rose to 92.31% (±2) with a Mean Squared Error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data input. Finally, a web interface has been built to make the model usable by non-technical personnel.
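A multi-headed network processes each input type in its own branch before fusing; a minimal NumPy forward-pass sketch of that idea (the dimensions and random weights are arbitrary placeholders, not the paper's architecture):

```python
import numpy as np

def two_head_forward(raw_feat, mask_feat, w_raw, w_mask, w_out):
    """Multi-headed fusion sketch: each input type (raw image features,
    mask image features) passes through its own linear+ReLU 'head';
    the head outputs are concatenated and mapped to class probabilities."""
    h_raw = np.maximum(raw_feat @ w_raw, 0.0)    # branch for raw images
    h_mask = np.maximum(mask_feat @ w_mask, 0.0) # branch for masked images
    fused = np.concatenate([h_raw, h_mask])      # feature-level fusion
    scores = fused @ w_out                       # 3 classes: benign/malignant/normal
    e = np.exp(scores - scores.max())
    return e / e.sum()                           # softmax

rng = np.random.default_rng(0)
probs = two_head_forward(rng.normal(size=8), rng.normal(size=8),
                         rng.normal(size=(8, 4)), rng.normal(size=(8, 4)),
                         rng.normal(size=(8, 3)))
```

The point of the two heads is that each branch can learn features suited to its own input distribution before the fusion layer combines them, which is consistent with the accuracy gain the abstract reports over single-input evaluation.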
12
An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. PMID: 35851592. PMCID: PMC9293883. DOI: 10.1038/s41598-022-15632-6.
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification, integrated sequentially into one framework, to assist the radiologist with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (ResNet50V2, ResNet101V2, and ResNet152V2). The work addresses classifying the detected and segmented breast masses as malignant or benign, diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and classifying the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and on an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) pathology classification, with accuracies of 95.13%, 99.20%, and 95.88% on CBIS-DDSM, INbreast, and the private dataset, respectively; (2) BI-RADS category classification, with accuracies of 85.38%, 99%, and 96.08% on the same datasets; and (3) shape classification, with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that the proposed integrated framework benefits from all automated stages and outperforms the latest deep learning methodologies.
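The average-ensemble baseline mentioned above combines the per-class probabilities of the individual ResNets before picking a class; a minimal sketch (the probability values are invented for illustration):

```python
import numpy as np

def average_ensemble_predict(prob_sets: np.ndarray) -> np.ndarray:
    """prob_sets: (n_models, n_samples, n_classes) per-model class
    probabilities. Averages over models, then takes the argmax class."""
    mean_probs = prob_sets.mean(axis=0)
    return mean_probs.argmax(axis=1)

# Three hypothetical ResNet outputs for two samples, two classes
# (index 0 = benign, index 1 = malignant).
p = np.array([
    [[0.60, 0.40], [0.30, 0.70]],
    [[0.55, 0.45], [0.40, 0.60]],
    [[0.45, 0.55], [0.20, 0.80]],
])
preds = average_ensemble_predict(p)  # -> benign for sample 0, malignant for sample 1
```

Stacking, as used in the paper, goes one step further: instead of a fixed average, a meta-learner (here XGBoost) is trained on the base models' outputs to learn how to weight them.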
|
13
|
AlEisa HN, Touiti W, Ali ALHussan A, Ben Aoun N, Ejbali R, Zaied M, Saadia A. Breast Cancer Classification Using FCN and Beta Wavelet Autoencoder. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8044887. [PMID: 35785059 PMCID: PMC9246636 DOI: 10.1155/2022/8044887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 06/04/2022] [Indexed: 11/17/2022]
Abstract
In this paper, a new classification approach for breast cancer based on Fully Convolutional Networks (FCNs) and a Beta Wavelet Autoencoder (BWAE) is presented. The FCN, a powerful image segmentation model, is used to extract the relevant information from mammography images: it identifies the relevant zones, while the BWAE models the extracted information for those zones. In fact, the WAE has proven superior to the majority of feature extraction approaches. The fusion of these two techniques improves the feature extraction phase by keeping and modeling only the relevant and useful features for the identification and description of breast masses. The experimental results showed the effectiveness of the proposed method, which gave very encouraging results in comparison with state-of-the-art approaches on the same mammographic image base. A precision of 94% for benign and 93% for malignant masses was achieved, with a recall of 92% for benign and 95% for malignant; for normal cases, a rate of 100% was reached.
Affiliation(s)
- Hussah Nasser AlEisa
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Wajdi Touiti
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Amel Ali ALHussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al Baha University, Al Baha, Saudi Arabia
- REGIM-Lab, Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Sfax, Tunisia
- Ridha Ejbali
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Mourad Zaied
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Ayesha Saadia
- Department of Computer Science, Faculty of Computing and Artificial Intelligence, Air University, PAF Complex, Islamabad, Pakistan
|
14
|
Characteristics of Computed Tomography Images for Patients with Acute Liver Injury Caused by Sepsis under Deep Learning Algorithm. CONTRAST MEDIA & MOLECULAR IMAGING 2022; 2022:9322196. [PMID: 35360262 PMCID: PMC8958061 DOI: 10.1155/2022/9322196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 02/18/2022] [Accepted: 02/21/2022] [Indexed: 11/17/2022]
Abstract
This study aimed to explore the application of image segmentation based on a fully convolutional network (FCN) to liver computed tomography (CT) images and to analyze the clinical features of acute liver injury caused by sepsis. The Sigmoid function, an encoder-decoder structure, and a weighted cross-entropy loss function were introduced and optimized on top of the FCN, and the Dice value, precision, recall, volume overlap error (VOE), relative volume difference (RVD), and root mean square error (RMSE) of the optimized algorithms were compared and analyzed. 92 patients with sepsis were enrolled and divided into a nonacute liver injury group (50 cases) and an acute liver injury group (42 cases) according to whether they had acute liver injury. The two groups were compared on the proportions of patients with different disease histories, the proportions with different infection sites, the number of organ failures, and the length of stay in the intensive care unit (ICU). The optimized-window CT image Dice value after preprocessing (0.704 ± 0.06) was significantly higher than with the other two methods (P < 0.05). The Dice value, precision, and recall of the optimized-FCN algorithm were (0.826 ± 0.06), (0.91 ± 0.08), and (0.88 ± 0.09), respectively, significantly higher than those of the other algorithms (P < 0.05), while its VOE, RVD, and RMSE values of (21.19 ± 1.97), (10.45 ± 1.02), and (0.25 ± 0.02) were significantly lower (P < 0.05). The proportion of patients with a history of drinking was lower in the nonacute liver injury group than in the acute liver injury group (P < 0.05), and the proportion with a history of hypotension was markedly higher in the acute liver injury group (P < 0.01).
CT images of sepsis patients with acute liver injury showed large areas of liver parenchyma mixed with high-density hematoma, and the number of organ failures and the length of ICU stay were significantly higher than in the nonacute liver injury group (P < 0.05). These results show that the FCN-based optimization algorithm greatly improved the performance of CT image segmentation, and that long-term drinking, low blood pressure, the number of organ failures, and the length of ICU stay were all related to sepsis-induced acute liver injury. The conclusions of this study could provide a reference for the diagnosis and prognosis of acute liver injury caused by sepsis.
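The overlap metrics reported in this entry (Dice, VOE, RVD) can be computed from binary masks as below. This is a generic sketch of the standard definitions on toy data, not the paper's implementation; the paper may scale VOE and RVD as percentages.

```python
# Standard segmentation-overlap metrics on flat binary masks (1 = organ pixel).

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def voe(pred, truth):
    """Volume overlap error: 1 - |A∩B| / |A∪B|."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return 1 - inter / union

def rvd(pred, truth):
    """Relative volume difference: (|A| - |B|) / |B|."""
    return (sum(pred) - sum(truth)) / sum(truth)

pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(round(dice(pred, truth), 3))  # 0.75
print(round(voe(pred, truth), 3))   # 0.4
print(rvd(pred, truth))             # 0.0 (equal volumes)
```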
|
15
|
Sperm morphology analysis by using the fusion of two-stage fine-tuned deep networks. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
16
|
An U, Bhardwaj A, Shameer K, Subramanian L. High Precision Mammography Lesion Identification From Imprecise Medical Annotations. Front Big Data 2021; 4:742779. [PMID: 34977563 PMCID: PMC8716325 DOI: 10.3389/fdata.2021.742779] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 10/20/2021] [Indexed: 11/21/2022] Open
Abstract
Breast cancer screening using mammography serves as the earliest defense against breast cancer, revealing anomalous tissue years before it can be detected through physical screening. Despite the use of high-resolution radiography, the presence of densely overlapping patterns challenges the consistency of human-driven diagnosis and drives interest in leveraging the state-of-the-art localization ability of deep convolutional neural networks (DCNNs). The growing availability of digitized clinical archives enables the training of deep segmentation models, but training on the most widely available form of annotation, coarse hand-drawn outlines, works against learning the precise boundary of cancerous tissue and produces results aligned with the annotations rather than the underlying lesions. The expense of collecting high-quality pixel-level data in medical science makes this even more difficult. To surmount this fundamental challenge, we propose LatentCADx, a deep learning segmentation model capable of precisely annotating the cancer lesions underlying hand-drawn annotations, which we obtain procedurally using joint classification training and a strict segmentation penalty. We demonstrate LatentCADx on a publicly available dataset of 2,620 mammogram case files, where it obtains a classification ROC of 0.97, an AP of 0.87, and a segmentation AP of 0.75 (IOU = 0.5), giving comparable or better performance than other models. Qualitative and precision evaluation of LatentCADx annotations on validation samples reveals that LatentCADx increases the specificity of segmentations beyond that of existing models trained on hand-drawn annotations, with pixel-level specificity reaching 0.90. It also obtains sharp boundaries around lesions, unlike other methods, reducing the confused pixels in the output by more than 60%.
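The pixel-level specificity quoted above is a plain confusion-matrix quantity: of all truly negative (non-lesion) pixels, the fraction the model also marks negative. A generic sketch on toy masks, not the paper's code:

```python
# Pixel-level specificity of a segmentation: TN / (TN + FP) over non-lesion pixels.

def specificity(pred, truth):
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    return tn / (tn + fp)

pred  = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
truth = [0, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(round(specificity(pred, truth), 3))  # 0.857 (6 of 7 true negatives kept)
```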
Affiliation(s)
- Ulzee An
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Ankit Bhardwaj
- Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Lakshminarayanan Subramanian
- Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Department of Population Health, NYU Grossman School of Medicine, New York University, New York, NY, United States
|
17
|
Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 2021; 7:151. [PMID: 34857755 PMCID: PMC8640011 DOI: 10.1038/s41523-021-00358-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 11/01/2021] [Indexed: 12/19/2022] Open
Abstract
Breast cancer analysis requires radiologists to inspect mammograms for suspicious breast lesions and to identify mass tumors. Artificial intelligence techniques offer automatic breast mass segmentation systems to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variations are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) in the two standard UNets to emphasize the contextual information within the encoder–decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with additional synthetic data generated by a cycle-consistent Generative Adversarial Network (CycleGAN) between the two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27%, respectively, on CBIS-DDSM, INbreast, and the private dataset.
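For a single predicted/ground-truth mask pair, Dice and IoU are deterministically related (IoU = Dice / (2 - Dice)), which is handy for sanity-checking reported scores. Note that per-dataset averages of Dice and IoU, as reported above, need not satisfy the identity exactly.

```python
# Single-pair Dice <-> IoU conversion (does not hold exactly for dataset averages).

def iou_from_dice(d):
    return d / (2 - d)

def dice_from_iou(j):
    return 2 * j / (1 + j)

# Applying the identity to the reported CBIS-DDSM Dice of 0.8952:
print(round(iou_from_dice(0.8952), 4))  # 0.8103; the reported 80.02% IoU is a per-image average, so it differs
```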
|
18
|
Deep Learning-Based CT Imaging in Diagnosing Myeloma and Its Prognosis Evaluation. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5436793. [PMID: 34552707 PMCID: PMC8452442 DOI: 10.1155/2021/5436793] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Revised: 08/20/2021] [Accepted: 08/23/2021] [Indexed: 11/30/2022]
Abstract
Imaging examination plays an important role in the early diagnosis of myeloma. This study focused on the segmentation performance of deep learning-based models on CT images for myeloma and on the influence of different chemotherapy regimens on patient prognosis. Specifically, 186 patients with suspected myeloma were the study subjects. An adapted U-Net model was used to segment the CT images, and a Faster region-based convolutional neural network (Faster RCNN) model was then used to label the lesions. Patients were divided into a bortezomib group (group 1, n = 128) and a non-bortezomib group (group 2, n = 58), and the biochemical indexes, blood routine indexes, and skeletal muscle of the two groups were compared before and after chemotherapy. The results showed that the improved U-Net model achieved good segmentation, and that the Faster RCNN model labeled the lesion areas in the CT images with a classification accuracy as high as 99%. Compared with group 1, group 2 showed enlarged psoas major and erector spinae muscles after treatment and decreased bone marrow plasma cell content, blood M protein, urine 24-h light chain, pBNP, β-2 microglobulin (β2MG), ALP, and white blood cell (WBC) levels (P < 0.05). In conclusion, deep learning is well suited to the segmentation and classification of CT images for myeloma and can improve detection accuracy. Both chemotherapy regimens improved patient prognosis, but the effects of non-bortezomib chemotherapy were better.
|
19
|
Breast Cancer Segmentation Methods: Current Status and Future Potentials. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9962109. [PMID: 34337066 PMCID: PMC8321730 DOI: 10.1155/2021/9962109] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/14/2021] [Accepted: 06/11/2021] [Indexed: 12/24/2022]
Abstract
Early breast cancer detection is one of the most important issues to address worldwide, as it can help increase the survival rate of patients. Mammograms have been used to detect breast cancer in the early stages; detection in the early stages can drastically reduce treatment costs. The detection of tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis and includes detection, feature extraction, classification, and treatment; it helps physicians quantify the volume of tissue in the breast for treatment planning. In this work, we have grouped segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based segmentation; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. The findings of our study revealed that region-based segmentation, particularly region growing, is the most frequently used classical technique, that the median filter is a robust tool for removing noise, and that the MIAS database is the one most often used with classical segmentation methods. In machine learning segmentation, unsupervised methods are used more frequently. For deep learning, U-Net is the model most frequently used for mammogram segmentation, both because it does not require many annotated images compared with other deep learning models and because high-performance GPU computing makes it easy to train networks with more layers. The reviewed papers also showed that it is possible to train a deep learning model without performing any preprocessing or postprocessing.
Additionally, we identified the widely used mammogram databases, of which 3 are public and 28 are private.
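Region growing, which the review identifies as the most common classical technique, can be sketched in a few lines: starting from a seed pixel, absorb 4-connected neighbors whose intensity stays within a tolerance of the seed value. This is a generic toy illustration, not any reviewed paper's implementation.

```python
# Minimal 4-connected region growing on a toy grayscale image.
from collections import deque

def region_grow(img, seed, tol):
    """Return a binary mask of pixels reachable from `seed` with |I - I_seed| <= tol."""
    h, w = len(img), len(img[0])
    sr, sc = seed
    base = img[sr][sc]
    mask = [[0] * w for _ in range(h)]
    mask[sr][sc] = 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] \
               and abs(img[nr][nc] - base) <= tol:
                mask[nr][nc] = 1
                q.append((nr, nc))
    return mask

img = [
    [10, 11, 50, 52],
    [12, 10, 51, 53],
    [90, 91, 10, 11],
]
mask = region_grow(img, (0, 0), tol=5)
print(mask)  # grows over the connected top-left patch of 10-12 values only
```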
|
20
|
Meng W, Sun Y, Qian H, Chen X, Yu Q, Abiyasi N, Yan S, Peng H, Zhang H, Zhang X. Computer-Aided Diagnosis Evaluation of the Correlation Between Magnetic Resonance Imaging With Molecular Subtypes in Breast Cancer. Front Oncol 2021; 11:693339. [PMID: 34249745 PMCID: PMC8260834 DOI: 10.3389/fonc.2021.693339] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 05/26/2021] [Indexed: 12/25/2022] Open
Abstract
Background: There is a demand for additional alternative methods that allow the breast tumor to be differentiated into molecular subtypes precisely and conveniently. Purpose: The present study aimed to determine suitable optimal classifiers and to investigate the general applicability of computer-aided diagnosis (CAD) for associating breast cancer molecular subtypes with extracted MR imaging features. Methods: We analyzed a total of 264 patients (mean age: 47.9 ± 9.7 years; range: 19–81 years) with 264 masses (mean size: 28.6 ± 15.86 mm; range: 5–91 mm) using a U-Net model for segmentation and Gradient Tree Boosting (GTB) for classification. Results: The tumors were segmented clearly and automatically by the U-Net model. All the extracted features, including the shape features, the texture features of the tumors, and the clinical features, were input into the classifiers. The GTB classifier was superior to the other classifiers, achieving an F1-score of 0.72, an AUC of 0.81, and a score of 0.71. Analyzing the different feature combinations, we found that the texture features combined with the clinical features are the optimal features for differentiating breast cancer subtypes. Conclusion: CAD is feasible for differentiating breast cancer subtypes; automatic segmentation is feasible with the U-Net model, and the texture features extracted from breast MR imaging, together with the clinical features, can help differentiate the molecular subtype. Moreover, among the clinical features, BPE and age have the best potential for subtype differentiation.
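The F1-score reported for the GTB classifier is the harmonic mean of precision and recall; a quick helper makes such numbers easy to sanity-check. The precision/recall values in the example are hypothetical, chosen only to show a pair consistent with an F1 of 0.72; the paper reports only the F1 itself.

```python
# F1-score: harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Hypothetical precision/recall pair consistent with the reported F1 of 0.72:
print(round(f1(0.75, 0.69), 2))  # 0.72
```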
Affiliation(s)
- Wei Meng
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Yunfeng Sun
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Haibin Qian
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaodan Chen
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Qiujie Yu
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Nanding Abiyasi
- Department of Pathology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Shaolei Yan
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Haiyong Peng
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Hongxia Zhang
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiushi Zhang
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
|
21
|
Alanazi SA, Kamruzzaman MM, Islam Sarker MN, Alruwaili M, Alhwaiti Y, Alshammari N, Siddiqi MH. Boosting Breast Cancer Detection Using Convolutional Neural Network. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5528622. [PMID: 33884157 PMCID: PMC8041556 DOI: 10.1155/2021/5528622] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 02/21/2021] [Accepted: 03/24/2021] [Indexed: 01/27/2023]
Abstract
Breast cancer forms in breast cells and is a very common type of cancer in women; after lung cancer, it is among the most life-threatening cancers in women. A convolutional neural network (CNN) method is proposed in this study to boost the automatic identification of breast cancer by analyzing hostile ductal carcinoma tissue zones in whole-slide images (WSIs). The paper investigates a system that uses various CNN architectures to automatically detect breast cancer and compares the results with those from machine learning (ML) algorithms. All architectures were trained on a large dataset of about 275,000 RGB image patches of 50 × 50 pixels. Validation tests were performed for quantitative results using the performance measures for every methodology. The proposed system proved successful, achieving 87% accuracy, which could reduce human mistakes in the diagnosis process; this is 9% higher than the 78% accuracy achieved by the ML algorithms.
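The CNNs in this entry train on 50 × 50-pixel patches tiled from whole-slide images. A minimal sketch of non-overlapping patch extraction on a toy 2-D "image" (the real pipeline works on RGB WSIs and is not reproduced here):

```python
# Non-overlapping square patch extraction from a 2-D image given as nested lists.

def extract_patches(img, size):
    """Tile `img` into size x size patches, row-major, dropping ragged edges."""
    h, w = len(img), len(img[0])
    return [
        [row[c:c + size] for row in img[r:r + size]]
        for r in range(0, h - size + 1, size)
        for c in range(0, w - size + 1, size)
    ]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
patches = extract_patches(img, 2)
print(len(patches))  # 4
print(patches[0])    # [[0, 1], [4, 5]]
```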
Affiliation(s)
- Saad Awadh Alanazi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
- M. M. Kamruzzaman
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
- Md Nazirul Islam Sarker
- School of Political Science and Public Administration, Neijiang Normal University, Neijiang, China
- Madallah Alruwaili
- Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
- Yousef Alhwaiti
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
- Nasser Alshammari
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
- Muhammad Hameed Siddiqi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Saudi Arabia
|