1. Dal E, Srivastava A, Chigarira B, Hage Chehade C, Matthew Thomas V, Galarza Fortuna GM, Garg D, Ji R, Gebrael G, Agarwal N, Swami U, Li H. Effectiveness of ChatGPT 4.0 in Telemedicine-Based Management of Metastatic Prostate Carcinoma. Diagnostics (Basel) 2024; 14:1899. PMID: 39272684; PMCID: PMC11394468; DOI: 10.3390/diagnostics14171899. Received: 06/10/2024; Revised: 07/29/2024; Accepted: 08/22/2024.
Abstract
The recent rise in telemedicine, notably during the COVID-19 pandemic, highlights the potential of integrating artificial intelligence tools in healthcare. This study assessed the effectiveness of ChatGPT versus medical oncologists in the telemedicine-based management of metastatic prostate cancer. In this retrospective study, 102 patients who met inclusion criteria were analyzed to compare the competencies of ChatGPT and oncologists in telemedicine consultations. ChatGPT's role in pre-charting and determining the need for in-person consultations was evaluated. The primary outcome was the concordance between ChatGPT and oncologists in treatment decisions. Results showed a moderate concordance (Cohen's Kappa = 0.43, p < 0.001). The number of diagnoses made by both parties was not significantly different (median number of diagnoses: 5 vs. 5, p = 0.12). In conclusion, ChatGPT exhibited moderate agreement with oncologists in management via telemedicine, indicating the need for further research to explore its healthcare applications.
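The agreement statistic reported here, Cohen's kappa, corrects raw observed agreement for the agreement expected by chance. A minimal sketch of how such a statistic could be computed from two raters' decisions (function name and labels are illustrative, not the study's actual data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same cases."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.43, as reported, sits in the conventional "moderate agreement" band (roughly 0.41 to 0.60).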
Affiliation(s)
- Emre Dal, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Ayana Srivastava, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Beverly Chigarira, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Chadi Hage Chehade, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Diya Garg, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Richard Ji, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Georges Gebrael, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Neeraj Agarwal, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Umang Swami, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- Haoran Li, Department of Medical Oncology, University of Kansas Cancer Center, Westwood, KS 66205, USA

2. Selvakanmani S, Dharani Devi G, Rekha V, Jeyalakshmi J. Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach. Journal of Imaging Informatics in Medicine 2024; 37:1488-1504. PMID: 38424280; PMCID: PMC11300768; DOI: 10.1007/s10278-024-01035-8. Received: 11/17/2023; Revised: 01/11/2024; Accepted: 01/30/2024.
Abstract
Breast cancer is a deadly disease that causes a considerable number of fatalities among women worldwide. To enhance patient outcomes and survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to address the limitations of scarce labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, mammography and MRI images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train a global model while maintaining data decentralization. Our proposed methodology capitalizes on the power of a pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet using breast cancer datasets from diverse medical centers, we enable the model to learn specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets such as ImageNet. To overcome domain shift caused by variations in data distributions across medical centers, we introduce domain adversarial training: the model learns to minimize the domain discrepancy while maximizing classification accuracy, facilitating the acquisition of domain-invariant features. We conducted extensive experiments on diverse breast cancer datasets obtained from multiple medical centers and compared the proposed approach against traditional standalone training and federated learning without domain adaptation. Compared with traditional models, our proposed model achieved a classification accuracy of 98.8% and a computational time of 12.22 s. The results showcase promising enhancements in classification accuracy and model generalization, underscoring the potential of our method to improve breast cancer classification performance while upholding data privacy in a federated healthcare environment.
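The abstract does not name its aggregation rule, but a common choice for jointly training a global model across centers is a sample-size-weighted average of client parameters (the FedAvg update). A minimal sketch of that aggregation step, assuming each client's fine-tuned layers are flattened into a parameter vector:

```python
def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors into a global model via a
    sample-size-weighted average, so raw patient data never leaves a center."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its dataset size.
            global_w[i] += (n / total) * w[i]
    return global_w
```

In each round, the server would broadcast `global_w` back to the three centers, which fine-tune locally before the next aggregation.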
Affiliation(s)
- Selvakanmani S, Department of Information Technology, R.M.K Engineering College, Chennai, Tamil Nadu, India
- G Dharani Devi, Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Rekha V, Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, Tamil Nadu, India
- J Jeyalakshmi, Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidhyapeetham, Chennai, India

3. Qu G, Song Q, Fang T. The artistic image processing for visual healing in smart city. Sci Rep 2024; 14:16846. PMID: 39039163; PMCID: PMC11263401; DOI: 10.1038/s41598-024-68082-7. Received: 10/27/2023; Accepted: 07/19/2024.
Abstract
This study investigates the processing of artistic images within the context of smart city (SC) initiatives, focusing on the visual healing effects of artistic image processing to enhance urban residents' mental health and quality of life. First, it examines the role of artistic image processing techniques in visual healing. Second, deep learning technology is introduced and improved: an overlapping segmentation vision transformer (OSViT) is proposed for image blocks and further integrated with the bidirectional long short-term memory (BiLSTM) algorithm, yielding an innovative artistic image processing and classification recognition model based on OSViT-BiLSTM. Finally, the visual healing effect of the processed artistic images in different scenes is analyzed. The results demonstrate that the proposed model achieves a classification recognition accuracy of 92.9% for artistic images, at least 6.9% higher than that of other existing model algorithms. Additionally, over 90% of users report satisfaction with the visual healing effects of the artistic images. The proposed model can therefore accurately identify artistic images, enhance their beauty and artistry, and improve the visual healing effect. This study provides an experimental reference for incorporating visual healing into SC initiatives.
Affiliation(s)
- Guangfu Qu, School of Film, Shandong University of Arts, Jinan, 250300, China
- Qian Song, Sichuan Fine Arts Institute, Chongqing, 400053, China
- Ting Fang, Sichuan Fine Arts Institute, Chongqing, 400053, China

4. Ram TB, Krishnan S, Jeevanandam J, Danquah MK, Thomas S. Emerging Biohybrids of Aptamer-Based Nano-Biosensing Technologies for Effective Early Cancer Detection. Mol Diagn Ther 2024; 28:425-453. PMID: 38775897; DOI: 10.1007/s40291-024-00717-x. Accepted: 05/01/2024.
Abstract
Cancer is a leading global cause of mortality, which underscores the imperative of early detection for improved patient outcomes. Biorecognition molecules, especially aptamers, have emerged as highly effective tools for early and accurate cancer cell identification. Aptamers, with superior versatility in synthesis and modification, offer enhanced binding specificity and stability compared with conventional antibodies. Hence, this article reviews diagnostic strategies employing aptamer-based biohybrid nano-biosensing technologies, focusing on their utility in detecting cancer biomarkers and abnormal cells. Recent developments include the synthesis of nano-aptamers using diverse nanomaterials, such as metallic nanoparticles, metal oxide nanoparticles, carbon-derived substances, and biohybrid nanostructures. The integration of these nanomaterials with aptamers significantly enhances sensitivity and specificity, promising innovative and efficient approaches for cancer diagnosis. This convergence of nanotechnology with aptamer research holds the potential to revolutionize cancer treatment through rapid, accurate, and non-invasive diagnostic methods.
Affiliation(s)
- Jaison Jeevanandam, CQM-Centro de Química da Madeira, Universidade da Madeira, Campus da Penteada, 9020-105, Funchal, Madeira, Portugal
- Michael K Danquah, Department of Chemical and Biomolecular Engineering, University of Tennessee, Knoxville, TN, USA
- Sabu Thomas, School of Polymer Science and Technology and School of Chemical Sciences, Mahatma Gandhi University, Kottayam, Kerala, India

5. Safdar Ali Khan M, Husen A, Nisar S, Ahmed H, Shah Muhammad S, Aftab S. Offloading the computational complexity of transfer learning with generic features. PeerJ Comput Sci 2024; 10:e1938. PMID: 38660182; PMCID: PMC11041970; DOI: 10.7717/peerj-cs.1938. Received: 11/14/2023; Accepted: 02/19/2024.
Abstract
Deep learning approaches are generally complex, requiring extensive computational resources and exhibiting high time complexity. Transfer learning is a state-of-the-art approach to reducing the requirement for high computational resources by using pre-trained models without compromising accuracy and performance. In conventional studies, pre-trained models are trained on datasets from different but similar domains with many domain-specific features. The computational requirements of transfer learning depend directly on the number of features, which include both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset, evaluated with performance metrics such as precision, accuracy, recall, and F1-score alongside computational requirements. Discarding domain-specific features up to a specific limit provides significant performance improvements and minimizes the computational requirements in terms of training time (reduced by approx. 12%), processor utilization (reduced by approx. 25%), and memory usage (reduced by approx. 22%). The proposed transfer learning strategy increases accuracy (by approx. 7%) while expeditiously offloading computational complexity.
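One way to picture the core idea: treat the tail of a pre-trained model's feature vector as domain-specific and keep only a leading fraction, which shrinks the downstream classification head proportionally. A toy sketch of that accounting (the split point and the single-dense-layer FLOP model are illustrative, not the paper's actual procedure):

```python
def discard_domain_specific(feature_vector, keep_ratio):
    """Keep only the leading fraction of a pre-trained feature vector,
    treating the discarded tail as domain-specific (split point illustrative)."""
    k = max(1, int(len(feature_vector) * keep_ratio))
    return feature_vector[:k]

def head_flops(num_features, num_classes):
    # Multiply-adds in a single dense classification layer over the features.
    return num_features * num_classes
```

With a 0.75 keep ratio, the head over the reduced features costs 25% fewer multiply-adds, mirroring the direction (though not the exact mechanism) of the reported compute savings.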
Affiliation(s)
- Muhammad Safdar Ali Khan, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Arif Husen, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan; Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Punjab, Pakistan
- Shafaq Nisar, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Hasnain Ahmed, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Syed Shah Muhammad, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Shabib Aftab, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan

6. Kumar V, Prabha C, Sharma P, Mittal N, Askar SS, Abouhawwash M. Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images. BMC Med Imaging 2024; 24:63. PMID: 38500083; PMCID: PMC10946139; DOI: 10.1186/s12880-024-01241-4. Received: 12/29/2023; Accepted: 03/07/2024.
Abstract
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors developed a support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, thereby contributing to better health outcomes and reducing the mortality rate associated with this condition. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into one of four categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer. The fusion model, like all the other models, achieved 100% precision in classifying squamous cells; overall, the fusion model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data coverage, the authors implemented a data augmentation strategy, which also helped address the issue of imprecise accuracy.
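The abstract does not describe how the fusion model combines the three networks; a common late-fusion scheme simply averages the per-class probabilities each model emits and predicts the highest-scoring class. A minimal sketch under that assumption:

```python
def fuse_predictions(model_probs, weights=None):
    """Late fusion: (optionally weighted) average of the class-probability
    vectors produced by several models for the same input image."""
    n_models = len(model_probs)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(model_probs[0])
    fused = [0.0] * n_classes
    for w, probs in zip(weights, model_probs):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

def predict(probs):
    # Index of the highest-probability class.
    return max(range(len(probs)), key=probs.__getitem__)
```

Here each inner list would be one backbone's softmax output over the four image categories; equal weights are an assumption.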
Affiliation(s)
- Vinod Kumar, Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Chander Prabha, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Preeti Sharma, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Nitin Mittal, Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, Haryana, India
- S S Askar, Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash, Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt

7. Ayana G, Lee E, Choe SW. Vision Transformers for Breast Cancer Human Epidermal Growth Factor Receptor 2 Expression Staging without Immunohistochemical Staining. The American Journal of Pathology 2024; 194:402-414. PMID: 38096984; DOI: 10.1016/j.ajpath.2023.11.015. Received: 08/03/2023; Revised: 10/10/2023; Accepted: 11/20/2023.
Abstract
Accurate staging of human epidermal growth factor receptor 2 (HER2) expression is vital for evaluating breast cancer treatment efficacy. However, it typically involves costly and complex immunohistochemical staining, along with hematoxylin and eosin staining. This work presents customized vision transformers for staging HER2 expression in breast cancer using only hematoxylin and eosin-stained images. The proposed algorithm comprised three modules: a localization module for weakly localizing critical image features using spatial transformers, an attention module for global learning via vision transformers, and a loss module to determine proximity to a HER2 expression level based on input images by calculating ordinal loss. Results, reported with 95% CIs, reveal the proposed approach's success in HER2 expression staging: area under the receiver operating characteristic curve, 0.9202 ± 0.01; precision, 0.922 ± 0.01; sensitivity, 0.876 ± 0.01; and specificity, 0.959 ± 0.02 over fivefold cross-validation. Comparatively, this approach significantly outperformed conventional vision transformer models and state-of-the-art convolutional neural network models (P < 0.001). Furthermore, it surpassed existing methods when evaluated on an independent test data set. This work holds great importance, aiding HER2 expression staging in breast cancer treatment while circumventing the costly and time-consuming immunohistochemical staining procedure, thereby addressing diagnostic disparities in low-resource settings and low-income countries.
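The abstract does not give the exact form of its ordinal loss; one common formulation encodes an ordinal level as cumulative binary targets, so that predictions far from the true level incur a larger penalty than near misses. A sketch of that idea for four HER2 levels (0, 1+, 2+, 3+), not necessarily the paper's loss:

```python
def ordinal_targets(level, num_levels):
    """Encode ordinal class `level` (0-based) as cumulative binary targets:
    target i answers 'is the true level greater than i?'."""
    return [1.0 if i < level else 0.0 for i in range(num_levels - 1)]

def ordinal_loss(pred_cum_probs, level, num_levels):
    """Mean squared error against the cumulative encoding, so the penalty
    grows with the ordinal distance between prediction and truth."""
    targets = ordinal_targets(level, num_levels)
    return sum((p - t) ** 2 for p, t in zip(pred_cum_probs, targets)) / len(targets)
```

Predicting 3+ when the truth is 2+ (distance 1) is penalized less than predicting 0 (distance 2), which is the "proximity to a HER2 expression level" behavior the module is described as providing.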
Affiliation(s)
- Gelan Ayana, Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Eonjin Lee, Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe, Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea

8. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. PMID: 38104516; DOI: 10.1016/j.compbiomed.2023.107777. Received: 08/24/2023; Revised: 10/30/2023; Accepted: 11/28/2023.
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval, and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts behind the many medical image identification methods, including machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies. We reviewed recent studies to provide a comprehensive overview of how these methods are applied to various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized both by application scenario (classification, segmentation, detection, and image registration) and by application area (pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebrae, musculoskeletal, and others). Finally, open challenges and directions for future research are critically discussed; in particular, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li, School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang, School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An, School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang, School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong, School of Information Engineering, Wuhan Business University, Wuhan, 430056, China

9. Azam S, Montaha S, Raiaan MAK, Rafid AKMRH, Mukta SH, Jonkman M. An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images. Journal of Imaging Informatics in Medicine 2024; 37:45-59. PMID: 38343240; DOI: 10.1007/s10278-023-00925-7. Received: 06/15/2023; Revised: 09/22/2023; Accepted: 10/23/2023.
Abstract
An automated computer-aided approach might aid radiologists in diagnosing breast cancer at a primary stage. This study proposes a novel decision support system to classify breast tumors as benign or malignant based on clinically important features extracted from ultrasound images. Nine handcrafted features, which align with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of each ultrasound image. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, yielding test accuracies in the range of 96 to 99%. In addition, four feature selection techniques are explored, with two features eliminated according to each method's feature ranking score. The Random Forest classifier is trained on the four resulting feature sets. Results indicate that eliminating even two features reduces model performance for every feature selection technique, validating the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature to find a threshold range that distinguishes benign from malignant tumors. Based on the threshold ranges of particular features, the decision support system denotes an image as correctly predicted if at least eight out of nine features fall within their threshold ranges. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, outperforming all the previously trained ML models. Moreover, in terms of individual class-based test accuracies, the benign class attains 99.31% (only three of 437 benign instances misclassified) and the malignant class attains 99.52% (only one of 210 malignant instances misclassified). The system is robust, time-effective, and reliable, as the radiologists' criteria are followed, and it may aid specialists in making a diagnosis.
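The eight-of-nine voting rule described above is simple enough to sketch directly; the feature names and threshold ranges below are placeholders, not the fitted PDF-derived ranges from the study:

```python
def within(value, lo, hi):
    # True when the feature value falls inside its threshold range.
    return lo <= value <= hi

def decide(features, malignant_ranges, min_hits=8):
    """Flag a mass as malignant when at least `min_hits` of the nine
    handcrafted features fall inside their malignant threshold ranges."""
    hits = sum(within(v, *malignant_ranges[name]) for name, v in features.items())
    return hits >= min_hits
```

This kind of transparent, per-feature rule is what lets the system stay interpretable to radiologists, unlike a black-box classifier.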
Affiliation(s)
- Sami Azam, Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha, Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman, Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia

10. Kim S, Rakib Hasan K, Ando Y, Ko S, Lee D, Park NJY, Cho J. Improving Tumor-Infiltrating Lymphocytes Score Prediction in Breast Cancer with Self-Supervised Learning. Life (Basel) 2024; 14:90. PMID: 38255705; PMCID: PMC11154396; DOI: 10.3390/life14010090. Received: 11/07/2023; Revised: 01/02/2024; Accepted: 01/03/2024.
Abstract
Tumor microenvironment (TME) plays a pivotal role in immuno-oncology, which investigates the intricate interactions between tumors and the human immune system. Specifically, tumor-infiltrating lymphocytes (TILs) are crucial biomarkers for evaluating the prognosis of breast cancer patients and have the potential to refine immunotherapy precision and accurately identify tumor cells in specific cancer types. In this study, we conducted tissue segmentation and lymphocyte detection tasks to predict TIL scores by employing self-supervised learning (SSL) model-based approaches capable of addressing limited labeling data issues. Our experiments showed a 1.9% improvement in tissue segmentation and a 2% improvement in lymphocyte detection over the ImageNet pre-training model. Using these SSL-based models, we achieved a TIL score of 0.718 with a 4.4% improvement. In particular, when trained with only 10% of the entire dataset, the SwAV pre-trained model exhibited a superior performance over other models. Our work highlights improved tissue segmentation and lymphocyte detection using the SSL model with less labeled data for TIL score prediction.
Affiliation(s)
- Sijin Kim, Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea
- Kazi Rakib Hasan, Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea
- Yu Ando, Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea
- Seokhwan Ko, Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea
- Donghyeon Lee, Department of Biomedical Science, Kyungpook National University, Daegu 41566, Republic of Korea
- Nora Jee-Young Park, Department of Pathology, School of Medicine, Kyungpook National University, Daegu 41944, Republic of Korea; Department of Pathology, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea
- Junghwan Cho, Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea

11. Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). Journal of Healthcare Informatics Research 2023; 7:387-432. PMID: 37927373; PMCID: PMC10620373; DOI: 10.1007/s41666-023-00144-3. Received: 05/22/2022; Revised: 08/14/2023; Accepted: 08/22/2023.
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted where tumor lesions are detected and localized on images. This is a narrative review covering five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, which distinguishes it from other reviews that cover fewer modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms might give a high false positive rate for radiographically dense breasts, ultrasounds with low soft tissue contrast result in early-stage false detection, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Various studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracy, around 99%, with ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared with the other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for artificial intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison, Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan, Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park, Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA

12. Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. The American Journal of Pathology 2023; 193:2080-2098. PMID: 37673327; DOI: 10.1016/j.ajpath.2023.07.007. Received: 03/05/2023; Revised: 06/30/2023; Accepted: 07/19/2023.
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation may incur high costs and be ineffective because diagnosis depends on the facility and on the pathologists' insight and experience. A convolutional neural network can act as a computer-based observer to improve clinicians' capacity in grading breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and were used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offers a patchless automated grading system for magnification-dependent and magnification-independent classifications. With an overall accuracy (means ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
Affiliation(s)
- Vicky Mudeng: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid: Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe: Department of Medical IT Convergence Engineering and Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea

13
Zhang M, He G, Pan C, Yun B, Shen D, Meng M. Discrimination of benign and malignant breast lesions on dynamic contrast-enhanced magnetic resonance imaging using deep learning. J Cancer Res Ther 2023; 19:1589-1596. [PMID: 38156926 DOI: 10.4103/jcrt.jcrt_325_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Accepted: 09/26/2023] [Indexed: 01/03/2024]
Abstract
PURPOSE To evaluate the capability of deep transfer learning (DTL) and fine-tuning methods in differentiating malignant from benign lesions in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). METHODS The diagnostic efficiencies of the VGG19, ResNet50, and DenseNet201 models were tested on the same dataset. The model with the highest performance was selected and modified using three fine-tuning strategies (S1-3). Fifty additional lesions were selected to form the validation set to verify the generalization abilities of these models. The accuracy (Ac) of the different models in the training and test sets, as well as the precision (Pr), recall rate (Rc), F1 score (F1), and area under the receiver operating characteristic curve (AUC), were the primary performance indicators. Finally, the kappa test was used to compare the degree of agreement between the DTL models and pathological diagnosis in differentiating malignant from benign breast lesions. RESULTS The Pr, Rc, F1, and AUC of VGG19 (86.0%, 0.81, 0.81, and 0.81, respectively) were higher than those of DenseNet201 (70.0%, 0.61, 0.63, and 0.61, respectively) and ResNet50 (61.0%, 0.59, 0.59, and 0.59, respectively). After fine-tuning, the Pr, Rc, F1, and AUC of S1 (87.0%, 0.86, 0.86, and 0.86, respectively) were higher than those of VGG19. Notably, the degree of agreement between S1 and pathological diagnosis in differentiating malignant from benign breast lesions (κ = 0.720) was higher than that of DenseNet201 (κ = 0.440), VGG19 (κ = 0.640), and ResNet50 (κ = 0.280). CONCLUSION The VGG19 model is an effective method for identifying benign and malignant breast lesions on DCE-MRI, and its performance can be further improved via fine-tuning. Overall, our findings suggest that this technique holds potential clinical application value.
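The kappa statistic used above to compare the DTL models with pathological diagnosis can be computed directly from paired labels; a minimal plain-Python sketch (the toy labels are illustrative only):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Toy example: model output vs. pathology (1 = malignant, 0 = benign)
model = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 1, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(model, truth), 3))  # 0.5
```

A kappa of 0.720, as reported for S1, indicates substantial agreement under the usual interpretation scale.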
Affiliation(s)
- Ming Zhang, Guangyuan He, Changjie Pan, Dong Shen, Mingzhu Meng: Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital, Nanjing Medical University, Changzhou, Jiangsu Province, P.R. China
- Bing Yun: Teaching and Research Department of English, Nanjing Forestry University, Nanjing 210037, Jiangsu Province, P.R. China

14
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183 DOI: 10.1016/j.semcancer.2023.09.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 08/03/2023] [Accepted: 09/05/2023] [Indexed: 09/15/2023]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to assist in segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. We expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu, Xiang Sean Zhou, Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China

15
Hossain S, Azam S, Montaha S, Karim A, Chowa SS, Mondol C, Zahid Hasan M, Jonkman M. Automated breast tumor ultrasound image segmentation with hybrid UNet and classification using fine-tuned CNN model. Heliyon 2023; 9:e21369. [PMID: 37885728 PMCID: PMC10598544 DOI: 10.1016/j.heliyon.2023.e21369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 10/11/2023] [Accepted: 10/20/2023] [Indexed: 10/28/2023] Open
Abstract
Introduction Breast cancer stands as the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly mitigate mortality rates. Purpose The study aims to classify breast ultrasound images into benign and malignant tumors. This approach involves segmenting the breast's region of interest (ROI) with an optimized UNet architecture and classifying the ROIs with a shallow CNN model optimized through an ablation study. Method Several image processing techniques are utilized to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is done to verify that the enhanced image quality is satisfactory. With the processed dataset, segmentation of the breast tumor ROI is carried out, optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumors into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer with mammogram images. Result The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. The performance of the proposed model is further compared with five transfer learning models for both pre-segmented and post-segmented datasets. K-fold cross-validation is done to assess the RKONet-13 model's performance stability. Furthermore, the performance of the proposed model is compared with previous literature, where it outperforms existing methods, demonstrating its effectiveness in breast cancer diagnosis. Lastly, the model demonstrates its robustness for breast cancer classification, delivering an exceptional performance of 96.21% on a mammogram dataset.
Conclusion The efficacy of this study relies on image pre-processing, segmentation with a hybrid attention UNet, and classification with a fine-tuned, robust CNN model. This comprehensive approach aims to determine an effective technique for detecting breast cancer within ultrasound images.
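The abstract reports classification accuracy; for the segmentation stage, overlap metrics such as the Dice coefficient are the usual complement. A minimal sketch, not taken from the paper:

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice overlap between two binary masks, given as flat lists of 0/1."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # Two empty masks agree perfectly by convention
    return 2 * intersection / total if total else 1.0

# Toy flattened masks for illustration
pred = [0, 1, 1, 1, 0, 0]
true = [0, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, true))  # 0.8
```

In practice the masks would be the flattened UNet output and the ground-truth annotation for each ultrasound image.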
Affiliation(s)
- Shahed Hossain, Sadia Sultana Chowa, Chaity Mondol, Md Zahid Hasan: Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1341, Bangladesh
- Sami Azam, Asif Karim, Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina 0909, NT, Australia
- Sidratul Montaha: Department of Computer Science, University of Calgary, Calgary, AB T2N 1N4, Canada

16
Ashurov A, Chelloug SA, Tselykh A, Muthanna MSA, Muthanna A, Al-Gaashani MSAM. Improved Breast Cancer Classification through Combining Transfer Learning and Attention Mechanism. Life (Basel) 2023; 13:1945. [PMID: 37763348 PMCID: PMC10532552 DOI: 10.3390/life13091945] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 09/16/2023] [Accepted: 09/17/2023] [Indexed: 09/29/2023] Open
Abstract
Breast cancer, a leading cause of female mortality worldwide, poses a significant health challenge. Recent advancements in deep learning techniques have revolutionized breast cancer pathology by enabling accurate image classification. Various imaging methods, such as mammography, CT, MRI, ultrasound, and biopsies, aid in breast cancer detection, and computer-assisted pathological image classification is of paramount importance for diagnosis. This study introduces a novel approach to breast cancer histopathological image classification. It leverages modified pre-trained CNN models and attention mechanisms to enhance model interpretability and robustness, emphasizing localized features and enabling accurate discrimination of complex cases. Our method involves transfer learning with deep CNN models (Xception, VGG16, ResNet50, MobileNet, and DenseNet121) augmented with the convolutional block attention module (CBAM). The pre-trained models are fine-tuned, and the two CBAM modules are incorporated at the end of the pre-trained models. The models are compared with state-of-the-art breast cancer diagnosis approaches and evaluated for accuracy, precision, recall, and F1 score, with confusion matrices used to assess and visualize the results. The test accuracy rates for the attention mechanism (AM) using the Xception model on the BreakHis breast cancer dataset are encouraging at 99.2% and 99.5%; the test accuracy for DenseNet121 with AMs is 99.6%. The proposed approaches also performed better than previous approaches examined in related studies.
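CBAM combines channel and spatial attention, passing average- and max-pooled descriptors through a shared MLP; the sketch below keeps only the channel-gating idea and drops the learned MLP, so it illustrates the mechanism rather than reproducing the module used in the study:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """Simplified CBAM-style channel attention (no learned MLP):
    each channel is rescaled by a gate computed from its average-
    and max-pooled descriptors."""
    gated = []
    for channel in feature_maps:          # channel: 2D list (H x W)
        flat = [v for row in channel for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        gate = sigmoid(avg_pool + max_pool)   # gate in (0, 1)
        gated.append([[v * gate for v in row] for row in channel])
    return gated
```

In the full module the gate comes from a learned MLP shared across both pooled descriptors, followed by a spatial attention map; here the gate is a fixed function, purely for illustration.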
Affiliation(s)
- Asadulla Ashurov: School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Samia Allaoua Chelloug: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Alexey Tselykh, Mohammed Saleh Ali Muthanna: Institute of Computer Technologies and Information Security, Southern Federal University, Taganrog 347922, Russia
- Ammar Muthanna: RUDN University, 6 Miklukho-Maklaya Street, Moscow 117198, Russia
- Mehdhar S. A. M. Al-Gaashani: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

17
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 INTELLIGENT METHODS, SYSTEMS, AND APPLICATIONS (IMSA) 2023. [DOI: 10.1109/imsa58542.2023.10217585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Affiliation(s)
- Abd El-Naser A. Mohamed: Electronics and Electrical Communications Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
- Alaa A. Hefnawy, Hassan E. Zidan: Computers and Systems Department, Electronics Research Institute (ERI), Cairo, Egypt
- Ghada M. El-Banby, Samir Mohamed Badawy: Industrial Electronics and Control Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt

18
Xiao H, Liu Q, Li L. MFMANet: Multi-feature Multi-attention Network for efficient subtype classification on non-small cell lung cancer CT images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023]
19
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852 PMCID: PMC10440543 DOI: 10.1117/1.jmi.10.4.044504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 07/12/2023] [Accepted: 08/01/2023] [Indexed: 08/24/2023] Open
Abstract
Purpose Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance using predictions based on the AI prognostic marker derived from CXR images. Conclusions This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
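The AUC figure of merit used here equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney interpretation); a minimal sketch with purely illustrative scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a positive case outscores a negative
    case (Mann-Whitney U / c-statistic; ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model outputs: ICU patients vs. non-ICU patients
print(auc_from_scores([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))  # 8/9 ≈ 0.889
```

Production code would use an efficient rank-based computation, but the pairwise form makes the probabilistic meaning of the reported 0.78 explicit.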
Affiliation(s)
- Hui Li, Karen Drukker, Qiyuan Hu, Heather M. Whitney, Jordan D. Fuhrman, Maryellen L. Giger: Department of Radiology, The University of Chicago, Chicago, Illinois, United States

20
Hwang I, Trivedi H, Brown-Mulry B, Zhang L, Nalla V, Gastounioti A, Gichoya J, Seyyed-Kalantari L, Banerjee I, Woo M. Impact of multi-source data augmentation on performance of convolutional neural networks for abnormality classification in mammography. FRONTIERS IN RADIOLOGY 2023; 3:1181190. [PMID: 37588666 PMCID: PMC10426498 DOI: 10.3389/fradi.2023.1181190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 05/30/2023] [Indexed: 08/18/2023]
Abstract
Introduction To date, most mammography-related AI models have been trained using either film or digital mammogram datasets, with little overlap. We investigated whether combining film and digital mammography during training helps or hinders modern models designed for use on digital mammograms. Methods To this end, a total of six binary classifiers were trained for comparison. The first three classifiers were trained using images only from the Emory Breast Imaging Dataset (EMBED) with ResNet50, ResNet101, and ResNet152 architectures. The next three classifiers were trained using images from EMBED, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and the Digital Database for Screening Mammography (DDSM). All six models were tested only on digital mammograms from EMBED. Results The results showed that performance degradation of the customized ResNet models was statistically significant overall when the EMBED dataset was augmented with CBIS-DDSM/DDSM. While performance degradation was observed in all racial subgroups, some subgroups suffered more severe drops than others. Discussion The degradation may potentially be due to (1) a mismatch in features between film-based and digital mammograms and (2) a mismatch in pathologic and radiological information. In conclusion, use of both film and digital mammography during training may hinder modern models designed for breast cancer screening. Caution is required when combining film-based and digital mammograms or when utilizing pathologic and radiological information simultaneously.
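Surfacing the per-subgroup degradation reported above amounts to stratifying a metric by subgroup; a minimal sketch (the records and group labels are hypothetical, not from EMBED):

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: list of (subgroup, predicted_label, true_label) tuples.
    Returns per-subgroup accuracy, to surface uneven performance drops."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, true in records:
        total[group] += 1
        correct[group] += int(pred == true)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical records for illustration
recs = [("A", 1, 1), ("A", 0, 1), ("B", 1, 1), ("B", 0, 0)]
print(accuracy_by_subgroup(recs))  # {'A': 0.5, 'B': 1.0}
```

Running this once per trained classifier (with and without the film-based data) would reproduce the kind of stratified comparison the study describes.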
Affiliation(s)
- InChan Hwang, Beatrice Brown-Mulry, Linglin Zhang, MinJae Woo: School of Data Science and Analytics, Kennesaw State University, Kennesaw, GA, United States
- Hari Trivedi, Judy Gichoya: Department of Radiology, Emory University, Atlanta, GA, United States
- Vineela Nalla: Department of Information Technology, Kennesaw State University, Kennesaw, GA, United States
- Aimilia Gastounioti: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, United States
- Laleh Seyyed-Kalantari: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Imon Banerjee: Department of Radiology, Mayo Clinic Arizona, Phoenix, AZ, United States

21
Zakareya S, Izadkhah H, Karimpour J. A New Deep-Learning-Based Model for Breast Cancer Diagnosis from Medical Images. Diagnostics (Basel) 2023; 13:diagnostics13111944. [PMID: 37296796 DOI: 10.3390/diagnostics13111944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/15/2023] [Accepted: 05/28/2023] [Indexed: 06/12/2023] Open
Abstract
Breast cancer is one of the most prevalent cancers among women worldwide, and early detection of the disease can be lifesaving. Detecting breast cancer early allows treatment to begin sooner, increasing the chances of a successful outcome. Machine learning helps in the early detection of breast cancer even in places with no access to a specialist doctor. The rapid advancement of machine learning, and particularly deep learning, has increased the medical imaging community's interest in applying these techniques to improve the accuracy of cancer screening. However, data related to diseases are often scarce, while deep-learning models need large amounts of data to learn well; for this reason, existing deep-learning models cannot perform as well on medical images as on other images. To overcome this limitation and improve breast cancer classification, this paper proposes a new deep model inspired by two state-of-the-art deep networks, GoogLeNet and the residual block, together with several new features. Utilizing granular computing, shortcut connections, two learnable activation functions in place of traditional activation functions, and an attention mechanism is expected to improve diagnostic accuracy and consequently decrease the load on doctors. Granular computing can improve diagnosis accuracy by capturing more detailed and fine-grained information about cancer images. The proposed model's superiority is demonstrated by comparing it with several state-of-the-art deep models and existing works using two case studies. The proposed model achieved an accuracy of 93% and 95% on ultrasound images and breast histopathology images, respectively.
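The abstract does not specify the two learnable activation functions; the parametric ReLU below is a common example of an activation with a trainable parameter, shown only to illustrate the idea:

```python
def prelu(x, alpha):
    """Parametric ReLU: identity for positive inputs, learnable slope
    `alpha` for negative inputs. During training, `alpha` would be
    updated by gradient descent like any other weight."""
    return x if x > 0 else alpha * x

print(prelu(2.0, 0.25))   # 2.0
print(prelu(-4.0, 0.25))  # -1.0
```

Unlike a fixed ReLU, the negative-side slope is learned per channel, letting the network decide how much negative signal to keep.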
Affiliation(s)
- Salman Zakareya, Jaber Karimpour: Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
- Habib Izadkhah: Department of Computer Science and Research Department of Computational Algorithms and Mathematical Models, University of Tabriz, Tabriz 5166616471, Iran

22
Gadallah MT, Mohamed AEA, Hefnawy A, Zidan H, El-banby G, Badawy SM. A Mathematical Model for Simulating Photoacoustic Signal Generation Process in Biological Tissues.. [DOI: 10.21203/rs.3.rs-2928563/v2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
Background: Biomedical photoacoustic imaging (PAI) is a hybrid imaging modality based on laser-generated ultrasound waves arising from the photoacoustic (PA) effect, a physical phenomenon first reported by A. G. Bell in 1880. Numerical modeling-based simulation of the PA signal generation process in biological tissues helps researchers reduce error trials in vitro and hence decrease error rates in in-vivo experiments. Numerical modeling methods offer a rapid modeling procedure compared with pure mathematics. However, if a proper simplified mathematical model can be formulated before applying numerical modeling techniques, it is a great advantage for the overall numerical model. Most scientific theories, equations, and assumptions proposed to mathematically model the complete PA signal generation and propagation process in biological tissues are complicated, so researchers, especially beginners, find it difficult to obtain a proper simplified mathematical model describing the process. That is why this paper is introduced.
Methods: In this paper we simplify the description of biomedical PA wave generation and propagation, deducing a simplified mathematical model for the whole process. The proposed model is based on three steps: (a) pulsed laser irradiance, (b) diffusion of light through biological tissue, and (c) acoustic pressure wave generation and propagation from the target tissue to the ultrasound transducer surface. COMSOL Multiphysics, which is based on the finite element method (FEM), was utilized to validate the proposed mathematical model on a simulated biological tissue containing a tumor.
Results and Conclusion: The time-dependent study performed in COMSOL confirmed that the proposed mathematical model can serve as a simplified, easy, and fast starting point for researchers to numerically model and simulate biomedical PA signal generation and propagation using any suitable software such as COMSOL.
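The textbook relations underlying any such PA model (standard results, not reproduced from the paper itself) connect the absorbed optical energy to the initial pressure rise and the subsequent wave propagation:

```latex
% Initial pressure rise from optical absorption:
% \Gamma = Grueneisen parameter, \mu_a = absorption coefficient,
% \Phi = local optical fluence, \beta = thermal expansion coefficient,
% C_p = specific heat, c = speed of sound
p_0(\mathbf{r}) = \Gamma\, \mu_a(\mathbf{r})\, \Phi(\mathbf{r}),
\qquad \Gamma = \frac{\beta c^2}{C_p}

% PA wave equation driven by the heating function H(\mathbf{r},t)
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) p(\mathbf{r},t)
  = -\frac{\beta}{C_p}\,\frac{\partial H(\mathbf{r},t)}{\partial t}
```

Under stress and thermal confinement, the heating function reduces to an instantaneous deposition and the initial pressure relation above becomes the source term for the wave equation.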
23
Gadallah MT, Mohamed AEA, Hefnawy A, Zidan H, El-banby G, Badawy SM. A Mathematical Model for Simulating Photoacoustic Signal Generation Process in Biological Tissues.. [DOI: 10.21203/rs.3.rs-2928563/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
Background
Biomedical photoacoustic imaging (PAI) is a hybrid imaging modality based on laser-generated ultrasound waves arising from the photoacoustic (PA) effect, a physical phenomenon first reported by A. G. Bell in 1880. Numerical modeling-based simulation of the PA signal generation process in biological tissues helps researchers reduce error trials in vitro and hence decrease error rates in in-vivo experiments. Numerical modeling methods offer a rapid modeling procedure compared with pure mathematics. However, if a proper simplified mathematical model can be formulated before applying numerical modeling techniques, it is a great advantage for the overall numerical model. Many scientific theories, equations, and assumptions in the biomedical PA imaging literature have been proposed to mathematically model the complete PA signal generation and propagation process in biological tissues, but most have complicated details, so researchers, especially beginners, find it difficult to obtain a proper simplified mathematical model describing the process. That is why this paper is introduced.
Methods
In this paper we simplify the description of biomedical PA wave generation and propagation, deducing a simplified mathematical model for the whole process. The proposed model is based on three steps: (a) pulsed laser irradiance, (b) diffusion of light through biological tissue, and (c) acoustic pressure wave generation and propagation from the target tissue to the ultrasound transducer surface.
24
Rehman SU, Khan MA, Masood A, Almujally NA, Baili J, Alhaisoni M, Tariq U, Zhang YD. BRMI-Net: Deep Learning Features and Flower Pollination-Controlled Regula Falsi-Based Feature Selection Framework for Breast Cancer Recognition in Mammography Images. Diagnostics (Basel) 2023; 13:diagnostics13091618. [PMID: 37175009 PMCID: PMC10178634 DOI: 10.3390/diagnostics13091618] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 04/16/2023] [Accepted: 04/26/2023] [Indexed: 05/15/2023] Open
Abstract
The early detection of breast cancer using mammogram images is critical for lowering women's mortality rates and allowing proper treatment. Deep learning techniques are commonly used for feature extraction and have demonstrated significant performance in the literature. However, these features do not perform well in several cases due to redundant and irrelevant information. We created a new framework for diagnosing breast cancer from mammogram images using entropy-controlled deep learning and flower pollination optimization. In the proposed framework, a filter-fusion-based method for contrast enhancement is developed. A pre-trained ResNet-50 model is then improved and trained using transfer learning on both the original and enhanced datasets. In the following phase, deep features are extracted and combined into a single vector using a serial technique known as serial mid-value features. In the next stage, the top features are selected and classified using neural networks and machine learning classifiers; to accomplish the selection, a flower pollination optimization technique with entropy control has been developed. Experiments used three publicly available datasets: CBIS-DDSM, INbreast, and MIAS. On these datasets, the proposed framework achieved 93.8%, 99.5%, and 99.8% accuracy, respectively. The gains in accuracy and reductions in computational time compared with current methods are discussed.
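The serial fusion step described above, combining deep features from two streams into a single vector per sample, amounts to per-sample concatenation. A minimal numpy sketch, with hypothetical feature matrices standing in for the ResNet-50 outputs on the original and enhanced images (the names, shapes, and random values are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deep-feature matrices, one row per mammogram:
# e.g. 2048-dimensional ResNet-50 features for 8 images.
feats_original = rng.random((8, 2048))
feats_enhanced = rng.random((8, 2048))

# Serial fusion: concatenate the two feature vectors of each sample
# into one longer vector along the feature axis.
fused = np.concatenate([feats_original, feats_enhanced], axis=1)
print(fused.shape)  # (8, 4096)
```

A selection stage (such as the entropy-controlled flower pollination step in the paper) would then pick a subset of the 4096 fused columns before classification.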
Collapse
Affiliation(s)
- Shams Ur Rehman
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
| | | | - Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
| | - Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Jamel Baili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
| | - Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha'il, Ha'il 81451, Saudi Arabia
| | - Usman Tariq
- Management Information System Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
| | - Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
| |
Collapse
|
25
|
Classification of Breast Lesions on DCE-MRI Data Using a Fine-Tuned MobileNet. Diagnostics (Basel) 2023; 13:diagnostics13061067. [PMID: 36980377 PMCID: PMC10047403 DOI: 10.3390/diagnostics13061067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 03/06/2023] [Accepted: 03/07/2023] [Indexed: 03/14/2023] Open
Abstract
It is crucial to diagnose breast cancer early and accurately to optimize treatment. Presently, most deep learning models used for breast cancer detection cannot be used on mobile phones or low-power devices. This study intended to evaluate the capabilities of MobileNetV1 and MobileNetV2 and their fine-tuned models to differentiate malignant lesions from benign lesions in breast dynamic contrast-enhanced magnetic resonance images (DCE-MRI).
Collapse
|
26
|
Qiu Y, Lin F, Chen W, Xu M. Pre-training in Medical Data: A Survey. MACHINE INTELLIGENCE RESEARCH 2023. [PMCID: PMC9942039 DOI: 10.1007/s11633-022-1382-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/23/2023]
Abstract
Medical data refers to health-related information associated with regular patient care or collected as part of a clinical trial program. There are many categories of such data, including clinical imaging data, bio-signal data, electronic health records (EHR), and multi-modality medical data. With the development of deep neural networks over the last decade, the pre-training paradigm has become dominant, as it significantly improves the performance of machine learning methods in data-limited scenarios. In recent years, studies of pre-training in the medical domain have achieved significant progress. To summarize these technological advances, this work provides a comprehensive survey of recent advances in pre-training on several major types of medical data. In this survey, we summarize a large number of related publications and the existing benchmarks in the medical domain. In particular, the survey briefly describes how some pre-training methods are applied to, or developed for, medical data. From a data-driven perspective, we examine the extensive use of pre-training in many medical scenarios. Moreover, based on this summary of recent pre-training studies, we identify several challenges in the field to provide insights for future studies.
Collapse
Affiliation(s)
- Yixuan Qiu
- The University of Queensland, Brisbane, 4072 Australia
| | - Feng Lin
- The University of Queensland, Brisbane, 4072 Australia
| | - Weitong Chen
- The University of Adelaide, Adelaide, 5005 Australia
| | - Miao Xu
- The University of Queensland, Brisbane, 4072 Australia
| |
Collapse
|
27
|
Sasikala S, Arun Kumar S, Ezhilarasi M. Improved breast cancer detection using fusion of bimodal sonographic features through binary firefly algorithm. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2164944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Affiliation(s)
- S. Sasikala
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
| | - S. Arun Kumar
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
| | - M. Ezhilarasi
- Department of Electronics & Instrumentation Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
| |
Collapse
|
28
|
Ayana G, Dese K, Dereje Y, Kebede Y, Barki H, Amdissa D, Husen N, Mulugeta F, Habtamu B, Choe SW. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics (Basel) 2023; 13:diagnostics13020178. [PMID: 36672988 PMCID: PMC9857963 DOI: 10.3390/diagnostics13020178] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 12/27/2022] [Accepted: 12/27/2022] [Indexed: 01/06/2023] Open
Abstract
Breast mass identification is a crucial procedure in mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to address this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed to overcome such limitations of CNNs, ensuring better or comparable performance on natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, outperforming the CNN-based transfer learning models and vision transformer models trained from scratch. The technique can, hence, be applied in a clinical setting to improve the early diagnosis of breast cancer.
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Yisak Dereje
- Department of Information Engineering, Marche Polytechnic University, 60121 Ancona, Italy
| | - Yonas Kebede
- Biomedical Engineering Unit, Black Lion Hospital, Addis Ababa University, Addis Ababa 1000, Ethiopia
| | - Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
| | - Dechassa Amdissa
- Department of Basic and Applied Science for Engineering, Sapienza University of Rome, 00161 Roma, Italy
| | - Nahimiya Husen
- Department of Bioengineering and Robotics, Campus Bio-Medico University of Rome, 00128 Roma, Italy
| | - Fikadu Mulugeta
- Center of Biomedical Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa 1000, Ethiopia
| | - Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
| |
Collapse
|
29
|
Ayana G, Choe SW. BUViTNet: Breast Ultrasound Detection via Vision Transformers. Diagnostics (Basel) 2022; 12:diagnostics12112654. [PMID: 36359497 PMCID: PMC9689470 DOI: 10.3390/diagnostics12112654] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 10/26/2022] [Accepted: 10/26/2022] [Indexed: 11/06/2022] Open
Abstract
Convolutional neural networks (CNNs) have enhanced ultrasound-image-based early breast cancer detection. Vision transformers (ViTs) have recently surpassed CNNs as the most effective method for natural image analysis. ViTs have proven capable of incorporating more global information than CNNs at lower layers, and their skip connections are more powerful than those of CNNs, endowing ViTs with superior performance. However, the effectiveness of ViTs in breast ultrasound imaging has not yet been investigated. Here, we present BUViTNet, breast ultrasound detection via ViTs, in which ViT-based multistage transfer learning is performed using ImageNet and cancer cell image datasets prior to transfer learning for classifying breast ultrasound images. We utilized two publicly available breast ultrasound image datasets, Mendeley and breast ultrasound images (BUSI), to train and evaluate our algorithm. The proposed method achieved the highest area under the receiver operating characteristic curve (AUC) of 1 ± 0, Matthews correlation coefficient (MCC) of 1 ± 0, and kappa score of 1 ± 0 on the Mendeley dataset. Furthermore, BUViTNet achieved the highest AUC of 0.968 ± 0.02, MCC of 0.961 ± 0.01, and kappa score of 0.959 ± 0.02 on the BUSI dataset. BUViTNet outperformed a ViT trained from scratch, ViT-based conventional transfer learning, and CNN-based transfer learning in classifying breast ultrasound images (p < 0.01 in all cases). Our findings indicate that improved transformers are effective in analyzing breast images and can provide an improved diagnosis if used in clinical settings. Future work will consider the use of a wider range of datasets and parameters for optimized performance.
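The ViT-based models surveyed in these entries all start from the same core operation: splitting an image into fixed-size patches and linearly projecting each flattened patch into a token sequence. A minimal numpy sketch of that patch-embedding step (the image, patch size, and random projection weights here are illustrative assumptions mirroring a ViT-B/16-style configuration, not the cited model's trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale "ultrasound frame": 224x224, split into 16x16 patches.
img = rng.random((224, 224))
P = 16
n = 224 // P  # 14 patches per side -> 196 patches total

# Rearrange into a sequence of flattened patches: shape (196, 256).
patches = img.reshape(n, P, n, P).transpose(0, 2, 1, 3).reshape(n * n, P * P)

# Learned linear projection to the embedding dimension (random stand-in weights).
W = rng.random((P * P, 768))
tokens = patches @ W
print(tokens.shape)  # (196, 768)
```

In a real ViT these tokens get a prepended class token plus position embeddings before the transformer encoder; in multistage transfer learning, only the later layers and classification head are typically re-trained on each successive dataset.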
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
| | - Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
| |
Collapse
|
30
|
Breast cancer image analysis using deep learning techniques – a survey. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00703-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
31
|
Ayana G, Ryu J, Choe SW. Ultrasound-Responsive Nanocarriers for Breast Cancer Chemotherapy. MICROMACHINES 2022; 13:1508. [PMID: 36144131 PMCID: PMC9503784 DOI: 10.3390/mi13091508] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 09/08/2022] [Accepted: 09/08/2022] [Indexed: 05/13/2023]
Abstract
Breast cancer is the most common type of cancer and is treated with surgical intervention, radiotherapy, chemotherapy, or a combination of these regimens. Despite its ample use, chemotherapy has limitations such as poor bioavailability, adverse side effects, high-dose requirements, low therapeutic indices, development of multidrug resistance, and non-specific targeting. Drug delivery vehicles or carriers, of which nanocarriers are prominent, have been introduced to overcome these limitations. Nanocarriers have been preferentially used in breast cancer chemotherapy because of their role in protecting therapeutic agents from degradation, enabling efficient drug concentration in target cells or tissues, overcoming drug resistance, and their relatively small size. However, nanocarriers are affected by physiological barriers, the bioavailability of transported drugs, and other factors. To resolve these issues, external stimuli have been introduced, such as ultrasound, infrared light, thermal stimulation, microwaves, and X-rays. Recently, ultrasound-responsive nanocarriers have become popular because they are cost-effective, non-invasive, specific, tissue-penetrating, and deliver high drug concentrations to their target. In this paper, we review recent developments in ultrasound-guided nanocarriers for breast cancer chemotherapy, discuss the relevant challenges, and provide insights into future directions.
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
| | - Jaemyung Ryu
- Department of Optical Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
| | - Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
| |
Collapse
|
32
|
Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503 PMCID: PMC9322973 DOI: 10.3390/cancers14143442] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 07/02/2022] [Accepted: 07/12/2022] [Indexed: 02/04/2023] Open
Abstract
Breast cancer is one of the main causes of death for women worldwide, accounting for 16% of diagnosed malignant lesions. It is therefore of paramount importance to diagnose these lesions at the earliest possible stage in order to maximize the chances of survival. While several works present selected topics in this area, none of them offers a complete panorama, that is, from image generation to image interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the careful integration of artificial intelligence concepts and categorical data to generate modern alternatives that can achieve the accuracy, precision, and reliability expected to mitigate misclassifications.
Collapse
Affiliation(s)
- Jesus A. Basurto-Hurtado
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| | - Irving A. Cruz-Albarran
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| | - Manuel Toledano-Ayala
- División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico;
| | - Mario Alberto Ibarra-Manzano
- Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico;
| | - Luis A. Morales-Hernandez
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
| | - Carlos A. Perez-Ramirez
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| |
Collapse
|
33
|
De-Speckling Breast Cancer Ultrasound Images Using a Rotationally Invariant Block Matching Based Non-Local Means (RIBM-NLM) Method. Diagnostics (Basel) 2022; 12:diagnostics12040862. [PMID: 35453909 PMCID: PMC9030862 DOI: 10.3390/diagnostics12040862] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 03/25/2022] [Accepted: 03/29/2022] [Indexed: 12/10/2022] Open
Abstract
Ultrasound is an indispensable imaging modality for diagnosing breast cancer in young women because it efficiently captures tissue properties and decreases the negative recognition rate, thereby avoiding non-essential biopsies. Despite these advantages, ultrasound images are affected by speckle noise, which generates fine false structures that decrease image contrast and obscure the actual tissue boundaries in the image. Moreover, speckle noise negatively impacts subsequent stages of the image processing pipeline, such as edge detection, segmentation, feature extraction, and classification. Previous studies have formulated various speckle reduction methods for ultrasound images; however, these methods fail to retain finer edge details and require long processing times. In this study, we propose a breast ultrasound de-speckling method based on rotationally invariant block matching non-local means (RIBM-NLM) filtering. The effectiveness of our method has been demonstrated by comparing our results with three established de-speckling techniques, the switching bilateral filter (SBF), the non-local means filter (NLMF), and the optimized non-local means filter (ONLMF), on 250 images from a public dataset and 6 images from a private dataset. Evaluation metrics, including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE), were utilized to measure performance. With the proposed method, we recorded an average SSIM of 0.8915, PSNR of 65.97, MSE of 0.014, RMSE of 0.119, and computation time of 82 seconds at a noise variance of 20 dB on the public dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF. Similarly, the proposed method achieved an average SSIM of 0.83, PSNR of 66.26, MSE of 0.015, RMSE of 0.124, and computation time of 83 seconds at a noise variance of 20 dB on the private dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF.
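The MSE and PSNR figures quoted in de-speckling studies like this one follow standard definitions, which can be sketched in a few lines of numpy (the toy images below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two same-shaped images."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / err)

# Toy 2x2 8-bit "images": the denoised result is off by 1 at every pixel.
ref = np.array([[100, 110], [120, 130]])
den = ref + 1
print(mse(ref, den))              # 1.0
print(round(psnr(ref, den), 2))   # 10*log10(255**2 / 1) ≈ 48.13
```

SSIM is more involved (local means, variances, and covariances over sliding windows), so in practice a library implementation such as scikit-image's `structural_similarity` is typically used rather than hand-rolled code.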
Collapse
|
34
|
Ayana G, Park J, Choe SW. Patchless Multi-Stage Transfer Learning for Improved Mammographic Breast Mass Classification. Cancers (Basel) 2022; 14:cancers14051280. [PMID: 35267587 PMCID: PMC8909211 DOI: 10.3390/cancers14051280] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023] Open
Abstract
Simple Summary: In this study, we propose a novel deep-learning method based on multi-stage transfer learning (MSTL) from ImageNet and cancer cell line image pre-trained models to classify mammographic masses as either benign or malignant. The proposed method alleviates the challenge of obtaining large amounts of labeled mammogram training data by utilizing a large number of cancer cell line microscopic images as an intermediate domain of learning between the natural domain (ImageNet) and the medical domain (mammography). Moreover, our method does not rely on patch separation (segmenting the region of interest before classification), which renders it computationally simple and fast compared to previous studies. The findings of this study are of crucial importance for the early diagnosis of breast cancer in young women with dense breasts, because mammography does not provide a reliable diagnosis in such cases.
Abstract: Despite great achievements in classifying mammographic breast-mass images via deep learning (DL), obtaining large amounts of training data and ensuring generalization across different datasets with robust and well-optimized algorithms remain a challenge. ImageNet-based transfer learning (TL) and patch classifiers have been utilized to address these challenges. However, researchers have been unable to achieve the desired performance for DL to be used as a standalone tool. In this study, we propose a novel multi-stage TL method from ImageNet and cancer cell line image pre-trained models to classify mammographic breast masses as either benign or malignant. We trained our model on three public datasets: Digital Database for Screening Mammography (DDSM), INbreast, and Mammographic Image Analysis Society (MIAS). In addition, a mixed dataset of the images from these three datasets was used to train the model. We obtained an average five-fold cross-validation AUC of 1, 0.9994, 0.9993, and 0.9998 for the DDSM, INbreast, MIAS, and mixed datasets, respectively.
Moreover, the observed performance improvement using our method against the patch-based method was statistically significant, with a p-value of 0.0029. Furthermore, our patchless approach performed better than patch- and whole image-based methods, improving test accuracy by 8% (91.41% vs. 99.34%), tested on the INbreast dataset. The proposed method is of significant importance in solving the need for a large training dataset as well as reducing the computational burden in training and implementing the mammography-based deep-learning models for early diagnosis of breast cancer.
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
| | - Jinhyung Park
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
| | - Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
| |
Collapse
|