1
Vijayarajan SM, Manoj Kumar D, Sudha G, Reddy AB. Infrared thermal images using PCSAN-Net-DBOA: An approach of breast cancer classification. Microsc Res Tech 2024;87:1742-1752. [PMID: 38501825] [DOI: 10.1002/jemt.24550]
Abstract
This manuscript proposes a breast cancer classification approach for infrared thermal images using PCSAN-Net-DBOA. Initially, the input images are taken from the Database for Mastology Research with Infrared Image (DMR-IR) dataset. The adaptive distorted Gaussian matched filter (ADGMF) is used to remove noise and improve the quality of the infrared thermal images. Next, the preprocessed images are given to the one-dimensional quantum integer wavelet S-transform (OQIWST) to extract grayscale statistical features: standard deviation, mean, variance, entropy, kurtosis, and skewness. The extracted features are fed into the pyramidal convolution shuffle attention neural network (PCSANN) for categorization. In general, PCSANN does not include an adaptive optimization technique for determining the optimal parameters needed for precise breast cancer categorization, so this research applies the dung beetle optimization algorithm (DBOA) to optimize the PCSANN classifier. The BCD-PCSANN-DBO method is implemented in Python. Performance metrics including accuracy, precision, recall, F1 score, error rate, ROC, and computational time are considered. The BCD-PCSANN-DBO approach attains 29.87%, 28.95%, and 27.92% lower computation time and 13.29%, 14.35%, and 20.54% greater ROC compared with existing methods: breast cancer diagnosis using thermal infrared imaging and machine learning (BCD-CNN), breast cancer classification from thermal images using Grunwald-Letnikov assisted dragonfly algorithm-based deep feature selection (BCD-VGG16), and breast cancer detection in thermograms using deep selection based on a genetic algorithm and Gray Wolf Optimizer (BCD-SqueezeNet), respectively.
RESEARCH HIGHLIGHTS: The input images are taken from the breast cancer dataset for breast cancer classification. The ADGMF is used to remove noise and improve the quality of infrared thermal images. The extracted features are given to the PCSANN for categorization. The DBOA is proposed to optimize the PCSANN classifier so that it classifies breast cancer precisely. The proposed BCD-PCSANN-DBO method is implemented in Python.
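The six first-order grayscale statistics this abstract lists (mean, standard deviation, variance, entropy, kurtosis, and skewness) can be computed directly from pixel intensities. A minimal NumPy sketch, independent of the authors' OQIWST pipeline; the random test image is a hypothetical stand-in for a real thermogram:

```python
import numpy as np

def grayscale_statistics(img):
    """Six first-order statistics over a grayscale image's pixel intensities."""
    x = np.asarray(img, dtype=np.float64).ravel()
    mean = x.mean()
    std = x.std()
    var = x.var()
    # Shannon entropy of the normalized intensity histogram (256 bins).
    hist, _ = np.histogram(x, bins=256, range=(x.min(), x.max() + 1e-12))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    # Standardized third and fourth central moments (epsilon guards flat images).
    z = (x - mean) / (std + 1e-12)
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0  # excess kurtosis
    return {"mean": mean, "std": std, "variance": var,
            "entropy": entropy, "skewness": skewness, "kurtosis": kurtosis}

features = grayscale_statistics(np.random.default_rng(0).integers(0, 256, (64, 64)))
```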
Affiliation(s)
- S M Vijayarajan, Department of Electronics and Communication Engineering, NPR College of Engineering & Technology, Dindigul, Tamil Nadu, India
- D Manoj Kumar, Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, Tamil Nadu, India
- G Sudha, Department of Biomedical Engineering, Muthayammal Engineering College, Tamil Nadu, India
- A Basi Reddy, Department of Computer Science and Engineering, School of Computing, Mohan Babu University, Tirupati, Andhra Pradesh, India
2
Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024;14:848. [PMID: 38667493] [PMCID: PMC11048882] [DOI: 10.3390/diagnostics14080848]
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff, Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy (A.C.; E.V.; P.B.; M.A.)
3
Chen H, Zhou G, He W, Duan X, Jiang H. Classification and identification of agricultural products based on improved MobileNetV2. Sci Rep 2024;14:3454. [PMID: 38342930] [PMCID: PMC10859362] [DOI: 10.1038/s41598-024-53349-w]
Abstract
With the advancement of technology, the demand for increased production efficiency has gradually risen, leading to new trends in agricultural automation and intelligence. Precision classification models play a crucial role in helping farmers accurately identify, classify, and process various agricultural products, thereby enhancing production efficiency and maximizing their economic value. The current MobileNetV2 network model is capable of performing these tasks, but it tends to exhibit recognition biases when identifying different subcategories within agricultural product varieties. To address this challenge, this paper introduces an improved MobileNetV2 convolutional neural network model. First, inspired by the Inception module in GoogLeNet, we combine an improved Inception module with the original residual module to propose a new Res-Inception module. Additionally, to further enhance the model's accuracy in detection tasks, we introduce an efficient multi-scale cross-space learning module (EMA) and embed it into the backbone of the network. Experimental results on the Fruit-360 dataset demonstrate that the improved MobileNetV2 outperforms the original MobileNetV2 in agricultural product classification tasks, with an accuracy increase of 1.86%.
Affiliation(s)
- Haiwei Chen, School of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
- Guohui Zhou, School of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
- Wei He, School of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
- Xiping Duan, School of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
- Huixin Jiang, School of Life Sciences and Technology, Harbin Normal University, Harbin 150025, China
4
Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. Am J Pathol 2023;193:2080-2098. [PMID: 37673327] [DOI: 10.1016/j.ajpath.2023.07.007]
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information, but this manual evaluation may lead to high costs and be ineffective because diagnosis depends on the facility and the pathologists' insights and experience. A convolutional neural network can act as a computer-based observer to improve clinicians' capacity to grade breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offers a patchless automated grading system for both magnification-dependent and magnification-independent classification. With an overall accuracy (mean ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
Affiliation(s)
- Vicky Mudeng, Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid, Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana, Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe, Department of Medical IT Convergence Engineering and Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
5
Rahaman MM, Millar EKA, Meijering E. Breast cancer histopathology image-based gene expression prediction using spatial transcriptomics data and deep learning. Sci Rep 2023;13:13604. [PMID: 37604916] [PMCID: PMC10442349] [DOI: 10.1038/s41598-023-40219-0]
Abstract
Tumour heterogeneity in breast cancer poses challenges in predicting outcome and response to therapy. Spatial transcriptomics technologies may address these challenges, as they provide a wealth of information about gene expression at the cell level, but they are expensive, hindering their use in large-scale clinical oncology studies. Predicting gene expression from hematoxylin and eosin stained histology images provides a more affordable alternative for such studies. Here we present BrST-Net, a deep learning framework for predicting gene expression from histopathology images using spatial transcriptomics data. Using this framework, we trained and evaluated four distinct state-of-the-art deep learning architectures, which include ResNet101, Inception-v3, EfficientNet (with six different variants), and vision transformer (with two different variants), all without utilizing pretrained weights for the prediction of 250 genes. To enhance the generalisation performance of the main network, we introduce an auxiliary network into the framework. Our methodology outperforms previous studies, with 237 genes identified with positive correlation, including 24 genes with a median correlation coefficient greater than 0.50. This is a notable improvement over previous studies, which could predict only 102 genes with positive correlation, with the highest correlation values ranging from 0.29 to 0.34.
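The gene-wise evaluation described here (counting genes whose predicted expression correlates positively with the measured values) can be sketched with plain NumPy; the spot-by-gene arrays below are synthetic stand-ins for real spatial transcriptomics data:

```python
import numpy as np

def per_gene_correlation(y_true, y_pred):
    """Pearson r for each gene (column) between measured and predicted expression.

    y_true, y_pred: arrays of shape (n_spots, n_genes).
    """
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0)) + 1e-12
    return num / den

rng = np.random.default_rng(1)
truth = rng.normal(size=(100, 250))                      # 100 spots, 250 genes
noisy = truth + rng.normal(scale=1.0, size=truth.shape)  # imperfect "predictions"
r = per_gene_correlation(truth, noisy)
n_positive = int((r > 0).sum())
median_r = float(np.median(r))
```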
Affiliation(s)
- Md Mamunur Rahaman, School of Computer Science and Engineering, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
- Ewan K A Millar, Department of Anatomical Pathology, NSW Health Pathology, St. George Hospital, Kogarah, Sydney, NSW 2217, Australia; St. George and Sutherland Clinical School, University of New South Wales, Kensington, Sydney, NSW 2052, Australia; Faculty of Medicine & Health Sciences, Western Sydney University, Campbelltown, Sydney, NSW 2560, Australia
- Erik Meijering, School of Computer Science and Engineering, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
6
Jalloul R, Chethan HK, Alkhatib R. A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images. Diagnostics (Basel) 2023;13:2460. [PMID: 37510204] [PMCID: PMC10378151] [DOI: 10.3390/diagnostics13142460]
Abstract
Cancer is a disease of unregulated cell division that remains largely incurable. Breast cancer is the most prevalent cancer in women worldwide, and early detection can lower death rates. Medical images provide essential information for locating and diagnosing breast cancer. This paper reviews the history of the discipline and examines how deep learning and machine learning are applied to detect breast cancer. The classification of breast cancer using several medical imaging modalities is covered, and the classification systems for tumors, non-tumors, and dense masses across these modalities are explained in detail. The differences between various medical image types are first examined using a variety of study datasets. The numerous machine learning and deep learning methods for diagnosing and classifying breast cancer are then surveyed. Finally, the review addresses the challenges of classification and detection and the best results of the different approaches.
Affiliation(s)
- Reem Jalloul, Maharaja Research Foundation, University of Mysore, Mysuru 570005, India
- H K Chethan, Department of Computer Science and Engineering, Maharaja Research Foundation, Maharaja Institute of Technology, Mysuru 570004, India
- Ramez Alkhatib, Biomaterial Bank Nord, Research Center Borstel Leibniz Lung Center, Parkallee 35, 23845 Borstel, Germany
7
Deng S, Ding J, Wang H, Mao G, Sun J, Hu J, Zhu X, Cheng Y, Ni G, Ao W. Deep learning-based radiomic nomograms for predicting Ki67 expression in prostate cancer. BMC Cancer 2023;23:638. [PMID: 37422624] [DOI: 10.1186/s12885-023-11130-8]
Abstract
BACKGROUND: To explore the value of a multiparametric magnetic resonance imaging (MRI)-based deep learning model for the preoperative prediction of Ki67 expression in prostate cancer (PCa).
MATERIALS: The data of 229 patients with PCa from two centers were retrospectively analyzed and divided into training, internal validation, and external validation sets. Deep learning features were extracted and selected from each patient's prostate multiparametric MRI (diffusion-weighted imaging, T2-weighted imaging, and contrast-enhanced T1-weighted imaging sequences) data to establish a deep radiomic signature and construct models for the preoperative prediction of Ki67 expression. Independent predictive risk factors were identified and incorporated into a clinical model, and the clinical and deep learning models were combined to obtain a joint model. The predictive performance of the multiple deep learning models was then evaluated.
RESULTS: Seven prediction models were constructed: one clinical model, three deep learning models (the DLRS-Resnet, DLRS-Inception, and DLRS-Densenet models), and three joint models (the Nomogram-Resnet, Nomogram-Inception, and Nomogram-Densenet models). The areas under the curve (AUCs) of the clinical model in the testing, internal validation, and external validation sets were 0.794, 0.711, and 0.75, respectively. The AUCs of the deep models and joint models ranged from 0.939 to 0.993. The DeLong test revealed that the predictive performance of the deep learning models and the joint models was superior to that of the clinical model (p < 0.01). The predictive performance of the DLRS-Resnet model was inferior to that of the Nomogram-Resnet model (p < 0.01), whereas the predictive performance of the remaining deep learning models and joint models did not differ significantly.
CONCLUSION: The multiple easy-to-use deep learning-based models for predicting Ki67 expression in PCa developed in this study can help physicians obtain more detailed prognostic data before a patient undergoes surgery.
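The AUC values reported above can be computed without any curve plotting, since AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A small sketch with made-up scores, not the study's data:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC as the probability that a random positive outranks a random negative
    (the Mann-Whitney U formulation), with ties counted as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Explicit pairwise comparison; fine for modest cohorts like n = 229.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores and ground-truth labels.
auc = auc_from_scores([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```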
Affiliation(s)
- Shuitang Deng, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
- Jingfeng Ding, Department of Radiology, Shanghai Putuo District People's Hospital, Shanghai, China
- Hui Wang, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
- Guoqun Mao, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
- Jing Sun, Department of Radiology, Shanghai Putuo District People's Hospital, Shanghai, China
- Jinwen Hu, Department of Radiology, Shanghai Putuo District People's Hospital, Shanghai, China
- Xiandi Zhu, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
- Yougen Cheng, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
- Genghuan Ni, Department of Radiology, The Second Affiliated Hospital of Jiaxing University, Jiaxing, Zhejiang Province, China
- Weiqun Ao, Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234 Gucui Road, Hangzhou 310012, Zhejiang Province, China
8
Pati A, Parhi M, Pattanayak BK, Singh D, Singh V, Kadry S, Nam Y, Kang BG. Breast Cancer Diagnosis Based on IoT and Deep Transfer Learning Enabled by Fog Computing. Diagnostics (Basel) 2023;13:2191. [PMID: 37443585] [DOI: 10.3390/diagnostics13132191]
Abstract
Across all countries, both developing and developed, women face the greatest risk of breast cancer. Patients whose breast cancer is diagnosed and staged early have a better chance of receiving treatment before the disease spreads. The automatic analysis and classification of medical images are made possible by today's technology, allowing for quicker and more accurate data processing. The Internet of Things (IoT) is now crucial for the early and remote diagnosis of chronic diseases. In this study, mammography images from the publicly available online repository The Cancer Imaging Archive (TCIA) were used to train a deep transfer learning (DTL) model for an autonomous breast cancer diagnostic system. The data were pre-processed before being fed into the model. A popular deep learning (DL) technique, convolutional neural networks (CNNs), was combined with transfer learning (TL) models such as ResNet50, InceptionV3, AlexNet, VGG16, and VGG19 and a support vector machine (SVM) classifier to boost prediction accuracy. Extensive simulations were analyzed using a variety of performance and network metrics to demonstrate the viability of the proposed paradigm. Outperforming some current works based on mammogram images, the experimental accuracy, precision, sensitivity, specificity, and F1 scores reached 97.99%, 99.51%, 98.43%, 80.08%, and 98.97%, respectively, on a large dataset of mammography images categorized as benign or malignant. By incorporating Fog computing technologies, this model safeguards the privacy and security of patient data, reduces the load on centralized servers, and increases throughput.
Affiliation(s)
- Abhilash Pati, Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Manoranjan Parhi, Centre for Data Sciences, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Binod Kumar Pattanayak, Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Debabrata Singh, Department of Computer Applications, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Vijendra Singh, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
- Seifedine Kadry, Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon; MEU Research Unit, Middle East University, Amman 11831, Jordan
- Yunyoung Nam, Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Byeong-Gwon Kang, Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
9
Balaji P, Muniasamy V, Bilfaqih SM, Muniasamy A, Tharanidharan S, Mani D, Alsid LEG. Chimp Optimization Algorithm Influenced Type-2 Intuitionistic Fuzzy C-Means Clustering-Based Breast Cancer Detection System. Cancers (Basel) 2023;15:1131. [PMID: 36831474] [PMCID: PMC9953815] [DOI: 10.3390/cancers15041131]
Abstract
In recent years, breast cancer detection has been an important area of concentration in medical image processing and analysis. Detecting a disease at an early stage is an important factor in advancing to the next level of treatment, and accuracy plays an important role in that detection. In this paper, COA-T2FCM (Chimp Optimization Algorithm Based Type-2 Intuitionistic Fuzzy C-Means Clustering) is constructed to detect such malignancy with high accuracy. The proposed detection process combines type-2 intuitionistic fuzzy c-means clustering with an oppositional function. In the type-2 intuitionistic fuzzy c-means clustering, an efficient cluster center is selected using the chimp optimization algorithm: first the objective function of the clustering is considered, then the chimp optimization algorithm is used to optimize the cluster center and fuzzifier. The proposed technique is implemented, and performance metrics such as specificity, sensitivity, accuracy, Jaccard Similarity Index (JSI), and Dice Similarity Coefficient (DSC) are assessed. The proposed technique is compared with conventional techniques such as fuzzy c-means clustering and k-means clustering, as well as with existing methods, to confirm its accuracy. The proposed algorithm is tested for effectiveness on mammogram images from three different datasets: the Mini-Mammographic Image Analysis Society (Mini-MIAS), the Digital Database for Screening Mammography (DDSM), and INbreast. Accuracy and the Jaccard index score are used to measure the similarity between the proposed output and the actual cancer-affected regions of the image. On average, the proposed method achieved an accuracy of 97.29% and a JSI of 95.
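For orientation, plain (type-1) fuzzy c-means, the base algorithm that COA-T2FCM extends with intuitionistic type-2 memberships and chimp optimization, alternates two updates: recompute cluster centers from fuzzified memberships, then recompute memberships from distances. A toy NumPy sketch on synthetic 1-D data, not the authors' method:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate membership and center updates.

    X: (n_samples, n_features); m > 1 is the fuzzifier. Returns (centers, U).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)  # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centers, U

# Two well-separated 1-D clusters around 0.1 and 5.1.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_c_means(X, c=2)
```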
Affiliation(s)
- Prasanalakshmi Balaji, College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia (Correspondence)
- Vasanthi Muniasamy, Applied Science College, Mahala Campus, King Khalid University, Abha 61421, Saudi Arabia
- Sridevi Tharanidharan, Applied Science College, Mahala Campus, King Khalid University, Abha 61421, Saudi Arabia
- Devi Mani, College of Science and Arts, Sarat Abidah Campus, King Khalid University, Abha 61421, Saudi Arabia
10
Kani MAJM, Parvathy MS, Banu SM, Kareem MSA. Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-221386]
Abstract
In this article, a methodological approach to classifying malignant melanoma in dermoscopy images is presented. Early treatment of skin cancer increases the patient's survival rate, and classifying melanoma in its early stages allows dermatologists to treat the patient appropriately. Dermatologists need more time to diagnose affected skin lesions due to the high resemblance between melanoma and benign lesions. In this paper, a deep learning-based computer-aided diagnosis (CAD) system is developed to classify skin lesions accurately with a high classification rate. A new architecture is framed to classify skin lesion diseases using the Inception v3 model as the baseline architecture. The features extracted from the Inception network are flattened and given to a DenseNet block to extract finer-grained features of the lesion. The International Skin Imaging Collaboration (ISIC) archive dataset contains 3307 dermoscopy images, including both benign and malignant skin images. The dataset images are trained using the proposed architecture with a learning rate of 0.0001 and a batch size of 64 using various optimizers. The performance of the proposed model is evaluated using a confusion matrix and ROC-AUC curves. The experimental results show that the proposed model attains a highest accuracy rate of 91.29% compared to other state-of-the-art methods such as ResNet, VGG-16, DenseNet, and MobileNet. The classification accuracy, sensitivity, specificity, testing accuracy, and AUC values obtained were 90.33%, 82.87%, 91.29%, 87.12%, and 87.40%, respectively.
Affiliation(s)
- Mohamed Ali Jinna Mathina Kani, Computer Science and Engineering, Sethu Institute of Technology (Affiliated to Anna University), Pulloor, Kariyapatti, Tamil Nadu, India
- Meenakshi Sundaram Parvathy, Computer Science and Engineering, Sethu Institute of Technology (Affiliated to Anna University), Pulloor, Kariyapatti, Tamil Nadu, India
11
Wang D, Chen X, Wu Y, Tang H, Deng P. Artificial intelligence for assessing the severity of microtia via deep convolutional neural networks. Front Surg 2022;9:929110. [PMID: 36157410] [PMCID: PMC9492961] [DOI: 10.3389/fsurg.2022.929110]
Abstract
Background: Microtia is a congenital abnormality varying from slight structural abnormalities to the complete absence of the external ear. However, there is no gold standard for assessing the severity of microtia.
Objectives: The purpose of this study was to develop and test artificial intelligence models that assess the severity of microtia from clinical photographs.
Methods: A total of 800 ear images were included and randomly divided into training, validation, and test sets. Nine convolutional neural networks (CNNs) were trained to classify the severity of microtia. Evaluation metrics, including accuracy, precision, recall, F1 score, receiver operating characteristic curve, and area under the curve (AUC) values, were used to evaluate the performance of the models.
Results: Eight of the CNNs achieved accuracy greater than 0.8. Among them, AlexNet and MobileNet achieved the highest accuracy of 0.9. Except for MnasNet, all CNNs achieved AUC values higher than 0.9 for each grade of microtia. In most CNNs, grade I microtia had the lowest AUC values and the normal ear had the highest.
Conclusion: CNNs can classify the severity of microtia with high accuracy. Artificial intelligence is expected to provide an objective, automated assessment of the severity of microtia.
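The first four evaluation metrics named in the Methods follow directly from binary confusion-matrix counts; a small, library-free sketch with hypothetical counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts: 80 true positives, 10 false positives,
# 90 true negatives, 20 false negatives.
m = classification_metrics(tp=80, fp=10, tn=90, fn=20)
```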
Affiliation(s)
- Pei Deng (Correspondence: Pei Deng, Hongbo Tang)
12
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022;8:e1054. [PMID: 36092017] [PMCID: PMC9454783] [DOI: 10.7717/peerj-cs.1054]
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Breast cancer survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is tough, time-consuming, routine, and repetitive, and medical image analysis could be a useful method for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks, among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined to apply the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments recorded performance parameters including accuracy, AUC, precision, F1-score, sensitivity, dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.
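The role MRFO plays here, searching a hyperparameter space for the configuration that maximizes validation performance, can be illustrated with the simplest baseline: an exhaustive grid search over a toy objective. The search space and the mock accuracy function below are hypothetical; a metaheuristic such as MRFO explores the same space with far fewer evaluations by updating a population of candidate configurations rather than enumerating them all:

```python
import itertools

def grid_search(objective, space):
    """Evaluate every hyperparameter combination and keep the best one."""
    names = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical stand-in for validation accuracy as a function of hyperparameters;
# a real objective would train and validate a CNN for each configuration.
def mock_validation_accuracy(cfg):
    return 0.9 - abs(cfg["lr"] - 1e-3) * 50 - abs(cfg["dropout"] - 0.3)

space = {"lr": [1e-4, 1e-3, 1e-2], "dropout": [0.1, 0.3, 0.5]}
best_cfg, best_score = grid_search(mock_validation_accuracy, space)
```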
Affiliation(s)
- Nadiah A. Baghdadi, College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amer Malki, College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Hossam Magdy Balaha, Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Yousry AbdulAzeem, Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
- Mahmoud Badawy, Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mostafa Elhosseini, College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia; Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
13
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 08/21/2022] [Accepted: 08/27/2022] [Indexed: 12/22/2022]
14
MVI-Mind: A Novel Deep-Learning Strategy Using Computed Tomography (CT)-Based Radiomics for End-to-End High Efficiency Prediction of Microvascular Invasion in Hepatocellular Carcinoma. Cancers (Basel) 2022; 14:cancers14122956. [PMID: 35740620 PMCID: PMC9221272 DOI: 10.3390/cancers14122956] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 05/24/2022] [Accepted: 06/09/2022] [Indexed: 12/12/2022] Open
Abstract
Simple Summary: Microvascular invasion is an important indicator of the prognosis of hepatocellular carcinoma, but traditional diagnosis requires a postoperative pathological examination. This study is the first to propose an end-to-end deep learning architecture for predicting microvascular invasion in hepatocellular carcinoma from retrospective data. The method achieves noninvasive, accurate, and efficient preoperative prediction from a patient's radiomic data alone, which is very beneficial for clinical decision making in HCC patients.
Abstract: Microvascular invasion (MVI) in hepatocellular carcinoma (HCC) directly affects a patient's prognosis, so the development of preoperative noninvasive diagnostic methods is significant for guiding optimal treatment plans. In this study, we investigated 138 patients with HCC and presented a novel end-to-end deep learning strategy based on computed tomography (CT) radiomics (MVI-Mind), which integrates data preprocessing, automatic segmentation of lesions and other regions, automatic feature extraction, and MVI prediction. A lightweight transformer and a convolutional neural network (CNN) were proposed for the segmentation and prediction modules, respectively. To demonstrate the superiority of MVI-Mind, we compared the framework's performance with that of current mainstream segmentation and classification models. The test results showed that MVI-Mind returned the best performance in both segmentation and prediction: the mean intersection over union (mIoU) of the segmentation module was 0.9006, and the area under the receiver operating characteristic curve (AUC) of the prediction module reached 0.9223. Additionally, it took only approximately 1 min to output an end-to-end prediction for each patient on our computing device, indicating that MVI-Mind can noninvasively, efficiently, and accurately predict the presence of MVI in HCC patients before surgery. This result will help doctors make rational clinical decisions.