1. Alshuhail A, Thakur A, Chandramma R, Mahesh TR, Almusharraf A, Vinoth Kumar V, Khan SB. Refining neural network algorithms for accurate brain tumor classification in MRI imagery. BMC Med Imaging 2024; 24:118. PMID: 38773391; PMCID: PMC11110259; DOI: 10.1186/s12880-024-01285-6. Received: 02/01/2024; Accepted: 04/29/2024.
Abstract
Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97% to 98%, with ROC-AUC ranging from 99% to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, the use of Grad-CAM visualizations provides insight into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
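As a rough illustration of the conv → max-pool → dense pipeline this abstract describes, the following NumPy sketch runs a toy forward pass. The kernel count, layer sizes, and four-class output here are illustrative assumptions, not the paper's actual architecture (which also includes dropout and multiple stacked blocks).

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution with ReLU: x (H, W), kernels (n, kH, kW)."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for k in range(n):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)  # ReLU activation

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map in x (n, H, W)."""
    n, H, W = x.shape
    return x[:, :H - H % size, :W - W % size] \
        .reshape(n, H // size, size, W // size, size).max(axis=(2, 4))

def dense_softmax(features, weights):
    """Flatten the feature maps and classify with a softmax dense layer."""
    logits = features.ravel() @ weights
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((10, 10))                      # toy stand-in for an MRI slice
feat = max_pool(conv2d(image, rng.standard_normal((4, 3, 3))))
probs = dense_softmax(feat, rng.standard_normal((feat.size, 4)))
```

The real model would learn the kernels and dense weights by backpropagation; here they are random, so only the data flow (not the prediction) is meaningful.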
Affiliation(s)
- Asma Alshuhail
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, 562112, India
- R Chandramma
- Department of Computer Science & Engineering (AI & ML), Global Academy of Technology, Bangalore, India
- T R Mahesh
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, 562112, India
- Ahlam Almusharraf
- Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632001, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
2. Albalawi E, Mahesh TR, Thakur A, Kumar VV, Gupta M, Khan SB, Almusharraf A. Integrated approach of federated learning with transfer learning for classification and diagnosis of brain tumor. BMC Med Imaging 2024; 24:110. PMID: 38750436; PMCID: PMC11097560; DOI: 10.1186/s12880-024-01261-0. Received: 12/24/2023; Accepted: 03/27/2024.
Abstract
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model architecture benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
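The central mechanism here, federated averaging, can be sketched in a few lines: each client trains on its private data and only shares parameters, which the server combines weighted by local dataset size. The client weights and sizes below are made-up toy values, not anything from the paper.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients holding private MRI data; only parameters leave the client.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_weights = fed_avg(clients, sizes)   # -> [3.0, 4.0]
```

In the full scheme this averaging step repeats every communication round, with the averaged weights broadcast back to clients for the next round of local training.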
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, 31982, Hofuf, Saudi Arabia
- Mahesh T R
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, 632014, Vellore, India
- Muskan Gupta
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, M5 4WT, Manchester, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Ahlam Almusharraf
- Department of Business Administration, College of Business and Administration, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
3. Li W, Yu S, Yang R, Tian Y, Zhu T, Liu H, Jiao D, Zhang F, Liu X, Tao L, Gao Y, Li Q, Zhang J, Guo X. Machine Learning Model of ResNet50-Ensemble Voting for Malignant-Benign Small Pulmonary Nodule Classification on Computed Tomography Images. Cancers (Basel) 2023; 15:5417. PMID: 38001677; PMCID: PMC10670717; DOI: 10.3390/cancers15225417. Received: 08/29/2023; Revised: 09/21/2023; Accepted: 09/26/2023.
Abstract
BACKGROUND The early detection of benign and malignant lung tumors allows lesions to be diagnosed and appropriate health measures to be taken earlier, dramatically improving lung cancer patients' quality of life. Machine learning methods have performed well in recognizing small benign and malignant lung nodules; however, further exploration is required to fully leverage their potential in distinguishing between the two. OBJECTIVE The aim of this study was to develop and evaluate the ResNet50-Ensemble Voting model for detecting the benign or malignant nature of small pulmonary nodules (<20 mm) based on CT images. METHODS In this study, 834 CT images from 396 patients with small pulmonary nodules were gathered and randomly assigned to the training and validation sets in an 8:2 ratio. ResNet50 and VGG16 were used to extract CT image features, followed by XGBoost, SVM, and Ensemble Voting techniques for classification, yielding ten different combinations of machine learning classifiers. Indicators such as accuracy, sensitivity, and specificity were used to assess the models, and the extracted features were visualized to examine the contrasts between them. RESULTS The presented algorithm, ResNet50-Ensemble Voting, performed best on the test set, with an accuracy of 0.943 (0.938, 0.948) and sensitivity and specificity of 0.964 and 0.911, respectively. VGG16-Ensemble Voting had an accuracy of 0.887 (0.880, 0.894), with a sensitivity and specificity of 0.952 and 0.784, respectively. CONCLUSION The implemented machine learning models, in particular ResNet50-Ensemble Voting, performed exceptionally well in identifying benign and malignant small pulmonary nodules (<20 mm) from various sites, which may help doctors accurately diagnose the nature of early-stage lung nodules in clinical practice.
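The final voting stage of such a pipeline can be sketched as soft voting: average the probability outputs of the individual classifiers, then take the argmax. The probability matrices below are invented toy values standing in for e.g. an XGBoost head and an SVM head over CNN-extracted features, not the study's data.

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-classifier probability matrices and take the argmax class."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical probabilities for 3 nodules over 2 classes (benign=0, malignant=1).
p_xgb = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_svm = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
labels = soft_vote([p_xgb, p_svm])   # -> [0, 0, 1]
```

Soft voting exploits classifier confidence rather than just hard labels, which is typically why ensemble voting edges out its individual members.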
Affiliation(s)
- Weiming Li
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Siqi Yu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Runhuang Yang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Yixing Tian
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Tianyu Zhu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Haotian Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Danyang Jiao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Feng Zhang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Xiangtong Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Lixin Tao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Yan Gao
- Department of Nuclear Medicine, Xuanwu Hospital Capital Medical University, Beijing 100053, China
- Qiang Li
- Beijing Physical Examination Center, Beijing 100050, China
- Jingbo Zhang
- Beijing Physical Examination Center, Beijing 100050, China
- Xiuhua Guo
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
4. Cao X, Lu Y, Yang L, Zhu G, Hu X, Lu X, Yin J, Guo P, Zhang Q. CT image segmentation of meat sheep Loin based on deep learning. PLoS One 2023; 18:e0293764. PMID: 37917607; PMCID: PMC10621832; DOI: 10.1371/journal.pone.0293764. Received: 04/09/2023; Accepted: 10/18/2023.
Abstract
There are no clear boundaries between internal tissues in sheep computed tomography (CT) images, and traditional methods struggle to meet the requirements of image segmentation in practice, whereas deep learning has shown excellent performance in image analysis. In this context, we investigated loin CT image segmentation of sheep based on deep learning models. A Fully Convolutional Network (FCN) and five different UNet models were applied to a data set of 1471 CT images including the loin region from 25 Australian White rams and Dorper rams, using 5-fold cross-validation. After 10 independent runs, different evaluation metrics were used to assess the performances of the models. All models showed excellent results in terms of the evaluation metrics, with only slight differences among the six: Attention-UNet outperformed the other methods with 0.998±0.009 in accuracy, 4.391±0.338 in AVER_HD, 0.90±0.012 in MIOU, and 0.95±0.007 in DICE, while the best LOSS value of 0.029±0.018 came from Channel-UNet, and ResNet34-UNet had the shortest running time.
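The DICE and MIOU scores reported above are standard overlap metrics on binary masks; as a minimal sketch, they can be computed like this (the tiny 2×3 masks are illustrative, and MIOU in the paper would be these IoU terms averaged over classes):

```python
import numpy as np

def dice(pred, target):
    """DICE = 2|A∩B| / (|A| + |B|) on binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def iou(pred, target):
    """Intersection over union, the per-class term behind MIOU."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, |pred| = |gt| = 3, union = 4 -> DICE = 2/3, IoU = 0.5
```

DICE weights the intersection twice, so it is always at least as large as IoU on the same pair of masks, which is why DICE values in segmentation papers tend to look higher.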
Affiliation(s)
- Xiaoyao Cao
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Yihang Lu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Luming Yang
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Guangjie Zhu
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Xinyue Hu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Xiaofang Lu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Jing Yin
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Peng Guo
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Qingfeng Zhang
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
5. Hu S, Duan H, Zhao J, Zhao H. A Rust Extraction and Evaluation Method for Navigation Buoys Based on Improved U-Net and Hue, Saturation, and Value. Sensors (Basel) 2023; 23:8670. PMID: 37960370; PMCID: PMC10648957; DOI: 10.3390/s23218670. Received: 09/15/2023; Revised: 10/12/2023; Accepted: 10/18/2023.
Abstract
Abnormalities of navigation buoys include tilting, rusting, breaking, etc. Automatic extraction and evaluation of rust on buoys is of great significance for maritime supervision, since severe rust may damage the buoy itself. Therefore, a lightweight machine-vision method is proposed for extracting and evaluating buoy rust, integrating image segmentation and image processing. First, an improved U-Net is used to segment out the metal part of the buoy. Second, the RGB image is converted to HSV during preprocessing, and the distribution of HSV channel values is analyzed to obtain the best segmentation threshold, from which the rusted and metal pixels are extracted. Finally, the rust ratio of the buoy is calculated to evaluate its rust level. Results show that both segmentation precision and recall are above 0.95, and accuracy is nearly 1.00. Compared with a rust evaluation algorithm using image processing alone, both the accuracy and processing speed of rust grade evaluation are greatly improved.
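The final rust-ratio step can be sketched as a hue threshold applied inside the segmented metal mask. The hue band `(0.0, 0.1)` and the 2×2 HSV values below are illustrative assumptions; the paper derives its actual threshold from the analyzed HSV channel distributions.

```python
import numpy as np

def rust_ratio(hsv_pixels, metal_mask, hue_range=(0.0, 0.1)):
    """Fraction of segmented metal pixels whose hue falls in the assumed rust band."""
    hues = hsv_pixels[..., 0]
    rust = (hues >= hue_range[0]) & (hues <= hue_range[1]) & metal_mask
    return rust.sum() / metal_mask.sum()

# 2x2 toy HSV image; in the full pipeline metal_mask comes from the U-Net stage.
hsv = np.array([[[0.05, 0.8, 0.6], [0.55, 0.5, 0.9]],
                [[0.08, 0.9, 0.5], [0.60, 0.4, 0.8]]])
metal = np.ones((2, 2), dtype=bool)
ratio = rust_ratio(hsv, metal)   # 2 of the 4 metal pixels look rusty -> 0.5
```

The ratio can then be bucketed into rust grades (the grading boundaries would be another assumption).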
Affiliation(s)
- Shunan Hu
- School of Automotive Engineering, Changshu Institute of Technology, Changshu 215506, China
- Haiyan Duan
- Merchant Marine College, Shanghai Maritime University, Shanghai 201306, China
- Jiansen Zhao
- Merchant Marine College, Shanghai Maritime University, Shanghai 201306, China
- Hailiang Zhao
- Merchant Marine College, Shanghai Maritime University, Shanghai 201306, China
6. Hida M, Eto S, Wada C, Kitagawa K, Imaoka M, Nakamura M, Imai R, Kubo T, Inoue T, Sakai K, Orui J, Tazaki F, Takeda M, Hasegawa A, Yamasaka K, Nakao H. Development of Hallux Valgus Classification Using Digital Foot Images with Machine Learning. Life (Basel) 2023; 13:1146. PMID: 37240791; DOI: 10.3390/life13051146. Received: 04/03/2023; Revised: 05/03/2023; Accepted: 05/07/2023.
Abstract
Hallux valgus, a frequently seen foot deformity, requires early detection to prevent it from becoming more severe. Because it is also a medical economic problem, a means of quickly identifying it would be helpful. We designed, and investigated the accuracy of, an early version of a machine learning tool for screening hallux valgus, which determines whether patients have hallux valgus by analyzing pictures of their feet. In this study, 507 images of feet were used for machine learning. Image preprocessing followed the comparatively simple pattern A (rescaling, angle adjustment, and trimming) and the slightly more complicated pattern B (the same steps plus vertical flip, binary formatting, and edge emphasis). The study used the VGG16 convolutional neural network. Machine learning with pattern B was more accurate than with pattern A: in our early model, pattern A achieved 0.62 accuracy, 0.56 precision, 0.94 recall, and a 0.71 F1 score, while pattern B scored 0.79, 0.77, 0.96, and 0.86, respectively. Machine learning was sufficiently accurate to distinguish images of feet with hallux valgus from those of normal feet. With further refinement, this tool could be used for easy screening of hallux valgus.
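The "pattern B" extras can be sketched as simple array operations. The threshold value and the gradient-magnitude edge filter below are guesses at the described steps (the paper does not specify its binarization threshold or edge operator), and the rescaling, angle adjustment, and trimming shared with pattern A are omitted.

```python
import numpy as np

def preprocess_pattern_b(img, threshold=0.5):
    """Sketch of pattern B's additions: vertical flip, binary formatting, edge emphasis."""
    flipped = img[::-1, :]                          # vertical flip
    binary = (flipped >= threshold).astype(float)   # binary formatting
    gy, gx = np.gradient(binary)                    # crude edge emphasis:
    return np.hypot(gx, gy)                         # gradient magnitude of the mask

img = np.array([[0.1, 0.9, 0.9],
                [0.1, 0.9, 0.1],
                [0.1, 0.1, 0.1]])    # toy grayscale foot image
edges = preprocess_pattern_b(img)
```

Intuitively, binarization plus edge emphasis foregrounds the foot outline, which is plausibly why pattern B helped VGG16 pick up the angular deformity better than raw images.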
Affiliation(s)
- Mitsumasa Hida
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Shinji Eto
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Hibikino 2-4, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Chikamune Wada
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Hibikino 2-4, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Kodai Kitagawa
- Department of Industrial Systems Engineering, National Institute of Technology, Hachinohe College, 16-1 Uwanotai, Tamonoki, Hachinohe 039-1192, Japan
- Masakazu Imaoka
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Misa Nakamura
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Ryota Imai
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Takanari Kubo
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Takao Inoue
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Keiko Sakai
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Junya Orui
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Fumie Tazaki
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Masatoshi Takeda
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Ayuna Hasegawa
- Department of Rehabilitation, Takata-Kamitani Hospital, Kamiyamaguchi 4-26-14, Yamaguchi, Nishinomiya 651-1421, Japan
- Kota Yamasaka
- Department of Rehabilitation, Takata-Kamitani Hospital, Kamiyamaguchi 4-26-14, Yamaguchi, Nishinomiya 651-1421, Japan
- Hidetoshi Nakao
- Department of Physical Therapy, Josai International University, 1 Gumyo, Togane 283-8555, Japan
7. Alshammari A. DenseNet_HybWWoA: A DenseNet-Based Brain Metastasis Classification with a Hybrid Metaheuristic Feature Selection Strategy. Biomedicines 2023; 11:1354. PMID: 37239025; DOI: 10.3390/biomedicines11051354. Received: 03/12/2023; Revised: 04/01/2023; Accepted: 04/25/2023.
Abstract
Brain metastases (BM) are the most severe consequence of malignancy in the brain, resulting in substantial illness and death. The most common primary tumors that progress to BM are lung, breast, and melanoma. Historically, BM patients had poor clinical outcomes, with limited treatment options including surgery, stereotactic radiation therapy (SRS), whole brain radiation therapy (WBRT), systemic therapy, and symptom control alone. Magnetic Resonance Imaging (MRI) is a valuable tool for detecting cerebral tumors, though it is not infallible, as cerebral matter is interchangeable. This study offers a novel method for categorizing differing brain tumors in this context. This research additionally presents a combination of optimization algorithms called the Hybrid Whale and Water Waves Optimization Algorithm (HybWWoA), which is used to select features by reducing the size of the extracted feature set; this algorithm combines whale optimization and water waves optimization. The categorization procedure is then carried out using a DenseNet algorithm. The suggested cancer categorization method is evaluated on a number of factors, including precision, specificity, and sensitivity. The final assessment showed that the suggested approach exceeded expectations, with an F1-score of 97% and accuracy, precision, and recall of 92.1%, 98.5%, and 92.1%, respectively.
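Metaheuristic feature selection of this kind searches over binary masks scoring each subset with a fitness function. The sketch below is a heavily simplified stand-in, assuming a single-bit-flip hill climb in place of the actual hybrid whale/water-waves update rules, and a made-up fitness (class separation minus a size penalty) in place of the paper's objective.

```python
import numpy as np

def select_features(X, y, n_iter=50, seed=0):
    """Toy metaheuristic search over binary feature masks (not the HybWWoA update)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return -np.inf
        # reward separation of class means on selected features, penalize mask size
        sep = abs(X[y == 0][:, mask].mean() - X[y == 1][:, mask].mean())
        return sep - 0.01 * mask.sum()

    best = rng.random(n_feat) > 0.5
    for _ in range(n_iter):
        cand = best.copy()
        flip = rng.integers(0, n_feat)   # simplified move: flip one feature bit
        cand[flip] = ~cand[flip]
        if fitness(cand) > fitness(best):
            best = cand
    return best

rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
X = rng.standard_normal((40, 6))
X[y == 1, 0] += 3.0                      # feature 0 carries the class signal
mask = select_features(X, y)             # boolean mask over the 6 features
```

The selected mask then gates which features reach the DenseNet classifier; the real algorithm's whale "encircling" and water-wave propagation steps explore the mask space far more aggressively than this single-flip sketch.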
Affiliation(s)
- Abdulaziz Alshammari
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
8. Xie D, Yin C. Exploration of Chinese cultural communication mode based on the Internet of Things and mobile multimedia technology. PeerJ Comput Sci 2023; 9:e1330. PMID: 37346562; PMCID: PMC10280474; DOI: 10.7717/peerj-cs.1330. Received: 02/10/2023; Accepted: 03/15/2023.
Abstract
Image retrieval technology has emerged as a popular research area in China's development of cultural digital image dissemination and creative creation with the growth of the Internet and the digital information age. This study takes the shadow-play image in Shaanxi culture as its research object, proposes a shadow image retrieval model based on CBAM-ResNet50, and implements it in an IoT system to achieve more effective deep-level cultural information retrieval. First, ResNet50 is paired with an attention mechanism to enhance the network's capacity to extract high-level semantic features. Second, the IoT system's image acquisition, processing, and output modules are configured; the image processing module incorporates the CBAM-ResNet50 network to provide intelligent and effective shadow-play image retrieval. The experimental results show that shadow-play images can be retrieved on a GPU at the millisecond level. Both the first returned image and the first six returned images can be retrieved accurately, with a retrieval accuracy of 92.5 percent for the first image. This effectively communicates Chinese culture and makes it possible to retrieve detailed shadow-play images.
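The CBAM attention mechanism paired with ResNet50 can be sketched in NumPy as channel attention followed by spatial attention on a feature map. This is a simplified illustration: real CBAM applies a 7×7 convolution to the stacked channel-average/channel-max maps for spatial attention (replaced here by a plain sum), and the MLP weights below are random rather than learned.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, w1, w2):
    """Minimal CBAM sketch: channel then spatial attention over x of shape (C, H, W)."""
    # channel attention: shared MLP over global average- and max-pooled descriptors
    avg, mx = x.mean(axis=(1, 2)), x.max(axis=(1, 2))
    ca = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    x = x * ca[:, None, None]
    # spatial attention: per-pixel statistics across channels (7x7 conv omitted)
    sa = sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * sa[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))    # stand-in for a ResNet50 stage output
w1 = rng.standard_normal((2, 8))         # squeeze: C -> C // 4
w2 = rng.standard_normal((8, 2))         # excite: back to C
out = cbam(feat, w1, w2)                 # same shape as feat, reweighted
```

Because attention only rescales the feature map, CBAM slots into ResNet50 between residual blocks without changing any downstream shapes, which is what makes it attractive for retrofitting a retrieval backbone.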
Affiliation(s)
- Dan Xie
- General Department, Xi’an Traffic Engineering Institute, Xi’an, Shaanxi, China
- University of Technology MARA, Shah Alam, Selangor, Malaysia
- Chao Yin
- School of History and Culture, Shaanxi Normal University, Xi’an, Shaanxi, China
- Xi’an Tie Yi Middle School, Xi’an, Shaanxi, China