1. Harangi B, Bogacsovics G, Toth J, Kovacs I, Dani E, Hajdu A. Pixel-wise segmentation of cells in digitized Pap smear images. Sci Data 2024; 11:733. [PMID: 38971865] [PMCID: PMC11227563] [DOI: 10.1038/s41597-024-03566-9]
Abstract
A simple and inexpensive way to recognize cervical cancer is light-microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation to screen negative smears to reduce false-negative cases. The first step of such a process is segmenting the cells, which requires a large, manually segmented dataset that can be used to train deep learning-based solutions. We describe such a dataset with accurate manual segmentations for the enclosed cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37,000 manually segmented cells and is separated into dedicated training and test parts, which can serve as an official benchmark for scientific investigations or a grand challenge.
Affiliation(s)
- Balazs Harangi: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Gergo Bogacsovics: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Janos Toth: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Ilona Kovacs: Department of Pathology, Kenezy Gyula Hospital and Clinic, University of Debrecen, Debrecen, Hungary
- Erzsebet Dani: Department of Library and Information Science, Faculty of Humanities, University of Debrecen, Debrecen, Hungary
- Andras Hajdu: Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
2. Zhao Z, Qiang Y, Yang F, Hou X, Zhao J, Song K. Two-stream vision transformer based multi-label recognition for TCM prescriptions construction. Comput Biol Med 2024; 170:107920. [PMID: 38244474] [DOI: 10.1016/j.compbiomed.2024.107920]
Abstract
Traditional Chinese medicine (TCM) observation diagnosis images (including facial and tongue images) provide essential human body information and hold significant importance in clinical medicine for diagnosis and treatment. TCM prescriptions, known for their simplicity, non-invasiveness, and low side effects, have been widely applied worldwide. Exploring automated herbal prescription construction based on visual diagnosis is vital for uncovering the correlation between external features and herbal prescriptions and for offering medical services in mobile healthcare systems. To effectively integrate multi-perspective visual diagnosis images and automate prescription construction, this study proposes a multi-herb recommendation framework based on the vision transformer and multi-label classification. The framework comprises three key components: an image encoder, a label embedding module, and a cross-modal fusion classification module. The image encoder employs a two-stream vision transformer to learn dependencies between different regions of the input images, capturing both local and global features. The label embedding module utilizes graph convolutional networks to capture associations between diverse herbal labels. Finally, two Multi-Modal Factorized Bilinear modules are introduced to fuse the cross-modal vectors, creating an end-to-end multi-label image-herb recommendation model. In experiments with real facial and tongue images, the model generated prescription data closely resembling real samples, achieving a precision of 50.06%, a recall of 48.33%, and an F1-score of 49.18%. This study validates the feasibility of automated herbal prescription construction from the perspective of visual diagnosis and provides valuable insights for constructing herbal prescriptions automatically from additional physical information.
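The reported precision, recall, and F1-score are standard multi-label metrics. As a hedged illustration (the herb names and predictions below are hypothetical, not from the paper's data), a micro-averaged implementation looks like this:

```python
# Micro-averaged precision/recall/F1 for a multi-label recommendation task.
# Herb names and predictions are illustrative placeholders.

def micro_prf(true_sets, pred_sets):
    """Compute micro-averaged precision, recall, and F1 over samples."""
    tp = fp = fn = 0
    for truth, pred in zip(true_sets, pred_sets):
        tp += len(truth & pred)   # correctly recommended labels
        fp += len(pred - truth)   # recommended but not in the ground truth
        fn += len(truth - pred)   # missed ground-truth labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = [{"ginseng", "licorice"}, {"astragalus"}]
pred = [{"ginseng"}, {"astragalus", "licorice"}]
p, r, f = micro_prf(truth, pred)
```

Micro-averaging pools the per-sample counts before dividing, so frequent labels dominate; macro-averaging would average per-label scores instead.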
Affiliation(s)
- Zijuan Zhao: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China
- Yan Qiang: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China; School of Software, North University of China, Taiyuan, 030051, Shanxi, China
- Fenghao Yang: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China
- Xiao Hou: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China
- Juanjuan Zhao: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China; School of Information Engineering, Jinzhong College of Information, Jinzhong, 030800, China
- Kai Song: College of Physics, Taiyuan University of Technology, Taiyuan, 030002, Shanxi, China
3. Yang X, Ding B, Qin J, Guo L, Zhao J, He Y. HVS-Unsup: Unsupervised cervical cell instance segmentation method based on human visual simulation. Comput Biol Med 2024; 171:108147. [PMID: 38387385] [DOI: 10.1016/j.compbiomed.2024.108147]
Abstract
Instance segmentation plays an important role in the automatic diagnosis of cervical cancer. Although deep learning-based instance segmentation methods can achieve outstanding performance, they need large amounts of labeled data, which consumes considerable manpower and material resources. To solve this problem, we propose an unsupervised cervical cell instance segmentation method based on human visual simulation, named HVS-Unsup. Our method simulates the process of human cell recognition and incorporates prior knowledge of cervical cells. Specifically, we first utilize prior knowledge to generate three types of pseudo labels for cervical cells, transforming unsupervised instance segmentation into a supervised task. Second, we design a Nucleus Enhanced Module (NEM) and a Mask-Assisted Segmentation module (MAS) to address cell overlapping, adhesion, and even visually indistinguishable cases. NEM accurately locates nuclei using the nuclei-attention feature maps generated from point-level pseudo labels, and MAS reduces interference from impurities by updating the weights of the shallow network through the Dice loss. Next, we propose a category-wise drop loss (CW-droploss) to reduce cell omissions in lower-contrast images. Finally, we employ an iterative self-training strategy to rectify mislabeled instances. Experimental results on our MS-cellSeg dataset and the public Cx22 and ISBI2015 datasets demonstrate that HVS-Unsup outperforms existing mainstream unsupervised cervical cell segmentation methods.
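The Dice loss used by the MAS module measures overlap between a predicted mask and its pseudo label. A minimal sketch on flat binary masks (real implementations operate on soft predictions inside a deep learning framework):

```python
# Dice loss (1 - Dice coefficient) for two flat binary masks given as 0/1 lists.
# The epsilon term keeps the ratio defined when both masks are empty.

def dice_loss(pred, target, eps=1e-6):
    inter = sum(p * t for p, t in zip(pred, target))   # overlapping pixels
    total = sum(pred) + sum(target)                    # pixels in either mask
    return 1.0 - (2.0 * inter + eps) / (total + eps)

perfect = dice_loss([1, 1, 0, 0], [1, 1, 0, 0])   # identical masks
half = dice_loss([1, 1, 0, 0], [1, 0, 1, 0])      # half the pixels overlap
```

Identical masks give a loss near 0, and disjoint masks a loss near 1, which is why the loss is a useful re-weighting signal for noisy pseudo labels.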
Affiliation(s)
- Xiaona Yang: Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Bo Ding: Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Jian Qin: Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Luyao Guo: Harbin University of Science and Technology, School of Computer Science and Technology, Harbin, 150080, China
- Jing Zhao: Northeast Forestry University, School of Mechanical and Electrical Engineering, Harbin, 150040, China
- Yongjun He: Harbin Institute of Technology, School of Computer Science and Technology, Harbin, 150001, China
4. Chen Z, Yang R, Huang M, Li F, Lu G, Wang Z. EEGProgress: A fast and lightweight progressive convolution architecture for EEG classification. Comput Biol Med 2024; 169:107901. [PMID: 38159400] [DOI: 10.1016/j.compbiomed.2023.107901]
Abstract
Because of the intricate topological structure and connectivity of the human brain, extracting deep spatial features from electroencephalograph (EEG) signals is a challenging and time-consuming task. The extraction of topological spatial information plays a crucial role in EEG classification, and the architecture of the spatial convolution greatly affects the performance and complexity of convolutional neural network (CNN) based EEG classification models. In this study, a progressive convolution CNN architecture named EEGProgress is proposed, aiming to efficiently extract the topological spatial information of EEG signals at multiple scales (electrode, brain region, hemisphere, global) with superior speed. To achieve this, the raw EEG data is permuted using an empirical topological permutation rule, integrating the EEG data with numerous topological properties. Subsequently, the spatial features are extracted by a progressive feature extractor comprising prior, electrode, region, and hemisphere convolution blocks, progressively extracting deep spatial features with fewer parameters and higher speed. Finally, comparison and ablation experiments under both cross-subject and within-subject scenarios are conducted on a public dataset to verify the performance of the proposed EEGProgress and the effectiveness of the topological permutation. The results demonstrate the superior feature extraction ability of EEGProgress, with an average increase of 4.02% compared to other CNN-based EEG classification models under both scenarios. Furthermore, considering the measured average testing time, FLOPs, and parameter counts, EEGProgress outperforms the comparison models in terms of model complexity.
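The topological permutation step reorders channels so that electrodes of the same hemisphere and brain region become contiguous before the grouped convolution blocks act on them. A hedged sketch of the idea (the electrode names, the odd-suffix-means-left-hemisphere rule, and the region map are illustrative assumptions, not the paper's exact empirical rule):

```python
# Reorder EEG channel indices so electrodes sharing a hemisphere and region
# end up in contiguous blocks, ready for region-wise grouped convolutions.

def topological_permutation(channels, region_of):
    """Return channel indices sorted by (hemisphere, region, name)."""
    def key(idx):
        name = channels[idx]
        # 10-20 convention: odd electrode numbers sit over the left hemisphere.
        hemisphere = 0 if name.endswith(("1", "3", "5", "7")) else 1
        return (hemisphere, region_of[name], name)
    return sorted(range(len(channels)), key=key)

channels = ["Fp1", "Fp2", "C3", "C4", "O1", "O2"]
region_of = {"Fp1": "frontal", "Fp2": "frontal",
             "C3": "central", "C4": "central",
             "O1": "occipital", "O2": "occipital"}
order = topological_permutation(channels, region_of)
grouped = [channels[i] for i in order]
```

After the permutation, each hemisphere's channels form one contiguous block, with region sub-blocks inside it, so grouped convolutions see topologically coherent inputs.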
Affiliation(s)
- Zhige Chen: School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3BX, United Kingdom
- Rui Yang: School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China
- Mengjie Huang: Design School, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China
- Fumin Li: School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3BX, United Kingdom
- Guoping Lu: School of Electrical Engineering, Nantong University, Nantong 226019, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge, Middlesex UB8 3PH, United Kingdom
5. Lakhan A, Hamouda H, Abdulkareem KH, Alyahya S, Mohammed MA. Digital healthcare framework for patients with disabilities based on deep federated learning schemes. Comput Biol Med 2024; 169:107845. [PMID: 38118307] [DOI: 10.1016/j.compbiomed.2023.107845]
Abstract
Utilizing digital healthcare services for patients who use wheelchairs is a vital and effective means to enhance their healthcare. Digital healthcare integrates various healthcare facilities, including local laboratories and centralized hospitals, to provide healthcare services for individuals in wheelchairs. In digital healthcare, the Internet of Medical Things (IoMT) allows local wheelchairs to connect with remote digital healthcare services and collect sensor data from the wheelchairs to monitor and process the users' health. It has been observed that wheelchair patients older than thirty often suffer from high blood pressure, heart disease, elevated blood glucose, and other conditions due to reduced activity caused by their disabilities. However, existing wheelchair IoMT applications are straightforward and do not consider the healthcare of wheelchair patients with these diseases during their disabilities. This paper presents a novel digital healthcare framework for patients with disabilities based on deep federated learning schemes. In the proposed framework, we offer federated learning deep convolutional neural network schemes (FL-DCNNS) that consist of different sub-schemes. The offloading scheme collects data from wheelchair-integrated biosensors and smartwatches, covering blood pressure, heart rate, blood glucose, and oxygen; the smartwatches work together with wearable devices for disabled patients in our framework. We present federated learning-enabled laboratories that train on the data and share the updated weights, with data security, to the centralized node for decision and prediction. We present a decision forest for the centralized healthcare node to decide on aggregation under different constraints: cost, energy, time, and accuracy. We implemented a deep CNN scheme in each laboratory to train and validate the model locally, taking the node's available resources into consideration. Simulation results show that FL-DCNNS obtained optimal results on the sensor data, minimizing energy by 25%, time by 19%, and cost by 28%, and achieving a disease-prediction accuracy of 99%, compared to existing digital healthcare schemes for wheelchair patients.
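In the laboratory-to-central-node exchange described above, only model weights and sample counts leave each site. A minimal federated-averaging sketch (a generic FedAvg step, not the exact FL-DCNNS aggregation rule, which also weighs cost, energy, and time):

```python
# One FedAvg aggregation step: the central node receives each client's weight
# vector and local sample count, and returns the sample-weighted average.

def fedavg(local_weights, sample_counts):
    """Sample-weighted average of per-client weight vectors."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    merged = [0.0] * dim
    for weights, n in zip(local_weights, sample_counts):
        for j in range(dim):
            merged[j] += weights[j] * n / total
    return merged

lab_a = [1.0, 2.0]   # weights after local training at laboratory A
lab_b = [3.0, 4.0]   # weights after local training at laboratory B
merged = fedavg([lab_a, lab_b], sample_counts=[100, 300])
```

Weighting by sample count keeps a small laboratory from pulling the global model as hard as a large one; raw data never leaves the local node.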
Affiliation(s)
- Abdullah Lakhan: Department of Cybersecurity and Computer Science, Dawood University of Engineering and Technology, Karachi City 74800, Sindh, Pakistan
- Hassen Hamouda: Department of Business Administration, College of Science and Humanities at Alghat, Majmaah University, Al-Majmaah 11952, Saudi Arabia
- Karrar Hameed Abdulkareem: College of Agriculture, Al-Muthanna University, Samawah 66001, Iraq; College of Engineering, University of Warith Al-Anbiyaa, Karbala 56001, Iraq
- Saleh Alyahya: Department of Electrical Engineering, College of Engineering and Information Technology, Onaizah Colleges, Onaizah 2053, Saudi Arabia
- Mazin Abed Mohammed: Department of Artificial Intelligence, College of Computer Science and Information Technology, University of Anbar, Anbar 31001, Iraq
6. Mulugeta AK, Sharma DP, Mesfin AH. Deep learning for medicinal plant species classification and recognition: a systematic review. Front Plant Sci 2024; 14:1286088. [PMID: 38250440] [PMCID: PMC10796487] [DOI: 10.3389/fpls.2023.1286088]
Abstract
Knowledge of medicinal plant species is necessary to preserve medicinal plants and safeguard biodiversity. The classification and identification of these plants by botanist experts are complex and time-consuming activities. The main objective of this systematic review is to assess prior research on the applications of deep learning approaches to classifying and recognizing medicinal plant species. Following the PRISMA guidelines, we identified studies on the classification and recognition of medicinal plant species through deep learning techniques, encompassing studies published between January 2018 and December 2022. Initially, we identified 1644 studies through title, keyword, and abstract screening. After applying our eligibility criteria, we selected 31 studies for a thorough and critical review. The main findings of this review are: (1) the selected studies were carried out in 16 different countries, with India leading in paper contributions at 29%, followed by Indonesia and Sri Lanka. (2) Private datasets, subjected to image augmentation and preprocessing techniques, were used in 67.7% of the studies. (3) In 96.7% of the studies, researchers employed plant leaf organs, with 74% of them utilizing leaf shapes for the classification and recognition of medicinal plant species. (4) Transfer learning with pre-trained models was used in 83.8% of the studies as a feature extraction technique. (5) The convolutional neural network (CNN) was used as the deep learning classifier in 64.5% of the papers. (6) The lack of a globally available public dataset of medicinal plants indigenous to specific countries, together with open questions about the trustworthiness of deep learning approaches for the classification and recognition of medicinal plants, is an observable research gap in this literature review. Therefore, further investigation and collaboration between different stakeholders are required to fill these research gaps.
Affiliation(s)
- Adibaru Kiflie Mulugeta: Department of Computer Science and Engineering, School of Electrical Engineering and Computing, Adama Science and Technology University, Adama, Ethiopia
- Abebe Haile Mesfin: Department of Computer Science and Engineering, School of Electrical Engineering and Computing, Adama Science and Technology University, Adama, Ethiopia
7. Li X, Yi X, Lu L, Wang H, Zheng Y, Han M, Wang Q. TSFFM: Depression detection based on latent association of facial and body expressions. Comput Biol Med 2024; 168:107805. [PMID: 38064845] [DOI: 10.1016/j.compbiomed.2023.107805]
Abstract
Depression is a prevalent mental disorder worldwide. Early screening and treatment are crucial in preventing the progression of the illness. Existing emotion-based depression recognition methods primarily rely on facial expressions, while body expressions as a means of emotional expression have been overlooked. To aid in the identification of depression, we recruited 156 participants for an emotional stimulation experiment, gathering data on facial and body expressions. Our analysis revealed notable distinctions in facial and body expressions between the case group and the control group and a synergistic relationship between these variables. Hence, we propose a two-stream feature fusion model (TSFFM) that integrates facial and body features. The central component of TSFFM is the Fusion and Extraction (FE) module. In contrast to conventional methods such as feature concatenation and decision fusion, our approach, FE, places a greater emphasis on in-depth analysis during the feature extraction and fusion processes. Firstly, within FE, we carry out local enhancement of facial and body features, employing an embedded attention mechanism, eliminating the need for original image segmentation and the use of multiple feature extractors. Secondly, FE conducts the extraction of temporal features to better capture the dynamic aspects of expression patterns. Finally, we retain and fuse informative data from different temporal and spatial features to support the ultimate decision. TSFFM achieves an Accuracy and F1-score of 0.896 and 0.896 on the depression emotional stimulus dataset, respectively. On the AVEC2014 dataset, TSFFM achieves MAE and RMSE values of 5.749 and 7.909, respectively. Furthermore, TSFFM has undergone testing on additional public datasets to showcase the effectiveness of the FE module.
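The AVEC2014 results above are reported as MAE and RMSE, which can be stated precisely in a few lines (the numbers below are illustrative, not the paper's data):

```python
# Mean absolute error and root mean squared error for regression targets,
# the two metrics used for depression-severity prediction on AVEC2014.
import math

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

truth = [10, 20, 30]   # hypothetical severity scores
pred = [12, 18, 33]    # hypothetical model outputs
m = mae(truth, pred)
r = rmse(truth, pred)
```

RMSE squares the residuals before averaging, so it penalizes large errors more heavily than MAE; comparing the two hints at how outlier-prone a model's errors are.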
Affiliation(s)
- Xingyun Li: Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
- Xinyu Yi: Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
- Lin Lu: Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
- Hao Wang: Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
- Yunshao Zheng: Shandong Mental Health Center, Shandong University, Jinan, China
- Mengmeng Han: Advanced Technology Research Institute, Beijing Institute of Technology, Jinan, China
- Qingxiang Wang: Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Shandong Mental Health Center, Shandong University, Jinan, China; Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
8. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
9. Ma X, He J, Liu X, Liu Q, Chen G, Yuan B, Li C, Xia Y. Hierarchical cumulative network for unsupervised medical image registration. Comput Biol Med 2023; 167:107598. [PMID: 37913614] [DOI: 10.1016/j.compbiomed.2023.107598]
Abstract
Unsupervised deep learning techniques have gained increasing popularity in deformable medical image registration. However, existing methods usually overlook the optimal similarity position between moving and fixed images. To tackle this issue, we propose a novel hierarchical cumulative network (HCN), which explicitly considers the optimal similarity position with an effective Bidirectional Asymmetric Registration Module (BARM). The BARM simultaneously learns two asymmetric displacement vector fields (DVFs) to optimally warp both moving and fixed images to their optimal similar shape along the geodesic path. Furthermore, we incorporate the BARM into a Laplacian pyramid network with hierarchical recursion, in which the moving image at the lowest level of the pyramid is warped successively to align with the fixed image at the same level, capturing multiple DVFs. We then accumulate these DVFs and up-sample them to warp the moving images at higher levels of the pyramid so that they align with the fixed image at the top level. The entire system is end-to-end and jointly trained in an unsupervised manner. Extensive experiments were conducted on two public 3D brain MRI datasets to demonstrate that our HCN outperforms both traditional and state-of-the-art registration methods. To further evaluate its performance, we tested HCN on the validation set of the MICCAI Learn2Reg 2021 challenge. Additionally, a cross-dataset evaluation was conducted to assess its generalization. Experimental results showed that HCN is an effective deformable registration method with excellent generalization performance.
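The coarse-to-fine DVF accumulation can be sketched in one dimension: a displacement field estimated at a coarser pyramid level is up-sampled, its magnitudes scaled to the finer resolution, and summed with the next level's field. This is a simplified nearest-neighbour illustration under those assumptions; real implementations warp and compose full 3D fields rather than adding them:

```python
# Coarse-to-fine accumulation of 1-D displacement fields across pyramid levels.

def upsample_dvf(coarse):
    """Nearest-neighbour 2x upsampling; displacements double because one
    coarse voxel spans two fine voxels."""
    fine = []
    for d in coarse:
        fine.extend([2.0 * d, 2.0 * d])
    return fine

def accumulate(coarse_dvf, fine_dvf):
    """Compose (approximated here by summing) coarse and fine fields."""
    up = upsample_dvf(coarse_dvf)
    return [a + b for a, b in zip(up, fine_dvf)]

coarse = [1.0, -0.5]             # 2-voxel field at the coarse level
fine = [0.1, 0.2, -0.1, 0.0]     # 4-voxel field at the finer level
total = accumulate(coarse, fine)
```

The finer level only has to model the residual displacement left over after the coarse warp, which is what makes the pyramid recursion efficient.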
Affiliation(s)
- Xinke Ma: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Jiang He: Huiying Medical Technology Co., Ltd., Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing 100192, China
- Xing Liu: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Qin Liu: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Geng Chen: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Bo Yuan: Sichuan Provincial Health Information Center (Sichuan Provincial Health and Medical Big Data Center), Chengdu 610041, China
- Changyang Li: Sydney Polytechnic Institute, NSW 2000, Australia
- Yong Xia: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
10. Ahmed FR, Alsenany SA, Abdelaliem SMF, Deif MA. Development of a hybrid LSTM with chimp optimization algorithm for the pressure ventilator prediction. Sci Rep 2023; 13:20927. [PMID: 38017008] [PMCID: PMC10684522] [DOI: 10.1038/s41598-023-47837-8]
Abstract
The utilization of mechanical ventilation is of utmost importance in the management of individuals afflicted with severe pulmonary conditions. During periods of a pandemic, it becomes imperative to build ventilators that possess the capability to autonomously adapt parameters over the course of treatment. In order to fulfil this requirement, a research investigation was undertaken with the aim of forecasting the magnitude of pressure applied on the patient by the ventilator. The aforementioned forecast was derived from a comprehensive analysis of many variables, including the ventilator's characteristics and the patient's medical state. This analysis was conducted utilizing a sophisticated computational model referred to as Long Short-Term Memory (LSTM). To enhance the predictive accuracy of the LSTM model, the researchers utilized the Chimp Optimization method (ChoA) method. The integration of LSTM and ChoA led to the development of the LSTM-ChoA model, which successfully tackled the issue of hyperparameter selection for the LSTM model. The experimental results revealed that the LSTM-ChoA model exhibited superior performance compared to alternative optimization algorithms, namely whale grey wolf optimizer (GWO), optimization algorithm (WOA), and particle swarm optimization (PSO). Additionally, the LSTM-ChoA model outperformed regression models, including K-nearest neighbor (KNN) Regressor, Random and Forest (RF) Regressor, and Support Vector Machine (SVM) Regressor, in accurately predicting ventilator pressure. The findings indicate that the suggested predictive model, LSTM-ChoA, demonstrates a reduced mean square error (MSE) value. Specifically, when comparing ChoA with GWO, the MSE fell by around 14.8%. Furthermore, when comparing ChoA with PSO and WOA, the MSE decreased by approximately 60%. 
Additionally, the analysis of variance (ANOVA) findings revealed that the p-value for the LSTM-ChoA model was 0.000, which is less than the predetermined significance level of 0.05. This indicates that the results of the LSTM-ChoA model are statistically significant.
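The hyperparameter search described above can be sketched as a population loop that moves candidates toward the incumbent best. This is a minimal illustration, not the authors' implementation: `toy_validation_mse` is a hypothetical stand-in for training the LSTM and scoring it on validation data, and the shrinking random move is a simplified chimp-style update.

```python
import random

def toy_validation_mse(params):
    # Hypothetical stand-in for "train an LSTM with these hyperparameters
    # and return validation MSE"; the toy optimum is at (units=64, lr=0.01).
    units, lr = params
    return (units - 64) ** 2 / 1000 + (lr - 0.01) ** 2 * 1e4

def chimp_style_search(objective, bounds, pop=10, iters=50, seed=0):
    rng = random.Random(seed)
    # Initialise a population of candidate hyperparameter vectors.
    swarm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(swarm, key=objective)
    for t in range(iters):
        shrink = 1 - t / iters  # exploration radius shrinks over time
        for i, x in enumerate(swarm):
            # Move each candidate toward the current best ("attacker")
            # with a shrinking random perturbation; keep only improvements.
            cand = [b + shrink * rng.uniform(-1, 1) * (b - xi)
                    for xi, b in zip(x, best)]
            cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
            if objective(cand) < objective(x):
                swarm[i] = cand
        best = min(swarm, key=objective)
    return best, objective(best)

best, mse = chimp_style_search(toy_validation_mse, [(8, 256), (1e-4, 0.1)])
```

In the paper's setting the objective would wrap a full LSTM training run, which is why cutting the number of evaluations via a good metaheuristic matters.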
Collapse
Affiliation(s)
- Fatma Refaat Ahmed
- Department of Nursing, College of Health Sciences, University of Sharjah, Sharjah, UAE
- Critical Care and Emergency Nursing Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt
| | - Samira Ahmed Alsenany
- Department of Community Health Nursing, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - Sally Mohammed Farghaly Abdelaliem
- Department of Nursing Management and Education, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia.
| | - Mohanad A Deif
- Department of Artificial Intelligence, College of Information Technology, Misr University for Science and Technology (MUST), 6th of October City, 12566, Egypt
| |
Collapse
|
11
|
Wei Y, Rao X, Fu Y, Song L, Chen H, Li J. Machine learning prediction model based on enhanced bat algorithm and support vector machine for slow employment prediction. PLoS One 2023; 18:e0294114. [PMID: 37943766 PMCID: PMC10635481 DOI: 10.1371/journal.pone.0294114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 10/23/2023] [Indexed: 11/12/2023] Open
Abstract
The employment of college students is an important issue that affects national development and social stability. In recent years, the growing number of graduates, employment pressure, and the epidemic have made the phenomenon of 'slow employment' increasingly prominent, making it an urgent problem to be solved. Data mining and machine learning methods can be used to analyze and predict the employment prospects of graduates and to provide effective employment guidance and services for universities, governments, and graduates, offering a feasible way to alleviate the 'slow employment' problem. Therefore, this study proposed a feature selection prediction model (bGEBA-SVM) based on an improved bat algorithm and a support vector machine, built on data from 1694 college graduates of the 2022 classes in Zhejiang Province. To improve the search efficiency and accuracy of the optimal feature subset, this paper proposed an enhanced bat algorithm based on Gaussian distribution-based and elimination strategies for optimizing the feature set. The selected training data were then input to the support vector machine for prediction. The proposed method was evaluated against peer and well-known machine learning models on the IEEE CEC2017 benchmark functions, public datasets, and the graduate employment prediction dataset. The experimental results show that bGEBA-SVM obtains higher prediction accuracy, reaching 93.86%. In addition, further education, student-leader experience, family situation, career planning, and employment structure are the characteristics most relevant to employment outcomes. In summary, bGEBA-SVM can be regarded as an employment prediction model with strong performance and high interpretability.
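Binary metaheuristic feature selection of the kind described above can be sketched as follows. This is a hedged illustration: `selection_cost` is a hypothetical stand-in for "train the SVM on the selected features and return an error plus a subset-size penalty", and the bit-flip move is a crude substitute for the bat algorithm's velocity/transfer-function machinery.

```python
import random

# Hypothetical fitness: features 0-4 are informative, the rest are noise,
# so the cost penalises both missed informative features and subset size.
def selection_cost(mask):
    informative = sum(mask[:5])
    noise = sum(mask[5:])
    return (5 - informative) + 0.2 * noise

def binary_bat_select(cost, n_features=20, pop=8, iters=60, seed=1):
    rng = random.Random(seed)
    bats = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    best = min(bats, key=cost)
    for _ in range(iters):
        for i, bat in enumerate(bats):
            cand = bat[:]
            for j in range(n_features):
                # Copy a bit from the global best with small probability,
                # otherwise occasionally flip it to keep exploring.
                if rng.random() < 0.15:
                    cand[j] = best[j]
                elif rng.random() < 0.05:
                    cand[j] = 1 - cand[j]
            if cost(cand) <= cost(bat):   # greedy acceptance
                bats[i] = cand
        best = min(bats, key=cost)
    return best

mask = binary_bat_select(selection_cost)
```

The selected mask would then index the columns fed to the SVM, exactly as the abstract's pipeline feeds the reduced feature set to the classifier.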
Collapse
Affiliation(s)
- Yan Wei
- Department of Information Technology, Wenzhou Vocational College of Science and Technology, Wenzhou, 325006, China
| | - Xili Rao
- Department of Information Technology, Wenzhou Vocational College of Science and Technology, Wenzhou, 325006, China
| | - Yinjun Fu
- The Section of Employment, Wenzhou Vocational College of Science and Technology, Wenzhou, 325006, China
| | - Li Song
- Department of Information Technology, Wenzhou Vocational College of Science and Technology, Wenzhou, 325006, China
| | - Huiling Chen
- Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, China
| | - Junhong Li
- School of Public Health and Management, Wenzhou Medical University, Wenzhou, 325035, China
| |
Collapse
|
12
|
Zhou T, Zhang X, Lu H, Li Q, Liu L, Zhou H. GMRE-iUnet: Isomorphic Unet fusion model for PET and CT lung tumor images. Comput Biol Med 2023; 166:107514. [PMID: 37826951 DOI: 10.1016/j.compbiomed.2023.107514] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2023] [Revised: 08/25/2023] [Accepted: 09/19/2023] [Indexed: 10/14/2023]
Abstract
Lung tumor PET and CT image fusion is a key technology in clinical diagnosis. However, existing fusion methods struggle to obtain fused images with high contrast, prominent morphological features, and accurate spatial localization. In this paper, an isomorphic Unet fusion model (GMRE-iUnet) for lung tumor PET and CT images is proposed to address these problems. The main idea of this network is as follows. First, this paper constructs an isomorphic Unet fusion network containing two independent multiscale dual-encoder Unets, which can capture the features of the lesion region and its spatial localization and enrich the morphological information. Second, a hybrid CNN-Transformer feature extraction module (HCTrans) is constructed to effectively integrate local lesion features and global contextual information. In addition, a residual axial attention feature compensation module (RAAFC) is embedded into the Unet to capture fine-grained information as compensation features, which makes the model focus on local connections between neighboring pixels. Third, a hybrid attentional feature fusion module (HAFF) is designed for multiscale feature-information fusion; it aggregates edge information and detail representations using local entropy and Gaussian filtering. Finally, experimental results on the multimodal lung tumor medical image dataset show that the proposed model achieves excellent fusion performance compared with eight other fusion models. In the comparison experiment on CT mediastinal-window images and PET images, the AG, EI, QAB/F, SF, SD, and IE indexes improved by 16.19%, 26%, 3.81%, 1.65%, 3.91%, and 8.01%, respectively. GMRE-iUnet can highlight the information and morphological features of the lesion areas and provide practical help for the aided diagnosis of lung tumors.
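The activity-weighted fusion rule behind modules like HAFF can be sketched in miniature. This is an assumption-laden toy, not the paper's module: `local_energy` uses mean absolute deviation as a cheap stand-in for the local-entropy saliency, and the 4x4 "CT"/"PET" arrays are fabricated examples.

```python
def local_energy(img, r=1):
    """Mean absolute deviation in an r-neighbourhood: a crude stand-in
    for the local-entropy activity measure used by fusion rules."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum(abs(v - m) for v in vals) / len(vals)
    return out

def fuse(a, b):
    """Pixel-wise weighted average, weighting each source by its local
    activity so the more detailed region dominates the fused result."""
    ea, eb = local_energy(a), local_energy(b)
    h, w = len(a), len(a[0])
    return [[(ea[y][x] * a[y][x] + eb[y][x] * b[y][x]) /
             (ea[y][x] + eb[y][x] + 1e-9) for x in range(w)]
            for y in range(h)]

ct  = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]  # structured
pet = [[5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5]]  # flat
fused = fuse(ct, pet)
```

Because the flat "PET" patch has zero local activity here, the fused result follows the structured "CT" patch, which is the behaviour such activity-weighted rules are designed to produce.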
Collapse
Affiliation(s)
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China
| | - Xiangxiang Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.
| | - Huiling Lu
- School of Medical Information & Engineering, Ningxia Medical University, Yinchuan, 750004, China.
| | - Qi Li
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China
| | - Long Liu
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China
| | - Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, LE1 7RH, United Kingdom
| |
Collapse
|
13
|
Yogarajan G, Alsubaie N, Rajasekaran G, Revathi T, Alqahtani MS, Abbas M, Alshahrani MM, Soufiene BO. EEG-based epileptic seizure detection using binary dragonfly algorithm and deep neural network. Sci Rep 2023; 13:17710. [PMID: 37853025 PMCID: PMC10584945 DOI: 10.1038/s41598-023-44318-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 10/06/2023] [Indexed: 10/20/2023] Open
Abstract
Electroencephalogram (EEG) is one of the most common methods used for seizure detection, as it records the electrical activity of the brain. Symmetry and asymmetry of EEG signals can serve as indicators of epileptic seizures. Normally, EEG signals are symmetrical in nature, with similar patterns on both sides of the brain. However, during a seizure, a sudden increase in electrical activity in one hemisphere of the brain can cause asymmetry in the EEG signal. In patients with epilepsy, interictal EEG may show asymmetric spikes or sharp waves, indicating the presence of epileptic activity. Therefore, the detection of symmetry/asymmetry in EEG signals can be a useful tool in the diagnosis and management of epilepsy, although EEG findings should always be interpreted in conjunction with the patient's clinical history and other diagnostic tests. In this paper, we propose an improved EEG-based automatic seizure detection system using a deep neural network (DNN) and the binary dragonfly algorithm (BDFA). The DNN model learns the characteristics of the EEG signals through nine different statistical and Hjorth parameters extracted from various levels of decomposed signals obtained using the stationary wavelet transform. The extracted features are then reduced using the BDFA, which helps to train the DNN faster and improves its performance. The results show that, compared with existing approaches, the extracted features differentiate normal, interictal, and ictal signals effectively, achieving 100% accuracy, sensitivity, specificity, and F1 score with a 13% selected feature subset.
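The Hjorth parameters mentioned above are well-defined signal descriptors and can be computed directly: activity is the signal variance, mobility is the square root of the variance ratio of the first difference to the signal, and complexity is the mobility of the first difference divided by the mobility of the signal. A minimal sketch (the pure sinusoid is just a test input, not EEG data):

```python
import math

def hjorth(signal):
    """Hjorth activity, mobility and complexity of a 1-D signal,
    using first differences as the derivative estimate."""
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)
    d1 = [b - a for a, b in zip(signal, signal[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    activity = var(signal)
    mobility = math.sqrt(var(d1) / activity)
    complexity = math.sqrt(var(d2) / var(d1)) / mobility
    return activity, mobility, complexity

# A pure sinusoid has complexity close to 1 (the reference "simplest" shape).
sine = [math.sin(2 * math.pi * k / 64) for k in range(640)]
act, mob, comp = hjorth(sine)
```

In the paper's pipeline these three numbers (plus statistical features) would be computed per wavelet-decomposition level and concatenated into the DNN's input vector.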
Collapse
Affiliation(s)
- G Yogarajan
- Department of Information Technology, Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, 626005, India
| | - Najah Alsubaie
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - G Rajasekaran
- Department of Information Technology, Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, 626005, India
| | - T Revathi
- Department of Information Technology, Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, 626005, India
| | - Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, University of Leicester, Michael Atiyah Building, Leicester, LE1 7RH, UK
| | - Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
| | | | - Ben Othman Soufiene
- PRINCE Laboratory Research, ISITcom, Hammam Sousse, University of Sousse, Sousse, Tunisia.
| |
Collapse
|
14
|
Li W, Yang D, Ma C, Liu L. Identifying novel disease categories through divergence optimization: An approach to prevent misdiagnosis in medical imaging. Comput Biol Med 2023; 165:107403. [PMID: 37688992 DOI: 10.1016/j.compbiomed.2023.107403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Revised: 08/09/2023] [Accepted: 08/26/2023] [Indexed: 09/11/2023]
Abstract
Given the significant changes in human lifestyle, the incidence of colon cancer has rapidly increased. The diagnostic process can often be complicated by symptom similarities between colon cancer and other colon-related diseases. In an effort to minimize misdiagnosis, deep learning-based approaches for colon cancer diagnosis have progressed notably within clinical medicine, offering more precise detection and improved patient outcomes. Despite these advancements, practical application of these techniques continues to encounter two major challenges: 1) due to the need for expert annotation, only a limited number of labels are available for diagnosis; and 2) the existence of diverse disease types can lead to misdiagnosis when the model encounters unfamiliar disease categories. To overcome these hurdles, we present a method incorporating Universal Domain Adaptation (UniDA). Our method detects noise by optimizing the divergence of samples in the source domain, and identifies categories that are not present in the source domain by optimizing the divergence of unlabeled samples in the target domain. Experimental validation on two gastrointestinal datasets demonstrates that our method surpasses current state-of-the-art domain adaptation techniques in identifying unknown disease classes. It is worth noting that the proposed method is the first work in medical image diagnosis aimed at identifying unknown categories of diseases.
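A common building block for flagging unknown-category samples in open-set/universal domain adaptation is an uncertainty score on the classifier's output. The sketch below uses normalized prediction entropy with a fixed threshold; this is a generic illustration of the idea, not the divergence objective the paper actually optimizes.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_unknown(prob_rows, threshold=0.8):
    """Mark target-domain samples whose softmax output is too uncertain
    (entropy close to uniform) as candidate unknown-category samples."""
    max_h = math.log(len(prob_rows[0]))  # entropy of the uniform distribution
    return [entropy(p) / max_h > threshold for p in prob_rows]

preds = [
    [0.97, 0.01, 0.01, 0.01],   # confident -> treated as a known class
    [0.25, 0.25, 0.25, 0.25],   # near-uniform -> likely unknown category
]
flags = flag_unknown(preds)
```

A divergence-based criterion like the paper's plays the same role as this threshold: separating samples the source-trained model can commit to from those it cannot.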
Collapse
Affiliation(s)
- Wencai Li
- Department of General Surgery, The Second Affiliated Hospital of Shanghai University (Wenzhou Central Hospital), Wenzhou, Zhejiang, 325000, China.
| | - Daqing Yang
- Department of General Surgery, The Second Affiliated Hospital of Shanghai University (Wenzhou Central Hospital), Wenzhou, Zhejiang, 325000, China.
| | - Chao Ma
- School of Digital Media, Shenzhen Institute of Information Technology, Shenzhen, 518172, China.
| | - Lei Liu
- College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China.
| |
Collapse
|
15
|
Ji J, Zhang W, Dong Y, Lin R, Geng Y, Hong L. Automated cervical cell segmentation using deep ensemble learning. BMC Med Imaging 2023; 23:137. [PMID: 37735354 PMCID: PMC10514950 DOI: 10.1186/s12880-023-01096-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 09/04/2023] [Indexed: 09/23/2023] Open
Abstract
BACKGROUND Cervical cell segmentation is a fundamental step in automated cervical cancer cytology screening. The aim of this study was to develop and evaluate a deep ensemble model for cervical cell segmentation, covering both cytoplasm and nucleus segmentation. METHODS The Cx22 dataset was used to develop the automated cervical cell segmentation algorithm. U-Net, U-Net++, DeepLabV3, DeepLabV3Plus, TransUNet, and SegFormer were used as candidate model architectures, and each of the first four architectures adopted two different encoders chosen from ResNet34, ResNet50, and DenseNet121. Models were trained under two settings: trained from scratch, or with encoders initialized from ImageNet pre-trained models and all layers then fine-tuned. For every segmentation task, four models were chosen as base models, and unweighted averaging was adopted as the ensemble method. RESULTS U-Net and U-Net++ with ResNet34 and DenseNet121 encoders trained using transfer learning consistently performed better than the other models, so they were chosen as base models. The ensemble model obtained Dice similarity coefficient, sensitivity, and specificity of 0.9535 (95% CI: 0.9534-0.9536), 0.9621 (0.9619-0.9622), and 0.9835 (0.9834-0.9836) on cytoplasm segmentation, and 0.7863 (0.7851-0.7876), 0.9581 (0.9573-0.959), and 0.9961 (0.9961-0.9962) on nucleus segmentation. The Dice, sensitivity, and specificity of the baseline models were 0.948, 0.954, and 0.9823 for cytoplasm segmentation and 0.750, 0.713, and 0.9988 for nucleus segmentation. Except for the specificity of cytoplasm segmentation, all metrics outperformed the best baseline models (P < 0.05) by a moderate margin. CONCLUSIONS The proposed algorithm achieved better performance on cervical cell segmentation than the baseline models. It can potentially be used in an automated cervical cancer cytology screening system.
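The ensemble rule used here, unweighted averaging of per-model probability maps followed by thresholding, and the Dice coefficient used to score it are both simple to state in code. The toy "model outputs" below are fabricated numbers over a flattened six-pixel image, purely for illustration.

```python
def ensemble_average(prob_maps):
    """Unweighted average of per-model foreground probability maps,
    thresholded at 0.5 to produce the final binary mask."""
    n = len(prob_maps)
    avg = [sum(m[i] for m in prob_maps) / n for i in range(len(prob_maps[0]))]
    return [1 if p >= 0.5 else 0 for p in avg]

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

# Four toy "model" probability outputs over six pixels (flattened).
models = [
    [0.9, 0.8, 0.2, 0.1, 0.7, 0.4],
    [0.8, 0.9, 0.3, 0.2, 0.6, 0.3],
    [0.7, 0.6, 0.1, 0.3, 0.8, 0.6],
    [0.9, 0.7, 0.2, 0.1, 0.9, 0.5],
]
truth = [1, 1, 0, 0, 1, 0]
mask = ensemble_average(models)
```

Averaging probabilities before thresholding (rather than majority-voting hard masks) is what lets individually uncertain models cancel each other's errors.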
Collapse
Affiliation(s)
- Jie Ji
- Network & Information Center, Shantou University, Shantou, 515041, Guangdong, China
| | - Weifeng Zhang
- Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China
| | - Yuejiao Dong
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China
| | - Ruilin Lin
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China
| | - Yiqun Geng
- Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China.
| | - Liangli Hong
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China.
| |
Collapse
|
16
|
Laghari AA, Sun Y, Alhussein M, Aurangzeb K, Anwar MS, Rashid M. Deep residual-dense network based on bidirectional recurrent neural network for atrial fibrillation detection. Sci Rep 2023; 13:15109. [PMID: 37704659 PMCID: PMC10499947 DOI: 10.1038/s41598-023-40343-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 08/09/2023] [Indexed: 09/15/2023] Open
Abstract
Atrial fibrillation easily leads to stroke, cerebral infarction, and other complications that seriously harm patients' life and health. Traditional deep learning methods have weak anti-interference and generalization ability. Therefore, we propose a new deep residual-dense network combined with a bidirectional recurrent neural network (RNN) for atrial fibrillation detection. Combining a one-dimensional dense residual network with a bidirectional RNN simplifies the tedious feature extraction steps and constructs an end-to-end neural network that detects atrial fibrillation through data-driven feature learning. Meanwhile, an attention mechanism is utilized to fuse the different features and extract the high-value information. Compared with other methods, the experimental accuracy is 97.72%, with sensitivity and specificity of 93.09% and 98.71%, respectively.
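The attention-based feature fusion step can be illustrated with a minimal softmax-weighted sum. This is a hedged sketch only: the norm-based scorer stands in for the learned attention weights, and the two feature vectors are fabricated stand-ins for the residual-dense and bidirectional-RNN branch outputs.

```python
import math

def attention_fuse(features):
    """Score each branch's feature vector, softmax the scores, and
    return the attention-weighted sum plus the weights themselves."""
    scores = [math.sqrt(sum(v * v for v in f)) for f in features]
    mx = max(scores)
    exp = [math.exp(s - mx) for s in scores]   # stable softmax
    z = sum(exp)
    weights = [e / z for e in exp]
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights

residual_feat  = [2.0, 0.0, 1.0]   # hypothetical residual-dense branch output
recurrent_feat = [0.5, 0.5, 0.5]   # hypothetical bidirectional-RNN output
fused, w = attention_fuse([residual_feat, recurrent_feat])
```

The point of the softmax is that the fusion weights are data-dependent and sum to one, so the stronger branch dominates without the weaker one being discarded.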
Collapse
Affiliation(s)
- Asif Ali Laghari
- Software College, Shenyang Normal University, Shenyang, 110034, China
| | - Yanqiu Sun
- Liaoning University of Traditional Chinese Medicine, Shenyang, China.
| | - Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Kingdom of Saudi Arabia
| | - Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Kingdom of Saudi Arabia
| | | | - Mamoon Rashid
- Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune, 411048, India
| |
Collapse
|
17
|
Zhang X, Li Z, Zhang Q, Yin Z, Lu Z, Li Y. A new weakly supervised deep neural network for recognizing Alzheimer's disease. Comput Biol Med 2023; 163:107079. [PMID: 37321100 DOI: 10.1016/j.compbiomed.2023.107079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 05/15/2023] [Accepted: 05/27/2023] [Indexed: 06/17/2023]
Abstract
Alzheimer's disease (AD) is a chronic neurodegenerative disease that mainly affects older adults, causing memory loss and a decline in thinking skills. In recent years, many traditional machine learning and deep learning methods have been used to assist in the diagnosis of AD, and most existing methods focus on early, supervised prediction of the disease. In reality, a massive amount of medical data is available, but some of it suffers from low quality or missing labels, and labeling it would be too costly. To solve this problem, a new Weakly Supervised Deep Learning model (WSDL) is proposed, which adds attention mechanisms and consistency regularization to the EfficientNet framework and applies data augmentation to the original data, thereby taking full advantage of the unlabeled data. Validation of the proposed WSDL method on brain MRI datasets from the Alzheimer's Disease Neuroimaging Initiative, with weakly supervised training at five different unlabeled ratios, showed better performance than other baselines in comparative experiments.
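The consistency-regularization idea, penalizing a model for disagreeing with itself across augmentations of the same unlabeled sample, can be sketched generically. Everything below is a toy: the "model" is a plain function, the augmentation is additive noise, and nothing here corresponds to the WSDL architecture itself.

```python
import random

def augment(x, rng, noise=0.1):
    """Toy augmentation: add small Gaussian noise to each input value."""
    return [v + rng.gauss(0, noise) for v in x]

def consistency_loss(model, unlabeled, rng):
    """Mean squared disagreement between model outputs on two random
    augmentations of the same unlabeled samples."""
    total = 0.0
    for x in unlabeled:
        a, b = model(augment(x, rng)), model(augment(x, rng))
        total += sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)
    return total / len(unlabeled)

# An augmentation-invariant "model" incurs zero consistency loss;
# an input-sensitive one is penalised.
invariant_model = lambda x: [1.0, 0.0]
sensitive_model = lambda x: [x[0], x[1]]

rng = random.Random(42)
data = [[0.2, 0.8], [0.6, 0.4]]
loss_inv = consistency_loss(invariant_model, data, rng)
loss_sen = consistency_loss(sensitive_model, data, rng)
```

In weakly supervised training this term is added to the supervised loss on the labeled subset, which is how the unlabeled MRI scans contribute a training signal.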
Collapse
Affiliation(s)
- Xiaobo Zhang
- School of Computing and Artificial Intelligence, SouthWest JiaoTong University, Chengdu 611756, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China
| | - Zhimin Li
- School of Computing and Artificial Intelligence, SouthWest JiaoTong University, Chengdu 611756, China
| | - Qian Zhang
- School of Economics and Management, Chengdu Textile College, Chengdu 611731, China.
| | - Zegang Yin
- Department of Neurology, The General Hospital of Western Theater Command, Chengdu 610083, China
| | - Zhijie Lu
- Department of Neurology, The General Hospital of Western Theater Command, Chengdu 610083, China
| | - Yang Li
- School of Automation Science and Electrical Engineering, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
| |
Collapse
|
18
|
Jia S, Wang X, Liu Z, Mao B. Comparison of multi-DLM approaches for predicting daily runoff: evidence from the data-driven model in one of China's largest wheat production-bases. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2023; 30:93862-93876. [PMID: 37523088 DOI: 10.1007/s11356-023-29030-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 07/25/2023] [Indexed: 08/01/2023]
Abstract
Runoff forecasting is extremely important for water pollution research and agricultural activities. Data-driven models have proved an effective approach to predicting daily runoff when combined with deep learning methods (DLM). However, the accuracy of daily runoff prediction still needs improvement. Here, we first propose a model combining the Gated Recurrent Unit (GRU) and the Residual Network (ResNet) and compare it with one shallow learning method (Back Propagation Neural Network, BPNN) and one deep learning method (GRU) for daily runoff forecasting, using data from 2010 to 2020 at three stations in the Yiluo River watershed. The results show that the combined model with both precipitation and runoff data as input has the highest prediction accuracy (NSE = 0.9325, 0.8735, and 0.9186 at the three stations). Inputs that include precipitation give higher prediction accuracy than those without. The model performed better in the dry season than in the wet season. Topographic and geomorphic factors may also be main factors affecting the runoff forecast. These results can provide useful strategies for short-term runoff prediction and watershed-scale water resource management, especially in important agricultural regions.
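The NSE values quoted above are Nash-Sutcliffe efficiencies, a standard hydrology metric with a fixed formula: 1 minus the ratio of the model's squared error to the variance of the observations around their mean. A direct implementation (the observed/simulated series below are made-up numbers, not the paper's data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than always predicting the mean observed runoff."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

obs = [3.1, 4.0, 9.5, 7.2, 5.0, 4.4]   # illustrative daily runoff
sim = [3.0, 4.2, 9.0, 7.5, 5.1, 4.3]   # illustrative model output
nse = nash_sutcliffe(obs, sim)
```

Because the denominator is the variance of the observations, NSE can be strongly season-dependent, which is one reason dry-season and wet-season scores are reported separately in studies like this one.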
Collapse
Affiliation(s)
- Shunqing Jia
- College of Civil Engineering, Tongji University, 1239 Siping Road, Shanghai, 200092, China
| | - Xihua Wang
- College of Civil Engineering, Tongji University, 1239 Siping Road, Shanghai, 200092, China.
- Department of Earth and Environmental Sciences, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
| | - Zejun Liu
- College of Civil Engineering, Tongji University, 1239 Siping Road, Shanghai, 200092, China
| | - Boyang Mao
- College of Civil Engineering, Tongji University, 1239 Siping Road, Shanghai, 200092, China
| |
Collapse
|
19
|
Emam MM, Samee NA, Jamjoom MM, Houssein EH. Optimized deep learning architecture for brain tumor classification using improved Hunger Games Search Algorithm. Comput Biol Med 2023; 160:106966. [PMID: 37141655 DOI: 10.1016/j.compbiomed.2023.106966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/05/2023] [Accepted: 04/19/2023] [Indexed: 05/06/2023]
Abstract
One of the worst diseases is a brain tumor, defined by the abnormal growth of cells in the brain. Early detection of brain tumors is essential for improving prognosis, and classifying tumors is a vital step in the disease's treatment. Different deep learning classification strategies have been presented for the diagnosis of brain tumors. However, several challenges remain, such as the need for a competent specialist to classify brain cancers with deep learning models and the problem of building the most precise deep learning model for categorizing brain tumors. We propose an evolved and highly efficient model based on deep learning and improved metaheuristic algorithms to address these challenges. Specifically, we develop an optimized residual learning architecture for classifying multiple brain tumors and propose an improved variant of the Hunger Games Search algorithm (I-HGS) that combines two enhancement strategies: the Local Escaping Operator (LEO) and Brownian motion. These two strategies balance solution diversity and convergence speed, boosting the optimization performance and avoiding local optima. First, we evaluated the I-HGS algorithm on the IEEE Congress on Evolutionary Computation 2020 (CEC'2020) test functions, demonstrating that I-HGS outperformed the basic HGS and other popular algorithms in terms of statistical convergence and various other measures. The suggested approach is then applied to optimizing the hyperparameters of the Residual Network 50 (ResNet50) model (I-HGS-ResNet50) for brain cancer identification, proving its overall efficacy. We utilize several publicly available, gold-standard datasets of brain MRI images. The proposed I-HGS-ResNet50 model is compared with other existing studies as well as with other deep learning architectures, including the Visual Geometry Group 16-layer network (VGG16), MobileNet, and the Densely Connected Convolutional Network 201 (DenseNet201).
The experiments demonstrated that the proposed I-HGS-ResNet50 model surpasses the previous studies and other well-known deep learning models, acquiring accuracies of 99.89%, 99.72%, and 99.88% on the three datasets. These results demonstrate the potential of the proposed I-HGS-ResNet50 model for accurate brain tumor classification.
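The Brownian-motion enhancement mentioned above boils down to Gaussian random steps used for local refinement. A minimal, hedged sketch: elitist Brownian search on the standard sphere test function, which stands in for one component of I-HGS, not for the full algorithm or the LEO strategy.

```python
import random

def sphere(x):
    """Standard benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def brownian_refine(objective, start, steps=200, sigma=0.1, seed=7):
    """Brownian-motion local search: take Gaussian random steps around
    the incumbent and keep a move only when it improves the objective."""
    rng = random.Random(seed)
    best, best_f = start[:], objective(start)
    for _ in range(steps):
        cand = [v + rng.gauss(0, sigma) for v in best]
        f = objective(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

best, f = brownian_refine(sphere, [2.0, -1.5, 0.5])
```

Inside a population-based optimizer like HGS, this kind of step is applied to selected individuals to escape flat regions while the elitist acceptance keeps the search from degrading good solutions.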
Collapse
Affiliation(s)
- Marwa M Emam
- Faculty of Computers and Information, Minia University, Minia, Egypt.
| | - Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
| | - Mona M Jamjoom
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia.
| | - Essam H Houssein
- Faculty of Computers and Information, Minia University, Minia, Egypt.
| |
Collapse
|
20
|
Liu H, Teng L, Fan L, Sun Y, Li H. A new ultra-wide-field fundus dataset to diabetic retinopathy grading using hybrid preprocessing methods. Comput Biol Med 2023; 157:106750. [PMID: 36931202 DOI: 10.1016/j.compbiomed.2023.106750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 02/12/2023] [Accepted: 03/06/2023] [Indexed: 03/09/2023]
Abstract
Diabetic retinopathy (DR) is a common early diabetic complication and one of the main causes of blindness. In clinical diagnosis and treatment, regular screening with fundus imaging is an effective way to prevent the development of DR. However, the regular fundus images used in most DR screening work have a small imaging range and narrow field of vision and cannot capture complete lesion information, which leads to less than ideal automatic DR grading results. To improve the accuracy of DR grading, we establish a dataset containing 101 ultra-wide-field (UWF) DR fundus images and propose a deep learning (DL) automatic classification method based on a new preprocessing method. The emerging UWF fundus images have the advantages of a large imaging range and wide field of vision and contain more information about the lesions. In data preprocessing, we design a data-denoising method for UWF images and use data enhancement methods to improve their contrast and brightness, improving the classification effect. To verify the efficiency of our dataset and the effectiveness of our preprocessing method, we design a series of experiments covering a variety of DL classification models. The experimental results show that high classification accuracy can be achieved using only backbone models: the most basic ResNet50 model reaches an average classification accuracy (ACA) of 0.66, Macro F1 of 0.6559, and Kappa of 0.58, while the best-performing Swin-S model reaches an ACA of 0.72, Macro F1 of 0.7018, and Kappa of 0.65. DR grading using UWF images can achieve higher accuracy and efficiency, which has practical significance and value in clinical applications.
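The contrast/brightness step of such a preprocessing pipeline can be illustrated with a min-max stretch followed by gamma correction. This is a generic sketch under stated assumptions, not the paper's method: the one-row "fundus" intensities and the gamma value are fabricated for the example.

```python
def stretch_contrast(pixels, gamma=0.8):
    """Min-max stretch to [0, 255] followed by gamma correction.
    gamma < 1 brightens mid-tones, a common fix for dim fundus images."""
    lo, hi = min(pixels), max(pixels)
    out = []
    for p in pixels:
        norm = (p - lo) / (hi - lo)          # stretch to [0, 1]
        out.append(round(255 * norm ** gamma))
    return out

dim_fundus_row = [40, 60, 55, 80, 70, 45]    # illustrative low-contrast row
enhanced = stretch_contrast(dim_fundus_row)
```

After the stretch the darkest and brightest pixels span the full range, which is the property the abstract's enhancement step is after before the images reach the classifier.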
Collapse
Affiliation(s)
- Haomiao Liu
- Jilin University, College of Computer Science and Technology, No. 2699 Qianjin Street, Changchun, Jilin province, 130012, China; Jilin University, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, No. 2699 Qianjin Street, Changchun, Jilin province, 130012, China
| | - Lu Teng
- Ophthalmology Department, First Hospital of Jilin University, No. 1 Xinmin Street, Changchun, Jilin province, 130021, China
| | - Linhua Fan
- Ophthalmology Department, First Hospital of Jilin University, No. 1 Xinmin Street, Changchun, Jilin province, 130021, China
| | - Yabin Sun
- Ophthalmology Department, First Hospital of Jilin University, No. 1 Xinmin Street, Changchun, Jilin province, 130021, China.
| | - Huiying Li
- Jilin University, College of Computer Science and Technology, No. 2699 Qianjin Street, Changchun, Jilin province, 130012, China; Jilin University, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, No. 2699 Qianjin Street, Changchun, Jilin province, 130012, China.
| |
Collapse
|
21
|
Chen Y, Feng L, Zheng C, Zhou T, Liu L, Liu P, Chen Y. LDANet: Automatic lung parenchyma segmentation from CT images. Comput Biol Med 2023; 155:106659. [PMID: 36791550 DOI: 10.1016/j.compbiomed.2023.106659] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/27/2023] [Accepted: 02/09/2023] [Indexed: 02/12/2023]
Abstract
Automatic segmentation of the lung parenchyma from computed tomography (CT) images is helpful for the subsequent diagnosis and treatment of patients. In this paper, based on a deep learning algorithm, a lung dense attention network (LDANet) is proposed with two mechanisms: residual spatial attention (RSA) and gated channel attention (GCA). RSA is utilized to weight the spatial information of the lung parenchyma and suppress feature activation in irrelevant regions, while the weights of each channel are adaptively calibrated using GCA to implicitly predict potential key features. Then, a dual attention guidance module (DAGM) is designed to maximize the integration of the advantages of both mechanisms. In addition, LDANet introduces a lightweight dense block (LDB) that reuses feature information and a positioned transpose block (PTB) that realizes accurate positioning and gradually restores the image resolution until the predicted segmentation map is generated. Experiments are conducted on two public datasets, LIDC-IDRI and COVID-19 CT Segmentation, on which LDANet achieves Dice similarity coefficient values of 0.98430 and 0.98319, respectively, outperforming a state-of-the-art lung segmentation model. Additionally, the effectiveness of the main components of LDANet is demonstrated through ablation experiments.
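Of the two mechanisms above, the gated channel attention (GCA) idea, squeeze each channel to a scalar, gate it, and rescale the channel, can be sketched in a few lines. This is a minimal squeeze-and-gate illustration with fixed gate weights, not the paper's learned GCA block.

```python
import math

def gated_channel_attention(feature_maps, gate_weights):
    """Squeeze each channel to its global average, pass it through a
    sigmoid gate, and rescale the channel by the resulting weight."""
    out = []
    for ch, w in zip(feature_maps, gate_weights):
        squeeze = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1 / (1 + math.exp(-w * squeeze))   # sigmoid gating
        out.append([[v * gate for v in row] for row in ch])
    return out

channels = [
    [[1.0, 2.0], [3.0, 4.0]],     # strongly activated channel
    [[0.1, 0.0], [0.0, 0.1]],     # weakly activated channel
]
recalibrated = gated_channel_attention(channels, gate_weights=[1.0, 1.0])
```

The strongly activated channel keeps most of its magnitude (gate near 1) while the weak channel is pushed toward half scale, which is the "adaptive calibration of channel weights" the abstract describes.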
Affiliation(s)
- Ying Chen, School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Longfeng Feng, School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Cheng Zheng, School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Taohui Zhou, School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Lan Liu, Department of Medical Imaging, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Pengfei Liu, Department of Medical Imaging, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Yi Chen, Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University, Wenzhou, 325035, PR China
22
Bian W, Yang Y. Fast bilateral weighted least square for the detail enhancement of COVID-19 chest X-rays. Digit Health 2023; 9:20552076231200981. [PMID: 37706020 PMCID: PMC10496472 DOI: 10.1177/20552076231200981] [Received: 04/12/2023] [Accepted: 08/08/2023] Open
Abstract
Background: X-ray is an effective measure in the diagnosis of coronavirus disease 2019, but the captured images suffer from low visibility and poor detail. A plausible solution is to decompose the images and enhance the detail layer. The bilateral weighted least square model is an effective tool for this task, but it is highly computationally expensive. Method: In this article, we propose an efficient algorithm for the bilateral weighted least square model. We approximate the bilateral weight with a bilateral grid and incorporate it into the optimization model, which significantly reduces the number of variables in the linear system, so the model can be solved efficiently. We employ the proposed algorithm to decompose the input X-rays into base and detail layers; the detail layers are then boosted and added back to the input to derive the detail-enhanced results. Results: The objective results indicate that our method achieves higher contrast than the best-performing competing method (442.30 > 410.09, 426.40 > 403.34, 564.51 > 531.38). Our method is also highly efficient: it takes 0.92 s to process a 720p color image on an Intel i7-6700 CPU. The subjective results, derived from a chi-square test, indicate that subjects hold more positive attitudes toward our detail-enhanced images than toward the original X-ray images (3.53 > 2.72, 3.42 > 2.61, 3.5 > 2.56). Conclusion: Extensive experiments show that (1) our method significantly improves the visibility of X-ray images, and (2) it is fast and effective, facilitating real-world applications.
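The base/detail pipeline in the Method section can be illustrated on a 1-D signal. This is not the authors' bilateral-grid algorithm; it is a minimal sketch of the underlying idea — a weighted least squares smoother (the edge-aware weighting scheme below is an assumption) whose residual detail layer is boosted and added back:

```python
import numpy as np

def wls_detail_enhance(x, lam=5.0, alpha=2.0, eps=1e-4):
    """Edge-aware base/detail decomposition via weighted least squares.

    Solves (I + lam * D^T W D) b = x, where D is the forward-difference
    operator and W down-weights smoothing across large gradients (edges).
    The detail layer d = x - b is boosted by alpha and added back.
    """
    n = len(x)
    # Forward-difference matrix D of shape (n-1, n).
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    # Edge-aware weights: small where the local gradient is large,
    # so edges are preserved while flat regions are smoothed hard.
    g = np.abs(np.diff(x))
    W = np.diag(1.0 / (g ** 1.2 + eps))
    A = np.eye(n) + lam * D.T @ W @ D
    base = np.linalg.solve(A, x)
    detail = x - base
    return base + alpha * detail

# A noisy step signal: the step edge survives the smoothing, while
# the boosted detail layer amplifies the fine texture around it.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
y = wls_detail_enhance(x)
```

The paper's contribution is making this kind of solve cheap for 2-D images by approximating the bilateral weight on a coarse bilateral grid, shrinking the linear system; the sketch above solves the dense system directly, which is exactly the cost the bilateral-grid approximation avoids.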
Affiliation(s)
- Wenyan Bian, The Affiliated People’s Hospital of Jiangsu University, Zhenjiang, China
- Yang Yang, Department of Computer Science, Jiangsu University, China