1.
Zhao Z, Guo S, Han L, Wu L, Zhang Y, Yan B. Altruistic seagull optimization algorithm enables selection of radiomic features for predicting benign and malignant pulmonary nodules. Comput Biol Med 2024; 180:108996. PMID: 39137669. DOI: 10.1016/j.compbiomed.2024.108996.
Abstract
Accurately differentiating indeterminate pulmonary nodules remains a significant challenge in clinical practice. The challenge becomes even more formidable given the vast number of radiomic features obtained from low-dose computed tomography, a lung cancer screening technique being rolled out in many parts of the world. Consequently, this study proposed the Altruistic Seagull Optimization Algorithm (AltSOA) for selecting radiomic features to predict the malignancy risk of pulmonary nodules. This approach incorporates altruism into the traditional seagull optimization algorithm to seek a globally optimal solution. A multi-objective fitness function was designed for training the pulmonary nodule prediction model, aiming to use fewer radiomic features while preserving prediction performance. From the global pool of radiomic features, AltSOA identified 11 features of interest, including gray-level co-occurrence matrix features. This automatically selected panel of radiomic features enabled precise prediction of the malignancy risk of pulmonary nodules (area under the curve = 0.8383, 95% confidence interval 0.7862-0.8863), surpassing the proficiency of radiologists. Furthermore, the interpretability, clinical utility, and generalizability of the prediction model were thoroughly discussed. All results consistently underscore the superiority of AltSOA in predicting the malignancy risk of pulmonary nodules, and the proposed model holds promise for enhancing existing lung cancer screening methods. The supporting source code is available at: https://github.com/zzl2022/PBMPN.
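The multi-objective fitness function described in this abstract can be sketched as a weighted trade-off between prediction error and feature-subset size. The weight `alpha`, the error metric, and the aggregation below are illustrative assumptions, not the paper's actual formulation:

```python
# A minimal sketch of a multi-objective fitness function for wrapper-based
# radiomic feature selection: reward predictive performance while
# penalizing the number of selected features. Lower fitness is better.

def fitness(selection, error_rate, alpha=0.99):
    """Blend classification error with feature-subset size.

    selection  -- binary mask over the candidate radiomic features
    error_rate -- validation error of a model trained on the selected subset
    alpha      -- trade-off between error (alpha) and subset size (1 - alpha)
    """
    n_selected = sum(selection)
    n_total = len(selection)
    if n_selected == 0:          # an empty subset cannot predict anything
        return float("inf")
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# Example: 11 of 100 features selected, validation error 0.16
mask = [1] * 11 + [0] * 89
print(round(fitness(mask, 0.16), 4))  # → 0.1595
```

An optimizer such as AltSOA would minimize this value over candidate masks, so subsets that keep accuracy with fewer features win.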
Affiliation(s)
- Zhilei Zhao
- National Key Lab of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
- Shuli Guo
- National Key Lab of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
- Lina Han
- Department of Cardiology, The Second Medical Center, Chinese PLA General Hospital, Beijing, 100853, China.
- Lei Wu
- National Key Lab of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
- Yating Zhang
- National Key Lab of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
- Biyu Yan
- National Key Lab of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
2.
Chen Z, Liu Y, Lin Z, Huang W. Understand how machine learning impact lung cancer research from 2010 to 2021: A bibliometric analysis. Open Med (Wars) 2024; 19:20230874. PMID: 38463530. PMCID: PMC10921441. DOI: 10.1515/med-2023-0874.
Abstract
Advances in lung cancer research applying machine learning (ML) technology have generated a large body of relevant literature. However, no bibliometric review has yet provided a comprehensive understanding of this field and its progress. This article presents the first bibliometric analysis clarifying the research status and focus from 2010 to 2021. A total of 2,312 relevant publications were retrieved from the Web of Science Core Collection database and subjected to bibliometric analysis and visualization. Over that period, annual publications grew exponentially and, together with our fitted model, point to a flourishing research prospect; annual citations peaked in 2017. Researchers from the United States and China produced most of the relevant literature and formed the strongest partnership with each other. Medical Image Analysis and Nature appeared to draw the most public attention. Computer-aided diagnosis, precision medicine, and survival prediction were the focus of research, reflecting the development trend of that period. ML made a substantial difference in lung cancer research over the past decade.
Affiliation(s)
- Zijian Chen
- Department of Cardiothoracic Surgery, The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Yangqi Liu
- Department of Cardiothoracic Surgery, The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Zeying Lin
- Department of Cardiothoracic Surgery, The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Weizhe Huang
- Department of Cardiothoracic Surgery, The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
3.
Pan L, Yan X, Zheng Y, Huang L, Zhang Z, Fu R, Zheng B, Zheng S. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction. PeerJ Comput Sci 2023; 9:e1537. PMID: 37810355. PMCID: PMC10557495. DOI: 10.7717/peerj-cs.1537.
Abstract
Background With the wide application of CT scanning, separating pulmonary arteries and veins (A/V) in CT images plays an important role in assisting surgeons with preoperative planning for lung cancer surgery. However, distinguishing arteries from veins in chest CT images remains challenging because of their complex structure and close similarity. Methods We propose a novel method for automatically separating pulmonary arteries and veins based on vessel topology information and a twin-pipe deep learning network. First, the vessel-tree topology is constructed by combining scale-space particles with multi-stencils fast marching (MSFM) to ensure the continuity and authenticity of the topology. Second, a twin-pipe network is designed to learn the multiscale differences between arteries and veins and the characteristics of the small arteries that closely accompany the bronchi. Finally, a topology optimizer that considers interbranch and intrabranch topological relationships refines the artery-vein classification. Results The proposed approach was validated on the public CARVE14 dataset and on our private dataset. Compared with the ground truth, it achieves an average accuracy of 90.1% on CARVE14 and 96.2% on our local dataset. Conclusions The method effectively separates pulmonary arteries and veins and generalizes well to chest CT images from different devices, as well as to enhanced and noncontrast CT sequences from the same device.
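The intrabranch part of such a topology optimizer can be illustrated with a simple majority vote over the segments of one vessel branch: segments of a single branch should share one A/V label, so outlier predictions are overruled. This is a hypothetical simplification for illustration, not the paper's actual algorithm:

```python
from collections import Counter

def intrabranch_vote(labels):
    """Relabel every segment of one branch with the branch's majority class.

    labels -- per-segment artery/vein predictions along a single branch,
              e.g. from a classification network.
    """
    majority = Counter(labels).most_common(1)[0][0]
    return [majority] * len(labels)

# One misclassified segment inside an arterial branch gets corrected:
branch = ["artery", "artery", "vein", "artery"]
print(intrabranch_vote(branch))  # → ['artery', 'artery', 'artery', 'artery']
```

An interbranch rule would additionally propagate labels between parent and child branches of the reconstructed tree.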
Affiliation(s)
- Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Xiaochao Yan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Yaoyong Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Liqin Huang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Zhen Zhang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Rongda Fu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Bin Zheng
- Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, Fujian, China
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
4.
Zhu C, Hu P, Wang X, Zeng X, Shi L. A real-time computer-aided diagnosis method for hydatidiform mole recognition using deep neural network. Comput Methods Programs Biomed 2023; 234:107510. PMID: 37003042. DOI: 10.1016/j.cmpb.2023.107510.
Abstract
BACKGROUND AND OBJECTIVE Hydatidiform mole (HM) is one of the most common gestational trophoblastic diseases and carries malignant potential. Histopathological examination is the primary method for diagnosing HM. However, because the pathological features of HM are obscure and easily confused, significant inter-observer variability exists among pathologists, leading to over- and misdiagnosis in clinical practice. Efficient feature extraction can significantly improve both the accuracy and the speed of the diagnostic process. Deep neural networks (DNNs) have proven to have excellent feature extraction and segmentation capabilities and are widely used in clinical practice for many other diseases. We constructed a deep learning-based CAD method to recognize HM hydrops lesions in the microscopic view in real time. METHODS To address the difficulty of segmenting lesions when effective features are hard to extract from HM slide images, we propose a hydrops lesion recognition module that employs DeepLabv3+ with a novel compound loss function and a stepwise training strategy, achieving strong performance in recognizing hydrops lesions at both the pixel and lesion levels. In addition, a Fourier transform-based image mosaic module and an edge-extension module for image sequences were developed to make the recognition model applicable to moving slides in clinical practice; this also addresses the model's poor performance on image edges. RESULTS We evaluated widely adopted DNNs on an HM dataset and chose DeepLabv3+ with our compound loss function as the segmentation model. Comparison experiments show that the edge-extension module improves model performance by up to 3.4% in pixel-level IoU and 9.0% in lesion-level IoU. Our final method achieves a pixel-level IoU of 77.0%, a precision of 86.0%, and a lesion-level recall of 86.2%, with a response time of 82 ms per frame. Experiments show that our method can display the full microscopic view with accurately labeled HM hydrops lesions while the slide moves, in real time. CONCLUSIONS To the best of our knowledge, this is the first method to utilize deep neural networks for HM lesion recognition. It provides a robust and accurate solution, with powerful feature extraction and segmentation capabilities, for the auxiliary diagnosis of HM.
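A common form of compound segmentation loss blends per-pixel cross-entropy with a soft Dice term. The abstract does not specify the paper's compound loss, so the combination and weight `w_dice` below are assumptions for illustration:

```python
import math

def compound_loss(probs, target, w_dice=0.5, eps=1e-6):
    """Binary cross-entropy blended with a soft Dice loss (illustrative).

    probs  -- predicted foreground probabilities, flattened to a list
    target -- ground-truth mask (0/1), flattened to a list
    w_dice -- weight of the Dice term; (1 - w_dice) weights the BCE term
    """
    # Pixel-wise binary cross-entropy, eps guards against log(0)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(probs, target)) / len(probs)
    # Soft Dice: overlap-based, robust to foreground/background imbalance
    inter = sum(p * t for p, t in zip(probs, target))
    dice = 1 - (2 * inter + eps) / (sum(probs) + sum(target) + eps)
    return (1 - w_dice) * bce + w_dice * dice
```

The Dice term rewards overlap with small lesions that cross-entropy alone under-weights, which is the usual motivation for compounding the two.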
Affiliation(s)
- Chengze Zhu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Pingge Hu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Xingtong Wang
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Xianxu Zeng
- Department of Pathology, the Third Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Li Shi
- Department of Automation, Tsinghua University, Beijing, 100084, China.
5.
Shen Z, Cao P, Yang J, Zaiane OR. WS-LungNet: A two-stage weakly-supervised lung cancer detection and diagnosis network. Comput Biol Med 2023; 154:106587. PMID: 36709519. DOI: 10.1016/j.compbiomed.2023.106587.
Abstract
Computer-aided lung cancer diagnosis (CAD) on computed tomography (CT) helps radiologists with preoperative planning and prognosis assessment. The flexibility and scalability of deep learning methods remain limited in lung CAD. In essence, two significant challenges must be solved: (1) label scarcity, owing to the cost of having experienced domain experts annotate CT images, and (2) label inconsistency between the observed nodule malignancy and the patient's pathology evaluation. Both can be considered weak-label problems. We address them in this paper by introducing a weakly supervised lung cancer detection and diagnosis network (WS-LungNet), consisting of a semi-supervised computer-aided detection module (Semi-CADe) that segments 3D pulmonary nodules using unlabeled data through adversarial learning to reduce label scarcity, and a cross-nodule attention computer-aided diagnosis module (CNA-CADx) that evaluates malignancy at the patient level by modeling correlations between nodules via cross-attention mechanisms, thereby resolving label inconsistency. Through extensive evaluations on the public LIDC-IDRI database, we show that our method achieves an 82.99% competition performance metric (CPM) for pulmonary nodule detection and an 88.63% area under the curve (AUC) for lung cancer diagnosis. Extensive experiments demonstrate the advantage of WS-LungNet on both tasks. These promising results demonstrate the benefits and flexibility of semi-supervised segmentation with adversarial learning and of nodule-instance correlation learning with attention, and they suggest that exploiting unlabeled data and modeling the relationships among the nodules of a case are essential for lung cancer detection and diagnosis.
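The cross-nodule attention idea, letting each nodule's representation depend on the other nodules of the same patient, can be sketched with plain scaled dot-product attention. The real module would include learned query/key/value projections; this bare version only shows the mixing mechanism:

```python
import math

def cross_nodule_attention(feats):
    """Scaled dot-product attention across the nodules of one patient.

    feats -- list of per-nodule feature vectors (all the same length).
    Each nodule attends to every nodule in the case, so its output
    representation is a similarity-weighted blend of all nodule features.
    """
    d = len(feats[0])
    out = []
    for q in feats:
        # Similarity of this nodule to every nodule, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in feats]
        # Numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Weighted blend of all nodule feature vectors
        out.append([sum(w * v[i] for w, v in zip(weights, feats))
                    for i in range(d)])
    return out
```

With two dissimilar nodules, each output stays dominated by the nodule's own features while still carrying information from its neighbor, which is what lets a patient-level head see nodule correlations.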
Affiliation(s)
- Zhiqiang Shen
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Peng Cao
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China.
- Jinzhu Yang
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Osmar R Zaiane
- Alberta Machine Intelligence Institute, University of Alberta, Canada
6.
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104104.
7.
Jassim MM, Jaber MM. Systematic review for lung cancer detection and lung nodule classification: Taxonomy, challenges, and recommendation future works. J Intell Syst 2022. DOI: 10.1515/jisys-2022-0062.
Abstract
Nowadays, lung cancer is one of the most dangerous diseases and requires early diagnosis. Artificial intelligence has played an essential role in the medical field in general, and in analyzing medical images and diagnosing diseases in particular, as it can reduce the human errors that medical experts may make when analyzing medical images. In this study, we systematically surveyed research published during the last five years on the diagnosis of lung cancer and the classification of lung nodules across four reliable databases (ScienceDirect, Scopus, Web of Science, and IEEE), selecting 50 research papers through a systematic literature review. The goal of this review is to provide a concise overview of recent advances in lung cancer diagnosis by machine learning and deep learning algorithms and to summarize the present state of knowledge on the subject. Addressing the findings offered in recent publications gives researchers a better grasp of the topic. We analyzed the challenges and recommendations for future work in detail, and we present the published datasets and their sources to facilitate researchers' access to them and their use in building on previously achieved results.
Affiliation(s)
- Mustafa Mohammed Jassim
- Department of Computer Science, Informatics Institute for Postgraduate Studies (IIPS), Iraqi Commission for Computers and Informatics (ICCI), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad, 10021, Iraq
- Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad, 10021, Iraq
8.
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. Comput Methods Programs Biomed 2022; 220:106786. PMID: 35398579. DOI: 10.1016/j.cmpb.2022.106786.
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality for lung cancer detection. A nodule is abnormal tissue that may evolve into lung cancer, so it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Computer-aided detection (CADe) systems built on convolutional neural network (CNN) architectures have proven helpful to radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, preprocessing, including background elimination, spacing normalization, and volume-of-interest (VOI) extraction, removes the non-lung region, normalizes the image spacing, and divides the LDCT image into numerous VOIs. The VOIs are then fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The model is constructed by integrating 3-D YOLOv3 with a one-shot aggregation (OSA) module, a receptive field block (RFB), and a feature fusion scheme (FFS). Finally, the NMS algorithm eliminates duplicate detections generated by the model. RESULTS The LUNA-16 dataset, comprising 1,186 nodules from 888 LDCT scans, and the competition performance metric (CPM) were used to evaluate our CADe system. In the experiments, the proposed system achieves a sensitivity of 0.962 at 8 false positives per scan and a CPM of 0.905. Moreover, the ablation study shows that the OSA module, the RFB, and the FFS each improve detection performance. Furthermore, compared with other state-of-the-art (SOTA) models, our detection system achieves higher performance. CONCLUSIONS A YOLO-based CADe system integrating additional modules and a feature fusion scheme is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications significantly improve detection performance.
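The NMS step at the end of such a detection pipeline can be sketched as greedy suppression over 3-D boxes by intersection-over-union. The box encoding `(z1, y1, x1, z2, y2, x2)` and the threshold below are illustrative, not the paper's exact implementation:

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3-D boxes given as (z1, y1, x1, z2, y2, x2)."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:          # no overlap along this axis
            return 0.0
        inter *= hi - lo
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    return inter / (vol(a) + vol(b) - inter)

def nms_3d(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring candidate, drop any remaining
    candidate whose IoU with a kept box exceeds `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou_3d(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one nodule plus a distant one:
boxes = [(0, 0, 0, 2, 2, 2), (0, 0, 0, 2, 2, 1.9), (5, 5, 5, 6, 6, 6)]
print(nms_3d(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]
```

The duplicate (index 1, IoU 0.95 with index 0) is suppressed while the distant detection survives, which is exactly the de-duplication role NMS plays after the detector.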
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
- Ping-Ru Chou
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
- Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan.
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan.
9.
Zhang X, Lee VC, Rong J, Lee JC, Liu F. Deep convolutional neural networks in thyroid disease detection: A multi-classification comparison by ultrasonography and computed tomography. Comput Methods Programs Biomed 2022; 220:106823. PMID: 35489145. DOI: 10.1016/j.cmpb.2022.106823.
Abstract
BACKGROUND AND OBJECTIVE As one of the largest endocrine organs in the human body, the thyroid gland regulates daily metabolism. Early detection of thyroid disease reduces mortality. The diagnosis of thyroid disease is usually made by radiologists and pathologists and relies heavily on their experience and expertise. To mitigate human false-positive diagnosis rates, this paper demonstrates that deep learning-driven techniques yield promising performance for the automatic detection of thyroid diseases, offering clinicians assistance in diagnostic decision-making. METHOD This research study is the first of its kind to adopt two pre-operative medical imaging modalities for multi-class classification of thyroid disease types (i.e., normal, thyroiditis, cystic, multinodular goiter, adenoma, and cancer). Using a current state-of-the-art deep convolutional neural network (CNN) architecture, this study builds a thyroid disease diagnostic model for distinguishing among the disease types. RESULTS The model performs strongly on both image sets, reaching an accuracy of 0.972 on ultrasound images and 0.942 on computed tomography (CT) scans, respectively. CONCLUSION The experimental results illustrate that the selected CNN can be adapted to both imaging modalities, indicating the feasibility of the deep learning model and supporting its further application in clinics.
Affiliation(s)
- Xinyu Zhang
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Vincent Cs Lee
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
- Jia Rong
- Department of Data Science and AI, Faculty of IT, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- James C Lee
- Monash University Endocrine Surgery Unit, Alfred Hospital, Melbourne, VIC 3004, Australia; Department of Surgery, Monash University, Melbourne, VIC 3168, Australia
- Feng Liu
- West China Hospital of Sichuan University, Chengdu City, Sichuan Province 332001, China
10.
Liu D, Liu F, Tie Y, Qi L, Wang F. Res-trans networks for lung nodule classification. Int J Comput Assist Radiol Surg 2022; 17:1059-1068. PMID: 35290646. DOI: 10.1007/s11548-022-02576-5.
Abstract
PURPOSE Lung cancer usually presents as pulmonary nodules on early diagnostic images, and accurately estimating the malignancy of pulmonary nodules is crucial to the prevention and diagnosis of lung cancer. Recently, deep learning algorithms based on convolutional neural networks have shown potential for pulmonary nodule classification. However, nodule sizes are very diverse, ranging from 3 to 30 mm, which makes classifying them a challenging task. In this study, we propose a novel architecture, Res-trans networks, to classify nodules in computed tomography (CT) scans. METHODS We designed local and global blocks to extract features that capture the long-range dependencies between pixels, so as to correctly classify lung nodules of different sizes. Specifically, residual blocks with convolutional operations extract local features, and transformer blocks with self-attention capture global features. Moreover, the Res-trans network has a sequence fusion block that aggregates the sequence feature information output by the transformer blocks, improving classification accuracy. RESULTS Our method was extensively evaluated on the public LIDC-IDRI dataset, which contains 1,018 CT scans. Tenfold cross-validation shows that it outperforms recent leading methods, with AUC = 0.9628 and accuracy = 0.9292. CONCLUSION We propose a network that captures both local and global features to classify nodules in chest CT. Experimental results show that it achieves better classification performance and can help radiologists accurately analyze lung nodules.
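The residual ("local") path used by such blocks reduces to the identity-plus-transform form y = x + F(x); the skip connection preserves the input signal while the learned transform F contributes a refinement. The toy transform below stands in for the block's learned convolutions:

```python
def residual_block(x, transform):
    """y = x + F(x): a residual connection over an arbitrary transform.

    x         -- input feature vector (list of floats)
    transform -- callable F mapping a vector to a vector of the same length;
                 in the real network this is a stack of convolutions.
    """
    return [xi + fi for xi, fi in zip(x, transform(x))]

# Toy "learned" transform: scale the input by 0.5
print(residual_block([1.0, 2.0], lambda v: [0.5 * t for t in v]))  # → [1.5, 3.0]
```

If F outputs zeros, the block is an identity map, which is why stacking many such blocks trains stably; the transformer branch would then add the complementary global (self-attention) features.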
Affiliation(s)
- Dongxu Liu
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Fenghui Liu
- Department of Respiratory and Sleep Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yun Tie
- School of Information Engineering, Zhengzhou University, Zhengzhou, China.
- Lin Qi
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Feng Wang
- Department of Oncology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China