1. Yang L, Gu Y, Bian G, Liu Y. MSDE-Net: A Multi-Scale Dual-Encoding Network for Surgical Instrument Segmentation. IEEE J Biomed Health Inform 2024;28:4072-4083. [PMID: 38117619; DOI: 10.1109/JBHI.2023.3344716]
Abstract
Minimally invasive surgery, which relies on surgical robots and microscopes, demands precise image segmentation to ensure safe and efficient procedures. Nevertheless, achieving accurate segmentation of surgical instruments remains challenging due to the complexity of the surgical environment. To tackle this issue, this paper introduces a novel multiscale dual-encoding segmentation network, termed MSDE-Net, designed to automatically and precisely segment surgical instruments. The proposed MSDE-Net leverages a dual-branch encoder comprising a convolutional neural network (CNN) branch and a transformer branch to effectively extract both local and global features. Moreover, an attention fusion block (AFB) is introduced to ensure effective information complementarity between the dual-branch encoding paths. Additionally, a multilayer context fusion block (MCF) is proposed to enhance the network's capacity to simultaneously extract global and local features. Finally, to extend the scope of global feature information under larger receptive fields, a multi-receptive field fusion (MRF) block is incorporated. Through comprehensive experimental evaluations on two publicly available datasets for surgical instrument segmentation, the proposed MSDE-Net demonstrates superior performance compared to existing methods.
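To make the dual-encoding idea concrete, below is a minimal PyTorch sketch of a CNN branch and a transformer branch fused by a learned attention gate. This is an illustration under my own assumptions (module names such as AttentionFusion and DualEncoderStage are invented here), not the MSDE-Net implementation.

```python
# Minimal sketch of a dual-branch (CNN + transformer) encoder stage with a
# simple attention-based fusion gate. Names are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse CNN (local) and transformer (global) features with a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        a = self.gate(torch.cat([local_feat, global_feat], dim=1))
        return a * local_feat + (1 - a) * global_feat

class DualEncoderStage(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(                      # local-feature branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = AttentionFusion(channels)

    def forward(self, x):
        local_feat = self.cnn(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, HW, C)
        global_tok, _ = self.attn(tokens, tokens, tokens)  # global-feature branch
        global_feat = global_tok.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(local_feat, global_feat)

feats = DualEncoderStage()(torch.randn(1, 64, 32, 32))
print(feats.shape)  # torch.Size([1, 64, 32, 32])
```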
2. Urrea C, Garcia-Garcia Y, Kern J. Improving Surgical Scene Semantic Segmentation through a Deep Learning Architecture with Attention to Class Imbalance. Biomedicines 2024;12:1309. [PMID: 38927516; PMCID: PMC11201157; DOI: 10.3390/biomedicines12061309]
Abstract
This article addresses the semantic segmentation of laparoscopic surgery images, placing special emphasis on the segmentation of structures with a smaller number of observations. As a result of this study, adjustment parameters are proposed for deep neural network architectures, enabling robust segmentation of all structures in the surgical scene. The U-Net architecture with five encoder-decoders (U-Net5ed), SegNet-VGG19, and DeepLabv3+ employing different backbones are implemented. Three main experiments are conducted, working with the Rectified Linear Unit (ReLU), Gaussian Error Linear Unit (GELU), and Swish activation functions. The applied loss functions include Cross Entropy (CE), Focal Loss (FL), Tversky Loss (TL), Dice Loss (DiL), Cross Entropy Dice Loss (CEDL), and Cross Entropy Tversky Loss (CETL). The performance of Stochastic Gradient Descent with momentum (SGDM) and Adaptive Moment Estimation (Adam) optimizers is compared. It is qualitatively and quantitatively confirmed that the DeepLabv3+ and U-Net5ed architectures yield the best results. The DeepLabv3+ architecture with the ResNet-50 backbone, Swish activation function, and CETL loss function reports a Mean Accuracy (MAcc) of 0.976 and Mean Intersection over Union (MIoU) of 0.977. For structures with a smaller number of observations, such as the hepatic vein, cystic duct, liver ligament, and blood, the results are very competitive and promising compared with the consulted literature. The selected parameters were also validated in the YOLOv9 architecture, which showed improved semantic segmentation compared with the results obtained with the original architecture.
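As an illustration of the class-imbalance-aware objective the study evaluates, here is a minimal sketch of a combined cross-entropy + Tversky loss (the CETL idea). The equal weighting and the alpha/beta values are assumptions for demonstration, not the paper's settings.

```python
# Minimal sketch of a combined cross-entropy + Tversky loss for multi-class
# segmentation; alpha > beta penalizes false negatives more, which helps
# rare classes. Hyperparameter values here are illustrative only.
import torch
import torch.nn.functional as F

def tversky_loss(logits, target, alpha=0.7, beta=0.3, eps=1e-6):
    """logits: (B, C, H, W); target: (B, H, W) integer class map."""
    num_classes = logits.shape[1]
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                       # sum over batch and spatial dims
    tp = (probs * onehot).sum(dims)
    fp = (probs * (1 - onehot)).sum(dims)
    fn = ((1 - probs) * onehot).sum(dims)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky).mean()            # average over classes

def cross_entropy_tversky_loss(logits, target, ce_weight=0.5):
    return ce_weight * F.cross_entropy(logits, target) + \
           (1 - ce_weight) * tversky_loss(logits, target)

logits = torch.randn(2, 5, 64, 64)         # 5-class toy prediction
target = torch.randint(0, 5, (2, 64, 64))
print(cross_entropy_tversky_loss(logits, target))
```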
Affiliation(s)
- Claudio Urrea
- Electrical Engineering Department, Faculty of Engineering, University of Santiago of Chile, Las Sophoras 165, Estación Central, Santiago 9170020, Chile; (Y.G.-G.); (J.K.)
3. Liu M, Han Y, Wang J, Wang C, Wang Y, Meijering E. LSKANet: Long Strip Kernel Attention Network for Robotic Surgical Scene Segmentation. IEEE Trans Med Imaging 2024;43:1308-1322. [PMID: 38015689; DOI: 10.1109/TMI.2023.3335406]
Abstract
Surgical scene segmentation is a critical task in robot-assisted surgery. However, the complexity of the surgical scene, which mainly includes local feature similarity (e.g., between different anatomical tissues), intraoperative complex artifacts, and indistinguishable boundaries, poses significant challenges to accurate segmentation. To tackle these problems, we propose the Long Strip Kernel Attention network (LSKANet), including two well-designed modules named the Dual-block Large Kernel Attention module (DLKA) and the Multiscale Affinity Feature Fusion module (MAFF), which enable precise segmentation of surgical images. Specifically, by introducing strip convolutions with different topologies (cascaded and parallel) in two blocks and a large kernel design, DLKA can make full use of region- and strip-like surgical features and extract both visual and structural information to reduce false segmentation caused by local feature similarity. In MAFF, affinity matrices calculated from multiscale feature maps are applied as feature fusion weights, which helps address interference from artifacts by suppressing the activations of irrelevant regions. Besides, a hybrid loss with a Boundary Guided Head (BGH) is proposed to help the network segment indistinguishable boundaries effectively. We evaluate the proposed LSKANet on three datasets with different surgical scenes. The experimental results show that our method achieves new state-of-the-art results on all three datasets, with improvements of 2.6%, 1.4%, and 3.4% mIoU, respectively. Furthermore, our method is compatible with different backbones and can significantly increase their segmentation accuracy. Code is available at https://github.com/YubinHan73/LSKANet.
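The core building block, strip convolutions with a large effective kernel used as attention, can be sketched as follows. The kernel size and layout here are assumptions for illustration; this is not the LSKANet code.

```python
# Minimal sketch of strip-convolution attention: depthwise 1xk and kx1
# strips approximate a kxk large kernel cheaply and match elongated,
# strip-like instrument shapes. k=11 is an illustrative choice.
import torch
import torch.nn as nn

class StripAttention(nn.Module):
    def __init__(self, channels: int, k: int = 11):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k),
                                    padding=(0, k // 2), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, (k, 1),
                                  padding=(k // 2, 0), groups=channels)
        self.proj = nn.Conv2d(channels, channels, 1)  # mix channels

    def forward(self, x):
        attn = self.proj(self.vertical(self.horizontal(x)))
        return x * attn.sigmoid()          # attention map gates the input

x = torch.randn(1, 32, 48, 48)
print(StripAttention(32)(x).shape)         # torch.Size([1, 32, 48, 48])
```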
4. Rueckert T, Rueckert D, Palm C. Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art. Comput Biol Med 2024;169:107929. [PMID: 38184862; DOI: 10.1016/j.compbiomed.2024.107929]
Abstract
In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.
Affiliation(s)
- Tobias Rueckert
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany.
- Daniel Rueckert
- Artificial Intelligence in Healthcare and Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, UK
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany; Regensburg Center of Health Sciences and Technology (RCHST), OTH Regensburg, Germany
5. Ying M, Wang Y, Yang K, Wang H, Liu X. A deep learning knowledge distillation framework using knee MRI and arthroscopy data for meniscus tear detection. Front Bioeng Biotechnol 2024;11:1326706. [PMID: 38292305; PMCID: PMC10825958; DOI: 10.3389/fbioe.2023.1326706]
Abstract
Purpose: To construct a deep learning knowledge distillation framework exploring the use of MRI alone or combined with distilled arthroscopy information for meniscus tear detection. Methods: A database of 199 paired knee arthroscopy-MRI exams was used to develop a multimodal teacher network and an MRI-based student network, both using residual neural network architectures. A knowledge distillation framework comprising the multimodal teacher network T and the monomodal student network S was proposed. We optimized mean squared error (MSE) and cross-entropy (CE) loss functions to enable the student network S to learn arthroscopic information from the teacher network T through the knowledge distillation framework, ultimately resulting in a distilled student network S_T. A coronal proton density (PD)-weighted fat-suppressed MRI sequence was used in this study. Fivefold cross-validation was employed, and accuracy, sensitivity, specificity, F1-score, receiver operating characteristic (ROC) curves, and area under the ROC curve (AUC) were used to evaluate the medial and lateral meniscal tear detection performance of the models, including the undistilled student model S, the distilled student model S_T, and the teacher model T. Results: The AUCs of the undistilled student model S, the distilled student model S_T, and the teacher model T for medial meniscus (MM) and lateral meniscus (LM) tear detection were 0.773/0.672, 0.792/0.751, and 0.834/0.746, respectively. The distilled student model S_T had higher AUCs than the undistilled model S. After knowledge distillation, the distilled student model demonstrated promising results, with accuracy (0.764/0.734), sensitivity (0.838/0.661), and F1-score (0.680/0.754) for medial and lateral tear detection better than those of the undistilled model (accuracy 0.734/0.648, sensitivity 0.733/0.607, F1-score 0.620/0.673). Conclusion: Through the knowledge distillation framework, the MRI-based student model S benefited from the multimodal teacher model T and achieved improved meniscus tear detection performance.
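The distillation objective described in the Methods (MSE on features plus CE on labels) can be sketched as follows; the feature shapes and the loss weighting are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of an MSE + CE distillation objective: the MRI-only student
# mimics the (frozen) multimodal teacher's features while still fitting the
# labels. The weighting lam is an illustrative assumption.
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_logits, labels,
                      lam: float = 0.5):
    mse = F.mse_loss(student_feat, teacher_feat.detach())  # teacher frozen
    ce = F.cross_entropy(student_logits, labels)           # label supervision
    return ce + lam * mse

s_feat, t_feat = torch.randn(8, 256), torch.randn(8, 256)  # toy feature pairs
logits, labels = torch.randn(8, 2), torch.randint(0, 2, (8,))  # tear / no tear
print(distillation_loss(s_feat, t_feat, logits, labels))
```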
Affiliation(s)
- Mengjie Ying
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yufan Wang
- Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai, China
- School of Biomedical Engineering and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Kai Yang
- Department of Radiology, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haoyuan Wang
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xudong Liu
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
6. Wang Y, Lam HK, Xu Y, Yin F, Qian K. Multi-task learning framework to predict the status of central venous catheter based on radiographs. Artif Intell Med 2023;146:102721. [PMID: 38042594; DOI: 10.1016/j.artmed.2023.102721]
Abstract
Hospital patients can have catheters and lines inserted during the course of their admission to deliver medicines, especially the central venous catheter (CVC). However, malposition of a CVC can lead to many complications, even death, so clinicians routinely check catheter status on X-ray images to avoid these issues. To reduce the workload of clinicians and improve the efficiency of CVC status detection, a multi-task learning framework for catheter status classification based on a convolutional neural network (CNN) is proposed. The proposed framework contains three significant components: a modified HRNet, multi-task supervision comprising segmentation supervision and heatmap regression supervision, and a classification branch. The modified HRNet maintains high-resolution features from start to end, ensuring the generation of high-quality auxiliary information for classification. The multi-task supervision helps alleviate interference from other line-like structures, such as other tubes and anatomical structures visible in the X-ray image. Furthermore, during inference, this module also serves as an interpretation interface, showing where the framework pays attention. Finally, the classification branch predicts the class of the catheter status. A public CVC dataset is used to evaluate the performance of the proposed method, which achieves an AUC (area under the ROC curve) of 0.823 and 82.6% accuracy on the test dataset. Compared with two state-of-the-art methods (ATCM and EDMC), the proposed method performs best.
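A minimal sketch of the three-part objective (segmentation supervision, heatmap regression supervision, and classification) follows; the loss weights and tensor shapes are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of a three-task training objective for catheter status
# prediction. Weights w_seg/w_hm/w_cls are illustrative assumptions.
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_target, heatmap_pred, heatmap_target,
                   cls_logits, cls_target, w_seg=1.0, w_hm=1.0, w_cls=1.0):
    seg = F.cross_entropy(seg_logits, seg_target)   # catheter vs. background
    hm = F.mse_loss(heatmap_pred, heatmap_target)   # heatmap regression
    cls = F.cross_entropy(cls_logits, cls_target)   # catheter status class
    return w_seg * seg + w_hm * hm + w_cls * cls

seg_logits = torch.randn(2, 2, 128, 128)
seg_target = torch.randint(0, 2, (2, 128, 128))
hm_pred, hm_target = torch.randn(2, 1, 128, 128), torch.rand(2, 1, 128, 128)
cls_logits, cls_target = torch.randn(2, 3), torch.randint(0, 3, (2,))
print(multitask_loss(seg_logits, seg_target, hm_pred, hm_target,
                     cls_logits, cls_target))
```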
Affiliation(s)
- Yuhan Wang
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Hak Keung Lam
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Yujia Xu
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Faliang Yin
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Kun Qian
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Campus, St Thomas' Hospital, Westminster Bridge Road, London, SE1 7EH, United Kingdom
7. Nuliqiman M, Xu M, Sun Y, Cao J, Chen P, Gao Q, Xu P, Ye J. Artificial Intelligence in Ophthalmic Surgery: Current Applications and Expectations. Clin Ophthalmol 2023;17:3499-3511. [PMID: 38026589; PMCID: PMC10674717; DOI: 10.2147/OPTH.S438127]
Abstract
Artificial Intelligence (AI) has found rapidly growing applications in ophthalmology, achieving robust recognition and classification in most kinds of ocular disease. Ophthalmic surgery is among the most delicate forms of microsurgery, demanding great precision and stability from surgeons. The growing demand for AI-assisted ophthalmic surgery will be an important factor in accelerating precision medicine. In clinical practice, it is instrumental to review the considerable evidence on current AI technologies applied to ophthalmic surgery, spanning both the progression and the innovation of precision medicine. Bibliographic databases including PubMed and Google Scholar were searched using keywords such as "ophthalmic surgery", "surgical selection", "candidate screening", and "robot-assisted surgery" to find articles about AI technology published from 2018 to 2023. Apart from editorials and letters to the editor, all types of approaches were considered. In this paper, we provide an up-to-date review of artificial intelligence in eye surgery, with a specific focus on its application to candidate screening, surgery selection, postoperative prediction, and real-time intraoperative guidance.
Affiliation(s)
- Maimaiti Nuliqiman
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Mingyu Xu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Yiming Sun
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Jing Cao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Pengjie Chen
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Qi Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Peifang Xu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
8. Shen W, Wang Y, Liu M, Wang J, Ding R, Zhang Z, Meijering E. Branch Aggregation Attention Network for Robotic Surgical Instrument Segmentation. IEEE Trans Med Imaging 2023;42:3408-3419. [PMID: 37342952; DOI: 10.1109/TMI.2023.3288127]
Abstract
Surgical instrument segmentation is of great significance to robot-assisted surgery, but noise caused by reflection, water mist, and motion blur during surgery, as well as the varied forms of surgical instruments, greatly increases the difficulty of precise segmentation. A novel method called the Branch Aggregation Attention network (BAANet) is proposed to address these challenges; it adopts a lightweight encoder and two designed modules, named the Branch Balance Aggregation module (BBA) and the Block Attention Fusion module (BAF), for efficient feature localization and denoising. By introducing the unique BBA module, features from multiple branches are balanced and optimized through a combination of addition and multiplication to complement strengths and effectively suppress noise. Furthermore, to fully integrate contextual information and capture the region of interest, the BAF module is proposed in the decoder; it receives adjacent feature maps from the BBA module and localizes the surgical instruments from both global and local perspectives using a dual-branch attention mechanism. According to the experimental results, the proposed method has the advantage of being lightweight while outperforming the second-best method by 4.03%, 1.53%, and 1.34% in mIoU scores on three challenging surgical instrument datasets, respectively, compared to existing state-of-the-art methods. Code is available at https://github.com/SWT-1014/BAANet.
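The abstract's description of balancing branch features "through a combination of addition and multiplication" can be sketched as follows; the exact BBA design is not given in this listing, so this is only one plausible reading under my own assumptions.

```python
# Minimal sketch of balancing multi-branch features via element-wise sum
# and product. Module name and refinement layer are illustrative.
import torch
import torch.nn as nn

class BranchBalanceFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, branches):
        # Addition complements strengths across branches.
        added = torch.stack(branches, dim=0).sum(dim=0)
        # Multiplication suppresses noise: only activations shared by all
        # branches survive the product.
        multiplied = branches[0]
        for b in branches[1:]:
            multiplied = multiplied * b
        return self.refine(added + multiplied)

f1, f2, f3 = (torch.randn(1, 64, 32, 32) for _ in range(3))
print(BranchBalanceFusion(64)([f1, f2, f3]).shape)  # torch.Size([1, 64, 32, 32])
```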
9. Semi-Supervised Medical Image Segmentation Guided by Bi-Directional Constrained Dual-Task Consistency. Bioengineering (Basel) 2023;10:225. [PMID: 36829720; PMCID: PMC9952498; DOI: 10.3390/bioengineering10020225]
Abstract
BACKGROUND Medical image processing tasks represented by multi-object segmentation are of great significance for surgical planning, robot-assisted surgery, and surgical safety. However, the exceptionally low contrast among tissues and the limited available annotated data make developing an automatic segmentation algorithm for pelvic CT challenging. METHODS A bi-directionally constrained dual-task consistency model named PICT is proposed to improve segmentation quality by leveraging freely available unlabeled data. First, to learn more features of unlabeled data, it encourages the model prediction for an interpolated image to be consistent with the interpolation of the model predictions at the pixel, model, and data levels. Moreover, to constrain erroneous predictions caused by interpolation interference, PICT designs an auxiliary pseudo-supervision task that focuses on the underlying information of non-interpolated data. Finally, an effective loss algorithm for both consistency tasks is designed to ensure that they work in a complementary manner and produce more reliable predictions. RESULTS Quantitative experiments show that the proposed PICT achieves 87.18%, 96.42%, and 79.41% mean DSC score on ACDC, CTPelvic1k, and an individual multi-tissue pelvis dataset, with gains of around 0.8%, 0.5%, and 1% over the state-of-the-art semi-supervised method. Compared to the baseline supervised method, PICT brings improvements of 3-9%. CONCLUSIONS The developed PICT model can effectively leverage unlabeled data to improve the segmentation quality of low-contrast medical images. The segmentation results could improve the precision of surgical path planning and provide input for robot-assisted surgery.
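The pixel-level interpolation consistency idea can be sketched as follows: the prediction for a mixed image is pushed toward the mix of the predictions for the unmixed images. This illustrates the principle only; the full PICT model adds model- and data-level consistency and a pseudo-supervision task.

```python
# Minimal sketch of pixel-level interpolation consistency on unlabeled
# inputs. The Beta(0.3, 0.3) mixing and MSE penalty are illustrative.
import torch
import torch.nn as nn

def interpolation_consistency_loss(model, x1, x2, alpha: float = 0.3):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2
    with torch.no_grad():                  # targets from unmixed inputs
        p_mix_target = (lam * model(x1).softmax(1)
                        + (1 - lam) * model(x2).softmax(1))
    p_mix = model(x_mix).softmax(1)        # prediction for the mixed input
    return ((p_mix - p_mix_target) ** 2).mean()

model = nn.Conv2d(1, 4, 3, padding=1)      # stand-in segmentation "network"
x1, x2 = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
print(interpolation_consistency_loss(model, x1, x2))
```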
10. Luo YW, Chen HY, Li Z, Liu WP, Wang K, Zhang L, Fu P, Yue WQ, Bian GB. Fast instruments and tissues segmentation of micro-neurosurgical scene using high correlative non-local network. Comput Biol Med 2023;153:106531. [PMID: 36638619; DOI: 10.1016/j.compbiomed.2022.106531]
Abstract
Surgical scene segmentation provides critical information for guidance in micro-neurosurgery. Segmentation of instruments and critical tissues further contributes to robot-assisted surgery and surgical evaluation. However, due to the lack of a relevant scene segmentation dataset, scale variation, and local similarity, micro-neurosurgical segmentation faces many challenges. To address these issues, a high correlative non-local network (HCNNet) is proposed to aggregate multi-scale features through an optimized non-local mechanism. HCNNet adopts a two-branch design to generate features of different scales efficiently, with the two branches sharing weights in the shallow layers. Several short-term dense concatenate (STDC) modules are combined as the backbone to capture both semantic and spatial information. Besides, a high correlative non-local module (HCNM) is designed to guide the upsampling of high-level features by modeling the global context generated from low-level features; it filters out confused pixels of different classes in the non-local correlation map. Meanwhile, a large segmentation dataset named NeuroSeg is constructed, which contains 15 types of instruments and 3 types of tissues that appear in meningioma resection surgery. The proposed HCNNet achieves state-of-the-art performance on NeuroSeg, reaching an inference speed of 54.85 FPS with the highest accuracy of 59.62% mIoU, 74.7% Dice, 70.55% mAcc, and 87.12% aAcc.
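A minimal sketch of a non-local block in which a low-level feature guides a high-level feature via a global correlation map follows; this is my reading of the HCNM description, not the released code, and the query/key/value assignment is an assumption.

```python
# Minimal sketch of cross-feature non-local attention: the high-level
# feature queries the low-level feature, and the resulting global
# correlation map reweights low-level values as residual guidance.
import torch
import torch.nn as nn

class NonLocalGuidance(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 2, 1)
        self.k = nn.Conv2d(channels, channels // 2, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, high, low):
        b, c, h, w = high.shape
        q = self.q(high).flatten(2).transpose(1, 2)    # (B, HW, C/2)
        k = self.k(low).flatten(2)                     # (B, C/2, HW)
        v = self.v(low).flatten(2).transpose(1, 2)     # (B, HW, C)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # correlation map
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return high + out                              # residual guidance

high = torch.randn(1, 64, 16, 16)
low = torch.randn(1, 64, 16, 16)   # assumed already resized to match
print(NonLocalGuidance(64)(high, low).shape)           # torch.Size([1, 64, 16, 16])
```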
Affiliation(s)
- Yu-Wen Luo
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin, 300131, China
- Hai-Yong Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin, 300131, China
- Zhen Li
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Electronic and Information Engineering, Tongji University, Shanghai, 201804, China
- Wei-Peng Liu
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin, 300131, China
- Ke Wang
- Beijing Tiantan Hospital, Capital Medical University, Beijing, 100190, China
- Li Zhang
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin, 300131, China
- Pan Fu
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Automation at Beijing Information Science and Technology University, Beijing, 100192, China
- Wen-Qian Yue
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin, 300131, China
- Gui-Bin Bian
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China