1
Qiu L, Zhong L, Li J, Feng W, Zhou C, Pan J. SFT-SGAT: A semi-supervised fine-tuning self-supervised graph attention network for emotion recognition and consciousness detection. Neural Netw 2024; 180:106643. [PMID: 39186838] [DOI: 10.1016/j.neunet.2024.106643]
Abstract
Emotion recognition is highly important in the field of brain-computer interfaces (BCIs). However, due to the individual variability in electroencephalogram (EEG) signals and the challenges in obtaining accurate emotional labels, traditional methods have shown poor performance in cross-subject emotion recognition. In this study, we propose a cross-subject EEG emotion recognition method based on a semi-supervised fine-tuning self-supervised graph attention network (SFT-SGAT). First, we model multi-channel EEG signals by constructing a graph structure that dynamically captures the spatiotemporal topological features of EEG signals. Second, we employ a self-supervised graph attention neural network to facilitate model training, mitigating the impact of signal noise on the model. Finally, a semi-supervised approach is used to fine-tune the model, enhancing its generalization ability in cross-subject classification. By combining supervised and unsupervised learning techniques, the SFT-SGAT maximizes the utility of limited labeled data in EEG emotion recognition tasks, thereby enhancing the model's performance. Experiments based on leave-one-subject-out cross-validation demonstrate that SFT-SGAT achieves state-of-the-art cross-subject emotion recognition performance on the SEED and SEED-IV datasets, with accuracies of 92.04% and 82.76%, respectively. Furthermore, experiments conducted on a self-collected dataset comprising ten healthy subjects and eight patients with disorders of consciousness (DOCs) revealed that SFT-SGAT attained high classification performance in healthy subjects (maximum accuracy of 95.84%) and was successfully applied to DOC patients, with four patients achieving emotion recognition accuracies exceeding 60%. The experiments demonstrate the effectiveness of the proposed SFT-SGAT model in cross-subject EEG emotion recognition and its potential for assessing levels of consciousness in patients with DOC.
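The leave-one-subject-out (LOSO) protocol used in this evaluation can be sketched as follows. This is an illustrative sketch, not the authors' code: the majority-class baseline stands in for the actual SFT-SGAT model, and the toy trial data are hypothetical.

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation: each subject's
# trials are held out in turn, and the model is trained on all other subjects.
from collections import Counter

def loso_splits(subject_ids):
    """Yield (train_subjects, held_out_subject) pairs, holding each subject out once."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        yield [s for s in subjects if s != held_out], held_out

def majority_baseline(train_labels):
    """Placeholder model: predicts the most frequent training label."""
    return Counter(train_labels).most_common(1)[0][0]

# Toy data: (subject_id, label) pairs standing in for EEG trials.
trials = [(s, lab) for s in range(1, 6) for lab in ("pos", "neg", "pos")]

accuracies = []
for train_subjects, test_subject in loso_splits(s for s, _ in trials):
    train_labels = [lab for s, lab in trials if s in train_subjects]
    test_labels = [lab for s, lab in trials if s == test_subject]
    pred = majority_baseline(train_labels)
    accuracies.append(sum(lab == pred for lab in test_labels) / len(test_labels))

print(sum(accuracies) / len(accuracies))  # mean cross-subject accuracy
```

The reported 92.04% and 82.76% figures are means over exactly such per-subject folds.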
Affiliation(s)
- Lina Qiu
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China; Research Station in Mathematics, South China Normal University, Guangzhou, 510630, China.
- Liangquan Zhong
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China.
- Jianping Li
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China.
- Weisen Feng
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China.
- Chengju Zhou
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China.
- Jiahui Pan
- School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China.
2
Hamzah HA, Abdalla KK. EEG-based emotion recognition systems; comprehensive study. Heliyon 2024; 10:e31485. [PMID: 38818173] [PMCID: PMC11137547] [DOI: 10.1016/j.heliyon.2024.e31485]
Abstract
Emotion recognition technology through EEG signal analysis is currently a fundamental concept in artificial intelligence. This recognition has major practical implications in emotional health care, human-computer interaction, and related areas. This paper provides a comprehensive study of methods for extracting electroencephalography (EEG) features for emotion recognition from four perspectives: time domain features, frequency domain features, time-frequency features, and nonlinear features. We summarize the pattern recognition methods adopted in most related works and, because the rapid development of deep learning (DL) has attracted the attention of researchers in this field, pay particular attention to DL-based studies, analysing their characteristics, advantages, disadvantages, and applicable scenarios. Finally, we summarize the current challenges and future development directions in this field. This paper can help novice researchers gain a systematic understanding of the current status of emotion recognition research based on EEG signals and provide ideas for subsequent related research.
Affiliation(s)
- Hussein Ali Hamzah
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
- Kasim K. Abdalla
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
3
Ng HW, Guan C. Subject-independent meta-learning framework towards optimal training of EEG-based classifiers. Neural Netw 2024; 172:106108. [PMID: 38219680] [DOI: 10.1016/j.neunet.2024.106108]
Abstract
Advances in deep learning have shown great promise for performing high-accuracy electroencephalography (EEG) signal classification in a variety of tasks. However, many EEG-based datasets are plagued by high inter-subject signal variability. Robust deep learning models are notoriously difficult to train under such scenarios, often leading to subpar or widely varying performance across subjects under the leave-one-subject-out paradigm. Recently, the model-agnostic meta-learning framework was introduced as a way to increase a model's ability to generalize to new tasks. While the original framework focused on task-based meta-learning, this research aims to show that the meta-learning methodology can be modified towards subject-based signal classification while maintaining the same task objectives, and achieve state-of-the-art performance. Namely, we propose a novel few/zero-shot subject-independent meta-learning framework for multi-class inner speech and binary-class motor imagery classification. Compared to current subject-adaptive methods, which utilize a large number of labels from the target subject, the proposed framework shows its effectiveness in training zero-calibration and few-shot models for subject-independent EEG classification. The proposed few/zero-shot subject-independent meta-learning mechanism performs well on both small and large datasets and achieves robust, generalized performance across subjects. The results show a significant improvement over the current state of the art, with binary-class motor imagery reaching an accuracy of 88.70% and multi-class inner speech averaging 31.15%. Code will be made available to the public upon publication.
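The core idea of treating subjects as meta-learning tasks can be conveyed with a toy first-order MAML sketch. Everything below is a hypothetical stand-in: a one-parameter linear model replaces the paper's deep EEG classifier, and the subject "biases", learning rates, and step counts are invented for illustration only.

```python
# Toy first-order MAML: each "subject" is a task with data y = W_TRUE*x + bias.
# The meta-learner finds an initialization w_meta from which a few gradient
# steps adapt well to an unseen subject. All constants are illustrative.
W_TRUE = 2.0
INNER_LR, META_LR = 0.05, 0.1

def subject_data(bias, n=20):
    # One subject's trials; the bias stands in for inter-subject variability.
    return [(i / 10, W_TRUE * (i / 10) + bias) for i in range(n)]

def grad(w, data):
    # Gradient of mean squared error for the one-parameter model y_hat = w*x.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def adapt(w, data, steps=1):
    # Inner-loop adaptation: gradient descent from the meta-initialization.
    for _ in range(steps):
        w -= INNER_LR * grad(w, data)
    return w

# Meta-training over "seen" subjects; first-order MAML reuses the gradient at
# the adapted weights as the meta-gradient.
train_subjects = [subject_data(b) for b in (-0.2, 0.0, 0.2)]
w_meta = 0.0
for _ in range(200):
    meta_g = sum(grad(adapt(w_meta, d), d) for d in train_subjects) / len(train_subjects)
    w_meta -= META_LR * meta_g

# Few-shot adaptation to an unseen subject with a different bias.
new_subject = subject_data(bias=0.1)
w_adapted = adapt(w_meta, new_subject, steps=5)
print(w_meta, w_adapted)
```

After meta-training, a handful of inner steps on the unseen subject moves the weight toward that subject's own optimum, which is the few-shot calibration behavior the abstract describes.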
Affiliation(s)
- Han Wei Ng
- Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore; AI Singapore, 3 Research Link, 117602, Singapore.
- Cuntai Guan
- Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore
4
Pan J, Liang R, He Z, Li J, Liang Y, Zhou X, He Y, Li Y. ST-SCGNN: A Spatio-Temporal Self-Constructing Graph Neural Network for Cross-Subject EEG-Based Emotion Recognition and Consciousness Detection. IEEE J Biomed Health Inform 2024; 28:777-788. [PMID: 38015677] [DOI: 10.1109/jbhi.2023.3335854]
Abstract
In this paper, a novel spatio-temporal self-constructing graph neural network (ST-SCGNN) is proposed for cross-subject emotion recognition and consciousness detection. For spatio-temporal feature generation, activation and connection pattern features are first extracted and then combined to leverage their complementary emotion-related information. Next, a self-constructing graph neural network with a spatio-temporal model is presented. Specifically, the graph structure of the neural network is dynamically updated by the self-constructing module of the input signal. Experiments based on the SEED and SEED-IV datasets showed that the model achieved average accuracies of 85.90% and 76.37%, respectively. Both values exceed state-of-the-art results under the same protocol. In clinical practice, moreover, patients with disorders of consciousness (DOC) have suffered severe brain injuries, and sufficient training data for EEG-based emotion recognition cannot be collected from them. Our proposed ST-SCGNN method for cross-subject emotion recognition was therefore first attempted by training on ten healthy subjects and testing on eight patients with DOC. We found that two patients obtained accuracies significantly higher than chance level and showed neural patterns similar to those of healthy subjects. Covert consciousness and emotion-related abilities were thus demonstrated in these two patients. Our proposed ST-SCGNN for cross-subject emotion recognition could be a promising tool for consciousness detection in DOC patients.
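The claim that a patient scored "significantly higher than chance level" can be illustrated with a one-sided binomial test. This is a generic sketch rather than the authors' statistical procedure, and the trial counts below are hypothetical.

```python
# One-sided binomial test of k correct out of n trials against chance p = 0.5
# (two emotion classes assumed).
from math import comb

def binom_p_value(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of doing this well by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical patient: 40 of 60 trials correct (66.7% accuracy).
p = binom_p_value(40, 60)
print(p < 0.05)  # above chance at the 5% level
```

With 60 trials, roughly 66% accuracy is already well above the chance threshold, which is why accuracies "exceeding 60%" on enough trials can be statistically meaningful.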
5
Moreno-Alcayde Y, Traver VJ, Leiva LA. Sneaky emotions: impact of data partitions in affective computing experiments with brain-computer interfacing. Biomed Eng Lett 2024; 14:103-113. [PMID: 38186953] [PMCID: PMC10769959] [DOI: 10.1007/s13534-023-00316-5]
Abstract
Brain-Computer Interfacing (BCI) has shown promise in Machine Learning (ML) for emotion recognition. Unfortunately, how data are partitioned into training/test splits is often overlooked, which makes it difficult to attribute research findings to actual modeling improvements or to partitioning issues. We introduce the "data transfer rate" construct (i.e., how much of the test samples' data is seen during training) and use it to examine data partitioning effects under several conditions. As a use case, we consider emotion recognition in videos using electroencephalogram (EEG) signals. Three data splits are considered, each representing a relevant BCI task: subject-independent (affective decoding), video-independent (affective annotation), and time-based (feature extraction). Classification accuracy may change significantly (e.g., from 50% to 90%) depending on how the data are partitioned; this was evident in all experimental conditions tested. Our results show that (1) for affective decoding, it is hard to achieve performance above the baseline case (random classification) unless some data from the test subjects are included in the training partition; (2) for affective annotation, having data from the same subject in the training and test partitions, even when they correspond to different videos, also increases performance; and (3) later signal segments are generally more discriminative, but it is the number of segments (data points) that matters most. Our findings have implications not only for how brain data are managed but also for how experimental conditions and results are reported.
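The three partition regimes can be made concrete on toy trial identifiers. This is my own illustration, not the authors' code; "subject transfer" below is a simplified proxy for the paper's data-transfer-rate construct, namely the fraction of test trials whose subject also appears in training.

```python
# Three data-partition regimes over toy (subject, video, segment) trial IDs.

def subject_transfer(train, test):
    """Fraction of test trials whose subject also occurs in the training set."""
    train_subjects = {s for s, _, _ in train}
    return sum(s in train_subjects for s, _, _ in test) / len(test)

trials = [(s, v, t) for s in range(4) for v in range(3) for t in range(2)]

# 1) Subject-independent split: hold out subject 3 entirely.
si_train = [x for x in trials if x[0] != 3]
si_test = [x for x in trials if x[0] == 3]

# 2) Video-independent split: hold out video 2; subjects overlap.
vi_train = [x for x in trials if x[1] != 2]
vi_test = [x for x in trials if x[1] == 2]

# 3) Time-based split: later segments in test; subjects and videos overlap.
tb_train = [x for x in trials if x[2] == 0]
tb_test = [x for x in trials if x[2] == 1]

print(subject_transfer(si_train, si_test))  # 0.0: no subject leakage
print(subject_transfer(vi_train, vi_test))  # 1.0: every test subject seen in training
print(subject_transfer(tb_train, tb_test))  # 1.0: same, plus same videos
```

Only the first split measures true cross-subject generalization; the other two leak subject-specific signal into training, which is the effect the abstract reports as inflated accuracy.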
Affiliation(s)
- Yoelvis Moreno-Alcayde
- Institute of New Imaging Technologies, Universitat Jaume I, Av. Vicent Sos Baynat, s/n, 12071 Castellón, Spain
- V. Javier Traver
- Institute of New Imaging Technologies, Universitat Jaume I, Av. Vicent Sos Baynat, s/n, 12071 Castellón, Spain
- Luis A. Leiva
- University of Luxembourg, Esch-sur-Alzette, Luxembourg
6
Li R, Ren C, Zhang S, Yang Y, Zhao Q, Hou K, Yuan W, Zhang X, Hu B. STSNet: a novel spatio-temporal-spectral network for subject-independent EEG-based emotion recognition. Health Inf Sci Syst 2023; 11:25. [PMID: 37265664] [PMCID: PMC10229500] [DOI: 10.1007/s13755-023-00226-x]
Abstract
How to use the characteristics of EEG signals to obtain more complementary and discriminative data representations is a key issue in EEG-based emotion recognition. Many studies have tried spatio-temporal or spatio-spectral feature fusion to obtain higher-level representations of EEG data. However, these studies ignored the complementarity between the spatial, temporal and spectral domains of EEG signals, thus limiting the classification ability of models. This study proposes an end-to-end network based on ManifoldNet and BiLSTM networks, named STSNet. The STSNet first constructs a 4-D spatio-temporal-spectral data representation and a spatio-temporal data representation of EEG signals in manifold space. These are then fed into the ManifoldNet network and the BiLSTM network, respectively, to compute higher-level features and achieve spatio-temporal-spectral feature fusion. Finally, extensive comparative experiments were performed on two public datasets, DEAP and DREAMER, using the subject-independent leave-one-subject-out cross-validation strategy. On the DEAP dataset, the average accuracies for valence and arousal are 69.38% and 71.88%, respectively; on the DREAMER dataset, the average accuracies for valence and arousal are 78.26% and 82.37%, respectively. The experimental results show that the STSNet model has good emotion recognition performance.
Affiliation(s)
- Rui Li
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Chao Ren
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Sipo Zhang
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Yikun Yang
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Qiqi Zhao
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Kechen Hou
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Wenjie Yuan
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Xiaowei Zhang
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
- Bin Hu
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
7
Chen J, Cui Y, Qian C, He E. A fine-tuning deep residual convolutional neural network for emotion recognition based on frequency-channel matrices representation of one-dimensional electroencephalography. Comput Methods Biomech Biomed Engin 2023:1-11. [PMID: 38017703] [DOI: 10.1080/10255842.2023.2286918]
Abstract
Emotion recognition (ER) plays a crucial role in enabling machines to perceive human emotional and psychological states, thus enhancing human-machine interaction. Recently, there has been growing interest in ER based on electroencephalogram (EEG) signals. However, due to the noisy, nonlinear, and nonstationary properties of EEG signals, developing an automatic and high-accuracy ER system is still a challenging task. In this study, a pretrained deep residual convolutional neural network comprising 17 convolutional layers and one fully connected layer is combined, via transfer learning, with frequency-channel matrices (FCM): two-dimensional representations computed from Welch power spectral density estimates of the one-dimensional EEG data. This approach improves ER by automatically learning the underlying intrinsic features of multi-channel EEG data. The experimental results show a mean accuracy of 93.61 ± 0.84%, a mean precision of 94.70 ± 0.60%, a mean sensitivity of 95.13 ± 1.02%, a mean specificity of 91.04 ± 1.02%, and a mean F1-score of 94.91 ± 0.68% using 5-fold cross-validation on the DEAP dataset. Meanwhile, to better explore and understand how the proposed model works, we applied the t-distributed stochastic neighbor embedding strategy and noted that the within-category clustering effect of FCM ranks as follows: softmax layer activations cluster best, middle convolutional layer activations second, and early max pooling layer activations worst. These findings confirm the promising potential of combining deep learning approaches with transfer learning techniques and FCM for effective ER tasks.
Affiliation(s)
- Jichi Chen
- School of Mechanical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Yuguo Cui
- School of Mechanical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Cheng Qian
- School of Mechanical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Enqiu He
- School of Chemical Equipment, Shenyang University of Technology, Liaoyang, Liaoning, China
8
Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. [PMID: 37708717] [DOI: 10.1016/j.compbiomed.2023.107450]
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/position), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are directly generated by the central nervous system and are closely related to human emotions. EEG signals have high temporal resolution, which facilitates the evaluation of brain function, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques in emotion recognition from EEG signals and provides a detailed discussion of relevant articles. The paper explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research in emotion recognition using DL techniques. The paper concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
9
Lin X, Chen J, Ma W, Tang W, Wang Y. EEG emotion recognition using improved graph neural network with channel selection. Comput Methods Programs Biomed 2023; 231:107380. [PMID: 36745954] [DOI: 10.1016/j.cmpb.2023.107380]
Abstract
BACKGROUND AND OBJECTIVE: Emotion classification tasks based on electroencephalography (EEG) are an essential part of artificial intelligence, with promising applications in healthcare areas such as autism research and emotion detection in pregnant women. However, the complex data acquisition environment provides a variable number of EEG channels, which makes it difficult for a model to simulate the process of information transfer in the human brain. Therefore, this paper proposes an improved graph convolution model with dynamic channel selection. METHODS: The proposed model combines the advantages of 1D convolution and graph convolution to capture intra- and inter-channel EEG features, respectively. We add functional connectivity to the graph structure, which further helps simulate the relationships between brain regions. In addition, channel selection at an adjustable scale can be performed based on the attention distribution in the graph structure. RESULTS: We conducted various experiments on the DEAP-Twente, DEAP-Geneva, and SEED datasets and achieved average accuracies of 90.74%, 91%, and 90.22%, respectively, exceeding most existing models. Meanwhile, with only 20% of the EEG channels retained, the model achieved average accuracies of 82.78%, 84%, and 83.93% on the above three datasets, respectively. CONCLUSIONS: The experimental results show that the proposed model can achieve effective emotion classification in complex dataset environments. Also, the proposed channel selection method is informative for reducing the cost of affective computing.
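Attention-based channel selection at an adjustable scale reduces to ranking channels by attention score and keeping a fraction of them. The sketch below is a hedged illustration: the channel names and attention values are made up, whereas in the paper the scores come from the learned graph attention distribution.

```python
# Keep the top fraction of EEG channels ranked by attention score.

def select_channels(attention, keep_ratio):
    """Return the channel names with the highest attention, keeping keep_ratio of them."""
    k = max(1, round(len(attention) * keep_ratio))
    ranked = sorted(attention, key=attention.get, reverse=True)
    return sorted(ranked[:k])  # sorted alphabetically for stable output

# Toy attention scores over ten channels (hypothetical values).
attention = {"Fp1": 0.02, "Fp2": 0.03, "F3": 0.15, "F4": 0.14, "C3": 0.05,
             "C4": 0.06, "P3": 0.12, "P4": 0.13, "O1": 0.20, "O2": 0.10}

print(select_channels(attention, keep_ratio=0.2))  # keep 20% of channels
```

The 20%-retention results in the abstract correspond to running the classifier on exactly such a reduced channel set.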
Affiliation(s)
- Xuefen Lin
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, PR China
- Jielin Chen
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, PR China
- Weifeng Ma
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, PR China
- Wei Tang
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, PR China
- Yuchen Wang
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, PR China
10
Li J, Wang F, Huang H, Qi F, Pan J. A novel semi-supervised meta learning method for subject-transfer brain-computer interface. Neural Netw 2023; 163:195-204. [PMID: 37062178] [DOI: 10.1016/j.neunet.2023.03.039]
Abstract
The brain-computer interface (BCI) provides a direct communication pathway between the human brain and external devices. However, models trained on existing subjects perform poorly on new subjects, which is termed the subject calibration problem. In this paper, we propose a semi-supervised meta learning (SSML) method for subject-transfer calibration. The proposed SSML learns a model-agnostic meta learner with existing subjects and then fine-tunes the meta learner in a semi-supervised manner, i.e., using a few labelled samples and many unlabelled samples of the target subject for calibration. This is significant for BCI applications in which labelled data are scarce or expensive while unlabelled data are readily available. Three different BCI paradigms are tested: event-related potential detection, emotion recognition and sleep staging. The SSML achieved classification accuracies of 0.95, 0.89 and 0.83 on the benchmark datasets of the three paradigms. The runtime complexity of SSML grows linearly with the number of target-subject samples, so it is possible to apply it in real-time systems. This study is the first attempt to apply a semi-supervised model-agnostic meta learning methodology to subject calibration. The experimental results demonstrated the effectiveness and potential of the SSML method for subject-transfer BCI applications.
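The semi-supervised part of the calibration (a few labelled plus many unlabelled target-subject samples) can be sketched with a toy pseudo-labelling loop. This is my own illustration with a nearest-centroid stand-in model on 1-D features, not the paper's SSML algorithm, which fine-tunes a meta-learned deep model.

```python
# Toy semi-supervised calibration: label the unlabelled target-subject samples
# with the current model, then refit on labelled + pseudo-labelled data.

def centroid(points):
    return sum(points) / len(points)

def fit_centroids(labelled):
    """Nearest-centroid 'model': one centroid per class from (x, y) pairs."""
    by_class = {}
    for x, y in labelled:
        by_class.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Few labelled target-subject samples, many unlabelled ones (toy 1-D features).
labelled = [(0.1, "neg"), (0.2, "neg"), (0.9, "pos")]
unlabelled = [0.0, 0.15, 0.8, 0.95, 1.1]

centroids = fit_centroids(labelled)
pseudo = [(x, predict(centroids, x)) for x in unlabelled]  # pseudo-label the rest
centroids = fit_centroids(labelled + pseudo)               # refit with both

print(predict(centroids, 1.0))
```

The refit step tightens the class centroids using the unlabelled data, which is the general mechanism that makes a handful of target-subject labels go further.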
Affiliation(s)
- Jingcong Li
- School of Software, South China Normal University, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Fei Wang
- School of Software, South China Normal University, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Haiyun Huang
- School of Software, South China Normal University, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Feifei Qi
- School of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Jiahui Pan
- School of Software, South China Normal University, Guangzhou, China; Pazhou Lab, Guangzhou, China