1
Gasparini F, Grossi A, Giltri M, Nishinari K, Bandini S. Behavior and Task Classification Using Wearable Sensor Data: A Study across Different Ages. Sensors (Basel) 2023; 23:3225. PMID: 36991935; PMCID: PMC10055934; DOI: 10.3390/s23063225.
Abstract
In this paper, we address the problem of task classification from physiological signals acquired with wearable sensors, using experiments in a controlled environment designed to cover two age populations: young adults and older adults. Two scenarios are considered. In the first, subjects perform tasks with different cognitive loads; in the second, space-varying conditions are considered, and subjects interact with the environment, changing walking conditions and avoiding collisions with obstacles. We demonstrate that classifiers relying on physiological signals can not only predict tasks that imply different cognitive loads, but can also classify both the age group of the population and the performed task. The whole workflow of data collection and analysis is described, from the experimental protocol through data acquisition, signal denoising, normalization with respect to subject variability, feature extraction, and classification. The dataset collected in the experiments, together with the code for extracting features from the physiological signals, is made available to the research community.
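The "normalization with respect to subject variability" step described in this abstract can be illustrated with a minimal sketch (not the authors' released code; the subject IDs and values below are invented for illustration): each measurement is z-scored against its own subject's baseline statistics, so that between-subject offsets do not dominate the features.

```python
import statistics

def per_subject_zscore(samples):
    """Z-score each value against its own subject's mean and standard deviation.

    samples: list of (subject_id, value) pairs.
    Returns a list of (subject_id, normalized_value) pairs in the same order.
    """
    by_subject = {}
    for subject, value in samples:
        by_subject.setdefault(subject, []).append(value)
    stats = {s: (statistics.mean(vs), statistics.stdev(vs)) for s, vs in by_subject.items()}
    return [(s, (v - stats[s][0]) / stats[s][1]) for s, v in samples]

# Invented example: subject "A" has a higher baseline than subject "B";
# after normalization both are on a comparable, subject-relative scale.
raw = [("A", 68.0), ("A", 70.0), ("A", 75.0), ("B", 53.0), ("B", 55.0), ("B", 60.0)]
normed = per_subject_zscore(raw)
```

After normalization, each subject's values are centered on zero, so a classifier sees deviations from the subject's own baseline rather than absolute levels.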
Affiliation(s)
- Francesca Gasparini
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
- Alessandra Grossi
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
- Marta Giltri
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
- Katsuhiro Nishinari
- RCAST—Research Center for Advanced Science & Technology, The University of Tokyo, Tokyo 153-8904, Japan
- Stefania Bandini
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
- RCAST—Research Center for Advanced Science & Technology, The University of Tokyo, Tokyo 153-8904, Japan
2
Abdel-Hamid L. An Efficient Machine Learning-Based Emotional Valence Recognition Approach Towards Wearable EEG. Sensors (Basel) 2023; 23:1255. PMID: 36772295; PMCID: PMC9921881; DOI: 10.3390/s23031255.
Abstract
Emotion artificial intelligence (AI) is being increasingly adopted in several industries such as healthcare and education. Facial expressions and tone of speech have previously been considered for emotion recognition, yet they have the drawback of being easily manipulated by subjects to mask their true emotions. Electroencephalography (EEG) has emerged as a reliable and cost-effective method to detect true human emotions. Recently, considerable research effort has been devoted to developing efficient wearable EEG devices for consumer use in out-of-the-lab scenarios. In this work, a subject-dependent emotional valence recognition method intended for emotion AI applications is implemented. Time and frequency features were computed from a single time series derived from the Fp1 and Fp2 channels. Several analyses were performed on the strongest valence emotions to determine the most relevant features, frequency bands, and EEG timeslots using the benchmark DEAP dataset. Binary classification experiments achieved an accuracy of 97.42% using the alpha band, thereby outperforming several approaches from the literature by ~3-22%. Multiclass classification gave an accuracy of 95.0%. Feature computation and classification required less than 0.1 s. The proposed method thus has the advantage of reduced computational complexity because, unlike most methods in the literature, only two EEG channels are considered. In addition, state-of-the-art performance was achieved with the minimal feature set identified by the thorough analyses conducted in this study. The implemented EEG emotion recognition method thus has the merits of being reliable and easily reproducible, making it well-suited for wearable EEG devices.
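The pipeline this abstract describes, a single time series derived from the Fp1/Fp2 channels followed by band-limited spectral features, can be sketched as follows. This is a stdlib-only illustration on synthetic data, not the author's implementation; the 128 Hz rate matches DEAP's preprocessed signals, and the naive DFT stands in for whatever spectral estimator the paper actually used.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Mean power of the DFT bins of x whose frequencies fall in [f_lo, f_hi] Hz.

    Naive O(n^2) DFT, stdlib only; adequate for short illustration windows.
    """
    n = len(x)
    total, count = 0.0, 0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
            count += 1
    return total / count if count else 0.0

fs = 128                                  # DEAP's preprocessed sampling rate
t = [i / fs for i in range(2 * fs)]       # 2-second analysis window
fp1 = [math.sin(2 * math.pi * 10 * ti) for ti in t]        # synthetic 10 Hz (alpha) activity
fp2 = [0.5 * math.sin(2 * math.pi * 10 * ti) for ti in t]
bipolar = [a - b for a, b in zip(fp1, fp2)]                # single derived time series
alpha = band_power(bipolar, fs, 8.0, 13.0)                 # dominant band here
beta = band_power(bipolar, fs, 13.0, 30.0)                 # near zero for this signal
```

On this synthetic 10 Hz signal nearly all power lands in the alpha band, which is the kind of band-level contrast the classifier in the paper operates on.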
Affiliation(s)
- Lamiaa Abdel-Hamid
- Department of Electronics & Communication, Faculty of Engineering, Misr International University (MIU), Heliopolis, Cairo P.O. Box 1, Egypt
3
Singh U, Shaw R, Patra BK. A data augmentation and channel selection technique for grading human emotions on DEAP dataset. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104060.
4
Walther D, Viehweg J, Haueisen J, Mäder P. A systematic comparison of deep learning methods for EEG time series analysis. Front Neuroinform 2023; 17:1067095. PMID: 36911074; PMCID: PMC9995756; DOI: 10.3389/fninf.2023.1067095.
Abstract
Analyzing time series data like EEG or MEG is challenging due to noisy, high-dimensional, and patient-specific signals. Deep learning methods have been demonstrated to be superior for analyzing time series data compared to shallow learning methods, which rely on handcrafted and often subjective features. Recurrent neural networks (RNNs) in particular are considered suitable for analyzing such continuous data. However, previous studies show that they are computationally expensive and difficult to train. In contrast, feed-forward networks (FFNs) have previously mostly been considered in combination with hand-crafted, problem-specific feature extractions such as the short-time Fourier and discrete wavelet transforms. Easily applicable methods that efficiently analyze raw data, removing the need for problem-specific adaptations, are therefore sought after. In this work, we systematically compare RNN and FFN topologies as well as advanced architectural concepts on multiple datasets with the same data preprocessing pipeline. We examine the behavior of these approaches to provide an update and guideline for researchers dealing with automated analysis of EEG time series data. To ensure that the results are meaningful, it is important to compare the presented approaches under the same experimental setup, which to our knowledge has not been done before. This paper is a first step toward a fairer comparison of different methodologies on EEG time series data. Our results indicate that a recurrent LSTM architecture with attention performs best on less complex tasks, while the temporal convolutional network (TCN) outperforms all recurrent architectures on the most complex dataset, yielding an 8.61% accuracy improvement. In general, we found the attention mechanism to substantially improve the classification results of RNNs. Toward a lightweight, online-learning-ready approach, we found extreme learning machines (ELMs) to yield comparable results on the less complex tasks.
Affiliation(s)
- Dominik Walther
- Data-Intensive Systems and Visualization Group (dAI.SY), Technische Universität Ilmenau, Ilmenau, Germany
- Johannes Viehweg
- Data-Intensive Systems and Visualization Group (dAI.SY), Technische Universität Ilmenau, Ilmenau, Germany
- Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
- Patrick Mäder
- Data-Intensive Systems and Visualization Group (dAI.SY), Technische Universität Ilmenau, Ilmenau, Germany
- Faculty of Biological Sciences, Friedrich Schiller University, Jena, Germany
5
Cisnal A, Moreno-SanJuan V, Fraile JC, Turiel JP, de-la-Fuente E, Sánchez-Brizuela G. Assessment of the Patient's Emotional Response with the RobHand Rehabilitation Platform: A Case Series Study. J Clin Med 2022; 11:4442. PMID: 35956063; PMCID: PMC9369387; DOI: 10.3390/jcm11154442.
Abstract
Cerebrovascular accidents have physical, cognitive, and emotional effects. During rehabilitation, the main focus is placed on motor recovery, yet the patient's emotional state should also be considered. For this reason, validating robotic rehabilitation systems should focus not only on their effectiveness for physical recovery but also on the patient's emotional response. A case series study was conducted with five stroke patients to assess their emotional response to therapies using RobHand, a robotic hand rehabilitation platform. Emotional state was evaluated in three dimensions (arousal, valence, and dominance) using a computer-based Self-Assessment Manikin (SAM) test. It was verified that the emotions induced by the RobHand platform were successfully distributed across the three-dimensional emotional space. The increase in dominance and the decrease in arousal during sessions reflect that patients became familiar with the rehabilitation platform, resulting in an increased feeling of control while finding the platform less attractive. The results also reflect that patients found a therapy based on a virtual environment with a realistic scenario more pleasant and attractive.
6
Cai J, Xiao R, Cui W, Zhang S, Liu G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front Syst Neurosci 2021; 15:729707. PMID: 34887732; PMCID: PMC8649925; DOI: 10.3389/fnsys.2021.729707.
Abstract
Emotion recognition has become increasingly prominent in the medical field and human-computer interaction. When people's emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject's emotional changes through EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects. This paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, following the progress of EEG-based machine learning algorithms for emotion recognition, and may help beginners in this area understand the current state of the field. The selected journals were all retrieved from the Web of Science platform, and the publication dates of most of the selected articles are concentrated in 2016-2021.
Affiliation(s)
- Jing Cai
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Ruolan Xiao
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Wenjie Cui
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Shang Zhang
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Guangda Liu
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
7
Hatipoglu Yilmaz B, Kose C. A novel signal to image transformation and feature level fusion for multimodal emotion recognition. Biomed Tech (Berl) 2021; 66:353-362. PMID: 33823091; DOI: 10.1515/bmt-2020-0229.
Abstract
Emotion is one of the most complex and difficult expressions to predict. Many recognition systems based on classification methods have addressed different types of emotion recognition problems. In this paper, we propose a multimodal fusion method combining electroencephalography (EEG) and electrooculography (EOG) signals for emotion recognition. Before the feature extraction stage, we applied different angle-amplitude transformations to the EEG and EOG signals. These transformations take arbitrary time-domain signals and convert them into two-dimensional images called Angle-Amplitude Graphs (AAGs). We then extracted image-based features using the scale-invariant feature transform, fused the features originating from EEG and EOG, and finally classified them with support vector machines. To verify the validity of the proposed methods, we performed experiments on the multimodal DEAP dataset, a benchmark widely used for emotion analysis with physiological signals. In the experiments, we applied the proposed emotion recognition procedure to the arousal and valence dimensions, achieving 91.53% accuracy for the arousal space and 90.31% for the valence space after fusion. The experimental results showed that combining the AAG image features of the EEG and EOG signals in the baseline angle-amplitude transformation approaches enhanced classification performance on the DEAP dataset.
Affiliation(s)
- Cemal Kose
- Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey
8
Cabrera FE, Sánchez-Núñez P, Vaccaro G, Peláez JI, Escudero J. Impact of Visual Design Elements and Principles in Human Electroencephalogram Brain Activity Assessed with Spectral Methods and Convolutional Neural Networks. Sensors (Basel) 2021; 21:4695. PMID: 34300436; PMCID: PMC8309592; DOI: 10.3390/s21144695.
Abstract
The visual design elements and principles (VDEPs) can trigger behavioural changes and emotions in the viewer, but their effects on brain activity are not clearly understood. In this paper, we explore the relationships between brain activity and colour (cold/warm), light (dark/bright), movement (fast/slow), and balance (symmetrical/asymmetrical) VDEPs. We used the public DEAP dataset with the electroencephalogram signals of 32 participants recorded while watching music videos. The characteristic VDEPs for each second of the videos were manually tagged by a team of two visual communication experts. Results show that variations in light/value, rhythm/movement, and balance in the music video sequences produce a statistically significant effect on the mean absolute power of the Delta, Theta, Alpha, Beta, and Gamma EEG bands (p < 0.05). Furthermore, we trained a convolutional neural network that successfully predicts the VDEP of a video fragment solely from the EEG signal of the viewer, with accuracy ranging from 0.7447 for the colour VDEP to 0.9685 for the movement VDEP. Our work provides evidence that VDEPs affect brain activity in a variety of distinguishable ways and that a deep learning classifier can infer the VDEP properties of videos from EEG activity.
Affiliation(s)
- Francisco E. Cabrera
- Department of Languages and Computer Sciences, School of Computer Science and Engineering, Universidad de Málaga, 29071 Málaga, Spain
- Centre for Applied Social Research (CISA), Ada Byron Research Building, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29071 Málaga, Spain
- Pablo Sánchez-Núñez
- Centre for Applied Social Research (CISA), Ada Byron Research Building, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29071 Málaga, Spain
- Department of Audiovisual Communication and Advertising, Faculty of Communication Sciences, Universidad de Málaga, 29071 Málaga, Spain
- Gustavo Vaccaro
- Department of Languages and Computer Sciences, School of Computer Science and Engineering, Universidad de Málaga, 29071 Málaga, Spain
- Centre for Applied Social Research (CISA), Ada Byron Research Building, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29071 Málaga, Spain
- José Ignacio Peláez
- Department of Languages and Computer Sciences, School of Computer Science and Engineering, Universidad de Málaga, 29071 Málaga, Spain
- Centre for Applied Social Research (CISA), Ada Byron Research Building, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29071 Málaga, Spain
- Javier Escudero
- School of Engineering, Institute for Digital Communications (IDCOM), The University of Edinburgh, 8 Thomas Bayes Rd, Edinburgh EH9 3FG, UK
9
Gong S, Xing K, Cichocki A, Li J. Deep Learning in EEG: Advance of the Last Ten-Year Critical Period. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2021.3079712.
10
Pandey B, Kumar Pandey D, Pratap Mishra B, Rhmann W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. Journal of King Saud University - Computer and Information Sciences 2021. DOI: 10.1016/j.jksuci.2021.01.007.
11
Abstract
Many scientific studies have been concerned with building automatic systems to recognize emotions, and such systems usually rely on brain signals. These studies have shown that brain signals can be used to classify many emotional states, a process considered difficult, especially since the brain's signals are not stable. Human emotions are generated as reactions to different emotional stimuli, which affect brain signals. Thus, the performance of emotion recognition systems based on brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signals has received much attention due to the availability of several standard databases, especially since brain signal recording devices, including wireless ones, have become available on the market at reasonable prices. This work presents an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing. Although EMD/IMF and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition; in other words, the methods used in the signal processing stage of this work differ from those used in the literature. After the signal processing stage, two well-known techniques were used in the feature extraction stage: entropy and Higuchi's fractal dimension (HFD).
Finally, in the classification stage, four classification methods (naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT)) were used for classifying emotional states. To evaluate the performance of the proposed model, experiments were run on the common DEAP database using several evaluation metrics, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method, with a 95.20% accuracy achieved using the CNN-based method.
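Of the features named in this abstract, Higuchi's fractal dimension (HFD) is compact enough to sketch in full. This is a standard textbook formulation in plain Python, not the paper's code: the average curve length L(k) is computed at several scales k, and HFD is the least-squares slope of log L(k) versus log(1/k). A straight line yields an HFD of about 1, while white noise approaches 2.

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: slope of log L(k) versus log(1/k)."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # one subsampled curve per offset m
            n_m = (n - m - 1) // k
            if n_m < 1:
                continue
            raw = sum(abs(x[m + i * k] - x[m + (i - 1) * k]) for i in range(1, n_m + 1))
            lengths.append(raw * (n - 1) / (n_m * k) / k)   # Higuchi's normalization
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of the log-log relationship
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len))
    den = sum((a - mx) ** 2 for a in log_inv_k)
    return num / den

line = list(range(200))                          # smooth curve: HFD ~ 1
rng = random.Random(42)
noise = [rng.gauss(0, 1) for _ in range(2000)]   # white noise: HFD ~ 2
fd_line, fd_noise = higuchi_fd(line), higuchi_fd(noise)
```

In an emotion recognition pipeline, such a scalar would be computed per channel (or per decomposed mode) and stacked into the feature vector fed to the classifiers.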
12
Rim B, Sung NJ, Min S, Hong M. Deep Learning in Physiological Signal Data: A Survey. Sensors (Basel) 2020; 20:969. PMID: 32054042; PMCID: PMC7071412; DOI: 10.3390/s20040969.
Abstract
Deep learning (DL), a successful and promising approach for discriminative and generative tasks, has recently proved its high potential in 2D medical image analysis; however, physiological data in the form of 1D signals have yet to benefit fully from this approach for the desired medical tasks. Therefore, in this paper we survey the latest scientific research on deep learning with physiological signal data such as electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), and electrooculogram (EOG) signals. We found 147 papers published between January 2018 and October 2019 inclusive across various journals and publishers. The objective of this paper is to comprehend, categorize, and compare the key parameters of the deep learning approaches that have been used in physiological signal analysis for various medical applications. The key parameters we review are the input data type, deep learning task, deep learning model, training architecture, and dataset sources, as these are the main parameters that affect system performance. We taxonomize the research works using deep learning methods in physiological signal analysis from two perspectives: (1) the physiological signal data, such as data modality and medical application; and (2) the deep learning concept, such as training architecture and dataset sources.
Affiliation(s)
- Beanbonyka Rim
- Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Nak-Jun Sung
- Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Sedong Min
- Department of Medical IT Engineering, Soonchunhyang University, Asan 31538, Korea
- Min Hong
- Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea
13
Yang F, Zhao X, Jiang W, Gao P, Liu G. Multi-method Fusion of Cross-Subject Emotion Recognition Based on High-Dimensional EEG Features. Front Comput Neurosci 2019; 13:53. PMID: 31507396; PMCID: PMC6714862; DOI: 10.3389/fncom.2019.00053.
Abstract
Emotion recognition using electroencephalogram (EEG) signals has attracted significant research attention. However, it is difficult to improve the emotion recognition effect across subjects. In response to this difficulty, in this study, multiple features were extracted to form high-dimensional feature vectors. Based on these high-dimensional features, an effective method for cross-subject emotion recognition was developed that integrates a significance test, sequential backward selection, and a support vector machine (ST-SBSSVM). The effectiveness of ST-SBSSVM was validated on the Dataset for Emotion Analysis Using Physiological Signals (DEAP) and the SJTU Emotion EEG Dataset (SEED). With respect to high-dimensional features, ST-SBSSVM improved the accuracy of cross-subject emotion recognition by 12.4% on average on DEAP and by 26.5% on SEED when compared with common emotion recognition methods. The recognition accuracy obtained with ST-SBSSVM was as high as that obtained with sequential backward selection (SBS) on DEAP, whereas on SEED the recognition accuracy increased by ~6% over SBS. With ST-SBSSVM, ~97% (DEAP) and 91% (SEED) of the program runtime was eliminated when compared with SBS. Compared with recent similar works, the method developed in this study for emotion recognition across all subjects was found to be effective, with accuracies of 72% (DEAP) and 89% (SEED).
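The sequential backward selection (SBS) wrapper at the core of ST-SBSSVM can be sketched as follows. For self-containment this illustration scores feature subsets with a leave-one-out nearest-centroid classifier on invented toy data rather than an SVM on EEG features; the greedy drop-one-feature loop is the part that carries over.

```python
import random

def loo_nearest_centroid_acc(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on a feature subset."""
    correct = 0
    for i in range(len(X)):
        centroids = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            centroids[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        xi = [X[i][f] for f in feats]
        pred = min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(xi, centroids[c])))
        correct += pred == y[i]
    return correct / len(X)

def sequential_backward_selection(X, y, min_feats=1):
    """Greedily drop the feature whose removal hurts accuracy least, while accuracy holds."""
    feats = list(range(len(X[0])))
    best = loo_nearest_centroid_acc(X, y, feats)
    while len(feats) > min_feats:
        acc, worst = max(
            (loo_nearest_centroid_acc(X, y, [f for f in feats if f != g]), g) for g in feats
        )
        if acc < best:        # removal would hurt: stop
            break
        best, feats = acc, [f for f in feats if f != worst]
    return feats, best

# Invented toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = random.Random(0)
X = [[c * 2 + rng.gauss(0, 0.2), rng.gauss(0, 1.0)] for c in (0, 1) for _ in range(20)]
y = [c for c in (0, 1) for _ in range(20)]
feats, acc = sequential_backward_selection(X, y)
```

SBS discards the noise feature and keeps the discriminative one; in the paper, a significance test first prunes the high-dimensional feature set so the wrapper loop stays tractable.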
Affiliation(s)
- Fu Yang
- College of Electronic Information and Engineering, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Xingcong Zhao
- College of Electronic Information and Engineering, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Wenge Jiang
- College of Electronic Information and Engineering, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Pengfei Gao
- College of Electronic Information and Engineering, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Guangyuan Liu
- College of Electronic Information and Engineering, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China