1. Li C, Li P, Zhang Y, Li N, Si Y, Li F, Cao Z, Chen H, Chen B, Yao D, Xu P. Effective Emotion Recognition by Learning Discriminative Graph Topologies in EEG Brain Networks. IEEE Trans Neural Netw Learn Syst 2024; 35:10258-10272. PMID: 37022389; DOI: 10.1109/tnnls.2023.3238519.
Abstract
Multichannel electroencephalogram (EEG) is an array signal that represents brain neural networks and can be applied to characterize information propagation patterns for different emotional states. To reveal these inherent spatial graph features and increase the stability of emotion recognition, we propose an effective emotion recognition model that performs multicategory emotion recognition with multiple emotion-related spatial network topology patterns (MESNPs) by learning discriminative graph topologies in EEG brain networks. To evaluate the performance of our proposed MESNP model, we conducted single-subject and multisubject four-class classification experiments on two public datasets, MAHNOB-HCI and DEAP. Compared with existing feature extraction methods, the MESNP model significantly enhances the multiclass emotional classification performance in the single-subject and multisubject conditions. To evaluate the online version of the proposed MESNP model, we designed an online emotion monitoring system. We recruited 14 participants to conduct the online emotion decoding experiments. The average online experimental accuracy of the 14 participants was 84.56%, indicating that our model can be applied in affective brain-computer interface (aBCI) systems. The offline and online experimental results demonstrate that the proposed MESNP model effectively captures discriminative graph topology patterns and significantly improves emotion classification performance. Moreover, the proposed MESNP model provides a new scheme for extracting features from strongly coupled array signals.
2. Chen L, Tang C, Wang Z, Zhang L, Gu B, Liu X, Ming D. Enhancing Motor Sequence Learning via Transcutaneous Auricular Vagus Nerve Stimulation (taVNS): An EEG Study. IEEE J Biomed Health Inform 2024; 28:1285-1296. PMID: 38109248; DOI: 10.1109/jbhi.2023.3344176.
Abstract
Motor learning plays a crucial role in human life, and various neuromodulation methods have been used to strengthen or improve it. Transcutaneous auricular vagus nerve stimulation (taVNS) has gained increasing attention due to its non-invasive nature, affordability, and ease of implementation. Although the potential of taVNS to regulate motor learning has been suggested, its actual regulatory effect has not yet been fully explored. Electroencephalogram (EEG) analysis provides an in-depth view of the cognitive processes involved in motor learning and thus offers methodological support for its regulation. To investigate the effect of taVNS on motor learning, this study recruited 22 healthy subjects for a single-blind, sham-controlled, within-subject serial reaction time task (SRTT) experiment. Each subject took part in two sessions at least one week apart and received 20 minutes of active or sham taVNS in each session. Behavioral indicators as well as EEG characteristics during the task state were extracted and analyzed. The results revealed that the active group showed higher learning performance than the sham group. Additionally, the EEG results indicated that after taVNS, motor-related cortical potential amplitudes and the alpha-gamma modulation index decreased significantly, and functional connectivity toward the frontal lobe, measured with partial directed coherence, was enhanced. These findings suggest that taVNS improves motor learning mainly by enhancing cognitive and memory functions rather than simple movement learning. This study confirms the positive regulatory effect of taVNS on motor learning, which is particularly promising as it offers a potential avenue for enhancing motor skills and facilitating rehabilitation.
3. Song X, Huang P, Chen X, Xu M, Ming D. The frontooccipital interaction mechanism of high-frequency acoustoelectric signal. Cereb Cortex 2023; 33:10723-10735. PMID: 37724433; DOI: 10.1093/cercor/bhad306.
Abstract
Based on the acoustoelectric effect, acoustoelectric brain imaging has been proposed as a neural imaging method with high spatiotemporal resolution. At the focal spot, brain electrical activity is encoded by focused ultrasound, generating a corresponding high-frequency acoustoelectric (AE) signal. Previous studies have shown that the AE signal can also be detected in non-focal brain regions, but how it is processed across brain regions remains poorly understood. Here, with the AE signal generated in the left primary visual cortex, we investigated its spatial distribution and temporal propagation characteristics during transmission. We observed the strongest transmission strength within the frontal lobe, and global temporal statistics indicated that the frontal lobe is prominent in AE signal transmission. Cross-frequency phase-amplitude coupling was then used to investigate coordinated activity in the AE signal band between the frontal and occipital lobes. The results showed that intra-structural cross-frequency coupling and cross-structural coupling co-occurred between these two lobes and, accordingly, that high-frequency brain activity in the frontal lobe was effectively coordinated by the distant occipital lobe. This study reveals the frontooccipital long-range interaction mechanism of the AE signal, a foundation for improving the performance of acoustoelectric brain imaging.
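The cross-frequency phase-amplitude coupling analysis described in this abstract is commonly quantified with a modulation index. The paper's exact estimator is not given here, so the following is only a hedged sketch of the widely used mean-vector-length index; the band edges, sampling rate, and signal names are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling index:
    |mean_t( A_high(t) * exp(i * phi_low(t)) )|."""
    phi = np.angle(hilbert(bandpass(x, fs, *phase_band)))   # low-band phase
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))       # high-band envelope
    return np.abs(np.mean(amp * np.exp(1j * phi)))

# Synthetic check: a gamma tone whose amplitude follows alpha phase
# shows stronger coupling than an unmodulated control signal.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
alpha = np.sin(2 * np.pi * 8 * t)
coupled = alpha + (1 + 0.8 * alpha) * np.sin(2 * np.pi * 60 * t)
control = alpha + np.sin(2 * np.pi * 60 * t)
mi_c = modulation_index(coupled, fs, (6, 10), (50, 70))
mi_u = modulation_index(control, fs, (6, 10), (50, 70))
```

With the coupled signal, the 50-70 Hz envelope co-varies with the 6-10 Hz phase, so `mi_c` clearly exceeds `mi_u`.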
Affiliation(s)
- Xizi Song: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Peishan Huang: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Xinrui Chen: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Minpeng Xu: Academy of Medical Engineering and Translational Medicine; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
- Dong Ming: Academy of Medical Engineering and Translational Medicine; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China

4. Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. PMID: 37708717; DOI: 10.1016/j.compbiomed.2023.107450.
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/posture), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are generated directly by the central nervous system and are closely related to human emotions. EEG signals have the high temporal resolution needed to evaluate brain function, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need for more robust artificial intelligence (AI) methods, spanning conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of relevant articles. It explores the significant challenges in EEG-based emotion recognition, highlights the potential of DL techniques to address them, and outlines directions for future research.
Affiliation(s)
- Mahboobeh Jafari: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf: Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz: Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia

5. Akhand MAH, Maria MA, Kamal MAS, Murase K. Improved EEG-based emotion recognition through information enhancement in connectivity feature map. Sci Rep 2023; 13:13804. PMID: 37612354; PMCID: PMC10447430; DOI: 10.1038/s41598-023-40786-2.
Abstract
Electroencephalography (EEG), despite its inherent complexity, is a preferred brain signal for automatic human emotion recognition (ER), a challenging machine learning task with emerging applications. In automatic ER, machine learning (ML) models classify emotions using features extracted from the EEG signals, so feature extraction is a crucial part of the ER process. Recently, EEG channel connectivity features have been widely used in ER, where the Pearson correlation coefficient (PCC), mutual information (MI), phase-locking value (PLV), and transfer entropy (TE) are well-known methods for connectivity feature map (CFM) construction. CFMs are typically formed in a two-dimensional configuration from pairs of EEG channels; such two-dimensional CFMs are usually symmetric and therefore hold redundant information. This study proposes constructing a more informative CFM that can lead to better ER. Specifically, the proposed technique combines the measures of two different individual methods into a fused CFM that is more informative, without incurring additional computational cost in training the ML model. In this study, fused CFMs are constructed by combining every pair of methods from PCC, PLV, MI, and TE, and the resulting fused CFMs (PCC + PLV, PCC + MI, PCC + TE, PLV + MI, PLV + TE, and MI + TE) are used to classify emotions with a convolutional neural network. Rigorous experiments on the DEAP benchmark EEG dataset show that the proposed fused CFMs deliver better ER performance than CFMs built with a single connectivity method (e.g., PCC). Overall, PLV + MI-based ER is the most promising, outperforming the other methods.
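The abstract does not spell out the exact fusion rule, so the following is only one plausible sketch of how two symmetric CFMs can share a single map with no extra training cost: since each single-method CFM is symmetric (and hence redundant), one measure can fill the upper triangle and the other the lower triangle. The function name, toy matrices, and the diagonal convention are assumptions for illustration:

```python
import numpy as np

def fuse_cfms(cfm_a, cfm_b, diag_value=1.0):
    """Combine two symmetric connectivity feature maps into one map:
    cfm_a fills the upper triangle, cfm_b fills the lower triangle."""
    assert cfm_a.shape == cfm_b.shape
    fused = np.triu(cfm_a, k=1) + np.tril(cfm_b, k=-1)
    np.fill_diagonal(fused, diag_value)  # self-connectivity convention
    return fused

# Toy 3-channel example with two symmetric maps (e.g., PLV and MI).
a = np.array([[1.0, 0.2, 0.4],
              [0.2, 1.0, 0.6],
              [0.4, 0.6, 1.0]])
b = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
fused = fuse_cfms(a, b)
```

The fused map keeps the same 2-D shape as a single-method CFM, so the downstream CNN needs no architectural change.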
Affiliation(s)
- M A H Akhand: Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Mahfuza Akter Maria: Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Md Abdus Samad Kamal: Graduate School of Science and Technology, Gunma University, Kiryu 376-8515, Japan
- Kazuyuki Murase: Department of Information Technology, International Professional University of Technology in Osaka, 3-3-1 Umeda, Kita-ku, Osaka 530-0001, Japan

6. Islam M, Lee T. Functional Connectivity Analysis in Multi-channel EEG for Emotion Detection with Phase Locking Value and 3D CNN. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083433; DOI: 10.1109/embc40787.2023.10340922.
Abstract
Noise-assisted multivariate empirical mode decomposition (NA-MEMD) is applied to multi-channel EEG signals to obtain narrow-band, scale-aligned intrinsic mode functions (IMFs), on which functional connectivity analysis is performed. The connectivity pattern associated with the brain's inherent functional activity is estimated with the phase locking value (PLV). Instantaneous phase differences between EEG channels yield PLVs that are used to build the functional connectivity map. The connectivity map provides a spatial-temporal feature representation that serves as input to the proposed emotion detection system. These spatial-temporal features are learned with a 3D convolutional neural network to classify emotion states. The proposed system is evaluated on the two publicly available datasets DEAP and SEED for binary and multi-class emotion classification. On detecting low versus high levels in the valence and arousal dimensions, the attained accuracies are 97.37% and 96.26%, respectively. The system also yields 94.78% and 99.54% accuracy on the multi-class tasks on DEAP and SEED, outperforming previously reported systems based on other deep learning models and conventional EEG features.
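The PLV described in this abstract has a standard definition: derive instantaneous phases with the Hilbert transform and average the phase-difference phasors over time. A minimal sketch (the helper names and the toy signals are illustrative, not from the paper):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band signals:
    PLV = |mean_t exp(i * (phi_x(t) - phi_y(t)))|, in [0, 1]."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def plv_map(channels):
    """Symmetric channel-by-channel PLV connectivity map."""
    n = len(channels)
    cfm = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            cfm[i, j] = cfm[j, i] = plv(channels[i], channels[j])
    return cfm

# Two sinusoids with a constant phase lag are perfectly phase-locked.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + 0.7)
cfm = plv_map([x, y])
```

A constant phase offset still gives PLV near 1; PLV measures the consistency of the phase difference, not its size.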
7. Shahabi MS, Shalbaf A, Rostami R, Kazemi R. A convolutional recurrent neural network with attention for response prediction to repetitive transcranial magnetic stimulation in major depressive disorder. Sci Rep 2023; 13:10147. PMID: 37349335; PMCID: PMC10287753; DOI: 10.1038/s41598-023-35545-2.
Abstract
Prediction of response to repetitive transcranial magnetic stimulation (rTMS) can support a treatment platform that helps major depressive disorder (MDD) patients receive timely treatment. We proposed a deep learning model built on state-of-the-art methods to classify responders (R) and non-responders (NR) to rTMS treatment. Pre-treatment electroencephalogram (EEG) signals from the public TDBRAIN dataset and from 46 proprietary MDD subjects were used to create time-frequency representations with the continuous wavelet transform (CWT), which were fed into two powerful pre-trained convolutional neural networks (CNNs), VGG16 and EfficientNetB0. Equipping these transfer learning (TL) models with bidirectional long short-term memory (BLSTM) and an attention mechanism to extract the most discriminative spatiotemporal features from the input images leads to superior performance in predicting rTMS treatment outcome. Five brain regions (frontal, central, parietal, temporal, and occipital) were assessed; the highest performance in the 46 proprietary MDD subjects was obtained for the frontal region using the TL-BLSTM-Attention model based on EfficientNetB0, with accuracy, sensitivity, specificity, and area under the curve (AUC) of 97.1%, 97.3%, 97.0%, and 0.96, respectively. Additionally, to test the generalizability of the proposed models, the TL-BLSTM-Attention models were evaluated on the public TDBRAIN dataset, where the highest accuracy of 82.3%, sensitivity of 80.2%, specificity of 81.9%, and AUC of 0.83 were obtained. Therefore, advanced deep learning methods using time-frequency representations of EEG signals from the frontal region, together with convolutional recurrent neural networks equipped with an attention mechanism, can provide an accurate platform for predicting response to rTMS treatment.
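The CWT step that produces the time-frequency images fed to VGG16/EfficientNetB0 can be sketched with a hand-rolled complex Morlet wavelet. The wavelet family, cycle count, and frequency grid below are assumptions; the abstract does not state the paper's exact CWT settings:

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, n_cycles=7):
    """|CWT| of x at each requested frequency using complex Morlet
    wavelets. Rows index frequency, columns index time."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)            # Gaussian width (s)
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))          # unit-energy wavelet
        out[i] = np.abs(np.convolve(x, w, mode="same"))
    return out

# A pure 10 Hz tone concentrates energy in the 10 Hz row of the image.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
scalogram = morlet_scalogram(x, fs, freqs=[5.0, 10.0, 20.0])
```

In a pipeline like the one described, such scalograms (with a much denser frequency grid) would be rendered as 2-D images before being passed to the pre-trained CNNs.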
Affiliation(s)
- Mohsen Sadat Shahabi: Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ahmad Shalbaf: Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Reza Rostami: Department of Psychology, University of Tehran, Tehran, Iran
- Reza Kazemi: Department of Cognitive Psychology, Institute for Cognitive Science Studies, Tehran, Iran

8. Álvarez-Meza AM, Torres-Cardona HF, Orozco-Alzate M, Pérez-Nastar HD, Castellanos-Dominguez G. Affective Neural Responses Sonified through Labeled Correlation Alignment. Sensors (Basel) 2023; 23:5574. PMID: 37420740; DOI: 10.3390/s23125574.
Abstract
Sound synthesis refers to the creation of original acoustic signals, with broad applications in artistic innovation such as music creation for games and videos. Nonetheless, machine learning architectures face numerous challenges when learning musical structures from arbitrary corpora, since patterns borrowed from other contexts must be adapted to a concrete composition objective. Using Labeled Correlation Alignment (LCA), we propose an approach to sonify neural responses to affective music-listening data, identifying the brain features most congruent with the simultaneously extracted auditory features. To deal with inter- and intra-subject variability, a combination of phase locking value and Gaussian functional connectivity is employed. The proposed two-step LCA approach first couples the input features to a set of emotion labels using Centered Kernel Alignment; canonical correlation analysis then selects the multimodal representations with the strongest relationships. LCA enables physiological explanation by adding a backward transformation that estimates the matching contribution of each extracted brain neural feature set. Correlation estimates and partition quality serve as performance measures. The evaluation uses a Vector Quantized Variational AutoEncoder to create an acoustic envelope from the tested affective music-listening database. Validation results demonstrate the ability of the developed LCA approach to generate low-level music from neural activity elicited by emotions while keeping the acoustic outputs distinguishable.
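Centered Kernel Alignment, the coupling step in the two-stage LCA pipeline, has a compact linear form (the paper may use a kernelized variant; this linear sketch, with assumed matrix shapes and names, only illustrates the similarity measure itself):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x p) and Y (n x q),
    samples in rows. 1.0 means identical representational geometry
    up to rotation and isotropic scaling."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Xc.T @ Yc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # random rotation
score_same = linear_cka(X, X @ Q)                  # invariant: ~1.0
score_rand = linear_cka(X, rng.standard_normal((100, 8)))
```

The rotation-invariance property is what makes CKA attractive for comparing brain-feature and audio-feature spaces of different dimensionalities.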
Affiliation(s)
- Mauricio Orozco-Alzate: Signal Processing and Recognition Group, Universidad Nacional de Colombia, Manizales 170003, Colombia
- Hernán Darío Pérez-Nastar: Signal Processing and Recognition Group, Universidad Nacional de Colombia, Manizales 170003, Colombia

9. Qiu Y, Lin F, Chen W, Xu M. Pre-training in Medical Data: A Survey. Machine Intelligence Research 2023. PMCID: PMC9942039; DOI: 10.1007/s11633-022-1382-8.
Abstract
Medical data refers to health-related information associated with regular patient care or collected as part of a clinical trial program. It spans many categories, such as clinical imaging data, bio-signal data, electronic health records (EHR), and multi-modality medical data. With the development of deep neural networks over the last decade, the emerging pre-training paradigm has become dominant because it significantly improves machine learning performance in data-limited scenarios. In recent years, studies of pre-training in the medical domain have made significant progress. To summarize these advances, this work provides a comprehensive survey of recent pre-training methods for several major types of medical data. We summarize a large number of related publications and existing benchmarks in the medical domain, briefly describe how pre-training methods are applied to or developed for medical data, and, from a data-driven perspective, examine the extensive use of pre-training across medical scenarios. Based on this summary of recent pre-training studies, we identify several challenges in the field to provide insights for future work.
Affiliation(s)
- Yixuan Qiu: The University of Queensland, Brisbane 4072, Australia
- Feng Lin: The University of Queensland, Brisbane 4072, Australia
- Weitong Chen: The University of Adelaide, Adelaide 5005, Australia
- Miao Xu: The University of Queensland, Brisbane 4072, Australia

10. Yusoff M, Haryanto T, Suhartanto H, Mustafa WA, Zain JM, Kusmardi K. Accuracy Analysis of Deep Learning Methods in Breast Cancer Classification: A Structured Review. Diagnostics (Basel) 2023; 13:683. PMID: 36832171; PMCID: PMC9955565; DOI: 10.3390/diagnostics13040683.
Abstract
Breast cancer is diagnosed using histopathological imaging, a task that is extremely time-consuming due to high image complexity and volume, yet early detection is essential for timely medical intervention. Deep learning (DL) has become popular in medical imaging solutions and has demonstrated varying levels of performance in diagnosing cancerous images. Nonetheless, achieving high precision while minimizing overfitting remains a significant challenge for classification solutions, as does the handling of imbalanced data and incorrect labeling. Additional methods, such as pre-processing, ensemble, and normalization techniques, have been established to enhance image characteristics; these methods can influence classification solutions and help overcome overfitting and data-balancing issues. Hence, developing a more sophisticated DL variant could improve classification accuracy while reducing overfitting. Technological advancements in DL have fueled the growth of automated breast cancer diagnosis in recent years. This paper systematically reviews and analyzes studies on the capability of DL to classify histopathological breast cancer images, drawing on literature from the Scopus and Web of Science (WOS) indexes published up until November 2022. The findings suggest that DL methods, especially convolutional neural networks and their hybrids, are the most cutting-edge approaches currently in use. Surveying the landscape of existing DL approaches and their hybrid methods, with comparisons and case studies, is a necessary first step toward finding new techniques.
Affiliation(s)
- Marina Yusoff: Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia; College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia. Correspondence: (M.Y.); (W.A.M.)
- Toto Haryanto: Department of Computer Science, IPB University, Bogor 16680, Indonesia
- Heru Suhartanto: Faculty of Computer Science, Universitas Indonesia, Depok 16424, Indonesia
- Wan Azani Mustafa: Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, Padang Besar 02100, Perlis, Malaysia. Correspondence: (M.Y.); (W.A.M.)
- Jasni Mohamad Zain: Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia; College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Kusmardi Kusmardi: Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia/Cipto Mangunkusumo Hospital, Jakarta 10430, Indonesia; Human Cancer Research Cluster, Indonesia Medical Education and Research Institute, Universitas Indonesia, Jakarta 10430, Indonesia

11. Breast cancer classification by a new approach to assessing deep neural network-based uncertainty quantification methods. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104057.