1. Jiang D, Yan L, Mayrand F. Emotion expressions and cognitive impairments in the elderly: review of the contactless detection approach. Front Digit Health 2024;6:1335289. PMID: 39040877; PMCID: PMC11260803; DOI: 10.3389/fdgth.2024.1335289.
Abstract
The aging population in Canada has been increasing continuously over the past decades. Among this demographic, around 11% suffer from some form of cognitive decline. While diagnosis through traditional means (i.e., magnetic resonance imaging (MRI), positron emission tomography (PET) scans, cognitive assessments, etc.) has been successful at detecting this decline, there remain unexplored measures of cognitive health that could reduce stress and cost for the elderly population, including approaches for early detection and prevention. Such efforts could additionally reduce the pressure on the Canadian healthcare system and improve the quality of life of the elderly population. Previous evidence has demonstrated that emotional facial expressions are altered in individuals with various cognitive conditions such as dementia, mild cognitive impairment, and geriatric depression. This review highlights the commonalities among these cognitive health conditions and the research behind contactless assessment methods for monitoring the health and cognitive well-being of the elderly population through emotion expression. The contactless detection approaches covered by this review include automated facial expression analysis (AFEA), electroencephalogram (EEG) technologies, and heart rate variability (HRV). The review concludes with a discussion of the potential of the existing technologies and a future direction for a novel assessment design that fuses AFEA, EEG, and HRV measures to improve contactless, remote detection of cognitive decline.
Affiliations
- Di Jiang: Medical Devices Research Centre, National Research Council of Canada, Boucherville, QC, Canada
- Luowei Yan: Department of Psychology, McGill University, Montreal, QC, Canada
- Florence Mayrand: Department of Psychology, McGill University, Montreal, QC, Canada
2. Mark JA, Curtin A, Kraft AE, Ziegler MD, Ayaz H. Mental workload assessment by monitoring brain, heart, and eye with six biomedical modalities during six cognitive tasks. Front Neuroergonomics 2024;5:1345507. PMID: 38533517; PMCID: PMC10963413; DOI: 10.3389/fnrgo.2024.1345507.
Abstract
Introduction: The efficiency and safety of complex, high-precision human-machine systems, such as those in aerospace and robotic surgery, are closely related to the cognitive readiness, workload management, and situational awareness of their operators. Accurate assessment of mental workload could help prevent operator error and allow for pertinent intervention by predicting performance declines that can arise from either work overload or understimulation. Neuroergonomic approaches based on measures of human body and brain activity can collectively provide sensitive and reliable assessment of human mental workload in complex training and work environments.
Methods: In this study, we developed a new six-cognitive-domain task protocol, coupling it with six biomedical monitoring modalities to concurrently capture performance and cognitive workload correlates across a longitudinal multi-day investigation. Utilizing two distinct modalities for each aspect of cardiac activity (ECG and PPG), ocular activity (EOG and eye tracking), and brain activity (EEG and fNIRS), 23 participants engaged in four sessions over 4 weeks, performing tasks associated with working memory, vigilance, risk assessment, shifting attention, situation awareness, and inhibitory control.
Results: The results revealed varying levels of sensitivity to workload within each modality. While certain measures exhibited consistency across tasks, the neuroimaging modalities in particular unveiled meaningful differences between task conditions and cognitive domains.
Discussion: This is the first comprehensive comparison of these six brain-body measures across multiple days and cognitive domains. The findings underscore the potential of wearable brain and body sensing methods for evaluating mental workload. Such comprehensive neuroergonomic assessment can inform the development of next-generation neuroadaptive interfaces and training approaches for more efficient human-machine interaction and operator skill acquisition.
Affiliations
- Jesse A. Mark: School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States
- Adrian Curtin: School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States
- Amanda E. Kraft: Advanced Technology Laboratories, Lockheed Martin, Arlington, VA, United States
- Matthias D. Ziegler: Advanced Technology Laboratories, Lockheed Martin, Arlington, VA, United States
- Hasan Ayaz: School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States; Department of Psychological and Brain Sciences, College of Arts and Sciences, Drexel University, Philadelphia, PA, United States; Drexel Solutions Institute, Drexel University, Philadelphia, PA, United States; A. J. Drexel Autism Institute, Drexel University, Philadelphia, PA, United States; Department of Family and Community Health, University of Pennsylvania, Philadelphia, PA, United States; Center for Injury Research and Prevention, Children's Hospital of Philadelphia, Philadelphia, PA, United States
3. Levit B, Funk PF, Hanein Y. Soft electrodes for simultaneous bio-potential and bio-impedance study of the face. Biomed Phys Eng Express 2024;10:025036. PMID: 38350124; DOI: 10.1088/2057-1976/ad28cb.
Abstract
The human body's vascular system is a finely regulated network: blood vessels can change in shape (i.e., constrict or dilate), their elastic response may shift, and they may undergo temporary and partial blockages due to pressure applied by skeletal muscles in their immediate vicinity. Simultaneous measurement of muscle activation and the corresponding changes in vessel diameter, in particular at anatomical regions such as the face, is challenging, and how muscle activation constricts blood vessels has been largely overlooked experimentally. Here we report on a new electronic skin technology for facial investigations that addresses this challenge. The technology consists of screen-printed dry carbon electrodes on a soft polyurethane substrate. Two dry electrode arrays were placed on the face: one array for bio-potential measurements to capture muscle activity and a second array for bio-impedance. For the bio-potential signals, independent component analysis (ICA) was used to differentiate the activations of different muscles. Four-contact bio-impedance measurements were used to extract impedance changes (related to artery volume change) as well as beats per minute (BPM). From the simultaneous bio-potential and bio-impedance measurements in the face, we successfully captured fluctuations in the superficial temporal artery diameter in response to facial muscle activity, which ultimately changes blood flow. The changes observed in the face following muscle activation were consistent with measurements in the forearm, yet notably more intricate. At both the arm and the face, a clear increase in baseline impedance was recorded during muscle activation (artery narrowing), while the impedance changes signifying the pulse had a clearly repetitive trend only at the forearm. These results reveal the direct connection between muscle activation and the blood vessels in their vicinity, and begin to unveil the complex mechanisms through which facial muscles might modulate blood flow and possibly affect human physiology.
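The abstract names ICA as the step that separates individual muscle activations from the multi-electrode bio-potential recordings. A minimal sketch of that unmixing step using scikit-learn's FastICA follows; the channel count, sampling rate, and random data are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical multi-channel facial bio-potential recording:
# 16 electrodes, 10 s at 1 kHz (shape: samples x channels).
rng = np.random.default_rng(0)
emg = rng.standard_normal((10_000, 16))

# Unmix the electrode signals into statistically independent
# components; individual facial muscle activations are expected
# to dominate separate components.
ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(emg)   # (samples, components)
mixing = ica.mixing_               # (channels, components)

# Rank components by RMS amplitude as a crude activity index.
rms = np.sqrt((sources ** 2).mean(axis=0))
print("component RMS:", np.round(rms, 3))
```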
Affiliations
- Bara Levit: School of Physics, Tel Aviv University, Tel Aviv, Israel
- Paul F Funk: Department of Otolaryngology, Head and Neck Surgery, University Hospital Jena, Friedrich Schiller University Jena, Jena, Germany; School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel; Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Yael Hanein: School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel; Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
4. Rinella S, Massimino S, Fallica PG, Giacobbe A, Donato N, Coco M, Neri G, Parenti R, Perciavalle V, Conoci S. Emotion recognition: photoplethysmography and electrocardiography in comparison. Biosensors (Basel) 2022;12:811. PMID: 36290948; PMCID: PMC9599834; DOI: 10.3390/bios12100811.
Abstract
Automatically recognizing negative emotions, such as anger or stress, as well as positive ones, such as euphoria, can contribute to improving well-being. In real life, emotion recognition is a difficult task, since many of the technologies used for this purpose in laboratory and clinical environments, such as electroencephalography (EEG) and electrocardiography (ECG), cannot realistically be deployed. Photoplethysmography (PPG) is a non-invasive technology that can easily be integrated into wearable sensors. This paper focuses on the comparison between PPG and ECG with respect to their efficacy in detecting the psychophysical and affective states of subjects. The study confirms that the levels of accuracy in recognizing affective variables obtained with PPG technology are comparable to those achievable with the more traditional ECG technology. Moreover, the affective psychological condition of the participants (anxiety and mood levels) may influence the psychophysiological responses recorded during the experimental tests.
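The PPG-versus-ECG comparison ultimately rests on how reliably beats, and thus inter-beat intervals, can be extracted from each signal. A minimal sketch of beat detection plus two standard time-domain heart rate variability features, applicable to either modality, follows; the sampling rate, peak-detection thresholds, and synthetic trace are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 128  # Hz, assumed sampling rate

def ibi_features(signal, fs=FS, max_bpm=180):
    """Beat detection + time-domain HRV features (SDNN, RMSSD).

    Works for both ECG (R-peaks) and PPG (systolic peaks), which is
    the comparison the paper makes; thresholds are illustrative.
    """
    distance = fs * 60 / max_bpm  # minimum samples between beats
    peaks, _ = find_peaks(signal, distance=distance,
                          prominence=0.5 * np.std(signal))
    ibi = np.diff(peaks) / fs * 1000.0  # inter-beat intervals, ms
    return {
        "bpm": 60_000.0 / ibi.mean(),
        "sdnn": ibi.std(ddof=1),
        "rmssd": np.sqrt(np.mean(np.diff(ibi) ** 2)),
    }

# Synthetic quasi-periodic pulse wave standing in for a PPG trace.
t = np.arange(0, 60, 1 / FS)
ppg = np.sin(2 * np.pi * 1.1 * t) ** 3 + 0.05 * np.random.randn(t.size)
print(ibi_features(ppg))
```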
Affiliations
- Sergio Rinella: Department of Educational Sciences, University of Catania, via Biblioteca 4, 95124 Catania, Italy
- Simona Massimino: Department of Biomedical and Biotechnological Sciences, Section of Physiology, University of Catania, via S. Sofia 89, 95125 Catania, Italy
- Piero Giorgio Fallica: INSTM (National Interuniversity Consortium of Science and Technology of Materials), via G. Giusti 9, 50121 Firenze, Italy
- Alberto Giacobbe: Department of Engineering, University of Messina, Contrada Di Dio, 98158 Messina, Italy
- Nicola Donato: Department of Engineering, University of Messina, Contrada Di Dio, 98158 Messina, Italy
- Marinella Coco: Department of Educational Sciences, University of Catania, via Biblioteca 4, 95124 Catania, Italy
- Giovanni Neri: Department of Engineering, University of Messina, Contrada Di Dio, 98158 Messina, Italy
- Rosalba Parenti: Department of Biomedical and Biotechnological Sciences, Section of Physiology, University of Catania, via S. Sofia 89, 95125 Catania, Italy
- Vincenzo Perciavalle: Department of Sciences of Life, Kore University of Enna, Cittadella Universitaria, 94100 Enna, Italy
- Sabrina Conoci: Department of Chemical, Biological, Pharmaceutical and Environmental Science, University of Messina, Viale F. Stagno d'Alcontres 31, Vill. S. Agata, 98166 Messina, Italy; LAB Sense Beyond Nano, URT Department of Sciences Physics and Technologies of Matter (DSFTM) CNR, Viale F. Stagno d'Alcontres 31, Vill. S. Agata, 98166 Messina, Italy; Department of Chemistry "Giacomo Ciamician", University of Bologna, Via Selmi 2, 40126 Bologna, Italy; Istituto per la Microelettronica e Microsistemi, Consiglio Nazionale delle Ricerche (CNR-IMM), Strada VIII n. 5, 95121 Catania, Italy
5. Su Y, Zhang Z, Li X, Zhang B, Ma H. The multiscale 3D convolutional network for emotion recognition based on electroencephalogram. Front Neurosci 2022;16:872311. PMID: 36046470; PMCID: PMC9420984; DOI: 10.3389/fnins.2022.872311.
Abstract
Emotion recognition based on electroencephalogram (EEG) signals has become a research hotspot in the field of brain-computer interfaces (BCI). Compared with traditional machine learning, convolutional neural network models have substantial advantages in automatic feature extraction for EEG-based emotion recognition. Motivated by studies showing that multiple smaller-scale kernels can provide greater non-linear expressive power than a single larger-scale kernel, we propose a 3D convolutional neural network model with multiscale convolutional kernels to recognize emotional states from EEG signals. We select suitable time-window data to carry out four-class emotion recognition (low valence/low arousal, low valence/high arousal, high valence/low arousal, and high valence/high arousal). On EEG signals from the DEAP and SEED-IV datasets, the proposed emotion recognition network model (ERN) achieves accuracies of 95.67% and 89.55%, respectively. The experimental results demonstrate that the proposed approach is potentially useful for enhancing emotional experience in BCI.
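The core idea named in the abstract, parallel 3D convolutions at several kernel scales, is easy to illustrate. A minimal PyTorch sketch of one multiscale block follows; the layer sizes, electrode-grid input layout, and classification head are assumptions for illustration, not the authors' ERN architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock3D(nn.Module):
    """Parallel 3D convolutions at several kernel scales, concatenated.

    A sketch of the multiscale idea described in the abstract; exact
    sizes are assumptions, not the paper's architecture.
    """
    def __init__(self, in_ch, out_ch_per_scale=8, scales=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch_per_scale, k, padding=k // 2),
                nn.BatchNorm3d(out_ch_per_scale),
                nn.ReLU(inplace=True),
            )
            for k in scales
        )

    def forward(self, x):
        # Odd kernels with padding k//2 keep all dims, so branch
        # outputs can be concatenated along the channel axis.
        return torch.cat([b(x) for b in self.branches], dim=1)

# Assumed input: EEG arranged as (batch, 1, time, height, width),
# e.g. 1 s windows mapped onto a 9x9 electrode grid at 128 Hz.
x = torch.randn(4, 1, 128, 9, 9)
block = MultiScaleBlock3D(in_ch=1)
head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                     nn.Linear(3 * 8, 4))  # 4 valence/arousal classes
logits = head(block(x))
print(logits.shape)  # torch.Size([4, 4])
```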
Affiliations
- Yun Su (corresponding author): School of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Zhixuan Zhang: School of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xuan Li: School of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Bingtao Zhang: School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou, China
- Huifang Ma: School of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
6. Khanam F, Hossain AA, Ahmad M. Electroencephalogram-based cognitive load level classification using wavelet decomposition and support vector machine. Brain-Computer Interfaces 2022. DOI: 10.1080/2326263x.2022.2109855.
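No abstract is indexed for this entry, but the title names a two-stage pipeline: wavelet decomposition for EEG features, then a support vector machine for load-level classification. A minimal sketch of that generic pipeline follows, assuming a sampling rate, wavelet family, epoch length, and synthetic labels; none of these settings are taken from the paper.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # Hz, assumed EEG sampling rate

def wavelet_features(epoch, wavelet="db4", level=5):
    """Energy of each wavelet sub-band of a single-channel epoch.

    With db4 at level 5 on 256 Hz data, the detail bands roughly
    track the classical gamma/beta/alpha/theta/delta split.
    """
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic stand-in for labeled 2 s epochs: class 1 gets extra
# theta-band power, a commonly reported cognitive-load marker.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        sig = rng.standard_normal(t.size)
        sig += label * 2.0 * np.sin(2 * np.pi * 6 * t)  # 6 Hz theta
        X.append(wavelet_features(sig))
        y.append(label)

clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```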
Affiliations
- Farzana Khanam: Department of Biomedical Engineering, Khulna University of Engineering & Technology (KUET), Khulna, Bangladesh
- A.B.M. Aowlad Hossain: Department of Electronics and Communication Engineering, Khulna University of Engineering & Technology (KUET), Khulna, Bangladesh
- Mohiuddin Ahmad: Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology (KUET), Khulna, Bangladesh
7. Li R, Yang D, Fang F, Hong KS, Reiss AL, Zhang Y. Concurrent fNIRS and EEG for brain function investigation: a systematic, methodology-focused review. Sensors (Basel) 2022;22:5865. PMID: 35957421; PMCID: PMC9371171; DOI: 10.3390/s22155865.
Abstract
Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) stand as state-of-the-art techniques for non-invasive functional neuroimaging. On a unimodal basis, EEG has poor spatial resolution but high temporal resolution, while fNIRS offers better spatial resolution but poor temporal resolution. One important merit shared by EEG and fNIRS is their favorable portability: both can be integrated into a compatible experimental setup, providing a compelling ground for developing a multimodal fNIRS-EEG integration analysis approach. Despite a growing number of studies using concurrent fNIRS-EEG designs in recent years, the methodological reference provided by past studies remains unclear. To fill this knowledge gap, this review critically summarizes the analysis methods currently used in concurrent fNIRS-EEG studies, providing an up-to-date overview and guideline for future projects. A literature search was conducted using PubMed and Web of Science through 31 August 2021. After screening and qualification assessment, 92 studies involving concurrent fNIRS-EEG data recordings and analyses were included in the final methodological review. Three methodological categories of concurrent fNIRS-EEG data analysis were identified and explained in detail: EEG-informed fNIRS analyses, fNIRS-informed EEG analyses, and parallel fNIRS-EEG analyses. Finally, we highlight current challenges and potential directions for concurrent fNIRS-EEG data analysis in future research.
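Of the three categories the review identifies, EEG-informed fNIRS analysis is the most mechanically distinctive: an EEG-derived feature is convolved with a hemodynamic response function (HRF) and used as a regressor for the fNIRS signal. A minimal sketch under that description follows; the sampling rate, HRF parameters, and toy signals are assumptions, not a specific pipeline from the reviewed studies.

```python
import numpy as np
from scipy.stats import gamma

FS = 10.0  # Hz, assumed shared resampling rate for both modalities

def canonical_hrf(fs=FS, duration=30.0):
    """Double-gamma hemodynamic response function (SPM-style shape)."""
    t = np.arange(0, duration, 1 / fs)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Hypothetical inputs: EEG alpha-band power and fNIRS HbO, already
# epoched and resampled to a common 10 Hz time base.
rng = np.random.default_rng(1)
alpha_power = rng.standard_normal(3000)

# EEG-informed fNIRS analysis: convolve the EEG feature with the HRF
# to build a regressor, then relate it to the measured HbO signal.
regressor = np.convolve(alpha_power, canonical_hrf())[: alpha_power.size]
hbo = 0.4 * regressor + rng.standard_normal(3000)  # toy HbO signal
r = np.corrcoef(regressor, hbo)[0, 1]
print(f"EEG-informed regressor vs. HbO: r = {r:.2f}")
```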
Affiliations
- Rihui Li: Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA; Department of Biomedical Engineering, University of Houston, Houston, TX 77004, USA
- Dalin Yang: School of Mechanical Engineering, Pusan National University, Pusan 43241, Korea; Mallinckrodt Institute of Radiology, Washington University School of Medicine in St. Louis, 4515 McKinley Avenue, St. Louis, MO 63110, USA
- Feng Fang: Department of Biomedical Engineering, University of Houston, Houston, TX 77004, USA
- Keum-Shik Hong: School of Mechanical Engineering, Pusan National University, Pusan 43241, Korea
- Allan L. Reiss: Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Yingchun Zhang: Department of Biomedical Engineering, University of Houston, Houston, TX 77004, USA
8. Das A, Mock J, Irani F, Huang Y, Najafirad P, Golob E. Multimodal explainable AI predicts upcoming speech behavior in adults who stutter. Front Neurosci 2022;16:912798. PMID: 35979337; PMCID: PMC9376608; DOI: 10.3389/fnins.2022.912798.
Abstract
A key goal of cognitive neuroscience is to better understand how dynamic brain activity relates to behavior. Such dynamics, in terms of spatial and temporal patterns of brain activity, are directly measured with neurophysiological methods such as EEG, but can also be indirectly expressed by the body. Autonomic nervous system activity is the best-known example, but muscles in the eyes and face can also index brain activity. Largely parallel lines of artificial intelligence research show that EEG and facial muscles both encode information about emotion, pain, attention, and social interactions, among other topics. In this study, we examined adults who stutter (AWS) to understand the relations between dynamic brain and facial muscle activity and predictions about future behavior (fluent or stuttered speech). AWS can provide insight into brain-behavior dynamics because they naturally fluctuate between episodes of fluent and stuttered speech. We focused on the period when speech preparation occurs, and used EEG and facial muscle activity measured from video to predict whether the upcoming speech would be fluent or stuttered. An explainable self-supervised multimodal architecture learned the temporal dynamics of both EEG and facial muscle movements during speech preparation in AWS, and predicted fluent or stuttered speech with 80.8% accuracy (chance = 50%). Specific EEG and facial muscle signals distinguished fluent from stuttered trials and varied systematically from early to late speech preparation periods. The self-supervised architecture successfully identified multimodal activity that predicted upcoming behavior on a trial-by-trial basis. This approach could be applied to understanding the neural mechanisms driving variable behavior and symptoms in a wide range of neurological and psychiatric disorders. The combination of direct measures of neural activity and simple video data may be applied to developing technologies that estimate brain state from subtle bodily signals.
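The abstract describes trial-level prediction from two time-series modalities, EEG and video-derived facial muscle activity. A generic late-fusion sketch in PyTorch shows the overall shape of such a model; the encoder types, feature dimensions, and trial lengths are assumptions, not the authors' self-supervised architecture.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Two unimodal encoders (EEG, facial-muscle features) whose
    embeddings are concatenated for a binary fluent/stuttered output.

    A generic sketch of trial-level multimodal fusion; dimensions and
    encoder depths are assumptions, not the paper's architecture.
    """
    def __init__(self, eeg_dim=64, face_dim=17, hidden=32):
        super().__init__()
        self.eeg_enc = nn.GRU(eeg_dim, hidden, batch_first=True)
        self.face_enc = nn.GRU(face_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # fluent vs. stuttered

    def forward(self, eeg, face):
        # Use each GRU's final hidden state as the trial embedding.
        _, h_eeg = self.eeg_enc(eeg)
        _, h_face = self.face_enc(face)
        fused = torch.cat([h_eeg[-1], h_face[-1]], dim=-1)
        return self.head(fused)

# Toy batch: 8 trials, 250 time steps of 64-channel EEG features and
# 17 facial action-unit intensities per step.
model = LateFusionClassifier()
logits = model(torch.randn(8, 250, 64), torch.randn(8, 250, 17))
print(logits.shape)  # torch.Size([8, 2])
```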
Affiliations
- Arun Das: Secure AI and Autonomy Laboratory, University of Texas at San Antonio, San Antonio, TX, United States; UPMC Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, United States
- Jeffrey Mock: Cognitive Neuroscience Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
- Farzan Irani: Department of Communication Disorders, Texas State University, San Marcos, TX, United States
- Yufei Huang: UPMC Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, United States
- Peyman Najafirad: Secure AI and Autonomy Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
- Edward Golob: Cognitive Neuroscience Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
9. Saffaryazdi N, Wasim ST, Dileep K, Nia AF, Nanayakkara S, Broadbent E, Billinghurst M. Using facial micro-expressions in combination with EEG and physiological signals for emotion recognition. Front Psychol 2022;13:864047. PMID: 35837650; PMCID: PMC9275379; DOI: 10.3389/fpsyg.2022.864047.
Abstract
Emotions are multimodal processes that play a crucial role in our everyday lives. Recognizing emotions is becoming more critical in a wide range of application domains such as healthcare, education, human-computer interaction, virtual reality, intelligent agents, and entertainment. Facial macro-expressions, or intense facial expressions, are the most common modality for recognizing emotional states. However, since facial expressions can be voluntarily controlled, they may not accurately represent emotional states. Earlier studies have shown that facial micro-expressions are more reliable than macro-expressions for revealing emotions: they are subtle, involuntary movements in response to external stimuli and cannot be voluntarily controlled. This paper proposes using facial micro-expressions combined with brain and physiological signals to detect underlying emotions more reliably. We describe our models for measuring arousal and valence levels from a combination of facial micro-expressions, electroencephalography (EEG) signals, galvanic skin responses (GSR), and photoplethysmography (PPG) signals. We then evaluate our model on the DEAP dataset and on our own dataset using a subject-independent approach. Lastly, we discuss our results, the limitations of our work, and how these limitations could be overcome, as well as future directions for using facial micro-expressions and physiological signals in emotion recognition.
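A subject-independent evaluation means folds never mix trials from the same participant between training and testing. A minimal sketch of fused multimodal features evaluated that way with scikit-learn's GroupKFold follows; the feature dimensions, label construction, and classifier choice are illustrative assumptions, not the paper's models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical per-trial feature vectors: EEG band powers plus GSR and
# PPG statistics concatenated (early fusion); one subject id per trial.
rng = np.random.default_rng(0)
n_trials, n_subjects = 400, 20
X = rng.standard_normal((n_trials, 48))  # fused feature vector
# Toy binary label (e.g. high vs. low arousal) weakly tied to feature 0.
y = (X[:, 0] + 0.5 * rng.standard_normal(n_trials) > 0).astype(int)
groups = np.repeat(np.arange(n_subjects), n_trials // n_subjects)

# Subject-independent evaluation: folds never mix trials from the
# same participant between train and test.
cv = GroupKFold(n_splits=5)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=cv)
print("leave-subjects-out accuracy:", scores.mean())
```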
Affiliations
- Nastaran Saffaryazdi: Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Syed Talal Wasim: Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Kuldeep Dileep: Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Alireza Farrokhi Nia: Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Suranga Nanayakkara: Augmented Human Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Elizabeth Broadbent: Department of Psychological Medicine, The University of Auckland, Auckland, New Zealand
- Mark Billinghurst: Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
10. Arif A, Jawad Khan M, Javed K, Sajid H, Rubab S, Naseer N, Irfan Khan T. Hemodynamic response detection using integrated EEG-fNIRS-VPA for BCI. Comput Mater Contin 2022;70:535-555. DOI: 10.32604/cmc.2022.018318.
11. Tan Y, Sun Z, Duan F, Solé-Casals J, Caiafa CF. A multimodal emotion recognition method based on facial expressions and electroencephalography. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.103029.
12. Neethirajan S. The use of artificial intelligence in assessing affective states in livestock. Front Vet Sci 2021;8:715261. PMID: 34409091; PMCID: PMC8364945; DOI: 10.3389/fvets.2021.715261.
Abstract
In order to promote the welfare of farm animals, there is a need to recognize, register, and monitor their affective states. Numerous studies show that, just like humans, non-human animals are able to feel pain, fear, and joy, among other emotions. While behaviorally testing individual animals to identify positive or negative states is a time- and labor-consuming task, artificial intelligence and machine learning open up a whole new field of science for automating emotion recognition in production animals. By using sensors and monitoring indirect measures of changes in affective states, self-learning computational mechanisms will allow an effective categorization of emotions and can consequently help farmers respond accordingly. Not only would this be an efficient method to improve animal welfare, but early detection of stress and fear could also improve productivity and reduce the need for veterinary assistance on the farm. Whereas affective computing in human research has received increasing attention, the knowledge gained on human emotions has yet to be applied to non-human animals. A multidisciplinary approach should therefore be taken, combining fields such as affective computing, bioengineering, and applied ethology, in order to address the current theoretical and practical obstacles.
Affiliations
- Suresh Neethirajan: Farmworx, Animal Sciences Department, Wageningen University & Research, Wageningen, Netherlands
13.
14. Arif S, Khan MJ, Naseer N, Hong KS, Sajid H, Ayaz Y. Vector phase analysis approach for sleep stage classification: a functional near-infrared spectroscopy-based passive brain-computer interface. Front Hum Neurosci 2021;15:658444. PMID: 33994983; PMCID: PMC8121150; DOI: 10.3389/fnhum.2021.658444.
Abstract
A passive brain-computer interface (BCI) based upon functional near-infrared spectroscopy (fNIRS) brain signals is used for early detection of human drowsiness during driving tasks. This BCI modality acquired hemodynamic signals from the right dorsolateral prefrontal cortex (DPFC) of 13 healthy subjects. Drowsiness activity was recorded using a continuous-wave fNIRS system and eight channels over the right DPFC. During the experiment, sleep-deprived subjects drove a vehicle in a driving simulator while their cerebral oxygen regulation (CORE) state was continuously measured. Vector phase analysis (VPA) was used as a classifier to detect the drowsiness state along with sleep-stage-based threshold criteria. Extensive training and testing with various feature sets and classifiers were done to justify adopting the threshold criteria for any subject without requiring recalibration. Three statistical features (mean oxyhemoglobin, signal peak, and the sum of peaks) along with six VPA features (trajectory slopes of VPA indices) were used. Across all subjects' data, the average accuracies for the five classifiers were 90.9% for discriminant analysis, 92.5% for support vector machines, 92.3% for nearest neighbors, and 92.4% for both decision trees and ensembles. The trajectory slopes of the CORE vector magnitude and angle, m(|R|) and m(∠R), were the best-performing features; combined with the ensemble classifier they yielded the highest accuracy of 95.3% and a minimum computation time of 40 ms. The statistical significance of the results was validated with a p-value of less than 0.05. The proposed passive BCI scheme demonstrates a promising technique for online drowsiness detection using VPA along with sleep stage classification.
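The three statistical features the abstract names (mean oxyhemoglobin, signal peak, and sum of peaks) are simple to compute per window. A minimal sketch of that feature extraction feeding one of the reported classifier families (an ensemble, here a random forest) follows; the sampling rate, window length, synthetic signals, and thresholds are assumptions for illustration, not the study's VPA pipeline.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 10  # Hz, assumed fNIRS sampling rate

def hbo_features(window):
    """The three statistical features named in the abstract, computed
    on one oxyhemoglobin (HbO) window; thresholds are illustrative."""
    peaks, props = find_peaks(window, height=0)
    heights = props["peak_heights"] if peaks.size else np.zeros(1)
    return np.array([window.mean(),   # mean HbO
                     window.max(),    # signal peak
                     heights.sum()])  # sum of peaks

# Synthetic 10 s HbO windows: "drowsy" windows get a slow positive
# drift, a crude stand-in for altered cerebral oxygen regulation.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        win = 0.1 * rng.standard_normal(t.size) + label * 0.02 * t
        X.append(hbo_features(win))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```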
Affiliations
- Saad Arif: School of Mechanical and Manufacturing Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Muhammad Jawad Khan: School of Mechanical and Manufacturing Engineering, National University of Sciences and Technology, Islamabad, Pakistan; National Center of Artificial Intelligence (NCAI), Islamabad, Pakistan
- Noman Naseer: Department of Mechatronics Engineering, Air University, Islamabad, Pakistan
- Keum-Shik Hong: School of Mechanical Engineering, Pusan National University, Busan, South Korea
- Hasan Sajid: School of Mechanical and Manufacturing Engineering, National University of Sciences and Technology, Islamabad, Pakistan; National Center of Artificial Intelligence (NCAI), Islamabad, Pakistan
- Yasar Ayaz: School of Mechanical and Manufacturing Engineering, National University of Sciences and Technology, Islamabad, Pakistan; National Center of Artificial Intelligence (NCAI), Islamabad, Pakistan