1. Thibodeaux DN, Shaik MA, Kim SH, Voleti V, Zhao HT, Benezra SE, Nwokeabia CJ, Hillman EMC. Audiovisualization of real-time neuroimaging data. PLoS One 2024; 19:e0297435. PMID: 38381733; PMCID: PMC10881001; DOI: 10.1371/journal.pone.0297435.
Abstract
Advancements in brain imaging techniques have significantly expanded the size and complexity of real-time neuroimaging and behavioral data. However, identifying patterns, trends and synchronies within these datasets presents a significant computational challenge. Here, we demonstrate an approach that can translate time-varying neuroimaging data into unique audiovisualizations consisting of audible representations of dynamic data merged with simplified, color-coded movies of spatial components and behavioral recordings. Multiple variables can be encoded as different musical instruments, letting the observer differentiate and track multiple dynamic parameters in parallel. This representation enables intuitive assimilation of these datasets for behavioral correlates and spatiotemporal features such as patterns, rhythms and motifs that could be difficult to detect through conventional data interrogation methods. These audiovisual representations provide a novel perception of the organization and patterns of real-time activity in the brain, and offer an intuitive and compelling method for complex data visualization for a wider range of applications.
Affiliation(s)
- David N. Thibodeaux, Mohammed A. Shaik, Sharon H. Kim, Venkatakaushik Voleti, Hanzhi T. Zhao, Sam E. Benezra, Chinwendu J. Nwokeabia, Elizabeth M. C. Hillman: Laboratory for Functional Optical Imaging, Departments of Biomedical Engineering and Radiology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States of America
2. Norata D, Broggi S, Alvisi L, Lattanzi S, Brigo F, Tinuper P. The EEG pen-on-paper sound: History and recent advances. Seizure 2023; 107:67-70. PMID: 36965379; DOI: 10.1016/j.seizure.2023.03.011.
Abstract
The electroencephalogram (EEG) is one of the most useful technologies for brain research and clinical neurology, characterized by non-invasiveness and high temporal resolution. The acquired traces are displayed visually, but various studies have investigated the translation of brain waves into sound, a process called sonification. Articles on the sonification of EEG traces have been published since 1934, in the attempt to identify the "brain-sound." However, for a long time this sonification technique was not used for clinical purposes. The analog EEG was in fact already equipped with an auditory output, although rarely mentioned in scientific papers: the pen-on-paper noise made by the writer unit. EEG technologists often relied on the sound the pens made on paper to facilitate diagnosis. This article provides a sample of analog video-EEG recordings with audio support, illustrating the strengths of combined visual-and-auditory detection of different types of seizures. The purpose of the present article is to illustrate how the analog EEG "sounded," as well as to highlight the advantages of this pen-writing noise. It was considered so useful that early digital EEG devices could be equipped with special software to duplicate it digitally. Even today, sonification can be considered an attempt to extend EEG practice through auditory neurofeedback, with applications in therapeutic interventions, cognitive improvement, and basic research.
Affiliation(s)
- Davide Norata, Serena Broggi, Simona Lattanzi: Neurological Clinic and Stroke Unit, Department of Experimental and Clinical Medicine (DiMSC), Marche Polytechnic University, Via Conca 71, Ancona 60020, Italy
- Lara Alvisi, Paolo Tinuper: Dipartimento di Scienze Biomediche e Neuromotorie, University of Bologna, Bologna, Italy; IRCCS Istituto delle Scienze Neurologiche di Bologna, Epilepsy Center (full member of the European Reference Network EpiCARE), Bologna, Italy
- Francesco Brigo: Department of Neurology, Hospital of Merano (SABES-ASDAA), Merano-Meran, Italy
3. Long S, Ding R, Wang J, Yu Y, Lu J, Yao D. Sleep Quality and Electroencephalogram Delta Power. Front Neurosci 2022; 15:803507. PMID: 34975393; PMCID: PMC8715081; DOI: 10.3389/fnins.2021.803507.
Abstract
Delta activity on the electroencephalogram (EEG) is considered a biomarker of homeostatic sleep drive, and delta power is often associated with sleep duration and intensity. Here, we reviewed the literature to explore how sleep quality is influenced by changes in delta power. We found that both decreases and increases in delta power can indicate higher sleep quality, for the reasons below. First, in patients whose sleep quality is lower than that of healthy controls, differences in delta-power changes may relate to the underlying disease: patients with borderline personality disorder or Rett syndrome may have higher delta power than healthy individuals, whereas patients affected by Asperger syndrome, respiratory failure, chronic fatigue, or post-traumatic stress disorder have lower delta power. Second, in insomnia patients receiving therapy, the difference may depend on the treatment method: with cognitive or music therapy, a better therapeutic effect is associated with decreased delta power, whereas drug treatment shows the opposite change in delta power. Last, in healthy people, the difference may relate to sleep stage: higher sleep quality is associated with increased delta power during the NREM period, whereas decreased delta power accompanies higher sleep quality during the REM period. Our work summarizes the effect of changes in delta power on sleep quality and may positively impact the monitoring of, and intervention in, sleep quality.
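For reference, the delta-power measure discussed above is conventionally obtained by integrating the EEG power spectral density over roughly 0.5-4 Hz. A minimal sketch on synthetic data (the sampling rate, window length, and band edges are illustrative assumptions, not values taken from this review):

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)        # 60 s of synthetic single-channel EEG

# Welch power spectral density; 4 s segments give 0.25 Hz resolution
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)

# Integrate the PSD over the delta band (0.5-4 Hz) and express it
# relative to total power -- a common "delta power" summary
df = freqs[1] - freqs[0]
delta_band = (freqs >= 0.5) & (freqs <= 4.0)
delta_power = psd[delta_band].sum() * df
relative_delta = delta_power / (psd.sum() * df)
```

Absolute versus relative delta power is itself a design choice; studies comparing patient groups often report the relative form to control for overall amplitude differences.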
Affiliation(s)
- Siyu Long, Rui Ding, Junce Wang, Jing Lu, Dezhong Yao: MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China; School of Life Sciences and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Yue Yu: MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
4. Bacomics: a comprehensive cross area originating in the studies of various brain-apparatus conversations. Cogn Neurodyn 2020; 14:425-442. PMID: 32655708; DOI: 10.1007/s11571-020-09577-7.
Abstract
The brain is the most important organ of the human body, and the conversations between the brain and an apparatus can not only reveal a normally functioning or a dysfunctional brain but can also modulate the brain. The apparatus may be a nonbiological instrument, such as a computer, and the consequent brain-computer interface is now a very popular research area with various applications. The apparatus may also be a biological organ or system, such as the gut or a muscle, whose efficient conversations with the brain are vital for a healthy life. Are there any common bases that bind these different scenarios? Here, we propose a new comprehensive cross area: Bacomics, from brain-apparatus conversations (BAC) + omics. We take Bacomics to cover at least three situations: (1) the brain is normal, but the conversation channel is disabled, as in amyotrophic lateral sclerosis; the task is to reconstruct or open up new channels to reactivate brain function. (2) The brain is in disorder, as in Parkinson's disease; the work is to utilize existing channels or open up new ones to intervene in, repair, and modulate the brain by medication or stimulation. (3) Both the brain and the channels are in order, and the goal is to enhance coordinated development between the brain and the apparatus. In this paper, we elaborate the connotation of BAC into three aspects according to the information flow: output to the outside world (BAC-1), input to the brain (BAC-2), and the unity of brain and apparatus (BAC-3). More importantly, no fewer than five principles may be taken as the cornerstones of Bacomics, such as feedforward and feedback control, brain plasticity, harmony, the unity of opposites, and systems principles. Clearly, Bacomics integrates these seemingly disparate domains and, more importantly, opens a much wider door for research and development of the brain; the principles further provide the general framework in which to realize or optimize these various conversations.
5. Gao D, Long S, Yang H, Cheng Y, Guo S, Yu Y, Liu T, Dong L, Lu J, Yao D. SWS Brain-Wave Music May Improve the Quality of Sleep: An EEG Study. Front Neurosci 2020; 14:67. PMID: 32116514; PMCID: PMC7026372; DOI: 10.3389/fnins.2020.00067.
Abstract
Aim: This study investigated the neural mechanisms by which brain-wave music affects sleep quality. Background: Sleep disorders are a common health problem in our society and may result in fatigue, depression, and problems in daytime functioning. Previous studies have shown that brain-wave music generated from electroencephalography (EEG) signals can affect our nervous system emotionally and have positive effects on sleep. However, the neural mechanisms by which brain-wave music influences sleep quality need to be clarified. Methods: A total of 33 young participants were recruited and randomly divided into three groups. The participants listened to rapid eye movement (REM) brain-wave music (Group 1: 13 subjects), slow-wave sleep (SWS) brain-wave music (Group 2: 11 subjects), or white noise (WN) (Control Group: 9 subjects) for 20 min before bedtime for 6 days. EEG and other physiological signals were recorded by polysomnography. Results: Sleep efficiency increased in the SWS group but decreased in the REM and WN groups; sleep efficiency in the SWS group was ameliorated [t(10) = -1.943, p = 0.076]. In the EEG power spectral density analysis, delta power spectral density increased in the REM and control groups but decreased in the SWS group [F(2,31) = 7.909, p = 0.005]. In the network analysis, functional connectivity (FC), assessed with Pearson correlation coefficients, showed that connectivity strength decreased between the left frontal lobe (F3) and the left parietal lobe (C3) in the SWS group [t(10) = 1.969, p = 0.073]. In addition, the FC between the left frontal lobe and the left parietal lobe was negatively correlated with sleep latency in the SWS group (r = -0.527, p = 0.064). Conclusion: SWS brain-wave music may have a positive effect on sleep quality, while REM brain-wave music and WN may not. Furthermore, the better sleep quality might be caused by a decrease in the power spectral density of the delta band of the EEG and an increase in the FC between the left frontal lobe and the left parietal lobe. SWS brain-wave music could be a safe and inexpensive method for clinical use if confirmed by more data.
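The functional-connectivity measure used in this study is the Pearson correlation coefficient between electrode time courses. A minimal sketch with two synthetic channels standing in for F3 and C3 (the shared component is fabricated so that a positive correlation exists by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 5000

# Two synthetic EEG channels sharing a common slow component, so their
# functional connectivity is positive by construction
common = rng.standard_normal(n_samples)
f3 = common + 0.5 * rng.standard_normal(n_samples)
c3 = common + 0.5 * rng.standard_normal(n_samples)

# Functional connectivity as the Pearson correlation coefficient
fc = np.corrcoef(f3, c3)[0, 1]
```

In a full analysis this coefficient would be computed for every electrode pair, giving a connectivity matrix whose entries are then compared across conditions.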
Affiliation(s)
- Dongrui Gao: School of Computer Science, Chengdu University of Information Technology, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Siyu Long, Yibo Cheng, Sijia Guo: Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hua Yang: Department of Composition, Sichuan Conservatory of Music, Chengdu, China
- Yue Yu: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
- Tiejun Liu, Li Dong, Jing Lu, Dezhong Yao: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
6. Sanyal S, Nag S, Banerjee A, Sengupta R, Ghosh D. Music of brain and music on brain: a novel EEG sonification approach. Cogn Neurodyn 2019; 13:13-31. PMID: 30728868; PMCID: PMC6339862; DOI: 10.1007/s11571-018-9502-4.
Abstract
Can we hear the sound of our brain? Is there any technique that can enable us to hear the neuro-electrical impulses originating from the different lobes of the brain? The answer to these questions is yes. In this paper we present a novel method with which we can sonify electroencephalogram (EEG) data recorded in a "control" state as well as under the influence of a simple acoustic stimulus: a tanpura drone. The tanpura has a very simple construction, yet the tanpura drone exhibits very complex acoustic features and is generally used to create an ambience during a musical performance. Hence, for this pilot project we chose to study the nonlinear correlations between the musical stimuli (the tanpura drone as well as music clips) and the sonified EEG data. To date, there has been no study that deals with the direct correlation between a bio-signal and its acoustic counterpart and examines how that correlation varies under the influence of different types of stimuli. This study tries to bridge that gap and looks for a direct correlation between the music signal and EEG data using a robust mathematical microscope called Multifractal Detrended Cross-Correlation Analysis (MFDXA). For this, we took EEG data of 10 participants in a 2 min "control condition" (i.e., with white noise) and a 2 min "tanpura drone" (musical stimulus) listening condition. The same experimental paradigm was repeated for two emotional music pieces, "Chayanat" and "Darbari Kanada," well-known Hindustani classical ragas that conventionally portray contrasting emotional attributes, as also verified from human response data. Next, the EEG signals from different electrodes were sonified, and the MFDXA technique was used to assess the degree of correlation (the cross-correlation coefficient γx) between the EEG signals and the music clips. The variation of γx for the different lobes of the brain during the course of the experiment provides interesting new information on the extraordinary ability of music stimuli to engage several areas of the brain significantly, unlike other stimuli, which engage specific domains only.
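The abstract does not specify the authors' sonification mapping; purely as an illustration of the general idea of sonifying an EEG trace, one simple convention maps sample amplitude to pitch on a fixed musical scale (the pentatonic scale, amplitude normalization, and MIDI ranges below are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)
eeg = rng.standard_normal(500)          # synthetic EEG segment

# Map each sample onto a pentatonic scale spanning two octaves:
# larger amplitudes become higher pitches
scale = np.array([60, 62, 65, 67, 69, 72, 74, 77, 79, 81])  # MIDI note numbers
lo, hi = eeg.min(), eeg.max()
idx = np.round((eeg - lo) / (hi - lo) * (len(scale) - 1)).astype(int)
notes = scale[idx]

# Convert MIDI note numbers to frequencies in Hz (A4 = MIDI 69 = 440 Hz)
freqs_hz = 440.0 * 2 ** ((notes - 69) / 12)
```

Quantizing to a scale rather than mapping amplitude to raw frequency is what makes the result sound musical rather than like a siren; the resulting note sequence can then be rendered with any synthesizer.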
Affiliation(s)
- Shankha Sanyal, Archi Banerjee: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India; Department of Physics, Jadavpur University, Kolkata, India
- Sayan Nag: Department of Electrical Engineering, Jadavpur University, Kolkata, India
- Ranjan Sengupta, Dipak Ghosh: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India
7. Wu D. Hearing the Sound in the Brain: Influences of Different EEG References. Front Neurosci 2018; 12:148. PMID: 29593487; PMCID: PMC5859362; DOI: 10.3389/fnins.2018.00148.
Abstract
If the scalp potential signals, the electroencephalogram (EEG), are due to neural "singers" in the brain, how can we listen to them with less distortion? One crucial point is that the data recorded on the scalp should be faithful and accurate; thus the choice of reference electrode is a vital factor determining the faithfulness of the data. In this study, music derived from brain data was compared across three different references: the reference electrode standardization technique (REST, an approximate zero reference), the average reference (AR), and the linked-mastoids reference (LM). Classic music pieces in waveform format were used as simulated sources inside a head model and forward-calculated to the scalp as standard potential recordings, i.e., waveform-format music from the brain with a true zero reference. This scalp music was then re-referenced to REST-, AR-, and LM-based data and compared with the original forward-calculated data (true zero reference). For real data, EEG recorded in an orthodontic pain control experiment was used to generate music under the three references, and the scale-free index (SFI) of these music pieces was compared. The results showed that, in the simulation with only one source, different references do not change the music/waveform; for two or more sources, REST provides the music/waveform most faithful to the original inside the brain, and the distortions caused by AR and LM depend on the spatial locations of both the sources and the scalp electrodes. The brainwave music from the real EEG data showed that REST and AR make the SFI differences between the two states more recognizable, and that the frontal region is the main region producing the music. In conclusion, REST can approximately reconstruct the true signals, and it can be used to help listen to the true voice of the neural singers in the brain.
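Of the three references compared, AR and LM are simple linear transformations of the recorded channels (REST additionally requires head-model lead fields, so it is omitted from this sketch). A minimal illustration on a synthetic channels-by-time array, with the mastoid rows chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t = 8, 1000
eeg = rng.standard_normal((n_ch, n_t))   # channels x time, arbitrary original reference

# Average reference (AR): subtract the mean across channels at each time point
eeg_ar = eeg - eeg.mean(axis=0, keepdims=True)

# Linked-mastoids reference (LM): subtract the mean of the two mastoid
# channels (here assumed, for illustration, to be rows 6 and 7)
mastoids = eeg[[6, 7], :].mean(axis=0, keepdims=True)
eeg_lm = eeg - mastoids
```

Both operations preserve the differences between channels; what changes is the common baseline, which is exactly why downstream measures such as the SFI can differ between references.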
Affiliation(s)
- Dan Wu: School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; The Key Laboratory for NeuroInformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
8.
Abstract
Many methods have been developed to translate a human electroencephalogram (EEG) into music. In addition to EEG, functional magnetic resonance imaging (fMRI) is another method used to study the brain and can reflect physiological processes. In 2012, we established a method that uses simultaneously recorded fMRI and EEG signals to produce EEG-fMRI music, a step toward scale-free brain music. In this study, we used a neural mass model, the Jansen-Rit model, to simulate activity in several cortical brain regions. The interactions between different brain regions were represented by the average normalized diffusion tensor imaging (DTI) structural connectivity, with a coupling coefficient that modulated the coupling strength. Seventy-eight brain regions were adopted from the Automated Anatomical Labeling (AAL) template. Furthermore, we used the Balloon-Windkessel hemodynamic model to transform neural activity into a blood-oxygen-level dependent (BOLD) signal. Because the fMRI BOLD signal changes slowly, we used a sampling rate of 250 Hz to produce the temporal series for music generation. Then, the BOLD music was generated for each region using these simulated BOLD signals. Because the BOLD signal is scale free, these music pieces were also scale free, similar to classical music. Here, to simulate the case of an epileptic patient, we changed the parameter that determines the amplitude of the excitatory postsynaptic potential (EPSP) in the neural mass model. Finally, we obtained BOLD music for healthy subjects and epileptic patients. The difference in levels of arousal between the two pieces of music may provide a potential tool for discriminating between the two populations, if the difference can be confirmed by more real data.
Affiliation(s)
- Jing Lu, Daqing Guo, Dezhong Yao: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China
- Sijia Guo, Mingming Chen, Weixia Wang: School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China
- Hua Yang: School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China; Department of Composition, Sichuan Conservatory of Music, Chengdu, Sichuan, China
9. Grant M, Faghihi N. Generation of 1/f noise from a broken-symmetry model for the arbitrary absolute pitch of musical melodies. J Acoust Soc Am 2017; 142:EL490. PMID: 29195450; DOI: 10.1121/1.5011150.
Abstract
A model is presented to generate power spectrum noise with intensity proportional to 1/f as a function of frequency f. The model arises from a broken-symmetry variable, which corresponds to absolute pitch, where fluctuations occur in an attempt to restore that symmetry, influenced by interactions in the creation of musical melodies.
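The paper derives 1/f noise from a broken-symmetry model; as a point of comparison only (this is not the authors' model), the standard way to synthesize a 1/f ("pink") spectrum is to shape white noise in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2 ** 16

# Shape white noise in the frequency domain so its power spectrum ~ 1/f
white = rng.standard_normal(n)
spec = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)               # in cycles per sample
spec[1:] /= np.sqrt(freqs[1:])           # amplitude ~ f^(-1/2)  =>  power ~ 1/f
spec[0] = 0.0                            # drop the DC component
pink = np.fft.irfft(spec, n)

# Check the spectral slope: log power vs log frequency should be close to -1
power = np.abs(np.fft.rfft(pink)) ** 2
slope = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)[0]
```

The fitted slope near -1 is the defining signature of 1/f noise; the interest of models like the one above is that they produce this signature from a dynamical mechanism rather than by construction.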
Affiliation(s)
- Martin Grant, Niloufar Faghihi: Physics Department, McGill University, Rutherford Building, 3600 rue University, Montreal, Quebec H3A 2T8, Canada
10. Pinegger A, Hiebel H, Wriessnegger SC, Müller-Putz GR. Composing only by thought: Novel application of the P300 brain-computer interface. PLoS One 2017; 12:e0181584. PMID: 28877175; PMCID: PMC5587109; DOI: 10.1371/journal.pone.0181584.
Abstract
The P300 event-related potential is a well-known pattern in the electroencephalogram (EEG). This kind of brain signal is used for many different brain-computer interface (BCI) applications, e.g., spellers, environmental controllers, web browsers, or painting. BCI systems are now mature enough to leave the laboratory and be used by end-users, namely severely disabled people. Therefore, new challenges arise, and systems should be implemented and evaluated according to user-centered design (UCD) guidelines. We developed and implemented a new system that utilizes the P300 pattern to compose music. Our Brain Composing system consists of three parts: the EEG acquisition device, the P300-based BCI, and the music composing software. Seventeen musical participants and one professional composer performed a copy-spelling, a copy-composing, and a free-composing task with the system. According to the UCD guidelines, we investigated efficiency, effectiveness, and subjective criteria in terms of satisfaction, enjoyment, frustration, and attractiveness. The musical participants achieved high average accuracies: 88.24% (copy-spelling), 88.58% (copy-composing), and 76.51% (free-composing). The professional composer also achieved high accuracies: 100% (copy-spelling), 93.62% (copy-composing), and 98.20% (free-composing). Regarding the subjective criteria, participants enjoyed using the Brain Composing system and were highly satisfied with it. With these very positive results in healthy people, this study was a first step toward a music composing system for severely disabled people.
Affiliation(s)
- Andreas Pinegger: Institute of Neural Engineering, Graz University of Technology, Graz, Austria
- Hannah Hiebel: Institute of Psychology, University of Graz, Graz, Austria
11. Cheung S, Han E, Kushki A, Anagnostou E, Biddiss E. Biomusic: An Auditory Interface for Detecting Physiological Indicators of Anxiety in Children. Front Neurosci 2016; 10:401. PMID: 27625593; PMCID: PMC5003931; DOI: 10.3389/fnins.2016.00401.
Abstract
For children with profound disabilities affecting communication, it can be extremely challenging to identify salient emotions such as anxiety. If left unmanaged, anxiety can lead to hypertension, cardiovascular disease, and other psychological diagnoses. Physiological signals of the autonomic nervous system are indicative of anxiety, but can be difficult to interpret for non-specialist caregivers. This paper evaluates an auditory interface for intuitive detection of anxiety from physiological signals. The interface, called "Biomusic," maps physiological signals to music (i.e., electrodermal activity to melody; skin temperature to musical key; heart rate to drum beat; respiration to a "whooshing" embellishment resembling the sound of an exhalation). The Biomusic interface was tested in two experiments. Biomusic samples were generated from physiological recordings of typically developing children (n = 10) and children with autism spectrum disorders (n = 5) during relaxing and anxiety-provoking conditions. Adult participants (n = 16) were then asked to identify "anxious" or "relaxed" states by listening to the samples. In a classification task with 30 Biomusic samples (1 relaxed state, 1 anxious state per child), classification accuracy, sensitivity, and specificity were 80.8% [standard error (SE) = 2.3], 84.9% (SE = 3.0), and 76.8% (SE = 3.9), respectively. Participants were able to form an early and accurate impression of the anxiety state within 12.1 (SE = 0.7) seconds of hearing the Biomusic with very little training (i.e., < 10 min) and no contextual information. Biomusic holds promise for monitoring, communication, and biofeedback systems for anxiety management.
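The Biomusic mapping described above (electrodermal activity to melody, skin temperature to key, heart rate to drum beat, respiration to a "whoosh" embellishment) could be sketched as follows. All numeric ranges and the key threshold are illustrative assumptions, not the published calibration:

```python
import numpy as np

# Hypothetical one-second snapshot of the four physiological inputs
eda_uS = 4.2        # electrodermal activity (microsiemens) -> melody pitch
temp_c = 33.5       # skin temperature (deg C)              -> musical key
hr_bpm = 96         # heart rate (beats per minute)         -> drum tempo
resp_bpm = 18       # respiration rate (breaths per minute) -> "whoosh" rate

# EDA -> melody: scale a plausible 1-20 uS range onto MIDI notes 48-84
pitch = int(round(np.interp(eda_uS, [1.0, 20.0], [48, 84])))

# Skin temperature -> key: warmer skin selects a "brighter" (major) key
key = "major" if temp_c >= 32.0 else "minor"

# Heart rate -> drum beat: one drum hit per heartbeat
beat_interval_s = 60.0 / hr_bpm

# Respiration -> whoosh embellishment: one per breath
whoosh_interval_s = 60.0 / resp_bpm
```

The design intuition is that each physiological channel is routed to a perceptually independent musical dimension (pitch, mode, tempo, texture), which is what lets untrained listeners track several signals at once.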
Affiliation(s)
- Stephanie Cheung
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Elizabeth Han
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Azadeh Kushki
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Evdokia Anagnostou
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada; Department of Paediatrics, University of Toronto, Toronto, ON, Canada
- Elaine Biddiss
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
|
12
|
Huang R, Wang J, Wu D, Long H, Yang X, Liu H, Gao X, Zhao R, Lai W. The effects of customised brainwave music on orofacial pain induced by orthodontic tooth movement. Oral Dis 2016; 22:766-774. PMID: 27417074; DOI: 10.1111/odi.12542.
Affiliation(s)
- R Huang
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- J Wang
- Department of Stomatology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- D Wu
- School of Computer and Information, Beijing Jiaotong University, Beijing, China
- H Long
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- X Yang
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Stomatology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- H Liu
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- X Gao
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- R Zhao
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- W Lai
- State Key Laboratory of Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
|
|
13
|
Daly I, Williams D, Kirke A, Weaver J, Malik A, Hwang F, Miranda E, Nasuto SJ. Affective brain–computer music interfacing. J Neural Eng 2016; 13:046022. DOI: 10.1088/1741-2560/13/4/046022.
|
14
|
Wu D, Kendrick KM, Levitin DJ, Li C, Yao D. Bach Is the Father of Harmony: Revealed by a 1/f Fluctuation Analysis across Musical Genres. PLoS One 2015; 10:e0142431. PMID: 26545104; PMCID: PMC4636347; DOI: 10.1371/journal.pone.0142431.
Abstract
Harmony is a fundamental attribute of music. Close connections exist between music and mathematics, since both pursue harmony and unity. In music, the consonance of notes played simultaneously partly determines our perception of harmony, is associated with aesthetic responses, and influences emotional expression. Consonance can therefore be considered a window through which to understand and analyze harmony. Here, for the first time, we used a 1/f fluctuation analysis to investigate whether the consonance fluctuation structure in music across a wide range of composers and genres followed the scale-free pattern that has been found for pitch, melody, rhythm, human body movements, brain activity, natural images and geographical features. We then used a network graph approach to investigate which composers were the most influential both within and across genres. Our results showed that patterns of consonance in music did follow scale-free characteristics, suggesting that this feature is a universally evolved one in both music and the living world. Furthermore, our network analysis revealed that Bach's harmony patterns had the most influence on those used by other composers, followed closely by Mozart's.
Affiliation(s)
- Dan Wu
- Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Keith M. Kendrick
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Daniel J. Levitin
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
|
|
15
|
Li Y, Rui X, Li S, Pu F. Investigation of global and local network properties of music perception with culturally different styles of music. Comput Biol Med 2014; 54:37-43. PMID: 25212116; DOI: 10.1016/j.compbiomed.2014.08.017.
Abstract
BACKGROUND: Graph theoretical analysis has recently become a popular research tool in neuroscience; however, there have been very few studies of brain responses to music perception, especially when culturally different styles of music are involved. METHODS: Electroencephalograms were recorded from ten subjects listening to Chinese traditional music, light music and Western classical music. For event-related potentials, phase coherence was calculated in the alpha band and then constructed into correlation matrices. Clustering coefficients and characteristic path lengths were evaluated as global properties, while clustering coefficients and efficiency were assessed as local network properties. RESULTS: Perception of light music and Western classical music manifested small-world network properties, especially at a relatively low proportion of weights of the correlation matrices. In the local analysis, efficiency was more discernible than the clustering coefficient. Nevertheless, there was no significant discrimination between Chinese traditional and Western classical music perception. CONCLUSIONS: Perception of different styles of music introduces different network properties, both globally and locally. Research into both global and local network properties has been carried out in other areas; this preliminary investigation suggests a possible new approach to brain network properties in music perception.
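The global measures named in this abstract, the clustering coefficient and the characteristic path length of a network built from a phase-coherence matrix, can be sketched with networkx. The binarization threshold below is an assumed placeholder, not the paper's value:

```python
# Minimal sketch: global network properties of a thresholded coherence matrix.
# The threshold (0.5) is illustrative, not taken from the paper.
import networkx as nx
import numpy as np

def global_network_properties(coherence, threshold=0.5):
    """Binarize an NxN phase-coherence matrix and return (C, L):
    average clustering coefficient and characteristic path length."""
    adj = (np.asarray(coherence) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                       # no self-loops
    g = nx.from_numpy_array(adj)
    c = nx.average_clustering(g)                   # global clustering coefficient
    l = nx.average_shortest_path_length(g)         # characteristic path length
    return c, l
```

Note that the characteristic path length is only defined on a connected graph; too high a threshold can disconnect the network, which is one reason studies sweep the proportion of retained weights.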
Affiliation(s)
- Yan Li
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; Research Institute of Beihang University in Shenzhen, Shenzhen 518057, China
- Xue Rui
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Shuyu Li
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Fang Pu
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
|
|
16
|
Wu D, Li CY, Yao DZ. An ensemble with the Chinese pentatonic scale using electroencephalogram from both hemispheres. Neurosci Bull 2013; 29:581-587. PMID: 23604597; PMCID: PMC5561954; DOI: 10.1007/s12264-013-1334-y.
Abstract
To listen to brain activity as a piece of music, we previously proposed scale-free brainwave music (SFBM) technology, which translated the scalp electroencephalogram (EEG) into musical notes according to the power law of both the EEG and music. In this study, the methodology was further extended to ensemble music on two channels from the two hemispheres. EEG data from two channels symmetrically located on the left and right hemispheres were translated into MIDI sequences by SFBM, and the EEG parameters modulated the pitch, duration and volume of each note. Then, the two sequences were filtered into an ensemble with two voices: the pentatonic scale (traditional Chinese music) or the heptatonic scale (standard Western music). We demonstrated differences in harmony between the two scales generated at different sleep stages, with the pentatonic scale being more harmonious. The harmony intervals of this brain ensemble at various sleep stages followed the power law. Compared with the heptatonic scale, it was easier to distinguish the different stages using the pentatonic scale. These results suggested that the hemispheric ensemble can represent brain activity by variations in pitch, tempo and harmony. The ensemble with the pentatonic scale sounds more consonant, and partially reflects the relations of the two hemispheres. This can be used to distinguish the different states of brain activity and provide a new perspective on EEG analysis.
Affiliation(s)
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Chao-Yi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- De-Zhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
|
|
17
|
Wu D, Li C, Yao D. Scale-free brain quartet: artistic filtering of multi-channel brainwave music. PLoS One 2013; 8:e64046. PMID: 23717527; PMCID: PMC3661572; DOI: 10.1371/journal.pone.0064046.
Abstract
To listen to brain activity as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translated scalp EEGs into musical notes according to the power law of both EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted the duration of notes in terms of the characteristic frequency, and were further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEGs with eyes closed and open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variety between the eyes-closed (EC) and eyes-open (EO) conditions, and its pitch scale exponents were closer to 1, and thus closer to classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With original materials obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window to look into the brain in an audible, musical way. In fact, as the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintained the essential properties of the brain activity in a more musical style. It can harmonically distinguish different states of brain activity, and therefore provides a method to analyze EEGs from a relaxed audio perspective.
Affiliation(s)
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
|
|
18
|
Lu J, Wu D, Yang H, Luo C, Li C, Yao D. Scale-free brain-wave music from simultaneously EEG and fMRI recordings. PLoS One 2012; 7:e49773. PMID: 23166768; PMCID: PMC3498178; DOI: 10.1371/journal.pone.0049773.
Abstract
In past years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS One 4: e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG was translated into musical pitch according to the power law followed by both, the period of an EEG waveform was translated directly into the duration of a note, and the logarithm of the average power change of the EEG was translated into musical intensity according to Fechner's law. In this work, we propose adopting the simultaneously recorded fMRI signal to control the intensity of the EEG music, so that an EEG-fMRI music is generated by combining two different, simultaneous brain signals. Most importantly, this approach also realizes a power law for musical intensity, as the fMRI signal follows one. The EEG-fMRI music thus makes a step forward in reflecting the physiological processes of the scale-free brain.
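The three mapping rules restated in this abstract (amplitude to pitch, waveform period to note duration, logarithmic power change to intensity per Fechner's law) might be sketched roughly as follows. All constants, the direction of the amplitude-to-pitch mapping, and the function name are illustrative assumptions, not the published calibration:

```python
# Rough sketch of an SFBM-style note translation; constants are assumptions.
import math

def sfbm_note(amplitude_uV, period_s, power_ratio):
    """Translate one EEG waveform into a (pitch, duration, velocity) triple."""
    # Amplitude -> pitch: a logarithmic mapping so that large, rare amplitudes
    # land on rarer pitches, compatible with a power-law pitch distribution.
    pitch = max(0, min(127, int(96 - 12 * math.log2(max(amplitude_uV, 1.0)))))

    # Waveform period -> note duration, used directly (seconds).
    duration_s = period_s

    # Fechner's law: perceived intensity grows with the logarithm of the
    # physical power change; map the log power ratio onto MIDI velocity.
    velocity = max(0, min(127, int(64 + 16 * math.log10(max(power_ratio, 1e-6)))))

    return pitch, duration_s, velocity
```

In the EEG-fMRI variant described above, the velocity term would be driven by the simultaneously recorded fMRI signal instead of the EEG power change alone.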
Affiliation(s)
- Jing Lu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hua Yang
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Sichuan Conservatory of Music, Chengdu, China
- Cheng Luo
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
|
|
19
|
Levitin DJ, Chordia P, Menon V. Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc Natl Acad Sci U S A 2012; 109:3716-3720. PMID: 22355125; PMCID: PMC3309746; DOI: 10.1073/pnas.1113828109.
Abstract
Much of our enjoyment of music comes from its balance of predictability and surprise. Musical pitch fluctuations follow a 1/f power law that precisely achieves this balance. Musical rhythms, especially those of Western classical music, are considered highly regular and predictable, and this predictability has been hypothesized to underlie rhythm's contribution to our enjoyment of music. Are musical rhythms indeed entirely predictable and how do they vary with genre and composer? To answer this question, we analyzed the rhythm spectra of 1,788 movements from 558 compositions of Western classical music. We found that an overwhelming majority of rhythms obeyed a 1/f(β) power law across 16 subgenres and 40 composers, with β ranging from ∼0.5-1. Notably, classical composers, whose compositions are known to exhibit nearly identical 1/f pitch spectra, demonstrated distinctive 1/f rhythm spectra: Beethoven's rhythms were among the most predictable, and Mozart's among the least. Our finding of the ubiquity of 1/f rhythm spectra in compositions spanning nearly four centuries demonstrates that, as with musical pitch, musical rhythms also exhibit a balance of predictability and surprise that could contribute in a fundamental way to our aesthetic experience of music. Although music compositions are intended to be performed, the fact that the notated rhythms follow a 1/f spectrum indicates that such structure is no mere artifact of performance or perception, but rather, exists within the written composition before the music is performed. Furthermore, composers systematically manipulate (consciously or otherwise) the predictability in 1/f rhythms to give their compositions unique identities.
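The core analysis described above, estimating the exponent β of a 1/f^β power spectrum, amounts to a linear fit of log power against log frequency. A minimal sketch, with the caveat that the paper's exact spectral estimator may differ:

```python
# Minimal 1/f^beta exponent estimate: fit log power vs. log frequency.
import numpy as np

def spectral_exponent(signal, fs=1.0):
    """Estimate beta in power ~ 1/f^beta for a sequence
    (e.g. a note-onset rhythm series sampled at rate fs)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]        # drop the DC bin
    power = np.abs(np.fft.rfft(signal))[1:] ** 2      # periodogram
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope   # log power = -beta * log f + c  =>  beta = -slope
```

White noise would give β near 0 (unpredictable), a random walk β near 2 (highly predictable); the β of roughly 0.5 to 1 reported above sits in between, which is the balance of predictability and surprise the abstract describes.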
Affiliation(s)
- Daniel J. Levitin
- Department of Psychology, School of Computer Science, and School of Music, McGill University, Montreal, QC, Canada H3A 1B1
- Parag Chordia
- School of Music, Georgia Institute of Technology, Atlanta, GA 30332
- Vinod Menon
- Program in Neurosciences, Department of Psychiatry and Behavioral Sciences and Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305
|
|
20
|
Lin YP, Wang CH, Jung TP, Wu TL, Jeng SK, Duann JR, Chen JH. EEG-based emotion recognition in music listening. IEEE Trans Biomed Eng 2010; 57:1798-1806. PMID: 20442037; DOI: 10.1109/tbme.2010.2048568.
Abstract
Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an average classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and parietal lobes, consistent with many findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in practical or clinical applications.
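The classification step described above, an SVM over EEG-derived features with per-subject cross-validation, can be sketched with scikit-learn. The synthetic features below stand in for real EEG band power; nothing here is the authors' data or pipeline:

```python
# Sketch of SVM-based emotion classification over (synthetic) EEG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 120, 30           # e.g. 30 spectral features per trial
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 4, size=n_trials)    # four states: joy, anger, sadness, pleasure
X[y == 0] += 1.0                         # inject one separable class effect

# Standardize features, then fit an RBF-kernel SVM under 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

On real data, the feature-selection step the abstract describes would sit before the classifier, ranking features by their subject-independent relevance.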
Affiliation(s)
- Yuan-Pin Lin
- Department of Electrical Engineering, National Taiwan University, Taipei 10617, Taiwan
|
|
21
|
Wu D, Li C, Yin Y, Zhou C, Yao D. Music composition from the brain signal: representing the mental state by music. Comput Intell Neurosci 2010; 2010:267671. PMID: 20300580; PMCID: PMC2837898; DOI: 10.1155/2010/267671.
Abstract
This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain's mental state and of musical emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is estimated from EEG features extracted mainly by wavelet analysis, and the musical arousal level is controlled by parameters such as pitch, tempo, rhythm, and tonality. During composition, some music principles (harmony and structure) were taken into consideration. With EEGs recorded during various sleep stages as an example, the music generated from them showed different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in musical arousal levels was found. This implies that different mental states may be identified from the corresponding music, and thus music generated from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
Affiliation(s)
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Yu Yin
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Changzheng Zhou
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Department of Students' Affairs, Arts Education Centre, University of Electronic Science and Technology of China, Chengdu 610054, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
|
|