1. Norata D, Broggi S, Alvisi L, Lattanzi S, Brigo F, Tinuper P. The EEG pen-on-paper sound: History and recent advances. Seizure 2023; 107:67-70. [PMID: 36965379] [DOI: 10.1016/j.seizure.2023.03.011]
Abstract
The electroencephalogram (EEG) is one of the most useful technologies for brain research and clinical neurology, characterized by non-invasiveness and high temporal resolution. The acquired traces are displayed visually, but various studies have investigated translating brain waves into sound, a process called sonification. Several articles on the sonification of EEG traces have been published since 1934, in an attempt to identify the "brain-sound." However, for a long time this sonification technique was not used for clinical purposes. The analog EEG was in fact already equipped with an auditory output, although rarely mentioned in scientific papers: the pen-on-paper noise made by the writer unit. EEG technologists often relied on the sound the pens made on paper to facilitate diagnosis. This article provides a sample of analog video-EEG recordings with audio support, illustrating the strengths of combined visual-and-auditory detection of different types of seizures. The purpose of the present article is to illustrate how the analog EEG "sounded" and to highlight the advantages of this pen-writing noise. It was considered so useful that early digital EEG devices could be equipped with special software to duplicate it digitally. Even today, sonification can be regarded as an attempt to extend EEG practice through auditory neurofeedback, with applications in therapeutic interventions, cognitive improvement, and basic research.
Affiliation(s)
- Davide Norata
- Neurological Clinic and Stroke Unit, Department of Experimental and Clinical Medicine (DiMSC), Marche Polytechnic University, Via Conca 71, Ancona 60020, Italy
- Serena Broggi
- Neurological Clinic and Stroke Unit, Department of Experimental and Clinical Medicine (DiMSC), Marche Polytechnic University, Via Conca 71, Ancona 60020, Italy
- Lara Alvisi
- Dipartimento di Scienze Biomediche e Neuromotorie, University of Bologna, Bologna, Italy; IRCCS Istituto delle Scienze Neurologiche di Bologna, Epilepsy Center (full member of the European Reference Network EpiCARE), Bologna, Italy
- Simona Lattanzi
- Neurological Clinic and Stroke Unit, Department of Experimental and Clinical Medicine (DiMSC), Marche Polytechnic University, Via Conca 71, Ancona 60020, Italy
- Francesco Brigo
- Department of Neurology, Hospital of Merano (SABES-ASDAA), Merano-Meran, Italy
- Paolo Tinuper
- Dipartimento di Scienze Biomediche e Neuromotorie, University of Bologna, Bologna, Italy; IRCCS Istituto delle Scienze Neurologiche di Bologna, Epilepsy Center (full member of the European Reference Network EpiCARE), Bologna, Italy
2. Belo J, Clerc M, Schön D. EEG-Based Auditory Attention Detection and Its Possible Future Applications for Passive BCI. Front Comput Sci 2021. [DOI: 10.3389/fcomp.2021.661178]
Abstract
The ability to discriminate and attend to one specific sound source in a complex auditory environment is a fundamental skill for efficient communication: it allows us to follow a family conversation or talk with a friend in a bar. This ability is challenged in hearing-impaired individuals, and more precisely in those with a cochlear implant (CI): due to the limited spectral resolution of the implant, auditory perception remains quite poor in a noisy environment or in the presence of simultaneous auditory sources. Recent methodological advances now make it possible to detect, on the basis of neural signals, which auditory stream within a set of multiple concurrent streams an individual is attending to. This approach, called EEG-based auditory attention detection (AAD), builds on fundamental research findings demonstrating that, in a multi-speech scenario, cortical tracking of the envelope of the attended speech is enhanced compared to the unattended speech. Following these findings, other studies showed that it is possible to use EEG/MEG (electroencephalography/magnetoencephalography) to explore auditory attention during speech listening in a cocktail-party-like scenario. Overall, these findings make it possible to conceive next-generation hearing aids combining customary technology and AAD. Importantly, AAD also has great potential in the context of passive BCI, in education, and in interactive music performances. In this mini review, we first present the different approaches to AAD and the main limitations of the overall concept. We then discuss its potential applications in the world of non-clinical passive BCI.
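The stimulus-reconstruction approach this abstract describes can be sketched in a few lines: train a linear (ridge) decoder that maps time-lagged EEG to the attended speech envelope, then label new data by which candidate envelope the reconstruction correlates with most. This is a minimal illustration on synthetic data; the lag count, ridge penalty, and channel count are illustrative choices, not parameters from the review.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Build a time-lagged design matrix from EEG (samples x channels)."""
    n_samples, n_chan = eeg.shape
    X = np.zeros((n_samples, n_chan * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_chan:(lag + 1) * n_chan] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=8, ridge=1e2):
    """Ridge regression mapping lagged EEG to the attended speech envelope."""
    X = lagged(eeg, n_lags)
    I = np.eye(X.shape[1])
    return np.linalg.solve(X.T @ X + ridge * I, X.T @ envelope)

def detect_attended(eeg, env_a, env_b, w, n_lags=8):
    """Reconstruct an envelope from EEG; pick the stream it correlates with most."""
    rec = lagged(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

In practice AAD decoders are trained per subject on real cortical responses; the linear-decoder-plus-correlation decision rule is the part this sketch keeps.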
3. New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music. Entropy 2020; 22:e22121384. [PMID: 33297582] [PMCID: PMC7762429] [DOI: 10.3390/e22121384]
Abstract
Interactive music uses wearable sensors (i.e., gestural interfaces, GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has become important for the art form, because ML helps process the complex biometric datasets produced by GIs when predicting musical actions (termed performance gestures), allowing musicians to create novel interactions with digital media. Wekinator is ML software popular among artists that lets users train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models, but it does not inform the optimal choice of ML model within music or compare model performance, and Wekinator offers several ML models. To address this, we used Wekinator with the Myo armband GI and studied three performance gestures for piano practice. We trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy, and whether optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs among continuous models, optimisation can occur, and gesture representation disparately affects model mapping behaviour, impacting music practice.
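The core pipeline here, feature vectors from a wearable sensor fed to a supervised classifier trained by demonstration, can be illustrated without Wekinator itself. The toy nearest-centroid model and synthetic 8-channel EMG-like feature vectors below are hypothetical stand-ins, not the paper's models or data:

```python
import numpy as np

class NearestCentroid:
    """Toy supervised gesture classifier: one centroid per gesture label."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        # distance from each sample to each centroid; pick the nearest label
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]
```

Wekinator's contribution is wrapping this train-by-demonstration loop (record gesture, label it, retrain, map output to sound) in a real-time interface; the classifier itself can be any supervised model.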
4. Gao D, Long S, Yang H, Cheng Y, Guo S, Yu Y, Liu T, Dong L, Lu J, Yao D. SWS Brain-Wave Music May Improve the Quality of Sleep: An EEG Study. Front Neurosci 2020; 14:67. [PMID: 32116514] [PMCID: PMC7026372] [DOI: 10.3389/fnins.2020.00067]
Abstract
Aim: This study investigated the neural mechanisms by which brain-wave music affects sleep quality.
Background: Sleep disorders are a common health problem in our society and may result in fatigue, depression, and problems in daytime functioning. Previous studies have shown that brain-wave music generated from electroencephalography (EEG) signals can affect our nervous system emotionally and have positive effects on sleep. However, the neural mechanisms of brain-wave music on sleep quality need to be clarified.
Methods: A total of 33 young participants were recruited and randomly divided into three groups. The participants listened to rapid eye movement (REM) brain-wave music (Group 1: 13 subjects), slow-wave sleep (SWS) brain-wave music (Group 2: 11 subjects), or white noise (WN) (Control Group: 9 subjects) for 20 min before bedtime for 6 days. EEG and other physiological signals were recorded by polysomnography.
Results: Sleep efficiency increased in the SWS group but decreased in the REM and WN groups; the sleep efficiency in the SWS group improved [t(10) = -1.943, p = 0.076]. In the EEG power spectral density analysis, delta power spectral density increased in the REM and control groups but decreased in the SWS group [F(2,31) = 7.909, p = 0.005]. In the network analysis, functional connectivity (FC), assessed with Pearson correlation coefficients, showed that connectivity strength between the left frontal lobe (F3) and left parietal lobe (C3) decreased in the SWS group [t(10) = 1.969, p = 0.073]. In addition, in the SWS group there was a negative correlation between this frontal-parietal FC and sleep latency (r = -0.527, p = 0.064).
Conclusion: Slow-wave sleep brain-wave music may have a positive effect on sleep quality, while REM brain-wave music and WN may not. Furthermore, the better sleep quality might be caused by a decrease in the power spectral density of the delta band of EEG and an increase in the FC between the left frontal lobe and the left parietal lobe. SWS brain-wave music could be a safe and inexpensive method for clinical use if confirmed by more data.
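The two EEG measures this study leans on, band-limited power spectral density and Pearson-correlation functional connectivity, are simple to compute. A minimal sketch, using a plain periodogram rather than the study's exact spectral pipeline (the sampling rate and band edges are illustrative):

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Periodogram power of `sig` integrated over the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() * (freqs[1] - freqs[0])

def functional_connectivity(sig_a, sig_b):
    """FC as the Pearson correlation between two channel time series."""
    return np.corrcoef(sig_a, sig_b)[0, 1]
```

Delta-band power would be `band_power(sig, fs, 0.5, 4.0)`, and the F3-C3 connectivity in the abstract corresponds to `functional_connectivity` applied to those two channels.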
Affiliation(s)
- Dongrui Gao
- School of Computer Science, Chengdu University of Information Technology, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Siyu Long
- Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hua Yang
- Department of Composition, Sichuan Conservatory of Music, Chengdu, China
- Yibo Cheng
- Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Sijia Guo
- Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Yue Yu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
- Tiejun Liu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Li Dong
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Jing Lu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China; Center for Information in Biomedicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
5. Wu D. Hearing the Sound in the Brain: Influences of Different EEG References. Front Neurosci 2018; 12:148. [PMID: 29593487] [PMCID: PMC5859362] [DOI: 10.3389/fnins.2018.00148]
Abstract
If the scalp potential signals, the electroencephalogram (EEG), are due to neural "singers" in the brain, how could we listen to them with less distortion? One crucial point is that the data recorded on the scalp should be faithful and accurate; thus, the choice of reference electrode is a vital factor determining the faithfulness of the data. In this study, music derived from scalp data was compared across three different references: the reference electrode standardization technique (REST, an approximate zero reference), the average reference (AR), and the linked mastoids reference (LM). Classic music pieces in waveform format were used as simulated sources inside a head model and were forward-calculated to the scalp as standard potential recordings, i.e., waveform-format music from the brain with a true zero reference. This scalp music was then re-referenced to REST, AR, and LM and compared with the original forward data (true zero reference). For real data, EEG recorded in an orthodontic pain control experiment was used to generate music with the three references, and the scale-free index (SFI) of the resulting music pieces was compared. The results showed that in the simulation with only one source, different references did not change the music/waveform; for two or more sources, REST provided the music/waveform most faithful to the original inside the brain, and the distortions caused by AR and LM depended on the spatial locations of both the sources and the scalp electrodes. The brainwave music from the real EEG data showed that REST and AR made the SFI differences between the two states more distinguishable, and the frontal region was found to be the main region producing the music. In conclusion, REST can approximately reconstruct the true signals, and it can help us listen to the true voice of the neural singers in the brain.
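Two of the three references compared above are simple linear transforms of the channel data and can be sketched directly; REST is omitted here because it requires a forward head model. The channel layout and mastoid indices below are illustrative assumptions:

```python
import numpy as np

def average_reference(eeg):
    """AR: subtract the instantaneous mean across channels (rows = channels)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def linked_mastoids_reference(eeg, m1, m2):
    """LM: subtract the mean of the two mastoid channels from every channel."""
    ref = (eeg[m1] + eeg[m2]) / 2.0
    return eeg - ref
```

Because each transform changes every channel, any downstream sonification inherits the choice of reference, which is exactly the distortion the study quantifies.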
Affiliation(s)
- Dan Wu
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; The Key Laboratory for NeuroInformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
6. Daly I, Williams D, Kirke A, Weaver J, Malik A, Hwang F, Miranda E, Nasuto SJ. Affective brain–computer music interfacing. J Neural Eng 2016; 13:046022. [DOI: 10.1088/1741-2560/13/4/046022]
7.
Abstract
Down through the ages, music has been universally valued for its therapeutic properties, based on the psychological and physiological responses it elicits in humans. However, the underlying mechanisms of these psychological and physiological responses have been poorly identified and defined. Without clarification, a concept can be misused, thereby diminishing its importance for application to nursing research and practice. The purpose of this article was to clarify the concept of music therapy based on Walker and Avant's concept analysis strategy. A review of recent nursing and health-related literature covering the years 2007-2014 was performed on the concepts of music, music therapy, preferred music, and individualized music. As a result of the search, the attributes, antecedents, and consequences of music therapy were identified, defined, and used to develop a conceptual model of music therapy. The conceptual model provides direction for developing music interventions for nursing research and practice, to be tested in various settings to improve various patient outcomes. Based on Walker and Avant's concept analysis strategy, model and contrary cases are included. Implications for future nursing research and practice using the psychological and physiological responses to music therapy are discussed.
8
|
Wu D, Li C, Yao D. Scale-free brain quartet: artistic filtering of multi-channel brainwave music. PLoS One 2013; 8:e64046. [PMID: 23717527 PMCID: PMC3661572 DOI: 10.1371/journal.pone.0064046] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2013] [Accepted: 04/07/2013] [Indexed: 11/19/2022] Open
Abstract
To listen to brain activity as a piece of music, we previously proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEG into music notes according to the power law followed by both EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEG with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted the duration of notes in terms of the characteristic frequency, and further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEG with eyes closed and eyes open from 40 subjects was used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variety between the eyes-closed (EC) and eyes-open (EO) conditions, and its pitch scale exponents were closer to 1, more closely approximating classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original materials obtained from multi-channel EEG, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opens a new window for looking into the brain in an audible, musical way. In fact, since the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintains the essential properties of the brain activity in a more musical style. It might harmonically distinguish different states of brain activity, and it therefore provides a method to analyze EEG from a relaxed, auditory perspective.
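The two filters named in the abstract can be illustrated with a toy version: snap each MIDI pitch to the nearest tone of a chosen key (tonality filter) and quantize note durations to a beat grid (beat filter). The key, grid size, and tie-breaking below are illustrative choices, not the authors' actual filter parameters:

```python
MAJOR = {0, 2, 4, 5, 7, 9, 11}  # semitone offsets of the major scale

def snap_to_key(pitch, key_root=60):
    """Tonality filter: move a MIDI pitch to the nearest pitch in the key."""
    candidates = [pitch + d for d in range(-6, 7)
                  if (pitch + d - key_root) % 12 in MAJOR]
    return min(candidates, key=lambda p: abs(p - pitch))

def quantize_duration(duration, grid=0.25):
    """Beat filter: snap a duration (in beats) to the grid, keeping >= one step."""
    return max(grid, round(duration / grid) * grid)
```

Applied note by note to the SFBM-generated MIDI sequences, filters of this kind turn atonal, free-duration material into something closer to conventional music while leaving the EEG-derived note order intact.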
Affiliation(s)
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
9. Lu J, Wu D, Yang H, Luo C, Li C, Yao D. Scale-free brain-wave music from simultaneously EEG and fMRI recordings. PLoS One 2012; 7:e49773. [PMID: 23166768] [PMCID: PMC3498178] [DOI: 10.1371/journal.pone.0049773]
Abstract
In past years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS One 4:e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into music pitch according to the power law followed by both, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into music intensity according to Fechner's law. In this work, we propose adopting the simultaneously recorded fMRI signal to control the intensity of the EEG music, so that EEG-fMRI music is generated by combining two different, simultaneous brain signals. Most importantly, this approach also realizes the power law for music intensity, as the fMRI signal follows it. The EEG-fMRI music thus takes a step forward in reflecting the physiological processes of the scale-free brain.
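The three mapping rules summarized in the abstract (amplitude to pitch via a power-law-style scale, waveform period to note duration, log power to intensity via Fechner's law) translate directly into code. This hypothetical sketch uses a logarithmic pitch map and MIDI velocity for intensity; the ranges and constants are chosen for illustration, not taken from the paper:

```python
import math

def amplitude_to_pitch(amp, amp_min, amp_max, lo=48, hi=84):
    """Map an EEG amplitude to a MIDI pitch on a logarithmic scale."""
    x = (math.log(amp) - math.log(amp_min)) / (math.log(amp_max) - math.log(amp_min))
    return round(lo + x * (hi - lo))

def period_to_duration(period_s):
    """The period of one EEG waveform becomes the note duration (seconds)."""
    return period_s

def power_to_velocity(power, power_ref):
    """Fechner's law: perceived intensity grows with the log of the stimulus."""
    v = 64 + 20 * math.log10(power / power_ref)
    return int(min(127, max(1, v)))
```

In the EEG-fMRI variant described above, the `power` driving the last function would come from the simultaneously recorded fMRI signal rather than from EEG power.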
Affiliation(s)
- Jing Lu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Dan Wu
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hua Yang
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Sichuan Conservatory of Music, Chengdu, China
- Cheng Luo
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
10. Mikutta C, Altorfer A, Strik W, Koenig T. Emotions, Arousal, and Frontal Alpha Rhythm Asymmetry During Beethoven's 5th Symphony. Brain Topogr 2012; 25:423-30. [PMID: 22534936] [DOI: 10.1007/s10548-012-0227-0]