1. Cirelli LK, Talukder LS, Kragness HE. Infant attention to rhythmic audiovisual synchrony is modulated by stimulus properties. Front Psychol 2024;15:1393295. PMID: 39027053; PMCID: PMC11256966; DOI: 10.3389/fpsyg.2024.1393295.
Abstract
Musical interactions are a common and multimodal part of an infant's daily experiences. Infants hear their parents sing while watching their lips move and see their older siblings dance along to music playing over the radio. Here, we explore whether 8- to 12-month-old infants associate musical rhythms they hear with synchronous visual displays by tracking their dynamic visual attention to matched and mismatched displays. Visual attention was measured using eye-tracking while infants viewed a screen displaying two videos of a finger tapping at different speeds. These videos were presented side by side while infants listened to an auditory rhythm (high or low pitch) synchronized with one of the two videos. Infants attended more to the low-pitch trials than to the high-pitch trials but did not display a preference for attending to the synchronous hand over the asynchronous hand within trials. Exploratory evidence, however, suggests that tempo, pitch, and rhythmic complexity interactively engage infants' visual attention to a tapping hand, especially when that hand is aligned with the auditory stimulus. For example, when the rhythm was complex and the auditory stimulus was low in pitch, infants attended to the fast hand more when it was aligned with the auditory stream than when it was misaligned. These results suggest that audiovisual integration in rhythmic non-speech contexts is influenced by stimulus properties.
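The dynamic-attention measure described above is, at its core, a looking-time preference score. A minimal sketch of such a score, assuming hypothetical per-trial fixation durations rather than the authors' actual analysis pipeline:

```python
import numpy as np

def sync_preference(fix_sync_ms, fix_async_ms):
    """Proportion of looking time spent on the synchronous display.

    0.5 means no preference; > 0.5 means the audiovisually aligned hand
    attracted more attention. Inputs are per-trial fixation durations in
    milliseconds (hypothetical values, for illustration only).
    """
    fix_sync_ms = np.asarray(fix_sync_ms, dtype=float)
    total = fix_sync_ms + np.asarray(fix_async_ms, dtype=float)
    return fix_sync_ms / np.where(total > 0, total, np.nan)

# Hypothetical per-trial fixation durations (ms) for one infant.
print(sync_preference([3200.0, 2100.0, 2800.0], [2900.0, 2400.0, 2600.0]))
```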
Affiliation(s)
- Laura K. Cirelli: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Labeeb S. Talukder: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Haley E. Kragness: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada; Psychology Department, Bucknell University, Lewisburg, PA, United States
2. Moura N, Fonseca P, Vilas-Boas JP, Serra S. Increased body movement equals better performance? Not always! Musical style determines motion degree perceived as optimal in music performance. Psychol Res 2024;88:1314-1330. PMID: 38329559; PMCID: PMC11142955; DOI: 10.1007/s00426-024-01928-x.
Abstract
Musicians' body behaviour plays a preponderant role in audience perception. We investigated how performers' motion is perceived depending on musical style and musical expertise. To further explore the effect of visual input, stimuli were presented in audio-only, audio-visual, and visual-only conditions. We used motion and audio recordings of expert saxophone players performing two contrasting excerpts (positively and negatively valenced). For each excerpt, stimuli represented five motion degrees with increasing quantity of motion (QoM) and distinct predominant gestures. In the experiment (online and in-person), 384 participants rated the performance recordings for expressiveness, professionalism, and overall quality. Results revealed that, for the positively valenced excerpt, ratings increased as a function of QoM, whilst for the negatively valenced excerpt, the recording with predominant flap motion was favoured. Musicianship did not have a significant effect on motion perception. Concerning multisensory integration, both musicians and non-musicians showed visual dominance for the positively valenced excerpt, whereas for the negatively valenced excerpt, musicians shifted to auditory dominance. Our findings demonstrate that musical style not only determines the degree of movement observers perceive as adequate, but can also promote changes in multisensory integration.
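Quantity of motion is typically derived from motion-capture recordings as displacement per unit time. A minimal sketch under that assumption (the paper's exact QoM definition may differ), using hypothetical marker trajectories:

```python
import numpy as np

def quantity_of_motion(positions, fps=100.0):
    """Simple QoM index: total path length of all markers per second.

    positions: array of shape (frames, markers, 3) in metres.
    """
    step = np.diff(positions, axis=0)            # frame-to-frame displacement
    path = np.linalg.norm(step, axis=-1).sum()   # total distance travelled
    duration_s = (positions.shape[0] - 1) / fps
    return path / duration_s

# Hypothetical 2 s of motion-capture data: 200 frames, 5 markers.
rng = np.random.default_rng(0)
demo = np.cumsum(rng.normal(0, 0.001, size=(200, 5, 3)), axis=0)
print(f"QoM = {quantity_of_motion(demo):.3f} m/s")
```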
Affiliation(s)
- Nádia Moura: Research Centre in Science and Technology of the Arts (CITAR), School of Arts, Universidade Católica Portuguesa, Porto, Portugal; Porto Biomechanics Laboratory (LABIOMEP), Faculty of Sport, University of Porto, Porto, Portugal
- Pedro Fonseca: Porto Biomechanics Laboratory (LABIOMEP), Faculty of Sport, University of Porto, Porto, Portugal
- João Paulo Vilas-Boas: Porto Biomechanics Laboratory (LABIOMEP), Faculty of Sport, University of Porto, Porto, Portugal; Centre of Research, Education, Innovation and Intervention in Sport (CIFI2D), Faculty of Sport, University of Porto, Porto, Portugal
- Sofia Serra: Research Centre in Science and Technology of the Arts (CITAR), School of Arts, Universidade Católica Portuguesa, Porto, Portugal; Instituto de Etnomusicologia-Centro de Estudos em Música e Dança (INET-MD), Departamento de Comunicação e Arte, Universidade de Aveiro, Aveiro, Portugal
3. Liu H, Peng XG, Gao R, Yang K, Zhao YB. Comparative analysis of noise and music exposure on inflammatory responses on lipopolysaccharide-induced septic rats. Hum Exp Toxicol 2024;43:9603271241282584. PMID: 39240701; DOI: 10.1177/09603271241282584.
Abstract
OBJECTIVE Environmental factors such as noise and music can significantly impact physiological responses, including inflammation. This study explored how noise and music exposure affect lipopolysaccharide (LPS)-induced inflammation, with a focus on systemic and organ-specific responses. MATERIALS AND METHODS Twenty-four Wistar rats were divided into four groups (n = 6 per group): a Control group, an LPS group, a noise-exposed group, and a music-exposed group. All rats except the Control group received 10 mg/kg LPS intraperitoneally. Rats in the noise-exposed group were exposed to 95 dB noise, and rats in the music-exposed group listened to Mozart's K. 448 (65-75 dB), for 1 h daily over 7 days. Enzyme-linked immunosorbent assays were used to measure inflammatory cytokines, including tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β), in serum and in lung, liver, and kidney tissues. Western blotting was used to examine phosphorylation of nuclear factor-κB (NF-κB) p65 in the same organ tissues. RESULTS Compared with the Control group, LPS-induced septic rats displayed significantly increased TNF-α and IL-1β levels in serum and in lung, liver, and kidney tissues, as well as markedly elevated p-NF-κB p65 protein expression in lung, liver, and kidney tissues. Noise exposure further amplified these inflammatory markers, whereas music exposure reduced them. CONCLUSION Noise exposure exacerbates inflammation during sepsis by activating the NF-κB pathway and up-regulating inflammatory markers. In contrast, music exposure inhibits NF-κB signaling, indicating a potential therapeutic effect in reducing inflammation.
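The reported group effects rest on standard between-group comparisons of cytokine levels. A minimal sketch of that kind of test, with invented illustrative TNF-α values (not the study's data):

```python
import numpy as np
from scipy import stats

# Invented serum TNF-alpha values (pg/mL) for the four groups (n = 6 each),
# for illustration only.
control = np.array([12, 14, 11, 13, 12, 15], dtype=float)
lps     = np.array([85, 92, 88, 79, 95, 90], dtype=float)
noise   = np.array([110, 118, 104, 121, 115, 109], dtype=float)
music   = np.array([60, 66, 58, 71, 63, 65], dtype=float)

# Omnibus test across all groups, then pairwise tests against LPS alone.
print(stats.f_oneway(control, lps, noise, music))
print(stats.ttest_ind(noise, lps))   # noise amplifies the LPS response
print(stats.ttest_ind(music, lps))   # music attenuates it
```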
Affiliation(s)
- Hu Liu: Department of Emergency and Critical Care Center, Renmin Hospital, Hubei University of Medicine, Hubei, China
- Xing-Guo Peng: Department of Emergency and Critical Care Center, Renmin Hospital, Hubei University of Medicine, Hubei, China
- Ran Gao: Department of Emergency and Critical Care Center, Renmin Hospital, Hubei University of Medicine, Hubei, China
- Kai Yang: Department of Emergency and Critical Care Center, Renmin Hospital, Hubei University of Medicine, Hubei, China
- Yan-Bo Zhao: Department of Emergency and Critical Care Center, Renmin Hospital, Hubei University of Medicine, Hubei, China
4. Andrievskaia P, Berti S, Spaniol J, Keshavarz B. Exploring neurophysiological correlates of visually induced motion sickness using electroencephalography (EEG). Exp Brain Res 2023;241:2463-2473. PMID: 37650899; DOI: 10.1007/s00221-023-06690-x.
Abstract
Visually induced motion sickness (VIMS) is a common phenomenon when using visual devices such as smartphones and virtual reality applications, with symptoms including nausea, fatigue, and headache. To date, the neuro-cognitive processes underlying VIMS are not fully understood. Previous studies using electroencephalography (EEG) have delivered mixed findings, with some reporting an increase in delta and theta power and others reporting increases in alpha and beta frequencies. The goal of the study was to gain further insight into the EEG correlates of VIMS. Participants viewed a VIMS-inducing visual stimulus composed of moving black-and-white vertical bars presented on an array of three adjacent monitors. The EEG was recorded during visual stimulation, and VIMS ratings were collected after each trial using the Fast Motion Sickness Scale. Time-frequency analyses compared the neural activity of participants reporting minimal VIMS (n = 21) and mild-moderate VIMS (n = 12). Results suggested a potential increase in delta power at centro-parietal sites (electrode CP2) and a decrease in alpha power at central sites (electrode Cz) for participants experiencing mild-moderate VIMS compared with those experiencing minimal VIMS. Event-related spectral perturbations (ERSPs) suggested that group differences in EEG activity developed as trial duration increased. These results support the hypothesis that the EEG is sensitive to differences in information processing between VIMS and minimal-VIMS contexts and indicate that it may be possible to identify neurophysiological correlates of VIMS. Differences in EEG activity related to VIMS may reflect differential processing of conflicting visual and vestibular sensory information.
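The delta- and alpha-power contrasts reported here rest on spectral estimates per frequency band. A minimal sketch of band-power extraction using Welch's method, with simulated data standing in for real EEG (the authors' time-frequency and ERSP pipeline is more involved):

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Mean spectral power in [lo, hi] Hz for one channel of one trial."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 0.5 Hz resolution
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Simulated 60 s single-channel recording at 250 Hz (illustrative only).
fs = 250
rng = np.random.default_rng(1)
signal = rng.normal(size=60 * fs)

delta = band_power(signal, fs, 1, 4)    # elevated centro-parietally in VIMS
alpha = band_power(signal, fs, 8, 12)   # reduced centrally in VIMS
print(delta, alpha)
```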
Affiliation(s)
- Polina Andrievskaia: KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, 550 University Avenue, Toronto, ON, M5G 2A2, Canada; Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Stefan Berti: Department of Clinical Psychology and Neuropsychology, Johannes Gutenberg University, Mainz, Germany
- Julia Spaniol: Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Behrang Keshavarz: KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, 550 University Avenue, Toronto, ON, M5G 2A2, Canada; Department of Psychology, Toronto Metropolitan University, Toronto, Canada
5. Frame J, Gugliano M, Bai E, Brielmann A, Belfi AM. Your ears don't change what your eyes like: People can independently report the pleasure of music and images. J Exp Psychol Hum Percept Perform 2023;49:774-785. PMID: 37141037; PMCID: PMC10247479; DOI: 10.1037/xhp0001118.
Abstract
Observers can make independent aesthetic judgments of at least two images presented briefly and simultaneously. However, it is unknown whether this is the case for two stimuli of different sensory modalities. Here, we investigated whether individuals can judge auditory and visual stimuli independently and whether stimulus duration influences such judgments. Participants (N = 120, across two experiments and a replication) saw images of paintings and heard excerpts of music, presented simultaneously for 2 s (Experiment 1) or 5 s (Experiment 2). After the stimuli were presented, participants rated how much pleasure they felt from the stimulus (music, image, or the combined pleasure of both, depending on which was cued) on a 9-point scale. Finally, participants completed a baseline block in which they rated each stimulus in isolation, and we used these baseline ratings to predict ratings of the audiovisual presentations. Across both experiments, root mean square errors (RMSEs) obtained from leave-one-out cross-validation showed that people's ratings of music and images were unbiased by the other, simultaneously presented stimulus, and that combined ratings were best described as the arithmetic mean of the two stimuli's individual ratings from the end of the experiment. This pattern of results replicates previous findings on simultaneously presented images, indicating that participants can ignore the pleasure of an irrelevant stimulus regardless of the sensory modality and the duration of stimulus presentation.
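The winning model here is parameter-free (the arithmetic mean of the two baseline ratings), in which case leave-one-out cross-validation reduces to a plain RMSE over the held-out ratings. A minimal sketch with hypothetical ratings, comparing the mean model against single-stimulus baselines:

```python
import numpy as np

# Hypothetical 9-point pleasure ratings: baseline music-alone ratings,
# baseline image-alone ratings, and combined audiovisual ratings.
music    = np.array([7, 3, 5, 8, 2, 6], dtype=float)
image    = np.array([6, 4, 7, 5, 3, 8], dtype=float)
combined = np.array([6.4, 3.4, 6.2, 6.3, 2.7, 7.1])

def rmse(pred, obs):
    """Root mean square prediction error."""
    return np.sqrt(np.mean((pred - obs) ** 2))

# Parameter-free candidate models for the combined-pleasure rating.
print(rmse((music + image) / 2, combined))  # arithmetic mean: best here
print(rmse(music, combined))                # music-only baseline
print(rmse(image, combined))                # image-only baseline
```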
Affiliation(s)
- Jessica Frame: Department of Psychological Science, Missouri University of Science and Technology
- Elena Bai: Department of Psychological Science, Missouri University of Science and Technology
- Aenne Brielmann: Department of Computational Neuroscience, Max-Planck Institute for Biological Cybernetics
- Amy M. Belfi: Department of Psychological Science, Missouri University of Science and Technology
6. Xu J, Guo X, Liu M, Xu H, Huang J. Self-construal priming modulates sonic seasoning. Front Psychol 2023;14:1041202. PMID: 37077846; PMCID: PMC10106597; DOI: 10.3389/fpsyg.2023.1041202.
Abstract
INTRODUCTION "Sonic seasoning" is the phenomenon whereby music influences consumers' real taste experiences. "Self-construal" refers to how individuals perceive, understand, and interpret themselves. Numerous studies have shown that independent and interdependent self-construal priming can affect a person's cognition and behavior; however, their moderating effect on sonic seasoning remains unclear. METHODS The experiment used a 2 (self-construal priming: independent or interdependent) × 2 (chocolate: milk or dark) × 2 (emotional music: positive or negative) mixed design and explored the moderating role of self-construal priming in the effect of emotional music on taste by comparing participants' evaluations of chocolates while they listened to positive or negative music after the respective priming. RESULTS After independent self-construal priming, participants rated milk chocolate as sweeter when listening to positive music, t(32) = 3.11, p = 0.004, Cohen's d = 0.54, 95% CI = [0.33, 1.61]. In contrast, interdependent self-construal priming led participants to perceive dark chocolate as sweeter when they heard positive music, t(29) = 3.63, p = 0.001, Cohen's d = 0.66, 95% CI = [0.44, 1.56]. DISCUSSION These findings provide evidence relevant to improving people's individual eating experience and enjoyment of food.
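The reported statistics, e.g. t(32) = 3.11 with Cohen's d = 0.54, are within-group comparisons of sweetness ratings (n = 33 implies df = 32). A minimal sketch of that computation with simulated ratings; the paper's exact effect-size formula is not stated, so a paired d_z is assumed here:

```python
import numpy as np
from scipy import stats

def cohens_d_paired(x, y):
    """Paired-samples effect size d_z: mean difference / SD of differences."""
    diff = np.asarray(x) - np.asarray(y)
    return diff.mean() / diff.std(ddof=1)

# Simulated sweetness ratings from n = 33 participants hearing
# positive- vs negative-emotion music (values illustrative only).
rng = np.random.default_rng(7)
positive = rng.normal(6.2, 1.2, 33)
negative = rng.normal(5.5, 1.2, 33)

t, p = stats.ttest_rel(positive, negative)  # df = n - 1 = 32
print(f"t(32) = {t:.2f}, p = {p:.4f}, d_z = {cohens_d_paired(positive, negative):.2f}")
```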
Affiliation(s)
- Jingxian Xu: Department of Psychology, Soochow University, Suzhou, China
- Xiyu Guo: Department of Psychology, Soochow University, Suzhou, China
- Mengying Liu: Department of Psychology, Soochow University, Suzhou, China
- Hui Xu: School of Public Affairs, Zhejiang University, Hangzhou, China
- Jianping Huang (corresponding author): Department of Psychology, Soochow University, Suzhou, China
7. Dong H, Li N, Fan L, Wei J, Xu J. Integrative interaction of emotional speech in audio-visual modality. Front Neurosci 2022;16:797277. PMID: 36440282; PMCID: PMC9695733; DOI: 10.3389/fnins.2022.797277.
Abstract
Emotional cues are expressed in many ways in daily life, and the emotional information we receive is often conveyed through multiple modalities. Successful social interactions require combining multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated: different measures of brain activity have been used to localize the regions involved in the audio-visual integration of emotional information, mainly the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and the study materials rarely contain speech information, so the integration mechanism of emotional speech in the human brain needs further examination. Here, a functional magnetic resonance imaging (fMRI) study with an event-related design explored the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the role of the relevant brain regions, and a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitting model for the ROIs. Only the left insula was detected by all methods, suggesting that it plays an important role in the audio-visual integration of emotional speech. The whole-brain searchlight, modality conjunction, and supra-additive analyses together indicated that the bilateral middle temporal gyrus (MTG), right inferior parietal lobule, and bilateral precuneus may also be involved in the audio-visual integration of emotional speech.
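Representational similarity analysis compares a neural representational dissimilarity matrix (RDM) against model RDMs. A minimal sketch with hypothetical ROI patterns and a valence-based model RDM; the study's weighted-RSA step is not reproduced here:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical ROI patterns: 4 emotional-speech conditions x 50 voxels.
rng = np.random.default_rng(3)
patterns = rng.normal(size=(4, 50))

# Neural RDM: 1 - Pearson r between condition patterns (condensed form).
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM assuming dissimilarity is driven by emotional valence
# (conditions 1-2 positive, 3-4 negative; purely illustrative).
valence = np.array([[1.0], [1.0], [-1.0], [-1.0]])
model_rdm = pdist(valence, metric="euclidean")  # 0 within, 2 across valence

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model fit: Spearman rho = {rho:.2f}, p = {p:.3f}")
```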
Affiliation(s)
- Haibin Dong: Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Na Li: Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Lingzhong Fan: Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jianguo Wei: Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Junhai Xu (corresponding author): Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
8. Sújar A, Martín-Moratinos M, Rodrigo-Yanguas M, Bella-Fernández M, González-Tardón C, Delgado-Gómez D, Blasco-Fontecilla H. Developing Serious Video Games to Treat Attention Deficit Hyperactivity Disorder: Tutorial Guide. JMIR Serious Games 2022;10:e33884. PMID: 35916694; PMCID: PMC9379781; DOI: 10.2196/33884.
Abstract
Video game-based therapeutic interventions have demonstrated some effectiveness in decreasing the symptoms of attention deficit hyperactivity disorder (ADHD). Compared with more traditional strategies within the multimodal treatment of ADHD, video games have certain advantages: they are comfortable, flexible, and cost-efficient. However, which type(s) of video games are most appropriate for this treatment remains a matter of debate, whether existing commercial video games or serious video games specifically constructed to target particular disorders. This guide represents a starting point for developing serious video games aimed at treating ADHD. We summarize the key points that need to be addressed to generate an effective and motivating game-based treatment. Following recommendations from the literature on creating game-based treatments, we describe the development stages of a serious video game for treating ADHD. Game design should consider the interests of future users; game mechanics should be based on cognitive exercises; and therapeutic mechanisms must include the control of difficulty, engagement, motivation, time constraints, and reinforcement. To elaborate this guide, we performed a narrative review focused on the use of video games for the treatment of ADHD and drew on our own experience during the development of the game "The Secret Trail of Moon."
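Of the therapeutic mechanisms listed, difficulty control is the most directly algorithmic. A minimal sketch of one plausible controller, a staircase that keeps the recent success rate near a target; this is an illustration, not the mechanism used in "The Secret Trail of Moon":

```python
def adapt_difficulty(level, recent_successes, target=0.7,
                     step=1, min_level=1, max_level=20):
    """Staircase controller: nudge difficulty so the player's recent
    success rate stays near `target` (challenging but not frustrating).

    All parameter values here are illustrative assumptions.
    """
    rate = sum(recent_successes) / len(recent_successes)
    if rate > target:
        return min(level + step, max_level)
    if rate < target:
        return max(level - step, min_level)
    return level

# Hypothetical session: 8 successes in the last 10 trials -> raise level.
print(adapt_difficulty(level=5, recent_successes=[1] * 8 + [0] * 2))  # 6
```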
Affiliation(s)
- Aarón Sújar: Department of Psychiatry, Hospital Universitario Puerta de Hierro Majadahonda, Majadahonda, Spain; Department of Computer Engineering, Universidad Rey Juan Carlos, Madrid, Spain
- Marina Martín-Moratinos: Department of Psychiatry, Hospital Universitario Puerta de Hierro Majadahonda, Majadahonda, Spain; Faculty of Medicine, Universidad Autónoma de Madrid, Madrid, Spain
- María Rodrigo-Yanguas: Department of Psychiatry, Hospital Universitario Puerta de Hierro Majadahonda, Majadahonda, Spain; Faculty of Medicine, Universidad Autónoma de Madrid, Madrid, Spain
- Marcos Bella-Fernández: Department of Psychiatry, Hospital Universitario Puerta de Hierro Majadahonda, Majadahonda, Spain; Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain; Department of Psychology, Universidad Pontificia de Comillas, Madrid, Spain
- Hilario Blasco-Fontecilla: Department of Psychiatry, Hospital Universitario Puerta de Hierro Majadahonda, Majadahonda, Spain; Faculty of Medicine, Universidad Autónoma de Madrid, Madrid, Spain; ITA Mental Health, Madrid, Spain; Centro de Investigación Biomédica en Red Salud Mental, Madrid, Spain
9. Saiu S, Grosso E. Controlled audio-visual stimulation for anxiety reduction. Comput Methods Programs Biomed 2022;223:106898. PMID: 35780520; DOI: 10.1016/j.cmpb.2022.106898.
Abstract
BACKGROUND AND OBJECTIVE Recent clinical data suggest that 75% of patients undergoing surgery are anxious, despite pharmacological measures to relieve anxiety. As an alternative to drug administration, the scientific literature reports relevant psychophysiological effects of auditory and visual stimulation in reducing preoperative anxiety. The main objective of this study is the development of a portable, computer-controlled device for the simultaneous combined administration of audio-visual stimuli, and the evaluation of this device through the collection and statistical analysis of psychophysiological parameters closely related to the state of anxiety. METHODS A new algorithmic approach for the real-time association of sounds and colours is proposed and implemented on a low-cost architectural platform. The combined administration of auditory and visual stimuli was tested on 220 subjects undergoing dental surgery; psychophysiological parameters were collected and evaluated in four experimental conditions to compare the efficacy of cross-modal stimulation (auditory and visual) against non-pharmacological treatments based on monomodal stimuli (auditory or visual). RESULTS Non-parametric statistical techniques applied to the recorded experimental data show that the experimental conditions differ significantly. Pairwise comparisons between experimental groups show that the combined administration of sounds and colours reduces anxiety level, systolic blood pressure, and heart rate to a significantly greater extent than monomodal stimulation. CONCLUSION The study demonstrates the potential benefits of a device for the combined administration of auditory and visual stimuli. The device proved effective in reducing preoperative anxiety levels, making it a serious candidate for non-pharmacological therapies. The study also encourages deeper investigation of models capable of better capturing the potential of cross-modal stimulation, maximizing the desired effects (relaxation, arousal) on patients awaiting specific medical treatments.
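The abstract does not specify the sound-colour association algorithm, so the following is purely an illustrative mapping, log-scaled pitch to hue and loudness to brightness, of the kind such a device might implement:

```python
import colorsys
import math

def sound_to_rgb(pitch_hz, loudness, f_lo=110.0, f_hi=880.0):
    """Illustrative mapping only: log-scaled pitch -> hue, normalised
    loudness -> brightness. Not the paper's actual algorithm, which is
    not described in the abstract.
    """
    octaves = math.log2(min(max(pitch_hz, f_lo), f_hi) / f_lo)
    hue = octaves / math.log2(f_hi / f_lo) * 0.83  # red (low) to violet (high)
    value = min(max(loudness, 0.0), 1.0)           # loudness clipped to 0..1
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)

print(sound_to_rgb(440.0, 0.8))  # A4 at moderate loudness -> a cyan-blue tone
```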
Affiliation(s)
- Salvatore Saiu: Research Fellow in Computer Science, Computer Vision Laboratory, University of Sassari, Sassari, Italy
- Enrico Grosso: Full Professor in Computer Science, Computer Vision Laboratory, University of Sassari, Sassari, Italy
10. Learning Chinese Classical Music with the Aid of Soundscape by Using Intelligent Network. Comput Intell Neurosci 2022;2022:2085413. PMID: 35602646; PMCID: PMC9122685; DOI: 10.1155/2022/2085413.
Abstract
A soundscape is a sound environment as perceived and understood by listeners in their social or cultural context. To improve the subjective initiative of students of Chinese classical music, this study explores new learning modes and methods based on soundscape and investigates their learning effect using intelligent music software. To examine players' emotional experience before and after learning, 50 music majors and 50 non-music majors were selected. Results show that in the experiment with positively and negatively valenced classical music, most music majors and non-music majors initially reported weak emotional experience, and only a few reported strong emotional experience, reaching 6 points. At the second scoring, most majors scored about 7 points, indicating strong emotional experience, while a few scored about 4 or 9 points, so relatively few majors reported either weak or very strong experience. Overall emotional-experience scores were low in the comparison between non-music majors and music majors, and the second score across the entire experiment was significantly higher than the first, indicating a clear learning effect and a contribution of the intelligent music software and soundscape to the exploration of Chinese classical music.
11. He J, Ren H, Li J, Dong M, Dai L, Li Z, Miao Y, Li Y, Tan P, Gu L, Chen X, Tang J. Deficits in Sense of Body Ownership, Sensory Processing, and Temporal Perception in Schizophrenia Patients With/Without Auditory Verbal Hallucinations. Front Neurosci 2022;16:831714. PMID: 35495040; PMCID: PMC9046910; DOI: 10.3389/fnins.2022.831714.
Abstract
It has been claimed that individuals with schizophrenia have difficulty in self-recognition and, consequently, are unable to identify the sources of their sensory perceptions or thoughts, resulting in delusions, hallucinations, and unusual experiences of body ownership. These deficits also contribute to an enhanced rubber hand illusion (RHI), a body-perception illusion induced by synchronous visual and tactile stimulation. Evidence from RHI paradigms is emerging that auditory information can affect the sense of body ownership, which relies on the processing and integration of multisensory inputs. We therefore reasoned that auditory verbal hallucinations (AVHs), an abnormal auditory percept, could be linked with body ownership, and that the RHI paradigm could be used in patients with AVHs to explore the underlying mechanisms. In this study, we administered the RHI paradigm to 80 patients with schizophrenia (47 with AVHs and 33 without AVHs) and 36 healthy controls (HCs). We conducted the experiment under two conditions (synchronous and asynchronous) and evaluated the RHI effects with both objective and subjective measures. Both patient groups experienced the RHI more quickly and strongly than HCs, and the RHI effects of patients with AVHs were significantly smaller than those of patients without AVHs. Another important finding was that patients with AVHs did not show the usual reduction of the RHI under asynchronous conditions. These results highlight disturbances of the sense of body ownership in schizophrenia patients with and without AVHs, and their association with AVHs, and suggest that patients with AVHs may have multisensory processing dysfunctions and internal timing deficits.
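A common objective RHI measure is proprioceptive drift, the shift in perceived hand position toward the rubber hand after stimulation; the abstract does not name its objective measure, so this sketch of the described group comparison uses invented drift values purely for illustration:

```python
import numpy as np
from scipy import stats

# Invented proprioceptive-drift values (cm toward the rubber hand),
# illustrating the group comparison only, not the study's data.
avh      = np.array([2.1, 3.0, 1.8, 2.6, 2.2])   # patients with AVHs
no_avh   = np.array([3.4, 4.1, 3.0, 3.8, 3.6])   # patients without AVHs
controls = np.array([1.2, 1.8, 0.9, 1.5, 1.1])   # healthy controls

# Pattern the abstract reports: both patient groups exceed controls,
# while the AVH group shows a smaller effect than the non-AVH group.
print(stats.ttest_ind(avh, no_avh))
print(stats.ttest_ind(avh, controls))
```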
Affiliation(s)
- Jingqi He: Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Honghong Ren: Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Jinguang Li: Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China; Affiliated Wuhan Mental Health Center, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Min Dong: Guangdong Mental Health Center, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Lulin Dai: Department of Neurosurgery, Center for Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhijun Li: Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Yating Miao: Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Yunjin Li: Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha, China
- Peixuan Tan: Department of Medical Psychology and Behavioral Medicine, School of Public Health, Guangxi Medical University, Nanning, China
- Lin Gu: RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; Research Center for Advanced Science and Technology (RCAST), University of Tokyo, Tokyo, Japan
- Xiaogang Chen (corresponding author): Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Jinsong Tang (corresponding author): Department of Psychiatry, Sir Run-Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China; Zigong Mental Health Center, Zigong, China
12. Subliminal audio-visual temporal congruency in music videos enhances perceptual pleasure. Neurosci Lett 2022;779:136623. DOI: 10.1016/j.neulet.2022.136623.
13. Lange EB, Fünderich J, Grimm H. Multisensory integration of musical emotion perception in singing. Psychol Res 2022;86:2099-2114. PMID: 35001181; PMCID: PMC9470688; DOI: 10.1007/s00426-021-01637-9.
Abstract
We investigated how visual and auditory information contribute to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented in three different modes: auditory, visual, and audio-visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio-visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged, and experts made more use of the visual cues than laypersons. Consistency measures between unimodal and multimodal presentations did not explain the visual dominance. Evaluation of seriousness served as a control: the unimodal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not fall back on audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.
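One simple way to quantify the visual dominance described is to regress multimodal ratings on the two unimodal ratings and compare the weights. A minimal sketch with simulated ratings; the authors' consistency measures differ in detail:

```python
import numpy as np

# Simulated expressiveness ratings for 8 performances: audio-only,
# visual-only, and audio-visual (all values illustrative).
rng = np.random.default_rng(5)
audio = rng.uniform(1, 7, 8)
visual = rng.uniform(1, 7, 8)
av = 0.3 * audio + 0.7 * visual + rng.normal(0, 0.2, 8)

# Least-squares fit: av ~ w_a * audio + w_v * visual + intercept.
X = np.column_stack([audio, visual, np.ones_like(audio)])
w_a, w_v, _ = np.linalg.lstsq(X, av, rcond=None)[0]
print(f"auditory weight {w_a:.2f} vs visual weight {w_v:.2f}")
# w_v > w_a indicates visual dominance for these simulated ratings.
```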
Affiliation(s)
- Elke B. Lange: Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322 Frankfurt/M., Germany
- Jens Fünderich: Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322 Frankfurt/M., Germany; University of Erfurt, Erfurt, Germany
- Hartmut Grimm: Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322 Frankfurt/M., Germany
14. Beccacece L, Abondio P, Cilli E, Restani D, Luiselli D. Human Genomics and the Biocultural Origin of Music. Int J Mol Sci 2021;22:5397. PMID: 34065521; PMCID: PMC8160972; DOI: 10.3390/ijms22105397.
Abstract
Music is an exclusive feature of humankind. It can be considered a form of universal communication, only partly comparable to the vocalizations of songbirds. Many lines of research in this field address the origins of music, as well as the genetic bases of musicality. On one hand, several hypotheses have been made about the evolution of music and its role, but the debate continues, and comparative studies suggest a gradual evolution in primates of some abilities underlying musicality. On the other hand, genome-wide studies highlight several genes associated with musical aptitude, confirming a genetic basis for the different musical skills that humans show. Moreover, some genes associated with musicality are also involved in singing and song learning in songbirds, suggesting a likely evolutionary convergence between humans and songbirds. This comprehensive review presents the concept of music as a sociocultural manifestation within the current debate about its biocultural origin and evolutionary function, in the context of the most recent discoveries related to the cross-species genetics of musical production and perception.
Affiliation(s)
- Livia Beccacece: Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Paolo Abondio: Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Elisabetta Cilli: Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy
- Donatella Restani: Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy
- Donata Luiselli: Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy
15. Fedotchev A, Parin S, Savchuk L, Polevaya S. Mechanisms of Light and Music Stimulation Controlled by a Person's Own Brain and Heart Biopotentials or Those of Another Person. Sovrem Tekhnologii Med 2020;12:23-28. PMID: 34795989; PMCID: PMC8596279; DOI: 10.17691/stm2020.12.4.03.
Abstract
The aim of the study was to carry out a comparative analysis of the effects observed in subjects exposed to light and music stimulation controlled by their own brain and heart biopotentials (closed-loop method) or by the biopotentials of another person. MATERIALS AND METHODS Volunteers under stress participated in two experiments in pairs. In the first experiment, light and music stimulation was generated for each subject in a pair from their own brain and heart biopotentials; in the second experiment, it was generated from the biopotentials of the other subject. RESULTS Both types of exposure reduced the tension of the body's regulatory systems, reduced stress levels, and improved emotional state through the mechanisms of multisensory integration and neuroplasticity. A significant increase in the power of the main EEG rhythms, accompanied by significant positive changes in psychological test results and positive emotional responses to stimulation, was observed only when the light and music stimulation was controlled by the subjects' own brain and heart biopotentials. These data are attributable to the integration of the perception and processing of personally significant interoceptive signals into the resonance mechanisms of the central nervous system, which normalizes functional state during stimulation. CONCLUSION The data obtained can be used to develop effective methods of personalized light and music stimulation aimed at the timely elimination of functional disorders and the return of the human body to homeostasis.
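The closed-loop principle is that ongoing EEG and cardiac activity drive the stimulation parameters in real time. The paper does not specify its transformation, so the following is an illustrative rule only, mapping relative alpha power and heart rate to a stimulus intensity:

```python
import numpy as np
from scipy.signal import welch

def stimulation_gain(eeg_window, rr_intervals_s, fs=250):
    """Map ongoing biopotentials to a stimulus intensity in [0, 1].

    Illustrative rule only; the paper does not specify how brain and
    heart signals are transformed into light and music parameters.
    """
    freqs, psd = welch(eeg_window, fs=fs, nperseg=len(eeg_window))
    alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()
    broad = psd[(freqs >= 1) & (freqs <= 30)].mean()
    relaxation = np.clip(alpha / broad, 0.0, 1.0)     # more alpha when calm
    bpm = 60.0 / np.mean(rr_intervals_s)              # heart rate
    arousal = np.clip((bpm - 60.0) / 40.0, 0.0, 1.0)  # crude arousal index
    # Drive stimulation harder while aroused, easing off as relaxation rises.
    return float((1.0 - relaxation) * (0.5 + 0.5 * arousal))

# Simulated 2 s EEG window at 250 Hz plus three R-R intervals (illustrative).
rng = np.random.default_rng(9)
print(stimulation_gain(rng.normal(size=500), rr_intervals_s=[0.85, 0.90, 0.88]))
```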
Affiliation(s)
- A.I. Fedotchev: Leading Researcher, Laboratory of Reception Mechanisms, Institute of Cell Biophysics, Russian Academy of Sciences, 3 Institutskaya St., Pushchino, Moscow Region, 142290, Russia
- S.B. Parin: Professor, Department of Psychophysiology, National Research Lobachevsky State University of Nizhni Novgorod, 23 Prospekt Gagarina, Nizhny Novgorod, 603950, Russia
- L.V. Savchuk: PhD Student, National Research Lobachevsky State University of Nizhni Novgorod, 23 Prospekt Gagarina, Nizhny Novgorod, 603950, Russia
- S.A. Polevaya: Head of the Department of Psychophysiology, National Research Lobachevsky State University of Nizhni Novgorod, 23 Prospekt Gagarina, Nizhny Novgorod, 603950, Russia