1. Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023;44:188-210. PMID: 37122884; PMCID: PMC10147513; DOI: 10.1055/s-0043-1766105.
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works, be able to summarize its uses for listening effort research, and be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
2. Rovetti J, Sumantry D, Russo FA. Exposure to nonnative-accented speech reduces listening effort and improves social judgments of the speaker. Sci Rep 2023;13:2808. PMID: 36797318; PMCID: PMC9935874; DOI: 10.1038/s41598-023-29082-1.
Abstract
Prior research has revealed a native-accent advantage, whereby nonnative-accented speech is more difficult to process than native-accented speech. Nonnative-accented speakers also experience more negative social judgments. In the current study, we asked three questions. First, does exposure to nonnative-accented speech increase speech intelligibility or decrease listening effort, thereby narrowing the native-accent advantage? Second, does lower intelligibility or higher listening effort contribute to listeners' negative social judgments of speakers? Third and finally, does increased intelligibility or decreased listening effort with exposure to speech bring about more positive social judgments of speakers? To address these questions, normal-hearing adults listened to a block of English sentences spoken with a native accent and a block spoken with a nonnative accent. We found that once participants were accustomed to the task, intelligibility was greater for nonnative-accented speech and increased similarly with exposure for both accents. However, listening effort decreased only for nonnative-accented speech, soon reaching the level of native-accented speech. In addition, lower intelligibility and higher listening effort were associated with lower ratings of speaker warmth, speaker competence, and willingness to interact with the speaker. Finally, competence ratings increased over time to a similar extent for both accents, with this relationship fully mediated by intelligibility and listening effort. These results offer insight into how listeners process and judge unfamiliar speakers.
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Western University, London, ON N6A 3K7, Canada; Department of Psychology, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- David Sumantry
- Department of Psychology, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
3. Sharma VV, Thaut M, Russo FA, Alain C. Absolute pitch: neurophysiological evidence for early brain activity in prefrontal cortex. Cereb Cortex 2023;33:6465-6473. PMID: 36702477; DOI: 10.1093/cercor/bhac517.
Abstract
Absolute pitch (AP) is the ability to rapidly label pitch without an external reference. The speed of AP labeling may be related to faster sensory processing. We compared time needed for auditory processing in AP musicians, non-AP musicians, and nonmusicians (NM) using high-density electroencephalographic recording. Participants responded to pure tones and sung voice. Stimuli evoked a negative deflection peaking at ~100 ms (N1) post-stimulus onset, followed by a positive deflection peaking at ~200 ms (P2). N1 latency was shortest in AP, intermediate in non-AP musicians, and longest in NM. Source analyses showed decreased auditory cortex and increased frontal cortex contributions to N1 for complex tones compared with pure tones. Compared with NM, AP musicians had weaker source currents in left auditory cortex but stronger currents in left inferior frontal gyrus (IFG) during N1, and stronger currents in left IFG during P2. Compared with non-AP musicians, AP musicians exhibited stronger source currents in right insula and left IFG during N1, and stronger currents in left IFG during P2. Non-AP musicians had stronger N1 currents in right auditory cortex than nonmusicians. Currents in left IFG and left auditory cortex were correlated to response times exclusively in AP. Findings suggest a left frontotemporal network supports rapid pitch labeling in AP.
Affiliation(s)
- Vivek V Sharma
- Neurosciences and Mental Health, Research Institute, Hospital for Sick Children, 686 Bay Street, Toronto, ON M5G 0A8, Canada
- Michael Thaut
- Music and Health Sciences, Faculty of Music, University of Toronto, 90 Wellesley Street West, Toronto, ON M5S 1C5, Canada
- Frank A Russo
- Department of Psychology, Toronto Metropolitan University, 350 Victoria Street, Toronto, ON M5B 2K3, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Health Sciences, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
4. Russo FA, Mallik A, Thomson Z, de Raadt St. James A, Dupuis K, Cohen D. Developing a music-based digital therapeutic to help manage the neuropsychiatric symptoms of dementia. Front Digit Health 2023;5:1064115. PMID: 36744277; PMCID: PMC9895844; DOI: 10.3389/fdgth.2023.1064115.
Abstract
The greying of the world is leading to a rapid acceleration in both the healthcare costs and caregiver burden that are associated with dementia. There is an urgent need to develop new, easily scalable modalities of support. This perspective paper presents the theoretical background, rationale, and development plans for a music-based digital therapeutic to manage the neuropsychiatric symptoms of dementia, particularly agitation and anxiety. We begin by presenting the findings of a survey we conducted with key opinion leaders. The findings highlight the value of a music-based digital therapeutic for treating neuropsychiatric symptoms, particularly agitation and anxiety. We then consider the neural substrates of these neuropsychiatric symptoms before going on to evaluate randomized control trials on the efficacy of music-based interventions in their treatment. Finally, we present our development plans for the adaptation of an existing music-based digital therapeutic that was previously shown to be efficacious in the treatment of adult anxiety symptoms.
Affiliation(s)
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada; KITE, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; LUCID Inc., Toronto, ON, Canada. Correspondence: Frank A. Russo
- Kate Dupuis
- Center for Elder Research, Sheridan College, Oakville, ON, Canada
- Dan Cohen
- Right to Music, New York, NY, United States
5. Good A, Earle E, Vezer E, Gilmore S, Livingstone S, Russo FA. Community Choir Improves Vocal Production Measures in Individuals Living with Parkinson's Disease. J Voice 2023:S0892-1997(22)00391-5. PMID: 36642592; DOI: 10.1016/j.jvoice.2022.12.001.
Abstract
OBJECTIVES Parkinson's disease (PD) is a neurodegenerative disease leading to motor impairments and dystonia across diverse muscle groups, including the vocal muscles. The vocal production challenges associated with PD have received considerably less research attention than the primary gross motor symptoms of the disease, despite having a substantial effect on quality of life. Increasingly, people living with PD are discovering group singing as an asset-based approach to community building that is purported to strengthen vocal muscles and improve vocal quality. STUDY DESIGN/METHODS The present study investigated the impact of community choir on vocal production in people living with PD across two sites. Prior to and immediately following a 12-week community choir at each site, vocal testing included a range of vocal-acoustic measures, including lowest and highest achievable pitch, duration of phonation, loudness, jitter, and shimmer. RESULTS Group singing significantly improved some, though not all, measures of vocal production: lowest pitch (both groups), duration of phonation (both groups), intensity (one group), jitter (one group), and shimmer (both groups). CONCLUSIONS These findings support community choir as a feasible and scalable complementary approach to managing the vocal production challenges associated with PD.
Affiliation(s)
- Arla Good
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario
- Esztella Vezer
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario
- Sean Gilmore
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario
- Steven Livingstone
- Department of Computer Science, Ontario Tech University, Oshawa, Ontario
- Frank A Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario
6. Good A, Peets KF, Choma BL, Russo FA. Singing foreign songs promotes shared common humanity in elementary school children. J Appl Soc Psychol 2022. DOI: 10.1111/jasp.12917.
Affiliation(s)
- Arla Good
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario, Canada
- Kathleen F. Peets
- School of Early Childhood Studies, Toronto Metropolitan University, Toronto, Ontario, Canada
- Becky L. Choma
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario, Canada
7
Abstract
Interindividual differences in music-related reward have been characterized as involving five main facets: musical seeking, emotion evocation, mood regulation, social reward, and sensory-motor. An interesting concept related to how humans decode music as a rewarding experience is music transcendence or absorption (i.e., music-driven states of complete immersion, including momentary loss of self-consciousness or even time-space disorientation). Here, we investigated the relation between previously characterized facets of music reward and individual differences in music absorption. A first sample of participants (N = 370) completed both the Barcelona Music Reward Questionnaire (BMRQ) and the Absorption in Music Scale (AIMS). Results showed that both constructs were highly interrelated (r = 0.78, p < 0.001), indicating that higher music reward sensitivity is associated with a greater tendency to music-related absorption states. In addition, four items from the AIMS were identified as suitable to be added to an extended version of the BMRQ (eBMRQ). A second sample (N = 550) completed the eBMRQ for a validation study. Exploratory and confirmatory factor analyses on the whole sample (N = 920) showed the reliable psychometric properties of the eBMRQ and suggested that taking into account an absorption facet could contribute to a better characterization of individual differences in the sensitivity to experience music-related reward and pleasure.
Affiliation(s)
- Gemma Cardona
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Barcelona, Spain
- Laura Ferreri
- Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon 2, Lyon, France; Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, Ontario, Canada
- Antoni Rodriguez-Fornells
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
8. Rovetti J, Copelli F, Russo FA. Audio and visual speech emotion activate the left pre-supplementary motor area. Cogn Affect Behav Neurosci 2022;22:291-303. PMID: 34811708; DOI: 10.3758/s13415-021-00961-2.
Abstract
Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through nonverbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to listen to speech clips produced in a way that was either happy, sad, or neutral in expression. These stimuli were also presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left presupplementary motor area (pre-SMA) was more active in response to happy and sad stimuli than to neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to visual-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by using our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Ryerson University, Toronto, ON M5B 2K3, Canada
- Department of Psychology, Western University, London, ON, Canada
- Fran Copelli
- Department of Psychology, Ryerson University, Toronto, ON M5B 2K3, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON M5B 2K3, Canada
9
Abstract
Background and objectives
Music and auditory beat stimulation (ABS) in the theta frequency range (4–7 Hz) are sound-based anxiety treatments that have been independently investigated in prior studies. Here, the anxiety-reducing potential of calm music combined with theta ABS was examined in a large sample of participants.
Methods
An open-label randomized controlled trial was conducted with participants taking anxiolytics (n = 163). Participants were randomly assigned, using the Qualtrics randomizer algorithm, to a single session of sound-based treatment in one of four parallel arms: combined (music & ABS; n = 39), music-alone (n = 36), ABS-alone (n = 41), or pink noise (control; n = 47). Pre- and post-intervention somatic and cognitive state anxiety measures were collected, along with trait anxiety, personality measures, and musical preferences. The study was completed online using a custom application.
Results
Based on trait anxiety scores participants were separated into moderate and high trait anxiety sub-groups. Among participants with moderate trait anxiety, we observed reductions in somatic anxiety that were greater in combined and music-alone conditions than in the pink noise condition; and reductions in cognitive state anxiety that were greater in the combined condition than in the music-alone, ABS-alone, and pink noise conditions. While we also observed reductions in somatic and cognitive state anxiety in participants with high trait anxiety, the conditions were not well differentiated.
Conclusions
Sound-based treatments are effective in reducing somatic and cognitive state anxiety. For participants with moderate trait anxiety, combined conditions were most efficacious.
Affiliation(s)
- Adiel Mallik
- Department of Psychology, Ryerson University, Toronto, Ontario, Canada
- Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Ontario, Canada
10. Picou EM, Singh G, Russo FA. A comparison between a remote testing and a laboratory test setting for evaluating emotional responses to non-speech sounds. Int J Audiol 2021;61:799-808. PMID: 34883031; DOI: 10.1080/14992027.2021.2007422.
Abstract
OBJECTIVE To evaluate remote testing as a tool for measuring emotional responses to non-speech sounds. DESIGN Participants self-reported their hearing status and rated valence and arousal in response to non-speech sounds on an Internet crowdsourcing platform. These ratings were compared to data obtained in a laboratory setting with participants who had confirmed normal or impaired hearing. STUDY SAMPLE Adults with normal and impaired hearing. RESULTS In both settings, participants with hearing loss rated pleasant sounds as less pleasant than did their peers with normal hearing. The difference in valence ratings between groups was generally smaller when measured in the remote setting than in the laboratory setting. This difference was the result of participants with normal hearing rating sounds as less extreme (less pleasant, less unpleasant) in the remote setting than did their peers in the laboratory setting, whereas no such difference was noted for participants with hearing loss. Ratings of arousal were similar from participants with normal and impaired hearing; the similarity persisted in both settings. CONCLUSIONS In both test settings, participants with hearing loss rated pleasant sounds as less pleasant than did their normal hearing counterparts. Future work is warranted to explain the ratings of participants with normal hearing.
Affiliation(s)
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Gurjit Singh
- Phonak Canada, Mississauga, Canada; Department of Psychology, Ryerson University, Toronto, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, Canada
11. Copelli F, Rovetti J, Ammirante P, Russo FA. Human mirror neuron system responsivity to unimodal and multimodal presentations of action. Exp Brain Res 2021;240:537-548. PMID: 34817643; DOI: 10.1007/s00221-021-06266-7.
Abstract
This study aims to clarify unresolved questions from two earlier studies, by McGarry et al. (Exp Brain Res 218(4):527-538, 2012) and Kaplan and Iacoboni (Cogn Process 8:103-113, 2007), on human mirror neuron system (hMNS) responsivity to multimodal presentations of actions. These questions are: (1) whether the two frontal areas originally identified by Kaplan and Iacoboni (ventral premotor cortex [vPMC] and inferior frontal gyrus [IFG]) are both part of the hMNS (i.e., do they respond to execution as well as observation), (2) whether both areas yield effects of biologicalness (biological, control) and modality (audio, visual, audiovisual), and (3) whether the vPMC is preferentially responsive to multimodal input. To resolve these questions about the hMNS, we replicated and extended McGarry et al.'s electroencephalography (EEG) study while incorporating advanced source localization methods. Participants were asked to execute movements (ripping paper) as well as observe those movements across the same three modalities (audio, visual, and audiovisual), all while 64-channel EEG data were recorded. Two frontal sources consistent with those identified in prior studies showed mu event-related desynchronization (mu-ERD) under execution and observation conditions. These sources also showed a greater response to biological movement than to control stimuli, as well as a distinct visual advantage, with greater responsivity to visual and audiovisual conditions compared to audio conditions. Exploratory analysis of mu-ERD in the vPMC under visual and audiovisual observation conditions suggests that the hMNS tracks the magnitude of visual movement over time.
Affiliation(s)
- Fran Copelli
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Joseph Rovetti
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Paolo Ammirante
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
12. Montalto C, Russo FA, Uccello A, Carli S, Gazmawi R, Galazzi M, Tua L, Acquaro M, Ferlini M, Mandurino-Mirizzi A, Marinoni B, Gnecchi M, Costantino I, Oltrona-Visconti L, Leonardi S. Clinical utility of the academic research consortium new proposed criteria for high bleeding risk definition in patients with acute coronary syndromes. Eur Heart J 2021. DOI: 10.1093/eurheartj/ehab724.1415.
Abstract
Background
The Academic Research Consortium High Bleeding Risk (ARC-HBR) criteria have been proposed to stratify the bleeding risk of patients undergoing percutaneous coronary intervention (PCI). While most criteria were established, 4 criteria have been proposed on a de novo basis.
Purpose
We assessed the prevalence and prognosis of new ARC-HBR criteria in a contemporary, prospective, multicenter, quality-improvement registry of all-comers patients with acute coronary syndromes.
Methods
Between 2016 and 2020, consecutive subjects were enrolled; baseline characteristics and medications were prospectively collected, and patients were followed up at 1 year. All clinical events (including bleeding) were adjudicated by an independent committee. All 17 ARC-HBR criteria were individually evaluated by reviewing patients' charts.
Results
Of the 2804 patients enrolled, 782 (28.0%) met the ARC-HBR definition, and 47 (6%) of them experienced a major (BARC 3 or 5) bleeding at 1 year. HBR patients had a significantly higher risk of BARC 3–5 bleeding (HR: 3.07; 95% CI: 2.02–4.67; p<0.0001; Fig. 1A) and of BARC 2–5 bleeding (HR: 1.845; 95% CI: 1.4–2.42; p<0.0001). Fig. 1B indicates the proportion of patients meeting each criterion. Age, (moderate or severe) chronic kidney disease, (moderate or severe) anemia, and oral anticoagulant therapy together identified 88% of HBR patients.
The 4 new ARC-HBR criteria, all together, were present in only 1.7% of our population: 1.0% were planned for major surgery while on dual antiplatelet therapy, 0.5% had a recent intracranial hemorrhage/stroke or brain arteriovenous malformations, 0.1% had hepatic cirrhosis with portal hypertension, and 0.1% had recent surgery or trauma. In a multivariable Cox regression analysis including the individual ARC-HBR criteria, only CKD (major and minor criteria), anemia (major and minor criteria), and cancer were independent predictors of BARC 3–5 events, with a concordance index of 0.698 for this model (p<0.001). In a second model including only CKD (major criterion), anemia (major criterion), age, and oral anticoagulation therapy, all these criteria were independent predictors of BARC 3–5 events, with a concordance index of 0.674 (p<0.001 for the model) (Fig. 2).
Conclusion
Almost one third of contemporary ACS patients were at HBR according to the ARC-HBR definition, and these patients had a significantly higher risk of bleeding at 1 year. The 4 most common criteria (age, CKD, anemia, and oral anticoagulant therapy) allowed identification of 88% of HBR patients. The newly proposed HBR criteria were extremely rare, and therefore challenging to validate and of uncertain clinical utility. These data may inform and simplify clinical decision making and provide priorities for future directions of HBR definitions.
Funding Acknowledgement
Type of funding sources: None.
Affiliation(s)
- S Carli
- University of Pavia, Pavia, Italy
- L Tua
- University of Pavia, Pavia, Italy
- M Ferlini
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- A Mandurino-Mirizzi
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- B Marinoni
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- I Costantino
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- L Oltrona-Visconti
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
13. Montalto C, Carli S, Gargiulo C, Russo FA, Gazmawi R, Tua L, Galazzi M, Acquaro M, Guida G, Disabato G, Attanasio A, Camporotondo R, Guida S, Oltrona-Visconti L, Leonardi S. Prognosis and prescriptions of gliflozins in candidate patients in a prospective, multicenter, quality-improvement study of patients with acute coronary syndrome. Eur Heart J 2021. DOI: 10.1093/eurheartj/ehab724.1211.
Abstract
Background
Sodium-glucose transporter 2 inhibitors (SGLT2-i) have demonstrated substantial improvement in clinical outcomes for patients with heart failure (HF) and chronic kidney disease (CKD) with or without diabetes mellitus (DM). Prescription patterns and outcome of SGLT2-i candidates in patients hospitalized for an acute coronary syndrome (ACS) are less well established.
Purpose
We aimed to assess the proportion of candidates for SGLT2-i and to characterize their clinical outcomes in a contemporary, prospective, multicenter, quality-improvement study of all-comers patients with ACS. We also aimed to ascertain prescription of SGLT2-i at discharge.
Methods
Between 2018 and 2020, subjects were enrolled in the study; baseline characteristics and medications were prospectively collected, and patients were followed up at 1 year. Subjects were considered candidates for SGLT2-i if any of the following were present: (i) a known (medically treated) or new (HbA1c >6.5%) diagnosis of type 2 DM; (ii) left ventricular systolic dysfunction (LVSD; new or known left ventricular ejection fraction <40%) or clinical HF; (iii) CKD (estimated glomerular filtration rate 25–74 mL/min/m2, according to DAPA-CKD trial eligibility).
Results
Of the 2804 consecutive ACS patients enrolled, 798 (28.5%) had new or known DM, and only 10 were already on SGLT2-i at baseline. Additionally, 1,098 (39.2%) patients qualified for SGLT2-i prescription as having known or new LVSD or HF, and 803 (28.6%) as having CKD (Fig. 1A). Overall, these 1,767 (63.1%) SGLT2-i candidates had a substantially higher hazard of death than non-candidates (Hazard Ratio [HR] at 1 year: 6.82; 95% Confidence Interval: 4.32–10.8; p<0.001; Fig. 1B), and each indication for SGLT2-i independently predicted death at 1 year (HR: 2.30/2.11/3.06; 95% CI: 1.78–2.97/1.62–2.74/2.35–3.97; all p<0.0001; for DM, HF, CKD, respectively; Fig. 2). At discharge, only 18 (1.0% of the candidates) were prescribed SGLT2-i; among those with DM, a diabetological consultation before discharge modestly but significantly increased the likelihood of being discharged on SGLT2-i (4.3% vs. 6.6%; p=0.0015).
Conclusion
Most (two out of three) contemporary ACS patients are candidates for SGLT2-i therapy, and they have a significantly and substantially higher risk of mortality at 1 year compared to non-candidates. Current prescription rates are still extremely low (1%), highlighting an opportunity for quality improvement and multidisciplinary decision-making.
Funding Acknowledgement
Type of funding sources: None.
Affiliation(s)
- S Carli
- University of Pavia, Pavia, Italy
- L Tua
- University of Pavia, Pavia, Italy
- G Guida
- University of Pavia, Pavia, Italy
- R Camporotondo
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- S Guida
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
- L Oltrona-Visconti
- Policlinic Foundation San Matteo IRCCS, Division of Cardiology, Pavia, Italy
14
|
Sharma VV, Thaut M, Russo FA, Alain C. Neural Dynamics of Inhibitory Control in Musicians with Absolute Pitch: Theta Synchrony as an Oscillatory Signature of Information Conflict. Cereb Cortex Commun 2021; 2:tgab043. [PMID: 34514414] [PMCID: PMC8423588] [DOI: 10.1093/texcom/tgab043]
Abstract
Absolute pitch (AP) is the ability to identify an auditory pitch without prior context. Current theories posit that AP involves automatic retrieval of referents. We tested interference in well-matched AP musicians, non-AP musicians, and nonmusicians with three auditory Stroop tasks. Stimuli were one of two sung pitches with congruent or incongruent verbal cues. The tasks used different lexicons: binary concrete adjectives (i.e., words: Low/High), syllables with no obvious semantic properties (i.e., solmization: Do/So), and abstract semiotic labels (i.e., orthographic: C/G). Participants were instructed to respond to pitch regardless of verbal information during electroencephalographic recording. Incongruent stimuli in the word and solmization tasks increased errors and slowed response times (RTs); this pattern was reversed in nonmusicians for the orthographic task. AP musicians made virtually no errors, but their RTs slowed for incongruent stimuli. Frontal theta (4–7 Hz) event-related synchrony was significantly enhanced during incongruence between 350 and 550 ms poststimulus onset in AP musicians, regardless of lexicon or behavior. This effect was also found in non-AP musicians and nonmusicians for the word task, while the orthographic task showed a reverse theta congruency effect. The findings suggest that theta synchrony indexes conflict detection in AP, that high beta (21–29 Hz) desynchrony indexes response conflict detection in non-AP musicians, and that alpha (8–12 Hz) synchrony may reflect top-down attention.
Affiliation(s)
- Vivek V Sharma
- Neurosciences and Mental Health Program, Hospital for Sick Children, Toronto, ON M5G 0A4, Canada
- Michael Thaut
- Music and Health Sciences, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON M5B 2K3, Canada
- Claude Alain
- Music and Health Sciences, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada

15
Good A, Sims L, Clarke K, Russo FA. Indigenous youth reconnect with cultural identity: The evaluation of a community- and school-based traditional music program. J Community Psychol 2021; 49:588-604. [PMID: 33314203] [DOI: 10.1002/jcop.22481]
Abstract
Reconnecting Indigenous youth with their cultural traditions has been identified as an essential part of healing the intergenerational effects of forced assimilation policies. Past work suggests that learning the music of one's culture can foster cultural identity and community bonding, which may serve as protective factors for well-being. An 8-week traditional song and dance program was implemented in a school setting for Indigenous youth. An evaluation was conducted using a mixed-method design to determine the impact of the program on 35 youth in the community. A triangulation of qualitative and quantitative data revealed several important themes, including personal development, cultural development, social development, student engagement in school-based programming, and perpetuating cultural knowledge. The program provided students with an opportunity to connect with their cultural traditions through activities that encouraged self and cultural expression. Community responses suggested that this type of programming is highly valued among Indigenous communities.
Affiliation(s)
- Arla Good
- Ryerson University, Toronto, Ontario, Canada
- Lori Sims
- Selkirk First Nation, Pelly Crossing, Yukon Territory, Canada
- Keith Clarke
- Yukon Department of Education, Government of Yukon, Whitehorse, Yukon Territory, Canada

16
Abstract
The ability to synchronize movements to a rhythmic stimulus, referred to as sensorimotor synchronization (SMS), is a behavioral measure of beat perception. Although SMS is generally superior when rhythms are presented in the auditory modality, recent research has demonstrated near-equivalent SMS for vibrotactile presentations of isochronous rhythms [Ammirante, P., Patel, A. D., & Russo, F. A. Synchronizing to auditory and tactile metronomes: A test of the auditory-motor enhancement hypothesis. Psychonomic Bulletin & Review, 23, 1882-1890, 2016]. The current study aimed to replicate and extend this study by incorporating a neural measure of beat perception. Nonmusicians were asked to tap to rhythms or to listen passively while EEG data were collected. Rhythmic complexity (isochronous, nonisochronous) and presentation modality (auditory, vibrotactile, bimodal) were fully crossed. Tapping data were consistent with those observed by Ammirante et al. (2016), revealing near-equivalent SMS for isochronous rhythms across modality conditions and a drop-off in SMS for nonisochronous rhythms, especially in the vibrotactile condition. EEG data revealed a greater degree of neural entrainment for isochronous compared to nonisochronous trials as well as for auditory and bimodal compared to vibrotactile trials. These findings led us to three main conclusions. First, isochronous rhythms lead to higher levels of beat perception than nonisochronous rhythms across modalities. Second, beat perception is generally enhanced for auditory presentations of rhythm but still possible under vibrotactile presentation conditions. Finally, exploratory analysis of neural entrainment at harmonic frequencies suggests that beat perception may be enhanced for bimodal presentations of rhythm.
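Neural entrainment analyses of the kind described above typically quantify EEG amplitude at the stimulus beat frequency. The following stdlib-only sketch (with a hypothetical signal; real pipelines run FFTs over preprocessed EEG epochs) shows the core idea via a single-bin discrete Fourier projection:

```python
import cmath
import math

def spectral_amplitude(signal, fs, freq):
    # Single-bin discrete Fourier transform: project the signal onto a
    # complex exponential at `freq` Hz and return the normalized amplitude.
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * math.pi * freq * (i / fs))
              for i, x in enumerate(signal))
    return 2 * abs(acc) / n

fs = 100  # samples per second
# 5 s of a pure 2 Hz oscillation, standing in for an entrained response
beat = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(fs * 5)]
print(round(spectral_amplitude(beat, fs, 2.0), 2))  # → 1.0 at the "beat" frequency
print(round(spectral_amplitude(beat, fs, 3.0), 2))  # → 0.0 off the beat
```

Comparing amplitude at the beat frequency (and its harmonics) against neighboring frequencies is one common way entrainment strength is operationalized.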
17
Rovetti J, Goy H, Nurgitz R, Russo FA. Comparing verbal working memory load in auditory and visual modalities using functional near-infrared spectroscopy. Behav Brain Res 2021; 402:113102. [PMID: 33422594] [DOI: 10.1016/j.bbr.2020.113102]
Abstract
The verbal identity n-back task is commonly used to assess verbal working memory (VWM) capacity. Only three studies have compared brain activation during the n-back when using auditory versus visual stimuli. The earliest, a positron emission tomography study of the 3-back, found no differences in VWM-related brain activation between n-back modalities. In contrast, two subsequent functional magnetic resonance imaging (fMRI) studies of the 2-back found that auditory VWM was associated with greater left dorsolateral prefrontal cortex (DL-PFC) activation than visual VWM, perhaps suggesting that auditory VWM requires more cognitive effort than its visual counterpart. The current study aimed to assess whether DL-PFC activation (i.e., cognitive effort) differs by VWM modality. To do this, 16 younger adults completed an auditory and a visual n-back task, each at four levels of VWM load. Concurrently, activation of the PFC was measured using functional near-infrared spectroscopy (fNIRS), a silent neuroimaging method. We found that DL-PFC activation increased with VWM load but was not affected by VWM modality or by the interaction between load and modality. This supports the view that both VWM modalities require similar cognitive effort, and perhaps that previous fMRI results were an artefact of scanner noise. We also found that, across conditions, DL-PFC activation was positively correlated with reaction time. This may further support DL-PFC activation as an index of cognitive effort, and fNIRS as a method to measure it.
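For readers unfamiliar with the paradigm, the n-back matching rule is easy to state in code. This is a generic sketch of the rule, not the authors' task implementation:

```python
def nback_targets(sequence, n):
    # A trial is a "target" when its item matches the item presented
    # n trials earlier -- the matching rule of the verbal identity n-back.
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

# Hypothetical letter stream for a 2-back block
letters = list("BKBKCKCB")
print(nback_targets(letters, 2))  # → [2, 3, 5, 6]
```

Raising n increases how many items must be held and updated in working memory, which is what produces the load manipulation described above.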
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Huiwen Goy
- Department of Psychology, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Rebecca Nurgitz
- Department of Psychology, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada

18
Rovetti J, Goy H, Pichora-Fuller MK, Russo FA. Functional Near-Infrared Spectroscopy as a Measure of Listening Effort in Older Adults Who Use Hearing Aids. Trends Hear 2020; 23:2331216519886722. [PMID: 31722613] [PMCID: PMC6856975] [DOI: 10.1177/2331216519886722]
Abstract
Listening effort may be reduced when hearing aids improve access to the acoustic signal. However, this possibility is difficult to evaluate because many neuroimaging methods used to measure listening effort are incompatible with hearing aid use. Functional near-infrared spectroscopy (fNIRS), which can be used to measure the concentration of oxygen in the prefrontal cortex (PFC), appears to be well suited to this application. The first aim of this study was to establish whether fNIRS could measure cognitive effort during listening in older adults who use hearing aids. The second aim was to use fNIRS to determine whether listening effort, a form of cognitive effort, differed depending on whether or not hearing aids were used when listening to sound presented at 35 dB SL (flat gain). Sixteen older adults who were experienced hearing aid users completed an auditory n-back task and a visual n-back task; both tasks were completed with and without hearing aids. We found that PFC oxygenation increased with n-back working memory demand in both modalities, supporting the use of fNIRS to measure cognitive effort during listening in this population. PFC oxygenation was only weakly and nonsignificantly correlated with self-reported listening effort and with reaction time, suggesting that PFC oxygenation assesses a dimension of listening effort that differs from these other measures. Furthermore, the extent to which hearing aids reduced PFC oxygenation in the left lateral PFC was positively correlated with age and pure-tone average thresholds. The implications of these findings as well as future directions are discussed.
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Toronto Rehabilitation Institute, ON, Canada

19
Abstract
Background: Hearing protection devices (HPDs) are often used in the workplace to prevent hearing damage caused by noise. However, improper HPD fitting can itself contribute to hearing loss, and the previous literature has shown that instructing workers on how to properly insert their HPDs can make a significant difference in the degree of attenuation.
Methods: Two studies were completed on a total of 33 Hydro One workers. A FitCheck Solo field attenuation estimation system was used to measure the personal attenuation rating (PAR) before and after providing one-on-one fitting instructions. In addition, external ear canal diameters were measured, and a questionnaire with items related to frequency of use, confidence, and discomfort was administered.
Results: Training led to an improvement in HPD attenuation, particularly for participants with poorer PARs before training. The questionnaire results indicated that much HPD discomfort is caused by heat, humidity, and communication difficulties. External ear canal asymmetry did not appear to significantly influence the measured PAR.
Conclusion: In accordance with the previous literature, our studies suggest that one-on-one instruction is an effective training method for HPD use. Addressing discomfort from heat, humidity, and communication issues could help to improve the use of HPDs in the workplace. Further research into the effects of canal asymmetry on the PAR is needed.
Affiliation(s)
- Fran Copelli
- Psychology Department, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Alberto Behar
- Psychology Department, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Tina Ngoc Le
- Psychology Department, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada
- Frank A Russo
- Psychology Department, Ryerson University, 350 Victoria St, Toronto, ON M5B 2K3, Canada

20
Abstract
The perception of an event is strongly influenced by the context in which it occurs. Here, we examined the effect of a rhythmic context on detection of asynchrony in both the auditory and vibrotactile modalities. Using the method of constant stimuli and a two-alternative forced-choice (2AFC) task, participants were presented with pairs of pure tones played either simultaneously or with various levels of stimulus onset asynchrony (SOA). Target stimuli in both modalities were nested within one of three contexts: (i) a regularly occurring, predictable rhythm; (ii) an irregular, unpredictable rhythm; or (iii) no rhythm at all. Vibrotactile asynchrony detection generally had higher thresholds and showed greater variability than auditory asynchrony detection. Asynchrony detection thresholds for auditory targets, but not vibrotactile targets, were significantly reduced when the target stimulus was embedded in a regular rhythm as compared to no rhythm. Embedding within an irregular rhythm produced no such improvement. The observed modality asymmetries are interpreted with regard to the superior temporal resolution of the auditory system and specialized brain circuitry supporting auditory-motor coupling.
Affiliation(s)
- Andrew P Lauzon
- Department of Psychology, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Laurence R Harris
- Department of Psychology, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada

21
Dubinsky E, Wood EA, Nespoli G, Russo FA. Short-Term Choir Singing Supports Speech-in-Noise Perception and Neural Pitch Strength in Older Adults With Age-Related Hearing Loss. Front Neurosci 2019; 13:1153. [PMID: 31849572] [PMCID: PMC6892838] [DOI: 10.3389/fnins.2019.01153]
Abstract
Prior studies have demonstrated musicianship enhancements of various aspects of auditory and cognitive processing in older adults, but musical training has rarely been examined as an intervention for mitigating age-related declines in these abilities. The current study investigates whether 10 weeks of choir participation can improve aspects of auditory processing in older adults, particularly speech-in-noise (SIN) perception. A choir-singing group and an age- and audiometrically matched do-nothing control group underwent pre- and post-testing over a 10-week period. Linear mixed-effects modeling showed that choir participants demonstrated improvements in SIN perception, pitch discrimination ability, and the strength of the neural representation of the speech fundamental frequency (the frequency-following response, FFR). Choir participants' gains in SIN perception were mediated by improvements in pitch discrimination, which were in turn predicted by FFR strength, suggesting improved pitch processing as a possible mechanism for the SIN perceptual gains. These findings support the hypothesis that short-term choir participation is an effective intervention for mitigating age-related declines in hearing.
Affiliation(s)
- Ella Dubinsky
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Emily A. Wood
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Gabriel Nespoli
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Toronto Rehabilitation Institute, Toronto, ON, Canada

22
Behar A, Chasin M, Mosher S, Abdoli-Eramaki M, Russo FA. Noise exposure and hearing loss in classical orchestra musicians: A five-year follow-up. Noise Health 2019; 20:42-46. [PMID: 29676294] [PMCID: PMC5926315] [DOI: 10.4103/nah.nah_39_17]
Abstract
Introduction: This study is a follow-up to prior research from our group relating noise exposure and hearing thresholds in active performing musicians of the National Ballet of Canada Orchestra.
Materials and Methods: Exposures obtained in early 2010 were compared to exposures obtained in early 2017 (the present study). In addition, audiometric thresholds obtained in early 2012 were compared to thresholds obtained in early 2017 (the present study). This collection of measurements presents an opportunity to observe regularities in the patterns of exposure, as well as threshold changes that may be expected in active orchestra musicians over a 5-year span.
Results: The pattern of noise exposure across instrument groups, which was consistent over the two time points, reveals the highest exposures among brass, percussion/basses, and woodwinds. However, the average noise exposure across groups and time was consistently below 85 dBA, which suggests no occupational hazard. These observations were corroborated by audiometric thresholds, which were generally (a) in the normal range and (b) unchanged over the 5-year period between measurements.
Conclusion: Because exposure levels were consistently below 85 dBA and changes in audiometric thresholds were minimal, we conclude that these musicians experienced little-to-no risk of noise-induced hearing loss.
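Sound levels expressed in dBA cannot be averaged arithmetically; exposure averages like those discussed above are computed on an energy basis (the equivalent continuous level, Leq), converting each level back to a relative intensity before averaging. A small sketch with made-up levels:

```python
import math

def leq(levels_dba):
    # Equal-energy average of sound levels: convert each dBA value to a
    # relative intensity (10^(L/10)), average, and convert back to decibels.
    mean_intensity = sum(10 ** (l / 10) for l in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_intensity)

# Two quieter passages and one loud passage of equal duration (hypothetical)
print(round(leq([80, 80, 90]), 1))  # → 86.0, not the arithmetic mean of 83.3
```

The louder moments dominate the energy average, which is why brief loud passages weigh heavily in musicians' measured exposures.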
Affiliation(s)
- Alberto Behar
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Marshall Chasin
- Musician Clinic of , Ryerson University, Toronto, ON, Canada
- Steve Mosher
- Canadian Federation of Musicians, Ryerson University, Toronto, ON, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada

23
Busse V, Jungclaus J, Roden I, Russo FA, Kreutz G. Combining Song- and Speech-Based Language Teaching: An Intervention With Recently Migrated Children. Front Psychol 2018; 9:2386. [PMID: 30546337] [PMCID: PMC6279872] [DOI: 10.3389/fpsyg.2018.02386]
Abstract
There is growing evidence that singing can have a positive effect on language learning, but few studies have explored its benefit for children who have recently migrated to a new country. In the present study, recently migrated children (N = 35) received three 40-min sessions in which all students learnt the lyrics of two songs designed to stimulate language learning through alternating teaching modalities (singing and speaking). Children improved their language knowledge significantly, including on tasks targeting the transfer of grammatical skills, an area largely neglected in previous studies. This improvement was sustained over the retention interval. However, the two teaching modalities did not show differential effects on cued recall of song lyrics, indicating that singing and speaking are equally effective when used in combination with one another. Taken together, the data suggest that singing may be useful as an additional teaching strategy, irrespective of initial language proficiency, warranting more research on songs as a supplement to grammar instruction.
Affiliation(s)
- Vera Busse
- English, University of Vechta, Vechta, Germany
- Jana Jungclaus
- Department of Educational Sciences, University of Oldenburg, Oldenburg, Germany
- Department of Music, Speech and Music Lab, University of Oldenburg, Oldenburg, Germany
- Ingo Roden
- Department of Educational Sciences, University of Oldenburg, Oldenburg, Germany
- Department of Music, Speech and Music Lab, University of Oldenburg, Oldenburg, Germany
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Gunter Kreutz
- Department of Music, Speech and Music Lab, University of Oldenburg, Oldenburg, Germany

24
Abstract
Vocal emotion perception is an important part of speech communication and social interaction. Although older adults with normal audiograms are known to be less accurate at identifying vocal emotion compared to younger adults, little is known about how older adults with hearing loss perceive vocal emotion or whether hearing aids improve the perception of emotional speech. In the main experiment, older hearing aid users were presented with sentences spoken in seven emotion conditions, with and without their own hearing aids. Listeners reported the words that they heard as well as the emotion portrayed in each sentence. The use of hearing aids improved word-recognition accuracy in quiet from 38.1% (unaided) to 65.1% (aided) but did not significantly change emotion-identification accuracy (36.0% unaided, 41.8% aided). In a follow-up experiment, normal-hearing young listeners were tested on the same stimuli. Normal-hearing younger listeners and older listeners with hearing loss showed similar patterns in how emotion affected word-recognition performance but different patterns in how emotion affected emotion-identification performance. In contrast to the present findings, previous studies did not find age-related differences between younger and older normal-hearing listeners in how emotion affected emotion-identification performance. These findings suggest that there are changes to emotion identification caused by hearing loss that are beyond those that can be attributed to normal aging, and that hearing aids do not compensate for these changes.
Affiliation(s)
- Huiwen Goy
- Ryerson University, Toronto, Ontario, Canada
- Gurjit Singh
- Ryerson University, Toronto, Ontario, Canada
- Phonak AG, Stäfa, Switzerland
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- Frank A Russo
- Ryerson University, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada

25
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426] [PMCID: PMC5955500] [DOI: 10.1371/journal.pone.0196391]
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender-balanced, consisting of 24 professional actors vocalizing lexically matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
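The RAVDESS recordings are distributed with a structured filename convention documented on the dataset's distribution page: seven hyphen-separated two-digit codes (modality, vocal channel, emotion, intensity, statement, repetition, actor), with odd-numbered actors male. A sketch of a parser based on that convention:

```python
# Emotion codes per the RAVDESS filename convention
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

def parse_ravdess(filename):
    # Split the stem into its seven two-digit fields and decode the ones
    # most often needed for stimulus selection.
    parts = filename.split(".")[0].split("-")
    modality, channel, emotion, intensity, statement, repetition, actor = parts
    return {
        "channel": "speech" if channel == "01" else "song",
        "emotion": EMOTIONS[emotion],
        "intensity": "normal" if intensity == "01" else "strong",
        "actor": int(actor),
        "actor_sex": "male" if int(actor) % 2 == 1 else "female",
    }

info = parse_ravdess("03-01-06-01-02-01-12.wav")
print(info["emotion"], info["channel"], info["actor_sex"])  # → fearful speech female
```

Filtering the corpus by these decoded fields is how researchers typically assemble condition-balanced stimulus sets from the database.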
Affiliation(s)
- Steven R. Livingstone
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, United States of America
- Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada

26
Vempala NN, Russo FA. Editorial: Bridging Music Informatics With Music Cognition. Front Psychol 2018; 9:633. [PMID: 29867629] [PMCID: PMC5952036] [DOI: 10.3389/fpsyg.2018.00633]
Affiliation(s)
- Naresh N. Vempala
- Psychology, Ryerson University, Toronto, ON, Canada
- Nuralogix Corporation, Toronto, ON, Canada

27
Abstract
Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings; input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion research between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches, we conclude that neural networks were optimal, yielding models that were both flexible and interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
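A committee machine of the kind inspected above aggregates the predictions of several trained networks, commonly by simple averaging. The sketch below uses toy linear callables in place of trained models; it illustrates the aggregation idea only, not the study's architecture:

```python
def committee_predict(members, features):
    # A committee machine averages the outputs of an ensemble of models;
    # here each "member" is any callable mapping a feature vector to a rating.
    preds = [m(features) for m in members]
    return sum(preds) / len(preds)

# Three toy "networks" (stand-ins for trained models) predicting arousal
# from a hypothetical two-element feature vector
members = [lambda f: 0.2 * f[0] + 0.1 * f[1],
           lambda f: 0.3 * f[0],
           lambda f: 0.1 * f[0] + 0.2 * f[1]]
print(committee_predict(members, [1.0, 2.0]))
```

Averaging over an ensemble reduces the variance of any single network's prediction, and inspecting which inputs drive the committee (as the abstract describes for felt vs. perceived features) is one way such models are interpreted.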
Affiliation(s)
- Naresh N Vempala
- SMART Lab, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A Russo
- SMART Lab, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Toronto Rehabilitation Institute, Toronto, ON, Canada

28
29
Abstract
Congenital amusia is a condition in which an individual suffers from a deficit of musical pitch perception and production. Individuals suffering from congenital amusia generally tend to abstain from musical activities. Here, we present the unique case of Tim Falconer, a self-described musicophile who also suffers from congenital amusia. We describe and assess Tim's attempts to train himself out of amusia through a self-imposed 18-month program of formal vocal training and practice. We tested Tim with respect to music perception and vocal production across seven sessions including pre- and post-training assessments. We also obtained diffusion-weighted images of his brain to assess connectivity between auditory and motor planning areas via the arcuate fasciculus (AF). Tim's behavioral and brain data were compared to that of normal and amusic controls. While Tim showed temporary gains in his singing ability, he did not reach normal levels, and these gains faded when he was not engaged in regular lessons and practice. Tim did show some sustained gains with respect to the perception of musical rhythm and meter. We propose that Tim's lack of improvement in pitch perception and production tasks is due to long-standing and likely irreversible reduction in connectivity along the AF fiber tract.
Affiliation(s)
- Jonathan M P Wilbiks
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Psychology, Mount Allison University, Sackville, Canada
- Dominique T Vuvan
- Department of Psychology, Skidmore College, Saratoga Springs, NY, USA
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Pier-Yves Girard
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Département de psychologie, Université de Montréal, Montreal, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Département de psychologie, Université de Montréal, Montreal, Canada
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, Canada

30
Abstract
Previous research involving preschool children and adults suggests that moving in synchrony with others can foster cooperation. Song provides a rich oscillatory framework that supports synchronous movement and may thus be considered a powerful agent of positive social relations. In the current study, we assessed this hypothesis in a group of primary-school-aged children with diverse ethnic and socioeconomic backgrounds. Children participated in one of three activity conditions: group singing, group art, or competitive games. They were then asked to play a prisoner's dilemma game as a measure of cooperation. Results showed that children who engaged in group singing were more cooperative than children who engaged in group art or competitive games.
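The prisoner's dilemma used as the cooperation measure above has a standard payoff structure (temptation > reward > punishment > sucker's payoff). The values below are the textbook 5/3/1/0 scheme, chosen for illustration; the study's actual stakes may differ:

```python
# Standard prisoner's dilemma payoffs (T > R > P > S); values illustrative.
# "C" = cooperate, "D" = defect; entries are (player_a, player_b) payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b):
    # Returns the payoffs for one round given each player's choice.
    return PAYOFF[(a, b)]

print(play("C", "C"))  # → (3, 3): mutual cooperation beats mutual defection (1, 1)
```

Defection strictly dominates for an individual, yet mutual cooperation pays both players more than mutual defection, which is what makes the game a sensitive behavioral index of cooperation.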
Affiliation(s)
- Arla Good
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada

31
Abel MK, Li HC, Russo FA, Schlaug G, Loui P. Audiovisual Interval Size Estimation Is Associated with Early Musical Training. PLoS One 2016; 11:e0163589. [PMID: 27760134] [PMCID: PMC5070837] [DOI: 10.1371/journal.pone.0163589]
Abstract
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Collapse
Affiliation(s)
- Mary Kathryn Abel
- Harvard College, Cambridge, Massachusetts, United States of America
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America
| | - H. Charles Li
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America
| | | | - Gottfried Schlaug
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America
| | - Psyche Loui
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America
- Wesleyan University, Middletown, Connecticut, United States of America
- * E-mail:
| |
Collapse
|
32
|
Abstract
Striking changes in sensitivity to tonality across the pitch range are reported. Participants were presented a key-defining context (do-mi-do-sol) followed by one of the 12 chromatic tones of the octave, and rated the goodness of fit of the probe tone to the context. The set of ratings, called the probe-tone profile, was compared to an established standardised profile for the Western tonal hierarchy. The presentation of context and probe tones at low and high pitch registers resulted in significantly reduced sensitivity to tonality. Sensitivity was especially poor for presentations in the lowest octaves where inharmonicity levels were substantially above the threshold for detection. We propose that sensitivity to tonality may be influenced by pitch salience (or a co-varying factor such as exposure to pitch distributional information) as well as suprathreshold inharmonicity.
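The probe-tone method summarized above lends itself to a compact sketch. The snippet below correlates a hypothetical listener's 12 goodness-of-fit ratings with the standardized Krumhansl-Kessler C-major profile; the listener ratings here are invented purely for illustration and are not data from the study.

```python
import numpy as np

# Standardized Krumhansl-Kessler C-major probe-tone profile
# (goodness-of-fit ratings for the 12 chromatic tones, C through B).
kk_major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

# Hypothetical listener ratings for the same 12 probe tones (invented data).
listener = np.array([6.6, 2.0, 3.2, 2.5, 4.1, 4.3,
                     2.4, 5.4, 2.1, 3.5, 2.6, 2.7])

# Sensitivity to tonality is indexed by the correlation between the
# listener's probe-tone profile and the standardized profile.
r = np.corrcoef(kk_major, listener)[0, 1]
print(r > 0.9)  # a high correlation indicates intact tonal sensitivity
```

On this account, reduced sensitivity in extreme registers would show up as a lower profile correlation.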
Collapse
Affiliation(s)
- Frank A Russo
- Department of Psychology, Ryerson University, 350 Victoria Street, Toronto, ON M5B 2K3, Canada.
| | | | | | | |
Collapse
|
33
|
Livingstone SR, Vezer E, McGarry LM, Lang AE, Russo FA. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease. Front Psychol 2016; 7:780. [PMID: 27375505 PMCID: PMC4894910 DOI: 10.3389/fpsyg.2016.00780] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Accepted: 05/09/2016] [Indexed: 11/21/2022] Open
Abstract
Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was found to be negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = −0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.
Collapse
Affiliation(s)
- Steven R Livingstone
- Department of Psychology, Ryerson University, Toronto, ON, Canada; Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, USA; Toronto Rehabilitation Institute, Toronto, ON, Canada
| | - Esztella Vezer
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Lucy M McGarry
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Anthony E Lang
- Division of Neurology, Department of Medicine, University of Toronto, Toronto, ON, Canada; Morton and Gloria Shulman Movement Disorder Centre at The Toronto Western Hospital, Toronto, ON, Canada
| | - Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, Toronto, ON, Canada
| |
Collapse
|
34
|
|
35
|
Peck KJ, Girard TA, Russo FA, Fiocco AJ. Music and Memory in Alzheimer’s Disease and The Potential Underlying Mechanisms. J Alzheimers Dis 2016; 51:949-59. [DOI: 10.3233/jad-150998] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
36
|
Kirchberger M, Russo FA. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners. Trends Hear 2016; 20:2331216516630549. [PMID: 26868955 PMCID: PMC4753356 DOI: 10.1177/2331216516630549] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2015] [Revised: 12/21/2015] [Accepted: 01/13/2016] [Indexed: 11/22/2022] Open
Abstract
Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings.
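The notion of dynamic range used in the acoustic analysis above can be illustrated with a simple measure: the spread, in decibels, between high and low percentiles of short-term RMS level. The frame length and percentile bounds below are illustrative choices, not the exact measure used in the paper.

```python
import numpy as np

def dynamic_range_db(signal, frame_len=1024, lo=5, hi=95):
    """Estimate dynamic range as the spread (in dB) between the hi-th
    and lo-th percentiles of short-term RMS level. Parameters are
    illustrative, not the paper's exact method."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms = rms[rms > 0]  # skip silent frames to avoid log(0)
    levels = 20 * np.log10(rms)
    return np.percentile(levels, hi) - np.percentile(levels, lo)

# A signal alternating between quiet and loud passages should show a
# larger range than a steady (heavily compressed) signal.
rng = np.random.default_rng(0)
quiet_loud = np.concatenate([0.01 * rng.standard_normal(8192),
                             0.5 * rng.standard_normal(8192)])
steady = 0.2 * rng.standard_normal(16384)
print(dynamic_range_db(quiet_loud) > dynamic_range_db(steady))  # True
```

By this kind of measure, heavily mastered pop or rap recordings would score lower than opera or orchestral recordings, consistent with the analysis reported above.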
Collapse
|
37
|
Kirchberger M, Russo FA. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality. Trends Hear 2016; 20:2331216515626131. [PMID: 26834122 PMCID: PMC4737978 DOI: 10.1177/2331216515626131] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2015] [Revised: 12/16/2015] [Accepted: 12/16/2015] [Indexed: 12/03/2022] Open
Abstract
A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions.
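The core idea of harmonic frequency lowering — moving high-frequency content into an audible range while preserving harmonic relations — can be sketched with octave transposition, since shifting a component down by whole octaves preserves its pitch chroma. This toy mapping is only a motivation-level sketch; the actual HFL algorithm in the paper also involves frequency compression and differs in detail.

```python
def lower_frequency(f, cutoff=4000.0):
    """Illustrative frequency map: components at or above `cutoff` (Hz)
    are transposed down by whole octaves until they fall below it.
    Octave steps preserve harmonic (chroma) identity, which is the
    motivation behind harmonic frequency lowering; this is not the
    paper's actual algorithm."""
    while f >= cutoff:
        f /= 2.0  # one octave down per step
    return f

print(lower_frequency(10000.0))  # 10000 -> 5000 -> 2500.0
print(lower_frequency(3000.0))   # below cutoff: unchanged, 3000.0
```

In contrast, nonlinear frequency compression maps frequencies by a non-octave ratio, which distorts harmonic spacing — the comparison at issue in the listening study above.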
Collapse
|
38
|
|
39
|
|
40
|
Livingstone SR, Choi DH, Russo FA. The influence of vocal training and acting experience on measures of voice quality and emotional genuineness. Front Psychol 2014; 5:156. [PMID: 24639659 PMCID: PMC3945712 DOI: 10.3389/fpsyg.2014.00156] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2013] [Accepted: 02/08/2014] [Indexed: 11/23/2022] Open
Abstract
Vocal training through singing and acting lessons is known to modify acoustic parameters of the voice. While the effects of singing training have been well documented, the role of acting experience on the singing voice remains unclear. In two experiments, we used linear mixed models to examine the relationships between the relative amounts of acting and singing experience on the acoustics and perception of the male singing voice. In Experiment 1, 12 male vocalists were recorded while singing with five different emotions, each with two intensities. Acoustic measures of pitch accuracy, jitter, and harmonics-to-noise ratio (HNR) were examined. Decreased pitch accuracy and increased jitter, indicative of a lower "voice quality," were associated with more years of acting experience, while increased pitch accuracy was associated with more years of singing lessons. We hypothesized that the acoustic deviations exhibited by more experienced actors were an intentional technique to increase the genuineness or truthfulness of their emotional expressions. In Experiment 2, listeners rated vocalists' emotional genuineness. Vocalists with more years of acting experience were rated as more genuine than vocalists with less acting experience. No relationship was found for singing training. Increased genuineness was associated with decreased pitch accuracy, increased jitter, and a higher HNR. These effects may represent a shifting of priorities by male vocalists with acting experience to emphasize emotional genuineness over pitch accuracy or voice quality in their singing performances.
Collapse
Affiliation(s)
- Steven R Livingstone
- Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, Toronto, ON, Canada
| | - Deanna H Choi
- Department of Psychology, Queen's University, Kingston, ON, Canada
| | - Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, Toronto, ON, Canada
| |
Collapse
|
41
|
Russo FA, Vempala NN, Sandstrom GM. Predicting musically induced emotions from physiological inputs: linear and neural network models. Front Psychol 2013; 4:468. [PMID: 23964250 PMCID: PMC3737459 DOI: 10.3389/fpsyg.2013.00468] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2013] [Accepted: 07/05/2013] [Indexed: 11/13/2022] Open
Abstract
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of “felt” emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants—heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
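The linear-model stage of the analysis above amounts to ordinary least-squares regression from physiological features to ratings. The sketch below fits such a model on synthetic data — the features, coefficients, and ratings are invented for illustration and are not the study's data.

```python
import numpy as np

# Toy sketch of the paper's linear-model stage: predict arousal ratings
# from five physiological channels. All data here are synthetic.
rng = np.random.default_rng(1)
X = rng.standard_normal((12, 5))            # 12 excerpts x 5 channels
true_w = np.array([0.8, 0.1, 0.5, 0.0, 0.3])  # invented channel weights
y = X @ true_w + 0.05 * rng.standard_normal(12)  # synthetic arousal ratings

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
r = np.corrcoef(pred, y)[0, 1]
print(r > 0.9)  # near-perfect fit on this low-noise toy example
```

The study's finding was that such linear fits work for arousal but not valence, which is where the non-linear neural network model earned its keep.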
Collapse
Affiliation(s)
- Frank A Russo
- SMART Lab, Department of Psychology, Ryerson University, Toronto, ON, Canada; Communication Team, Toronto Rehabilitation Institute, Toronto, ON, Canada
| | | | | |
Collapse
|
42
|
Abstract
Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in Experiment 1 (median percent correct = 83%) and pairs of vowel utterances in Experiment 2 (median percent correct = 75%). Greater difference in spectral tilt between “different” pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.
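Spectral tilt, the cue that predicted discriminability above, is commonly estimated as the slope of a line fit to the magnitude spectrum on a log-frequency axis. The sketch below uses one such definition on synthetic signals; the paper's exact computation may differ.

```python
import numpy as np

def spectral_tilt(signal, sr=8000):
    """Estimate spectral tilt as the slope (dB per octave) of a line fit
    to the magnitude spectrum over log2 frequency. One common
    definition; not necessarily the paper's exact measure."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    keep = (freqs > 0) & (spec > 0)  # drop DC and exact zeros
    slope, _ = np.polyfit(np.log2(freqs[keep]),
                          20 * np.log10(spec[keep]), 1)
    return slope  # more negative = duller, less high-frequency energy

# A smoothed (low-pass-ish) noise should tilt more steeply downward
# than flat white noise.
rng = np.random.default_rng(2)
flat = rng.standard_normal(4096)
dull = np.convolve(flat, np.ones(8) / 8, mode="same")  # crude low-pass
print(spectral_tilt(dull) < spectral_tilt(flat))  # True
```

A larger tilt difference between two utterances means a larger difference in how energy is distributed across frequency-tuned skin receptors, consistent with the discrimination result reported above.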
Collapse
Affiliation(s)
- Paolo Ammirante
- Department of Psychology, Ryerson University, Toronto, Canada
| | - Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada
- * E-mail:
| | - Arla Good
- Department of Psychology, Ryerson University, Toronto, Canada
| | - Deborah I. Fels
- Centre for Learning Technologies, Ryerson University, Toronto, Canada
| |
Collapse
|
43
|
Abstract
Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched for F0, duration, and magnitude was above chance with white noise masking the sound output of the voice coils (Experiment 1), with additional masking to control for bone-conducted sound (Experiment 2), and among a group of deaf individuals (Experiment 4a). Hearing (Experiment 3) and deaf individuals (Experiment 4b) also successfully discriminated between dull and bright timbres varying only with regard to spectral centroid. We propose that, as with auditory discrimination of musical timbre, vibrotactile discrimination may involve the cortical integration of filtered output from frequency-tuned mechanoreceptors functioning as critical bands.
Collapse
Affiliation(s)
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada.
| | | | | |
Collapse
|
44
|
Abstract
Previous studies demonstrate that perception of action presented audio-visually facilitates greater mirror neuron system (MNS) activity in humans (Kaplan and Iacoboni in Cogn Process 8(2):103-113, 2007) and non-human primates (Keysers et al. in Exp Brain Res 153(4):628-636, 2003) than perception of action presented unimodally. In the current study, we examined whether audio-visual facilitation of the MNS can be indexed using electroencephalography (EEG) measurement of the mu rhythm. The mu rhythm is an EEG oscillation with peaks at 10 and 20 Hz that is suppressed during the execution and perception of action and is speculated to reflect activity in the premotor and inferior parietal cortices as a result of MNS activation (Pineda in Behav Brain Funct 4(1):47, 2008). Participants observed experimental stimuli unimodally (visual-alone or audio-alone) or bimodally during randomized presentations of two hands ripping a sheet of paper, and a control video depicting a box moving up and down. Audio-visual perception of action stimuli led to greater event-related desynchrony (ERD) of the 8-13 Hz mu rhythm compared to unimodal perception of the same stimuli over the C3 electrode, as well as in a left central cluster when data were examined in source space. These results are consistent with Kaplan and Iacoboni's (2007) findings, which indicate audio-visual facilitation of the MNS; our left central cluster was localized approximately 13.89 mm away from the ventral premotor cluster identified in their fMRI study, suggesting that these clusters originate from similar sources. Consistency of results in electrode space and component space supports the use of ICA as a valid source localization tool.
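The ERD measure at the heart of the study above is, at its simplest, the percent change in mu-band (8-13 Hz) power during a trial relative to a baseline window. The sketch below computes that quantity on a synthetic oscillation; the study's actual pipeline (ICA, clustering, source space) is considerably more involved.

```python
import numpy as np

def erd_percent(baseline, trial, sr=256, band=(8.0, 13.0)):
    """Event-related desynchrony (ERD) as percent change in band power
    during a trial relative to baseline. Negative values indicate
    desynchrony (a power drop). Simplified band-power sketch only."""
    def band_power(x):
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spec[mask].mean()
    b, t = band_power(baseline), band_power(trial)
    return 100.0 * (t - b) / b

# Synthetic example: a 10 Hz (mu-range) oscillation whose amplitude
# halves during the trial; power scales with amplitude squared.
sr = 256
t = np.arange(sr) / sr
baseline = np.sin(2 * np.pi * 10 * t)
trial = 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(erd_percent(baseline, trial, sr)))  # -75
```

Greater mu suppression (a more negative ERD) during audio-visual than unimodal action observation is the signature of MNS facilitation reported above.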
Collapse
Affiliation(s)
- Lucy M McGarry
- Department of Psychology, Ryerson University, 350 Victoria St., Toronto, ON M5B 2K3, Canada.
| | | | | | | |
Collapse
|
45
|
|
46
|
|
47
|
Abstract
The ideomotor principle predicts that perception will modulate action where overlap exists between perceptual and motor representations of action. This effect is demonstrated with auditory stimuli. Previous perceptual evidence suggests that pitch contour and pitch distance in tone sequences may elicit tonal motion effects consistent with listeners' implicit awareness of the lawful dynamics of locomotive bodies. To examine modulating effects of perception on action, participants in a continuation tapping task produced a steady tempo. Auditory tones were triggered by each tap. Pitch contour randomly and persistently varied within trials. Pitch distance between successive tones varied between trials. Although participants were instructed to ignore them, tones systematically affected finger dynamics and timing. Where pitch contour implied positive acceleration, the following tap and the intertap interval (ITI) that it completed were faster. Where pitch contour implied negative acceleration, the following tap and the ITI that it completed were slower. Tempo was faster with greater pitch distance. Musical training did not predict the magnitude of these effects. There were no generalized effects on timing variability. Pitch contour findings demonstrate how tonal motion may elicit the spontaneous production of accents found in expressive music performance.
Collapse
|
48
|
Karam M, Russo FA, Fels DI. Designing the Model Human Cochlea: An Ambient Crossmodal Audio-Tactile Display. IEEE Trans Haptics 2009; 2:160-169. [PMID: 27788080 DOI: 10.1109/toh.2009.32] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
Collapse
|
49
|
|
50
|
Abstract
Magnitude estimation was used to assess the experience of urgency in pulse-train stimuli (pulsed white noise) ranging from 3.13 to 200 Hz. At low pulse rates, pulses were easily resolved. At high pulse rates, pulses fused together leading to a tonal sensation with a clear pitch level. Urgency ratings followed a nonmonotonic (polynomial) function with local maxima at 17.68 and 200 Hz. The same stimuli were also used in response time and pitch scaling experiments. Response times were negatively correlated with urgency ratings. Pitch scaling results indicated that urgency of pulse trains is mediated by the perceptual constructs of speed and pitch.
Collapse
Affiliation(s)
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, Canada
| | | |
Collapse
|