1
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. [PMID: 36562994; PMCID: PMC10183742; DOI: 10.1093/cercor/bhac501]
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. We have previously shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from behavioral musicality ratings, as well as with models corresponding to low-level feature processing and music perception. Within the overlapping regions, areas near primary auditory cortex correlated with the low-level ASA models, whereas right IPS correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States

2
Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023; 33:3067-3079. [PMID: 35858212; DOI: 10.1093/cercor/bhac261]
Abstract
Previous studies have reported that the auditory cortices (AC) are activated mostly by sounds coming from the contralateral hemifield. As a result, sound locations could be encoded by integrating opposite activations from the two sides of AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions, and it was unknown how sound locations are represented in those high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions using voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations not only existed in AC but also spanned the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields could be observed in left AC, right AC, and left FEF. Overall, our results demonstrate that left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent hemifield activation representation and a multivariate full-field activation pattern representation.
Affiliation(s)
- Liwei Sun
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Chunlin Li
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Songjian Wang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qian Si
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Meng Lin
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Ningyu Wang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Jun Sun
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Ying Liang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Jing Wei
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Juan Zhang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China

3
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. [PMID: 36182286; DOI: 10.1121/10.0013874]
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA

4
Ging-Jehli NR, Arnold LE, Roley-Roberts ME, deBeus R. Characterizing Underlying Cognitive Components of ADHD Presentations and Co-morbid Diagnoses: A Diffusion Decision Model Analysis. J Atten Disord 2022; 26:706-722. [PMID: 34085557; DOI: 10.1177/10870547211020087]
Abstract
OBJECTIVE: To explore whether subtypes and comorbidities of attention-deficit/hyperactivity disorder (ADHD) induce distinct biases in the cognitive components involved in information processing. METHOD: Performance on the Integrated Visual and Auditory Continuous Performance Test (IVA-CPT) was compared between 150 children (aged 7 to 10) with ADHD, grouped by DSM-5 presentation (ADHD-C, ADHD-I) or comorbid diagnoses (anxiety, oppositional defiant disorder [ODD], both, neither), and 60 children without ADHD. Diffusion decision modeling (DDM) decomposed performance into cognitive components. RESULTS: Children with ADHD had poorer information integration than controls. Children with ADHD-C were more sensitive to changes in presentation modality (auditory/visual) than those with ADHD-I and controls. Above and beyond these results, children with ADHD+anxiety+ODD showed larger increases in response bias when targets became frequent than children with ADHD only or with ADHD and one comorbidity. CONCLUSION: ADHD presentations and comorbidities have distinct cognitive characteristics quantifiable using DDM and the IVA-CPT. We discuss implications for tailored cognitive-behavioral therapy.
5
Pérez-Albéniz A, Gil M, Díez-Gómez A, Martín-Seoane G, Lucas-Molina B. Gambling in Spanish Adolescents: Prevalence and Association with Mental Health Indicators. Int J Environ Res Public Health 2021; 19:129. [PMID: 35010388; PMCID: PMC8750538; DOI: 10.3390/ijerph19010129]
Abstract
Concern about the development of behavioral addictions in adolescence, including gambling, has increased in recent years. Evidence shows that problem gambling can lead to personal, social, or health problems. However, even though gambling is an illegal activity for minors, studies of this problem in Spain are quite limited. The main objective of this study was to analyze the prevalence of gambling among adolescents in Spain. Moreover, gambling behaviors were examined according to gender and age, and their possible relationship with several mental health indicators was analyzed. The results showed that 20.6% of the adolescents who participated in the study had gambled money in the past year. The highest gambling prevalence was found in boys and in adolescents aged 16 and older. Moreover, the results showed that gambling behavior was related to different mental health indicators.
Affiliation(s)
- Alicia Pérez-Albéniz
- Department of Educational Sciences, University of La Rioja, 26002 Logrono, Spain
- Mario Gil
- Department of Educational Sciences, University of La Rioja, 26002 Logrono, Spain
- Adriana Díez-Gómez
- Department of Educational Sciences, University of La Rioja, 26002 Logrono, Spain
- Gema Martín-Seoane
- Department of Research and Psychology Education, Complutense University of Madrid, 28223 Pozuelo de Alarcon, Spain
- Beatriz Lucas-Molina
- Department of Developmental and Educational Psychology, University of Valencia, 46010 Valencia, Spain

6
O'Brien B, Juhas B, Bieńkiewicz M, Buloup F, Bringoux L, Bourdin C. Sonification of Golf Putting Gesture Reduces Swing Movement Variability in Novices. Res Q Exerc Sport 2021; 92:301-310. [PMID: 32101511; DOI: 10.1080/02701367.2020.1726859]
Abstract
Purpose: To study whether novices can use sonification to enhance golf putting performance and swing movements. Method: Forty participants first performed a series of 2 m and 4 m putts, in which the swing velocities associated with successful trials were used to calculate each participant's mean velocity profile (MVP). Participants were then divided into four groups with different auditory conditions: static pink noise unrelated to movement, auditory guidance based on the personalized MVP, and two sonification strategies that mapped the real-time error between the observed swing and the MVP to modulate either the stereo display or the roughness of the auditory guidance signal. Participants then performed a series of 2 m and 4 m putts under the auditory condition assigned to their group. Results: In general, our results showed significant correlations between swing movement variability and putting performance for all sonification groups. More specifically, in comparison with the group exposed to static pink noise, participants presented with auditory guidance significantly reduced the deviation from their average swing movement. In addition, participants exposed to error-based sonification with stereo display modulation significantly lowered the variability in the timing of their swing movements. These results provide further evidence of the benefits of sonification for novices performing complex motor skill tasks. Conclusions: Our findings suggest that participants were better able to use online error-based sonification than auditory guidance to reduce variability in the execution and timing of their movements.
7
Cao M, Luo Y, Wu Z, Mazzola CA, Catania L, Alvarez TL, Halperin JM, Biswal B, Li X. Topological Aberrance of Structural Brain Network Provides Quantitative Substrates of Post-Traumatic Brain Injury Attention Deficits in Children. Brain Connect 2021; 11:651-662. [PMID: 33765837; DOI: 10.1089/brain.2020.0866]
Abstract
Background: Traumatic brain injury (TBI)-induced attention deficits are among the most common long-term cognitive consequences in children. Most existing studies attempting to understand the neuropathological underpinnings of cognitive and behavioral impairments in TBI have utilized heterogeneous samples and produced inconsistent findings. The current study investigated topological properties of the structural brain network in children with TBI and their relationship with post-TBI attention problems in a more homogeneous subgroup of children with severe post-TBI attention deficits (TBI-A). Materials and Methods: A total of 31 children with TBI-A and 35 group-matched controls were involved in the study. Diffusion tensor imaging-based probabilistic tractography and graph theoretical techniques were used to construct the structural brain network in each subject. Network topological properties were calculated at both the global and the regional (nodal) level. Between-group comparisons of the topological network measures and the brain-behavior association analyses were corrected for multiple comparisons using the Bonferroni method. Results: Compared with controls, the TBI-A group showed significantly higher nodal local efficiency and nodal clustering coefficient in left inferior frontal gyrus and right transverse temporal gyrus, but significantly lower nodal clustering coefficient in left supramarginal gyrus and lower nodal local efficiency in left parahippocampal gyrus. The temporal lobe topological alterations were significantly associated with post-TBI inattentive and hyperactive symptoms in the TBI-A group. Conclusion: The results suggest that TBI-related structural re-modularity in the white matter subnetworks associated with the temporal lobe may play a critical role in the onset of severe post-TBI attention deficits in children. These findings provide valuable input for understanding the neurobiological substrates of post-TBI attention deficits and have the potential to serve as quantitatively measurable criteria guiding the development of more timely and tailored strategies for diagnosing and treating affected individuals.
Impact statement: This study provides new insight into the neurobiological substrates associated with post-traumatic brain injury attention deficits (TBI-A) in children by evaluating topological alterations of the structural brain network. The results demonstrated that, relative to group-matched controls, children with TBI-A had significantly altered nodal local efficiency and nodal clustering coefficient in the temporal lobe, which were strongly linked to elevated inattentive and hyperactive symptoms in the TBI-A group. These findings suggest that white matter structural re-modularity in subnetworks associated with the temporal lobe may serve as a quantitatively measurable biomarker for early prediction and diagnosis of post-TBI attention deficits in children.
Affiliation(s)
- Meng Cao
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA
- Yuyang Luo
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA
- Ziyan Wu
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA
- Lori Catania
- North Jersey Neurodevelopmental Center, North Haledon, New Jersey, USA
- Tara L Alvarez
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA
- Jeffrey M Halperin
- Department of Psychology, Queens College, City University of New York, New York, New York, USA
- Bharat Biswal
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA
- Xiaobo Li
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA; Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, New Jersey, USA

8
Lin HY, Chang WD, Hsieh HC, Yu WH, Lee P. Relationship between intraindividual auditory and visual attention in children with ADHD. Res Dev Disabil 2021; 108:103808. [PMID: 33242747; DOI: 10.1016/j.ridd.2020.103808]
Abstract
BACKGROUND AND AIM: Most previous attention-deficit/hyperactivity disorder (ADHD) studies have used only a single sensory modality (usually vision) to investigate attentional problems, although patients with ADHD might display deficits of auditory attention similar to those of visual attention. This study explored intraindividual auditory and visual attention in children with and without ADHD to examine the relationship between these two dimensions of attention. METHODS: The attentional performance of 140 children (70 children with ADHD and 70 typically developing peers) was measured with the Test of Variables of Attention (TOVA). RESULTS: For both groups, most attentional indices showed significant differences between the two modalities (d ranging from 0.32 to 0.72). The correlation coefficients of most attentional variables in children with ADHD were lower than those of their typically developing peers. All attentional indices of children with ADHD (ranging from 12.8% to 55.7%) were much higher than those of their typically developing peers (ranging from 1.4% to 8.6%). CONCLUSION: These results not only indicate that typically developing children display more consistent attentional performance, but also support the view that children with ADHD may show attention deficiency in one modality but not necessarily in the other.
Affiliation(s)
- Hung-Yu Lin
- Department of Occupational Therapy at Asia University, Taichung, Taiwan
- Wen-Dien Chang
- Department of Sport Performance at National Taiwan University of Sport, Taichung, Taiwan
- Hsieh-Chun Hsieh
- Department of Special Education at National Tsing Hua University, Hsinchu, Taiwan
- Wan-Hui Yu
- Department of Occupational Therapy at Asia University, Taichung, Taiwan
- Posen Lee
- Department of Occupational Therapy at I-Shou University, Kaohsiung, Taiwan

9
Abstract
To achieve visual space constancy, our brain remaps eye-centered projections of visual objects across saccades. Here, we measured saccade trajectory curvature following the presentation of visual, auditory, and audiovisual distractors in a double-step saccade task to investigate whether this stability mechanism also accounts for localized sounds. We found that saccade trajectories systematically curved away from the position at which either a light or a sound was presented, suggesting that both modalities are represented in eye-centered oculomotor centers. Importantly, the same effect was observed when the distractor preceded the execution of the first saccade. These results suggest that oculomotor centers keep track of visual, auditory, and audiovisual objects by remapping their eye-centered representations across saccades. Furthermore, they argue for the existence of a supra-modal map that keeps track of multisensory object locations across our movements to create an impression of space constancy.
10
Battal C, Occelli V, Bertonati G, Falagiarda F, Collignon O. General Enhancement of Spatial Hearing in Congenitally Blind People. Psychol Sci 2020; 31:1129-1139. [PMID: 32846109; DOI: 10.1177/0956797620935584]
Abstract
Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.
Affiliation(s)
- Ceren Battal
- Institute for Research in Psychology, Institute of Neuroscience, Université Catholique de Louvain; Center for Mind/Brain Sciences, University of Trento
- Federica Falagiarda
- Institute for Research in Psychology, Institute of Neuroscience, Université Catholique de Louvain
- Olivier Collignon
- Institute for Research in Psychology, Institute of Neuroscience, Université Catholique de Louvain; Center for Mind/Brain Sciences, University of Trento

11
What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020; 413:116805. [PMID: 32259708; DOI: 10.1016/j.jns.2020.116805]
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of the dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions lying within either the ventral or the dorsal auditory stream of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both the sighted and EB groups, in which the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. These findings indicate that visual experience may not influence this functional dissociation and that the human cortex may be organized in a task-specific, modality-independent manner. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating the existence of interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings characterize the distinct cortical networks that abstractly encode sound identification and localization and confirm, from a multivariate perspective, the existence of interaction between the two streams; they also suggest that visual experience might not impact the functional specialization of auditory regions.
12
Frankowska N, Parzuchowski M, Wojciszke B, Olszanowski M, Winkielman P. Rear negativity: Verbal messages coming from behind are perceived as more negative. Eur J Soc Psychol 2020. [DOI: 10.1002/ejsp.2649]
Affiliation(s)
- Natalia Frankowska
- SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Center of Research on Cognition and Behavior, SWPS University of Social Sciences and Humanities, Sopot, Poland
- Michal Parzuchowski
- Center of Research on Cognition and Behavior, SWPS University of Social Sciences and Humanities, Sopot, Poland
- Bogdan Wojciszke
- Center of Research on Cognition and Behavior, SWPS University of Social Sciences and Humanities, Sopot, Poland
- Piotr Winkielman
- SWPS University of Social Sciences and Humanities, Warsaw, Poland
- University of California San Diego, La Jolla, CA, USA

13
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
14
Stenzel H, Francombe J, Jackson PJB. Limits of Perceived Audio-Visual Spatial Coherence as Defined by Reaction Time Measurements. Front Neurosci 2019; 13:451. [PMID: 31191211; PMCID: PMC6538976; DOI: 10.3389/fnins.2019.00451]
Abstract
The ventriloquism effect describes the phenomenon whereby audio and visual signals with common features, such as a voice and a talking face, merge perceptually into one percept even if they are spatially misaligned. The boundaries of this fusion of spatially misaligned stimuli are of interest for the design of multimedia products, to ensure a perceptually satisfactory result. They have mainly been studied using continuous judgment scales and forced-choice measurement methods, with results that vary greatly between studies. The current experiment evaluates audio-visual fusion using reaction time (RT) measurements as an indirect measure, to overcome this variability. A two-alternative forced-choice (2AFC) word recognition test was designed and tested with noise and multi-talker speech background distractors. Visual signals were presented centrally, and audio signals were presented at between 0° and 31° of audio-visual offset in azimuth. RT data were analyzed separately for the underlying Simon effect and for attentional effects. For the attentional effects, three models were identified, but no single model could explain the observed RTs for all participants, so the data were grouped and analyzed accordingly. The results show that significant differences in RTs are measured from 5° to 10° onwards for the Simon effect. The attentional effect varied at the same audio-visual offset for two of the three defined participant groups. In contrast with prior research, these results suggest that, even for speech signals, small audio-visual offsets influence spatial integration subconsciously.
Affiliation(s)
- Hanne Stenzel
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
- Philip J. B. Jackson
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom

15
Lemaitre G, Pyles JA, Halpern AR, Navolio N, Lehet M, Heller LM. Who's that Knocking at My Door? Neural Bases of Sound Source Identification. Cereb Cortex 2019; 28:805-818. [PMID: 28052922; DOI: 10.1093/cercor/bhw397]
Abstract
When hearing knocking on a door, a listener typically identifies both the action (forceful and repeated impacts) and the object (a thick wooden board) causing the sound. The current work studied the neural bases of sound source identification by switching listeners' attention toward these different aspects of a set of simple sounds during functional magnetic resonance imaging scanning: participants either discriminated the action or the material that caused the sounds, or they simply discriminated meaningless scrambled versions of them. Overall, discriminating action and material elicited neural activity in a left-lateralized frontoparietal network found in other studies of sound identification, wherein the inferior frontal sulcus and the ventral premotor cortex were under the control of selective attention and sensitive to task demand. More strikingly, discriminating materials elicited increased activity in cortical regions connecting auditory inputs to semantic, motor, and even visual representations, whereas discriminating actions did not increase activity in any regions. These results indicate that discriminating and identifying material requires deeper processing of the stimuli than discriminating actions. These results are consistent with previous studies suggesting that auditory perception is better suited to comprehend the actions than the objects producing sounds in the listeners' environment.
Affiliation(s)
- Guillaume Lemaitre
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- John A Pyles
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Andrea R Halpern
- Bucknell University, Department of Psychology, Lewisburg, PA 17837, USA
- Nicole Navolio
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Matthew Lehet
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Laurie M Heller
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
16
Auditory spatial attention capture, disengagement, and response selection in normal aging. Atten Percept Psychophys 2019; 81:270-280. [PMID: 30338454 DOI: 10.3758/s13414-018-1611-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Attention control is a core element of cognitive aging, but the specific mechanisms that differ with age are unclear. Here we used a novel auditory spatial attention task to evaluate stimulus processing at the level of early attention capture, later response selection, and the lingering effects of attention capture across trials in young and older adults. We found that the shapes of spatial attention capture gradients were remarkably similar in young and older adults, but only the older group showed lingering effects of attention capture on the next trial. Response selection for stimulus-response incompatibilities took longer in older subjects, but primarily when attending to the midline location. The results suggest that the likelihood and spatial tuning of attention capture are comparable between groups, but once attention is captured, older subjects take longer to disengage. Age differences in response selection were supported, but may not be a general feature of cognitive aging.
17
Bieńkiewicz MMN, Bringoux L, Buloup F, Rodger M, Craig C, Bourdin C. The Limitations of Being a Copycat: Learning Golf Putting Through Auditory and Visual Guidance. Front Psychol 2019; 10:92. [PMID: 30800082 PMCID: PMC6376899 DOI: 10.3389/fpsyg.2019.00092] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 01/14/2019] [Indexed: 11/24/2022] Open
Abstract
The goal of this study was to investigate whether sensory cues carrying the kinematic template of expert performance (produced by mapping movement to a sound or visual cue), displayed prior to and during movement execution, can enhance motor learning of a new skill (golf putting) in a group of novices. We conducted a motor learning study on a sample of 30 participants divided into three groups: a control group, an auditory-guide group, and a visual-guide group. The learning phase comprised two sessions per week over a period of 4 weeks, for eight sessions in total. In each session participants made 20 shots to three different putting distances. All participants were also measured in separate sessions without any guidance: at baseline, at transfer (different distances), and at retention 2 weeks later. Results revealed a subtle improvement in goal attainment and a decrease in kinematic variability in the sensory groups (auditory and visual) compared to the control group. The comparable changes in performance between the visual- and auditory-guide groups, particularly during training, support the idea that temporal patterns relevant to motor control can be perceived similarly through either the visual or the auditory modality. This opens up the use of auditory displays to inform motor learning in tasks or situations where visual attention is otherwise constrained or unsuitable. Further research into the most useful template actions to display to learners may thus still support effective auditory guidance in motor learning.
Affiliation(s)
- Lionel Bringoux
- Aix-Marseille Université, CNRS, ISM, UMR 7287, Marseille, France
- Franck Buloup
- Aix-Marseille Université, CNRS, ISM, UMR 7287, Marseille, France
- Matthew Rodger
- School of Psychology, Queen's University of Belfast, Belfast, United Kingdom
- Cathy Craig
- INCISIV Ltd, Belfast, United Kingdom; School of Psychology at Ulster University, Coleraine, United Kingdom
18
Johnston SK, Hennessey NW, Leitão S. Determinants of assessing efficiency within auditory attention networks. J Gen Psychol 2019; 146:134-169. [DOI: 10.1080/00221309.2018.1541861] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
19
Not All Predictions Are Equal: "What" and "When" Predictions Modulate Activity in Auditory Cortex through Different Mechanisms. J Neurosci 2018; 38:8680-8693. [PMID: 30143578 DOI: 10.1523/jneurosci.0369-18.2018] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 07/22/2018] [Accepted: 07/26/2018] [Indexed: 11/21/2022] Open
Abstract
Using predictions based on environmental regularities is fundamental for adaptive behavior. While it is widely accepted that predictions across different stimulus attributes (e.g., time and content) facilitate sensory processing, it is unknown whether predictions across these attributes rely on the same neural mechanism. Here, to elucidate the neural mechanisms of predictions, we combine invasive electrophysiological recordings (human electrocorticography in 4 females and 2 males) with computational modeling while manipulating predictions about content ("what") and time ("when"). We found that "when" predictions increased evoked activity over motor and prefrontal regions both at early (∼180 ms) and late (430-450 ms) latencies. "What" predictability, however, increased evoked activity only over prefrontal areas late in time (420-460 ms). Beyond these dissociable influences, we found that "what" and "when" predictability interactively modulated the amplitude of early (165 ms) evoked responses in the superior temporal gyrus. We modeled the observed neural responses using biophysically realistic neural mass models, to better understand whether "what" and "when" predictions tap into similar or different neurophysiological mechanisms. Our modeling results suggest that "what" and "when" predictability rely on complementary neural processes: "what" predictions increased short-term plasticity in auditory areas, whereas "when" predictability increased synaptic gain in motor areas. Thus, content and temporal predictions engage complementary neural mechanisms in different regions, suggesting domain-specific prediction signaling along the cortical hierarchy. 
Encoding predictions through different mechanisms may endow the brain with the flexibility to efficiently signal different sources of predictions, weight them by their reliability, and allow for their encoding without mutual interference. SIGNIFICANCE STATEMENT: Predictions of different stimulus features facilitate sensory processing. However, it is unclear whether predictions of different attributes rely on similar or different neural mechanisms. By combining invasive electrophysiological recordings of cortical activity with experimental manipulations of participants' predictions about content and time of acoustic events, we found that the two types of predictions had dissociable influences on cortical activity, both in terms of the regions involved and the timing of the observed effects. Further, our biophysical modeling analysis suggests that predictability of content and time rely on complementary neural processes: short-term plasticity in auditory areas and synaptic gain in motor areas, respectively. This suggests that predictions of different features are encoded with complementary neural mechanisms in different brain regions.
20
Alain C, Khatamian Y, He Y, Lee Y, Moreno S, Leung AWS, Bialystok E. Different neural activities support auditory working memory in musicians and bilinguals. Ann N Y Acad Sci 2018; 1423:435-446. [PMID: 29771462 DOI: 10.1111/nyas.13717] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Revised: 03/13/2018] [Accepted: 03/17/2018] [Indexed: 02/28/2024]
Abstract
Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and at both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference from competing semantic activations in multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Yasha Khatamian
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Yu He
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Yunjo Lee
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Sylvain Moreno
- School of Interactive Arts and Technology, Simon Fraser University, Burnaby, British Columbia, Canada
- Digital Health Hub, Innovation Boulevard, Simon Fraser University, Burnaby, British Columbia, Canada
- Ada W S Leung
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Occupational Therapy, University of Alberta, Edmonton, Alberta, Canada
- Ellen Bialystok
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
21
Asutay E, Västfjäll D. Exposure to arousal-inducing sounds facilitates visual search. Sci Rep 2017; 7:10363. [PMID: 28871100 PMCID: PMC5583323 DOI: 10.1038/s41598-017-09975-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Accepted: 08/02/2017] [Indexed: 11/23/2022] Open
Abstract
Exposure to affective stimuli can enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of the affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli can modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current finding that affective sounds can influence visual attention provides evidence that we make use of affective information during perceptual processing.
Affiliation(s)
- Erkin Asutay
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, SE-58183, Sweden.
- Daniel Västfjäll
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, SE-58183, Sweden; Decision Research, Eugene, OR 97401, USA
22
Joyce AW. Mechanisms of automaticity and anticipatory control in fluid intelligence. Appl Neuropsychol Child 2017; 6:212-223. [PMID: 28489422 DOI: 10.1080/21622965.2017.1317486] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
The constructs of fluid (Gf) and crystallized (Gc) intelligence represent an early attempt to describe the mechanisms of problem solving in the vertebrate brain. Modern neuroscience demonstrates that problem solving involves interplay between the mechanisms of automaticity and anticipatory control, enabling nature's elegant solution to the challenges animals face in their environment. Studies of neural functioning are making clear the primary role of cortical-subcortical interactions in the manifestation of intelligent behavior in humans and other vertebrates. A tridimensional model of intelligent problem solving is explored, wherein the basal ganglia system (BGS) and cerebrocerebellar system (CCS) interact within large-scale brain networks. The BGS and CCS work together to enable automaticity. The BGS enables the organism to learn what to do through a powerful instrumental learning system. The BGS also regulates when behavior is released through an inhibitory system that is highly sensitive to context. The CCS enables the organism to learn how to perform adaptive behaviors. Internal cerebellar models enable gradual improvements in the quality of behavioral output. The BGS and CCS interact within large-scale brain networks, including the dorsal attention network (DAN), ventral attention network (VAN), default mode network (DMN), and frontoparietal network (FPN). The interactions of these systems enable vertebrate organisms to develop a vast array of complex adaptive behaviors. The benefits and importance of developing clinical tests to measure the integrity of these systems are considered.
Affiliation(s)
- Arthur W Joyce
- Private Practice, Clinical Neuropsychology, Irving, Texas, USA
23
Salo E, Salmela V, Salmi J, Numminen J, Alho K. Brain activity associated with selective attention, divided attention and distraction. Brain Res 2017; 1664:25-36. [PMID: 28363436 DOI: 10.1016/j.brainres.2017.03.021] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2016] [Revised: 02/21/2017] [Accepted: 03/22/2017] [Indexed: 11/16/2022]
Abstract
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention on the tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole-brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlap of these networks. However, in line with some previous studies, the present results also suggest segregation of the prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention, supporting the role of this area in top-down integration of dual-task performance. As expected, distractors disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing, perhaps due to strictly focused attention in the demanding discrimination tasks.
Affiliation(s)
- Emma Salo
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland.
- Viljami Salmela
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland
- Juha Salmi
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland; Faculty of Arts, Psychology and Theology, Åbo Akademi University, Turku, Finland
- Jussi Numminen
- Helsinki Medical Imaging Centre, Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland
24
Ciria LF, Muñoz MA, Gea J, Peña N, Miranda JGV, Montoya P, Vila J. Head movement measurement: An alternative method for posturography studies. Gait Posture 2017; 52:100-106. [PMID: 27888694 DOI: 10.1016/j.gaitpost.2016.11.020] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/05/2015] [Revised: 07/26/2016] [Accepted: 11/09/2016] [Indexed: 02/02/2023]
Abstract
The present study evaluated the measurement of head movements as a valid method for postural studies of emotion, using simultaneous recordings of center-of-pressure (COP) sway as the criterion measure. Thirty female students viewed a set of 12 pleasant, 12 unpleasant, and 12 neutral pictures from the International Affective Picture System, repeated twice, in a block presentation procedure while standing on a force platform (AMTI AccuSway). Head movements were recorded using a webcam (©KPC139E) mounted on the ceiling in line with the force platform and a light-emitting diode (LED) placed on top of the head. Open source software (CvMob 3.1) was used to process the data. High indices of correlation and coherence between head and COP sway were observed. In addition, pleasant pictures, compared with unpleasant pictures, elicited greater body sway in the anterior-posterior axis, suggesting an approach response to appetitive stimuli. Thus, the measurement of head movement can be an alternative or complementary method to recording COP for studying human postural changes.
Affiliation(s)
- L F Ciria
- Human Psychophysiology and Health Group, Mind, Brain and Behavior Research Center-CIMCYC, University of Granada, Spain
- M A Muñoz
- Human Psychophysiology and Health Group, Mind, Brain and Behavior Research Center-CIMCYC, University of Granada, Spain
- J Gea
- Research Institute of Health Sciences (IUNICS), University of Balearic Islands (UIB), Palma, Spain
- N Peña
- Department of Physiotherapy, Faculdade de Ciências da Saúde, Federal University of Bahia, Salvador, Brazil
- J G V Miranda
- Department of Physics of the Earth and the Environment, Instituto de Fisica, Federal University of Bahia, Salvador, Brazil
- P Montoya
- Research Institute of Health Sciences (IUNICS), University of Balearic Islands (UIB), Palma, Spain
- J Vila
- Human Psychophysiology and Health Group, Mind, Brain and Behavior Research Center-CIMCYC, University of Granada, Spain
25
Asutay E, Västfjäll D. Auditory attentional selection is biased by reward cues. Sci Rep 2016; 6:36989. [PMID: 27841363 PMCID: PMC5107919 DOI: 10.1038/srep36989] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2016] [Accepted: 10/24/2016] [Indexed: 11/10/2022] Open
Abstract
Auditory attention theories suggest that humans are able to decompose the complex acoustic input into separate auditory streams, which then compete for attentional resources. How this attentional competition is influenced by the motivational salience of sounds is, however, not well understood. Here, we investigated whether a positive motivational value associated with sounds could bias attentional selection in an auditory detection task. Participants went through a reward-learning period in which correct attentional selection of one stimulus (CS+) led to higher rewards than selection of another stimulus (CS-). We assessed the impact of reward learning by comparing perceptual sensitivity before and after the learning period, when CS+ and CS- were presented as distractors for a different target. Performance decreased after reward learning when CS+ was a distractor, whereas it increased when CS- was a distractor. Thus, the findings show that sounds associated with high rewards capture attention involuntarily. Additionally, when successful inhibition of a particular sound (CS-) was associated with high rewards, it became easier to ignore that sound. The current findings have important implications for understanding the organizing principles of auditory perception and provide, for the first time, clear behavioral evidence for reward-dependent attentional learning in the human auditory domain.
Affiliation(s)
- Erkin Asutay
- Behavioral Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden; Civil and Environmental Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
- Daniel Västfjäll
- Behavioral Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden; Decision Research, 1201 Oak Street, Suite 200, Eugene, OR, USA
26
Salgado-Montejo A, Marmolejo-Ramos F, Alvarado JA, Arboleda JC, Suarez DR, Spence C. Drawing sounds: representing tones and chords spatially. Exp Brain Res 2016; 234:3509-3522. [PMID: 27501731 DOI: 10.1007/s00221-016-4747-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Accepted: 07/29/2016] [Indexed: 11/28/2022]
Affiliation(s)
- Alejandro Salgado-Montejo
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK.
- Universidad de La Sabana, Chía, Colombia.
- Jorge A Alvarado
- Department of Industrial Engineering, Pontificia Universidad Javeriana, Bogotá, Colombia
- Daniel R Suarez
- Department of Industrial Engineering, Pontificia Universidad Javeriana, Bogotá, Colombia
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
27
Stewart HJ, Amitay S. Modality-specificity of Selective Attention Networks. Front Psychol 2015; 6:1826. [PMID: 26635709 PMCID: PMC4658445 DOI: 10.3389/fpsyg.2015.01826] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2015] [Accepted: 11/11/2015] [Indexed: 11/18/2022] Open
Abstract
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or a sound's location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selective attention to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
Affiliation(s)
- Hannah J Stewart
- Medical Research Council Institute of Hearing Research, Nottingham, UK
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, Nottingham, UK
28
Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. [PMID: 26388552 DOI: 10.1016/j.neuroimage.2015.09.026] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2015] [Revised: 09/09/2015] [Accepted: 09/10/2015] [Indexed: 11/16/2022] Open
Abstract
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
29
Lee J, Spence C. Audiovisual crossmodal cuing effects in front and rear space. Front Psychol 2015; 6:1086. [PMID: 26284010 PMCID: PMC4519676 DOI: 10.3389/fpsyg.2015.01086] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Accepted: 07/14/2015] [Indexed: 11/27/2022] Open
Abstract
The participants in the present study had to make speeded elevation discrimination responses to visual targets presented to the left or right of central fixation following the presentation of a task-irrelevant auditory cue on either the same or opposite side. In Experiment 1, the cues were presented from in front of the participants (from the same azimuthal positions as the visual targets). A standard crossmodal exogenous spatial cuing effect was observed, with participants responding significantly faster in the elevation discrimination task to visual targets when both the auditory cues and the visual targets were presented on the same side. Experiment 2 replicated the exogenous spatial cuing effect for frontal visual targets following both front and rear auditory cues. The results of Experiment 3 demonstrated that the participants had little difficulty in correctly discriminating the location from which the sounds were presented. Thus, taken together, the results of the three experiments reported here demonstrate that the exact co-location of auditory cues and visual targets is not necessary to attract spatial attention. Implications of these results for the design of real-world warning signals are discussed.
Affiliation(s)
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
30
Koziol LF, Barker LA, Joyce AW, Hrin S. Structure and function of large-scale brain systems. Appl Neuropsychol Child 2015; 3:236-44. [PMID: 25268685 DOI: 10.1080/21622965.2014.946797] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
This article introduces the functional neuroanatomy of large-scale brain systems. Both the structure and functions of these brain networks are presented. All human behavior is the result of interactions within and between these brain systems. This system of brain function completely changes our understanding of how cognition and behavior are organized within the brain, replacing the traditional lesion model. Understanding behavior within the context of brain network interactions has profound implications for modifying abstract constructs such as attention, learning, and memory. These constructs also must be understood within the framework of a paradigm shift, which emphasizes ongoing interactions within a dynamically changing environment.
31
Badcock JC. A Neuropsychological Approach to Auditory Verbal Hallucinations and Thought Insertion - Grounded in Normal Voice Perception. Rev Philos Psychol 2015; 7:631-652. [PMID: 27617046 PMCID: PMC4995233 DOI: 10.1007/s13164-015-0270-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
A neuropsychological perspective on auditory verbal hallucinations (AVH) links key phenomenological features of the experience, such as voice location and identity, to functionally separable pathways in normal human audition. Although this auditory processing stream (APS) framework has proven valuable for integrating research on phenomenology with cognitive and neural accounts of hallucinatory experiences, it has not yet been applied to other symptoms presumed to be closely related to AVH – such as thought insertion (TI). In this paper, I propose that an APS framework offers a useful way of thinking about the experience of TI as well as AVH, providing a common conceptual framework for both. I argue that previous self-monitoring theories struggle to account for both the differences and similarities in the characteristic features of AVH and TI, which can be readily accommodated within an APS framework. Furthermore, the APS framework can be integrated with predictive processing accounts of psychotic symptoms; makes predictions about potential sites of prediction error signals; and may offer a template for understanding a range of other symptoms beyond AVH and TI.
Affiliation(s)
- Johanna C Badcock
- Centre for Clinical Research in Neuropsychiatry, School of Psychiatry and Clinical Neurosciences, University of Western Australia, Crawley, 6009 Western Australia
32
Martin K, Johnstone P, Hedrick M. Auditory and visual localization accuracy in young children and adults. Int J Pediatr Otorhinolaryngol 2015; 79:844-851. [PMID: 25841637 DOI: 10.1016/j.ijporl.2015.03.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/15/2015] [Revised: 03/16/2015] [Accepted: 03/17/2015] [Indexed: 11/30/2022]
Abstract
OBJECTIVE This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. METHODS Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. RESULTS Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
CONCLUSIONS Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.
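The accuracy measure at the heart of the study above can be illustrated with a short sketch. This is hypothetical code, not taken from the paper: the `mean_abs_error` helper and every number in it are invented for illustration, assuming targets and responses are loudspeaker azimuths in degrees on the reported array (15 speakers at 10° steps from -70° to +70°).

```python
# Hypothetical illustration (not the study's data or code): mean absolute
# localization error, in degrees azimuth, from (target, response) trial pairs.

def mean_abs_error(targets, responses):
    """Mean absolute localization error in degrees."""
    assert len(targets) == len(responses)
    return sum(abs(t - r) for t, r in zip(targets, responses)) / len(targets)

# Made-up trials: target azimuth vs. the speaker location the listener chose.
adult_trials = [(-70, -70), (-30, -40), (0, 0), (30, 30), (70, 60)]
child_trials = [(-70, -50), (-30, -10), (0, 10), (30, 50), (70, 40)]

adult_err = mean_abs_error(*zip(*adult_trials))
child_err = mean_abs_error(*zip(*child_trials))
print(adult_err, child_err)  # in this toy example, children err more than adults
```

A larger child error than adult error in such a summary would mirror the age effect the abstract reports.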
Affiliation(s)
- Karen Martin
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Patti Johnstone
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Mark Hedrick
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States.
33
Asutay E, Västfjäll D. Negative emotion provides cues for orienting auditory spatial attention. Front Psychol 2015; 6:618. [PMID: 26029149 PMCID: PMC4428076 DOI: 10.3389/fpsyg.2015.00618] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2014] [Accepted: 04/26/2015] [Indexed: 11/20/2022] Open
Abstract
Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for the allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for the orienting of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were presented simultaneously at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants' task was to detect the location of an acoustic target presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue than when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same-valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times, owing to a lack of spatial attention capture by either cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orienting of spatial attention in the auditory domain.
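Dot-probe effects like the one described above are typically summarized as a bias score: the reaction-time cost when the target replaces the neutral rather than the emotional cue. The sketch below is a hypothetical illustration, not the authors' analysis code; `bias_index` and the millisecond values are invented.

```python
# Hypothetical sketch (not the study's analysis): an attentional bias index
# from dot-probe reaction times. A positive value means targets replacing the
# negative cue were located faster than targets replacing the neutral cue.
from statistics import mean

def bias_index(rt_at_emotional, rt_at_neutral):
    """Attentional bias in ms: mean neutral-side RT minus mean emotional-side RT."""
    return mean(rt_at_neutral) - mean(rt_at_emotional)

# Made-up RTs (ms) for trials with negative vs. neutral cue pairs.
rt_target_at_negative = [402, 395, 410, 398]
rt_target_at_neutral = [431, 425, 419, 428]

print(bias_index(rt_target_at_negative, rt_target_at_neutral))
```

A positive index in such data would correspond to the attention capture by negative cues that the abstract reports; an index near zero would match the null result for same-valence pairs.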
Affiliation(s)
- Erkin Asutay
- Division of Applied Acoustics, Department of Civil and Environmental Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Daniel Västfjäll
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Decision Research, Eugene, OR, USA
34

35
Abstract
In spatial perception, visual information has higher acuity than auditory information and we often misperceive sound-source locations when spatially disparate visual stimuli are presented simultaneously. Ventriloquists make good use of this auditory illusion. In this study, we investigated neural substrates of the ventriloquism effect to understand the neural mechanism of multimodal integration. This study was performed in 2 steps. First, we investigated how sound locations were represented in the auditory cortex. Secondly, we investigated how simultaneous presentation of spatially disparate visual stimuli affects neural processing of sound locations. Based on the population rate code hypothesis that assumes monotonic sensitivity to sound azimuth across populations of broadly tuned neurons, we expected a monotonic increase of blood oxygenation level-dependent (BOLD) signals for more contralateral sounds. Consistent with this hypothesis, we found that BOLD signals in the posterior superior temporal gyrus increased monotonically as a function of sound azimuth. We also observed attenuation of the monotonic azimuthal sensitivity by spatially disparate visual stimuli. The alteration of the neural pattern was considered to reflect the neural mechanism of the ventriloquism effect. Our findings indicate that conflicting audiovisual spatial information of an event is associated with an attenuation of neural processing of auditory spatial localization.
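The monotonic azimuthal sensitivity described above, and its attenuation by disparate visual input, can be pictured as a difference in slopes. The sketch below is purely illustrative: the `slope` helper is an ordinary least-squares fit, and the azimuths and signal-change values are invented, not the study's data.

```python
# Hypothetical sketch (not the study's analysis): is an ROI's response
# monotonically related to sound azimuth, and is that sensitivity attenuated
# when spatially disparate visual stimuli are added? Compare fitted slopes.

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

azimuths = [-60, -30, 0, 30, 60]                # degrees; positive = contralateral
bold_audio_only = [0.1, 0.2, 0.3, 0.4, 0.5]     # % signal change, sound alone (made up)
bold_av_conflict = [0.2, 0.25, 0.3, 0.35, 0.4]  # with disparate visual stimuli (made up)

print(slope(azimuths, bold_audio_only), slope(azimuths, bold_av_conflict))
```

A shallower but still positive slope in the audiovisual-conflict condition, as in this toy data, would correspond to the attenuated monotonic sensitivity the abstract describes.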
Affiliation(s)
- Akiko Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Daniel Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Hiroshi Ando
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
36
Using an auditory sensory substitution device to augment vision: evidence from eye movements. Exp Brain Res 2014; 233:851-60. [DOI: 10.1007/s00221-014-4160-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2014] [Accepted: 11/24/2014] [Indexed: 10/24/2022]
37
Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses. Atten Percept Psychophys 2014; 77:293-310. [PMID: 25214304 DOI: 10.3758/s13414-014-0760-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.
38
Recasens M, Grimm S, Wollbrink A, Pantev C, Escera C. Encoding of nested levels of acoustic regularity in hierarchically organized areas of the human auditory cortex. Hum Brain Mapp 2014; 35:5701-16. [PMID: 24996147 DOI: 10.1002/hbm.22582] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2013] [Revised: 04/29/2014] [Accepted: 06/28/2014] [Indexed: 11/10/2022] Open
Abstract
Our auditory system is able to encode acoustic regularity of growing levels of complexity to model and predict incoming events. Recent evidence suggests that early indices of deviance detection in the time range of the middle-latency responses (MLR) precede the mismatch negativity (MMN), a well-established error response associated with deviance detection. While studies suggest that only the MMN, but not early deviance-related MLR, underlie complex regularity levels, it is not clear whether these two mechanisms interplay during scene analysis by encoding nested levels of acoustic regularity, and whether neuronal sources underlying local and global deviations are hierarchically organized. We registered magnetoencephalographic evoked fields to rapidly presented four-tone local sequences containing a frequency change. Temporally integrated local events, in turn, defined global regularities, which were infrequently violated by a tone repetition. A global magnetic mismatch negativity (MMNm) was obtained at 140-220 ms when breaking the global regularity, but no deviance-related effects were shown in early latencies. Conversely, Nbm (45-55 ms) and Pbm (60-75 ms) deflections of the MLR, and an earlier MMNm response at 120-160 ms, responded to local violations. Distinct neuronal generators in the auditory cortex underlay the processing of local and global regularity violations, suggesting that nested levels of complexity of auditory object representations are represented in separated cortical areas. Our results suggest that the different processing stages and anatomical areas involved in the encoding of auditory representations, and the subsequent detection of its violations, are hierarchically organized in the human auditory cortex.
Affiliation(s)
- Marc Recasens
- Institute for Brain, Cognition and Behavior (IR3C), University of Barcelona, 08035, Catalonia, Spain; Cognitive Neuroscience Research Group, Department of Psychiatry and Clinical Psychobiology, University of Barcelona, 08035, Catalonia, Spain
39
Koziol LF, Joyce AW, Wurglitz G. The Neuropsychology of Attention: Revisiting the “Mirsky Model”. Appl Neuropsychol Child 2014; 3:297-307. [DOI: 10.1080/21622965.2013.870016] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
40
Sugimori E, Mitchell KJ, Raye CL, Greene EJ, Johnson MK. Brain mechanisms underlying reality monitoring for heard and imagined words. Psychol Sci 2014; 25:403-13. [PMID: 24443396 PMCID: PMC6069600 DOI: 10.1177/0956797613505776] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Using functional MRI, we investigated reality monitoring for auditory information. During scanning, healthy young adults heard words in another person's voice and imagined hearing other words in that same voice. Later, outside the scanner, participants judged words as "heard," "imagined," or "new." An area of left middle frontal gyrus (Brodmann's area, or BA, 6) was more active at encoding for imagined items subsequently correctly called "imagined" than for items incorrectly called "heard." An area of left inferior frontal gyrus (BA 45, 44) was more active at encoding for items subsequently called "heard" than "imagined," regardless of the actual source of the item. Scores on an Auditory Hallucination Experience Scale were positively related to activity in superior temporal gyrus (BA 22) for imagined words incorrectly called "heard." We suggest that activity in these areas reflects cognitive operations information (middle frontal gyrus) and semantic and perceptual detail (inferior frontal gyrus and superior temporal gyrus, respectively) used to make reality-monitoring attributions.
41
Attention to memory: orienting attention to sound object representations. Psychol Res 2013; 78:439-52. [PMID: 24352689 DOI: 10.1007/s00426-013-0531-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2013] [Accepted: 11/29/2013] [Indexed: 01/08/2023]
Abstract
Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.
42
Tao Q, Chan CCH, Luo YJ, Li JJ, Ting KH, Wang J, Lee TMC. How does experience modulate auditory spatial processing in individuals with blindness? Brain Topogr 2013; 28:506-19. [PMID: 24322827 PMCID: PMC4408360 DOI: 10.1007/s10548-013-0339-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2013] [Accepted: 11/21/2013] [Indexed: 11/24/2022]
Abstract
Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel “Bat-ears” sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.
Affiliation(s)
- Qian Tao
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
43
Du Y, He Y, Arnott SR, Ross B, Wu X, Li L, Alain C. Rapid tuning of auditory "what" and "where" pathways by training. Cereb Cortex 2013; 25:496-506. [PMID: 24042339 DOI: 10.1093/cercor/bht251] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.
Affiliation(s)
- Yi Du
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Stephen R Arnott
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Liang Li
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Ontario, Canada M8V 2S4
44
Bailey T. Beyond DSM: the role of auditory processing in attention and its disorders. Appl Neuropsychol Child 2013; 1:112-20. [PMID: 23428298 DOI: 10.1080/21622965.2012.703890] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
This article reviews and synthesizes recent research regarding auditory processing, attention, and their roles in generating both adaptive and maladaptive behavioral responses. Research in these areas is beginning to converge on the role of polymorphisms associated with catecholamine metabolism and transport, particularly the neurotransmitter dopamine. The synthesis offered in this article appears to be the first to argue that genetic differences in dopamine metabolism may be the common factor in four disparate disorders that are often observed to be comorbid, i.e., attention-deficit hyperactivity disorder, auditory processing disorders, developmental language disorders, and reading disorders.
Affiliation(s)
- Teresa Bailey
- Department of Research, Athena Academy, Palo Alto, CA, USA.
45
Murchison NM, Proctor RW. Spatial Compatibility Effects With Unimanual and Bimanual Wheel-Rotation Responses: An Homage to Guiard (1983). J Mot Behav 2013; 45:441-54. [DOI: 10.1080/00222895.2013.823906] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
46
Huang S, Seidman LJ, Rossi S, Ahveninen J. Distinct cortical networks activated by auditory attention and working memory load. Neuroimage 2013; 83:1098-108. [PMID: 23921102 DOI: 10.1016/j.neuroimage.2013.07.074] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2013] [Revised: 07/25/2013] [Accepted: 07/28/2013] [Indexed: 02/03/2023] Open
Abstract
Auditory attention and working memory (WM) allow for selection and maintenance of relevant sound information in our minds, respectively, thus underlying goal-directed functioning in everyday acoustic environments. It is still unclear whether these two closely coupled functions are based on a common neural circuit, or whether they involve genuinely distinct subfunctions with separate neuronal substrates. In a full factorial functional MRI (fMRI) design, we independently manipulated the levels of auditory-verbal WM load and attentional interference using modified Auditory Continuous Performance Tests. Although many frontoparietal regions were jointly activated by increases of WM load and interference, there was a double dissociation between prefrontal cortex (PFC) subareas associated selectively with either auditory attention or WM. Specifically, anterior dorsolateral PFC (DLPFC) and the right anterior insula were selectively activated by increasing WM load, whereas subregions of middle lateral PFC and inferior frontal cortex (IFC) were associated with interference only. Meanwhile, a superadditive interaction between interference and load was detected in left medial superior frontal cortex, suggesting that in this area, activations are not only overlapping, but reflect a common resource pool recruited by increased attentional and WM demands. Indices of WM-specific suppression of anterolateral non-primary auditory cortices (AC) and attention-specific suppression of primary AC were also found, possibly reflecting suppression/interruption of sound-object processing of irrelevant stimuli during continuous task performance. Our results suggest a double dissociation between auditory attention and working memory in subregions of anterior DLPFC vs. middle lateral PFC/IFC in humans, respectively, in the context of substantially overlapping circuits.
Affiliation(s)
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA.
47
Zündorf IC, Lewald J, Karnath HO. Neural correlates of sound localization in complex acoustic environments. PLoS One 2013; 8:e64259. [PMID: 23691185 PMCID: PMC3653868 DOI: 10.1371/journal.pone.0064259] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2013] [Accepted: 04/09/2013] [Indexed: 12/05/2022] Open
Abstract
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality.
Affiliation(s)
- Ida C. Zündorf
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
48
Shape-specific activation of occipital cortex in an early blind echolocation expert. Neuropsychologia 2013; 51:938-49. [DOI: 10.1016/j.neuropsychologia.2013.01.024]
49
Salo E, Rinne T, Salonen O, Alho K. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks. Brain Res 2013; 1496:55-69. [DOI: 10.1016/j.brainres.2012.12.013]
50
Spatial localization of auditory stimuli in human auditory cortex is based on both head-independent and head-centered coordinate systems. J Neurosci 2012; 32:13501-9. [PMID: 23015439 DOI: 10.1523/jneurosci.1315-12.2012]
Abstract
In humans, whose ears are fixed on the head, auditory stimuli are initially registered in space relative to the head. Eventually, the locations of sound sources must also be encoded relative to the body, or in absolute allocentric space, to allow orientation toward the sound sources and consequent action. We can therefore distinguish between two spatial representation systems: a basic head-centered coordinate system and a more complex head-independent system. In an ERP experiment, we attempted to reveal which of these two coordinate systems is represented in the human auditory cortex. We dissociated the two systems using the mismatch negativity (MMN), a well-studied EEG effect evoked by acoustic deviations. Contrary to previous findings suggesting that only primary head-related information is present at this early stage of processing, we observed significant MMN effects for both head-independent and head-centered deviant stimuli. Our findings thus reveal that both primary head-related and secondary body- or world-related reference frames are represented at this stage of auditory processing.