1
Zhang Q, Luo C, Ngetich R, Zhang J, Jin Z, Li L. Visual Selective Attention P300 Source in Frontal-Parietal Lobe: ERP and fMRI Study. Brain Topogr 2022; 35:636-650. [PMID: 36178537] [DOI: 10.1007/s10548-022-00916-x] [Received: 12/31/2021] [Accepted: 09/03/2022]
Abstract
Visual selective attention comprises bottom-up and top-down attention, and different selective attention tasks engage different modes of attentional control. The pop-out task relies more on bottom-up attention, whereas the search task relies more on top-down attention. P300, a positive potential generated by the brain 300–600 ms after stimulus onset, reflects attentional processing, but there is no consensus on its source. The aim of the present study was to localize the source of the P300 elicited by different forms of visual selective attention. We recorded thirteen participants' P300 during pop-out and search tasks using event-related potentials (ERP), and measured brain activation in twenty-six participants performing the same tasks using functional magnetic resonance imaging (fMRI). We then localized the P300 sources by integrating the ERP and fMRI data, combining high temporal with high spatial resolution. The ERP results indicated that the pop-out task elicited a larger P300 than the search task. The P300 elicited by both tasks was distributed over the frontal and parietal lobes, with the pop-out P300 concentrated over the parietal lobe and the search P300 over the frontal lobe. Further ERP-fMRI integration analysis identified the neural sources of the P300 difference as the right precentral gyrus, left superior frontal gyrus (medial orbital), left middle temporal gyrus, left rolandic operculum, right postcentral gyrus, and left angular gyrus. Our findings suggest that the frontal and parietal lobes both contribute to the P300 component of visual selective attention.
Affiliation(s)
- Qiuzhu Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Cimei Luo
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ronald Ngetich
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Junjun Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhenlan Jin
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ling Li
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
2
Raghavendra S, Lee S, Chun H, Martin BA, Tan CT. Cortical entrainment to speech produced by cochlear implant talkers and normal-hearing talkers. Front Neurosci 2022; 16:927872. [PMID: 36017176] [PMCID: PMC9396306] [DOI: 10.3389/fnins.2022.927872] [Received: 04/25/2022] [Accepted: 07/01/2022] [Open Access]
Abstract
Cochlear implants (CIs) are commonly used to restore hearing in people with severe or profound hearing loss, providing the auditory feedback needed to monitor and control speech production. However, speech produced by CI users may not reach the perceived sound quality of speech produced by normal-hearing talkers, and the difference is easily noticeable in daily conversation. In this study, we examined this difference as perceived by normal-hearing listeners when listening to continuous speech produced by CI talkers and by normal-hearing talkers. We used a regenerative model to decode and reconstruct the speech envelope from single-trial electroencephalogram (EEG) recordings made on the scalp of the normal-hearing listeners. The bootstrap Spearman correlation between the actual speech envelope and the envelope reconstructed from the EEG served as a metric to quantify the difference in response to speech produced by the two talker groups. The same listeners also rated the perceived sound quality of the speech produced by each talker group as a behavioral sound quality assessment. The results show that both the perceived sound quality ratings and the computed metric, which can be interpreted as the degree of cortical entrainment to the actual speech envelope across the normal-hearing listeners, were higher for speech produced by normal-hearing talkers than for speech produced by CI talkers. The first purpose of the study was to determine how well the speech envelope is represented neurophysiologically, via its similarity to the envelope reconstructed from EEG; the second was to show how well this representation differentiates the two talker groups in terms of perceived sound quality.
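The abstract does not give implementation details for the entrainment metric; a minimal sketch of a bootstrapped Spearman correlation between a stimulus envelope and an EEG-reconstructed envelope might look like the following (the function name, bootstrap count, and confidence-interval choice are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(actual_env, reconstructed_env, n_boot=1000, seed=0):
    """Bootstrap the Spearman correlation between the actual speech
    envelope and the envelope reconstructed from EEG, resampling
    time points (as pairs) with replacement."""
    rng = np.random.default_rng(seed)
    n = len(actual_env)
    rhos = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample paired samples
        rho, _ = spearmanr(actual_env[idx], reconstructed_env[idx])
        rhos[i] = rho
    # point estimate plus a 95% percentile confidence interval
    return rhos.mean(), np.percentile(rhos, [2.5, 97.5])
```

Resampling indices jointly keeps each actual/reconstructed pair intact, so the bootstrap distribution reflects sampling variability of the correlation rather than breaking the pairing.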
Affiliation(s)
- Shruthi Raghavendra
- Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX, United States
- *Correspondence: Shruthi Raghavendra,
- Sungmin Lee
- Department of Speech-Language Pathology and Audiology, Tongmyong University, Busan, South Korea
- Hyungi Chun
- Graduate Center, City University of New York, New York City, NY, United States
- Brett A. Martin
- Graduate Center, City University of New York, New York City, NY, United States
- Chin-Tuan Tan
- Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX, United States
3
Uhrig S, Perkis A, Möller S, Svensson UP, Behne DM. Effects of Spatial Speech Presentation on Listener Response Strategy for Talker-Identification. Front Neurosci 2022; 15:730744. [PMID: 35153653] [PMCID: PMC8831717] [DOI: 10.3389/fnins.2021.730744] [Received: 06/25/2021] [Accepted: 12/13/2021] [Open Access]
Abstract
This study investigates the effects of spatial auditory cues on human listeners' response strategy when identifying two alternately active talkers (a "turn-taking" listening scenario). Previous research has demonstrated subjective benefits of audio spatialization for speech intelligibility and talker-identification effort, but how listeners deliberately engage specific perceptual and cognitive processes to optimize task performance has remained largely unexamined. The spoken sentences selected as stimuli were either clean or degraded by background noise or bandpass filtering. Stimuli were presented via three horizontally positioned loudspeakers: in a non-spatial mode, both talkers were presented through a central loudspeaker; in a spatial mode, each talker was presented through the central or a talker-specific lateral loudspeaker. Participants identified talkers via speeded keypresses and afterwards provided subjective ratings (speech quality, speech intelligibility, voice similarity, talker-identification effort). In the spatial mode, presentations at lateral loudspeaker locations elicited quicker behavioral responses, which were nonetheless significantly slower than in a comparable talker-localization task. Under clean speech, response times increased globally in the spatial vs. non-spatial mode (across all locations); these "response time switch costs," presumably caused by repeated switching of spatial auditory attention between different locations, diminished under degraded speech. Spatialization had no significant effect on the subjective ratings. The results suggest that even when listeners could utilize task-relevant auditory cues about talker location, they continued to rely on voice recognition rather than localization of talker sound sources as their primary response strategy. In addition, the presence of speech degradations may have led to increased cognitive control, which in turn compensated for the incurred response time switch costs.
Affiliation(s)
- Stefan Uhrig
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany
- *Correspondence: Stefan Uhrig
- Andrew Perkis
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Sebastian Möller
- Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany
- Speech and Language Technology, German Research Center for Artificial Intelligence, Berlin, Germany
- U. Peter Svensson
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M. Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
4
Hu F, Wang H, Wang Q, Feng N, Chen J, Zhang T. Acrophobia Quantified by EEG Based on CNN Incorporating Granger Causality. Int J Neural Syst 2020; 31:2050069. [PMID: 33357152] [DOI: 10.1142/s0129065720500690]
Abstract
The aim of this study is to quantify acrophobia and provide safety advice for high-altitude workers. Because acrophobia is a fuzzy quantity that conventional detection methods cannot evaluate accurately, we propose a comprehensive solution for quantifying it. Specifically, the study simulates a virtual reality environment, the High-altitude Plank Walking Challenge, which provides a safe and controlled experimental setting for subjects. In addition, we propose the Granger Causality Convolutional Neural Network (GCCNN), which combines a convolutional neural network with a Granger-causality functional brain network, to analyze the subjects' noninvasive scalp EEG signals. The GCCNN is used to distinguish subjects with severe acrophobia, moderate acrophobia, and no acrophobia in a three-class classification task, or acrophobia versus no acrophobia in a two-class classification task. Compared with mainstream methods, GCCNN achieves better classification performance, with an accuracy of 98.74% for the two-class task (no acrophobia versus acrophobia) and 98.47% for the three-class task (no acrophobia versus moderate acrophobia versus severe acrophobia). Our proposed GCCNN method therefore provides more accurate quantitative results than the comparative methods, making it more competitive for practical applications.
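The abstract does not specify how the Granger-causality brain network fed to the CNN is computed; a common construction is a pairwise F-statistic matrix over EEG channels, which could be sketched as follows (function names, lag order, and the plain OLS formulation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y': compare OLS prediction of y
    from its own lags (restricted) vs. its own lags plus lags of x (full)."""
    n = len(y)
    target = y[lag:]
    # lagged predictors: column k holds the series shifted by k+1 samples
    own = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    full = np.column_stack(
        [own] + [x[lag - k - 1 : n - k - 1] for k in range(lag)]
    )

    def rss(design):
        X = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid

    rss_r, rss_f = rss(own), rss(full)
    df1, df2 = lag, len(target) - 2 * lag - 1
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

def granger_matrix(data, lag=2):
    """Pairwise Granger-causality matrix for channels x samples data;
    entry [i, j] tests channel i -> channel j (diagonal left at zero)."""
    n_ch = data.shape[0]
    G = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            if i != j:
                G[i, j] = granger_f(data[i], data[j], lag)
    return G
```

A matrix like `G` (one per EEG trial or window) is the kind of channels-by-channels connectivity image a convolutional network can then classify.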
Affiliation(s)
- Fo Hu
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China
- Hong Wang
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China
- Qiaoxiu Wang
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China
- Naishi Feng
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China
- Jichi Chen
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China
- Tao Zhang
- Department of Mechanical Engineering and Automation, Northeastern University, Heping District, Shenyang, Liaoning 110819, P. R. China