1. Hansen TA, O’Leary RM, Svirsky MA, Wingfield A. Self-pacing ameliorates recall deficit when listening to vocoded discourse: a cochlear implant simulation. Front Psychol 2023; 14:1225752. PMID: 38054180; PMCID: PMC10694252; DOI: 10.3389/fpsyg.2023.1225752.
Abstract
Introduction: In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of such an operation can increase when the speech is degraded, as is the case with cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and pace of presentation.
Methods: An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard with clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages were varied in inter-word predictability and presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered.
Results: Discourse heard with clear speech was better recalled than discourse heard with vocoded speech, discourse with a higher average inter-word predictability was better recalled than discourse with a lower average inter-word predictability, and self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to show better recall for a passage's main ideas than for its mid-level information or details, taken as an index of listeners' ability to understand the meaning of a passage. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability had an attenuated semantic hierarchy effect relative to higher-predictability passages.
Discussion: Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
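For readers unfamiliar with the manipulation, noise-band vocoding divides speech into a small number of frequency bands, extracts each band's slow amplitude envelope, and uses that envelope to modulate band-limited noise, discarding the spectral fine structure that a cochlear implant cannot convey. The Python sketch below shows the general technique only; the channel count, band edges, and filter settings are illustrative assumptions, not the stimulus parameters used in this study.

```python
# Minimal noise-band vocoder sketch (illustrative; not the authors' exact
# stimulus pipeline). `x` is a mono float signal at sampling rate `fs`,
# with fs high enough that the top band edge stays below fs/2.
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=7000.0, env_cutoff=160.0):
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(x))
    b_env, a_env = butter(2, env_cutoff / (fs / 2), btype="low")
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)                       # speech in this band
        env = np.clip(filtfilt(b_env, a_env, np.abs(band)), 0.0, None)
        out += env * filtfilt(b, a, noise)             # envelope-modulated noise
    # Match the overall level of the input.
    return out * np.sqrt(np.mean(x ** 2) / (np.mean(out ** 2) + 1e-12))
```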
Affiliation(s)
- Thomas A. Hansen, Department of Psychology, Brandeis University, Waltham, MA, United States
- Ryan M. O’Leary, Department of Psychology, Brandeis University, Waltham, MA, United States
- Mario A. Svirsky, Department of Otolaryngology, NYU Langone Medical Center, New York, NY, United States
- Arthur Wingfield, Department of Psychology, Brandeis University, Waltham, MA, United States
2. Köksal Ersöz E, Aguilar C, Chossat P, Krupa M, Lavigne F. Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability. PLoS One 2020; 15:e0231165. PMID: 32298290; PMCID: PMC7161983; DOI: 10.1371/journal.pone.0231165.
Abstract
In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression, and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular, we show that synaptic depression and noise drive the transitions from one memory item to the next, and that neuronal gain controls the switching between regular and irregular (random) activation.
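The mechanism summarized here — synaptic depression and noise driving transitions from one memory item to the next, with neuronal gain shaping the dynamics — can be sketched as a small firing-rate network. Everything below (network size, parameter values, the form of the gain function) is an illustrative assumption, not the published model.

```python
# Toy rate network: self-excitation with depressing synapses, global
# inhibition, and noise. Depression ends each item's dominance; noise
# selects the next item; `gain` sets the steepness of the rate function.
import numpy as np

def simulate(T=5000, N=5, dt=1.0, gain=0.15, bias=0.4, w_self=3.0,
             w_inh=1.0, tau_r=10.0, tau_x=300.0, U=0.02, sigma=0.1, seed=1):
    rng = np.random.default_rng(seed)
    r = np.zeros(N)                  # firing rates, one per memory item
    x = np.ones(N)                   # synaptic resources (depression variable)
    history = np.zeros((T, N))
    for t in range(T):
        drive = (bias + w_self * x * r - w_inh * r.sum()
                 + sigma * rng.standard_normal(N))
        f = 1.0 / (1.0 + np.exp(-(drive - 0.5) / gain))  # neuronal gain enters here
        r += dt / tau_r * (-r + f)                       # rate dynamics
        x += dt * ((1.0 - x) / tau_x - U * x * r)        # resource use and recovery
        history[t] = r
    return history

rates = simulate()
print("dominant item every 200 steps:", np.argmax(rates[::200], axis=1))
```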
Affiliation(s)
- Carlos Aguilar, Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France
- Pascal Chossat, Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France; Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
- Martin Krupa, Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France; Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
3. Spatiotemporal discrimination in attractor networks with short-term synaptic plasticity. J Comput Neurosci 2019; 46:279-297. PMID: 31134433; PMCID: PMC6571095; DOI: 10.1007/s10827-019-00717-5.
Abstract
We demonstrate that a randomly connected attractor network with dynamic synapses can discriminate between similar sequences containing multiple stimuli, suggesting that such networks provide a general basis for neural computations in the brain. The network contains units representing assemblies of pools of neurons, with preferentially strong recurrent excitatory connections rendering each unit bi-stable. Weak interactions between units lead to a multiplicity of attractor states, within which information can persist beyond stimulus offset. When a new stimulus arrives, the prior state of the network impacts the encoding of the incoming information, with short-term synaptic depression ensuring an itinerancy between sets of active units. We assess the ability of such a network to encode the identity of sequences of stimuli, so as to provide a template for sequence recall, or decisions based on accumulation of evidence. Across a range of parameters, such networks produce the primacy (better final encoding of the earliest stimuli) and recency (better final encoding of the latest stimuli) effects observed in human recall data and can retain the information needed to make a binary choice based on the total number of presentations of a specific stimulus. Similarities and differences in the final states of the network produced by different sequences lead to predictions of specific errors that could arise when an animal or human subject generalizes from training data, when the training data comprise a subset of the entire stimulus repertoire. We suggest that such networks can provide the general-purpose computational engines needed for us to solve many cognitive tasks.
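The bi-stability attributed to each unit can be reproduced with a single self-exciting rate unit: a brief input pulse switches the unit from its low-activity fixed point to its high-activity one, where it stays after stimulus offset. A minimal sketch with assumed parameter values:

```python
# One bi-stable unit: strong recurrent self-excitation through a steep
# sigmoid yields two stable fixed points. Values are illustrative only.
import numpy as np

def f(u):
    return 1.0 / (1.0 + np.exp(-8.0 * (u - 0.5)))   # steep rate function

r, tau, w_self, dt = 0.0, 10.0, 1.0, 0.5
trace = []
for step in range(2000):
    stim = 0.6 if 200 <= step < 300 else 0.0        # brief input pulse
    r += dt / tau * (-r + f(w_self * r + stim))
    trace.append(r)

print(f"rate before the pulse: {trace[150]:.3f}")      # near the low state
print(f"rate long after the pulse: {trace[-1]:.3f}")   # persists near the high state
```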
4. Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. PMID: 28938250; PMCID: PMC5821557; DOI: 10.1097/aud.0000000000000494.
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle, Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
5. Ayasse ND, Wingfield A. A Tipping Point in Listening Effort: Effects of Linguistic Complexity and Age-Related Hearing Loss on Sentence Comprehension. Trends Hear 2019; 22:2331216518790907. PMID: 30235973; PMCID: PMC6154259; DOI: 10.1177/2331216518790907.
Abstract
In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands, as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.
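The tipping point implied here can be estimated with a broken-stick (two-segment) regression that finds the difficulty level where the pupil response stops rising. The sketch below uses synthetic data and a simple grid search; it is not the authors' analysis pipeline.

```python
# Broken-stick fit: try each candidate breakpoint, fit a line on either
# side, and keep the breakpoint with the smallest total squared error.
import numpy as np

difficulty = np.arange(1, 9, dtype=float)
pupil = np.array([0.10, 0.18, 0.27, 0.35, 0.41, 0.40, 0.36, 0.31])  # synthetic

def two_segment_sse(x, y, bp):
    sse = 0.0
    for mask in (x <= bp, x > bp):
        A = np.vstack([x[mask], np.ones(mask.sum())]).T
        coef = np.linalg.lstsq(A, y[mask], rcond=None)[0]
        sse += np.sum((y[mask] - A @ coef) ** 2)
    return sse

candidates = difficulty[2:-2]   # keep at least two points in each segment
best = min(candidates, key=lambda bp: two_segment_sse(difficulty, pupil, bp))
print("estimated tipping point at difficulty level", best)
```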
Affiliation(s)
- Nicole D Ayasse, Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield, Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
6. Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. PMID: 29450493; PMCID: PMC5963044; DOI: 10.1044/2017_jslhr-h-17-0077.
Abstract
Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
Results: Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer than young adults' for acoustically degraded high-ambiguity sentences. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
Conclusions: Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
Supplemental materials: https://doi.org/10.23641/asha.5848059
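Recognition memory here is indexed by d', the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch; the 0.5-count (log-linear) correction is one common convention, assumed rather than taken from the article:

```python
# d' = z(hit rate) - z(false-alarm rate), with a correction that keeps
# rates away from 0 and 1, where the z-transform is infinite.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: 42 hits, 8 misses, 12 false alarms, 38 correct rejections.
print(round(d_prime(42, 8, 12, 38), 2))   # about 1.66
```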
Affiliation(s)
- Margaret A Koeritzer, Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers, Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen, Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, MO
7. Panda P, Roy K. Learning to Generate Sequences with Combination of Hebbian and Non-Hebbian Plasticity in Recurrent Spiking Neural Networks. Front Neurosci 2017; 11:693. PMID: 29311774; PMCID: PMC5733011; DOI: 10.3389/fnins.2017.00693.
Abstract
Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
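The combined rule pairs standard spike-timing-dependent plasticity (STDP) with an activity-independent weight decay. Below is a schematic of one synapse's update with illustrative constants; the paper's actual training code and parameter values may differ.

```python
# Pair-based STDP plus non-Hebbian decay. Causal pre-before-post pairs
# potentiate, acausal pairs depress, and every timestep the weight decays
# slightly regardless of activity, damping runaway attractor states.
import numpy as np

A_PLUS, A_MINUS = 0.010, 0.012    # STDP amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # STDP time constants (ms)
DECAY = 1e-4                      # non-Hebbian decay per timestep

def stdp(w, dt_spike):
    """dt_spike = t_post - t_pre in ms."""
    if dt_spike > 0:
        dw = A_PLUS * np.exp(-dt_spike / TAU_PLUS)
    else:
        dw = -A_MINUS * np.exp(dt_spike / TAU_MINUS)
    return float(np.clip(w + dw, 0.0, 1.0))

def decay(w):
    return w * (1.0 - DECAY)

w = 0.5
w = decay(stdp(w, dt_spike=8.0))   # one causal pairing, then one decay step
print(f"updated weight: {w:.4f}")
```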
Affiliation(s)
- Priyadarshini Panda, Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
- Kaushik Roy, Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
8. Ayasse ND, Lash A, Wingfield A. Effort Not Speed Characterizes Comprehension of Spoken Sentences by Older Adults with Mild Hearing Impairment. Front Aging Neurosci 2017; 8:329. PMID: 28119598; PMCID: PMC5222878; DOI: 10.3389/fnagi.2016.00329.
Abstract
In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort.
Affiliation(s)
- Nicole D Ayasse, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Amanda Lash, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
9. Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. PMID: 26683044; DOI: 10.1080/0361073x.2016.1108785.
Abstract
Background/Study context: A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects.
Methods: The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail.
Results: Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not.
Conclusion: The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
Affiliation(s)
- Caitlin M Ward, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Chad S Rogers, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kristin J Van Engen, Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
10. Miller P. Itinerancy between attractor states in neural systems. Curr Opin Neurobiol 2016; 40:14-22. PMID: 27318972; DOI: 10.1016/j.conb.2016.05.005.
Abstract
Converging evidence from neural, perceptual and simulated data suggests that discrete attractor states form within neural circuits through learning and development. External stimuli may bias neural activity to one attractor state or cause activity to transition between several discrete states. Evidence for such transitions, whose timing can vary across trials, is best accrued through analyses that avoid any trial-averaging of data. One such method, hidden Markov modeling, has been effective in this context, revealing state transitions in many neural circuits during many tasks. Concurrently, modeling efforts have revealed computational benefits of stimulus processing via transitions between attractor states. This review describes the current state of the field, with comments on how its perceived limitations have been addressed.
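Hidden Markov modeling, highlighted here as a way to find state transitions without trial averaging, can be illustrated with a two-state Poisson HMM decoded by the Viterbi algorithm from a single trial's spike counts. The firing rates and transition probability below are assumptions made for the example.

```python
# Viterbi decoding of a 2-state HMM with Poisson spike-count emissions,
# applied to one trial (no averaging across trials).
import numpy as np
from scipy.stats import poisson

def viterbi_poisson(counts, rates, p_stay=0.98):
    n = len(rates)
    logA = np.log(np.full((n, n), (1 - p_stay) / (n - 1)))
    np.fill_diagonal(logA, np.log(p_stay))
    logB = poisson.logpmf(np.asarray(counts)[:, None], np.asarray(rates)[None, :])
    back = np.zeros((len(counts), n), dtype=int)
    v = np.log(np.full(n, 1.0 / n)) + logB[0]
    for t in range(1, len(counts)):
        scores = v[:, None] + logA        # scores[i, j]: best path ending i -> j
        back[t] = np.argmax(scores, axis=0)
        v = np.max(scores, axis=0) + logB[t]
    states = [int(np.argmax(v))]
    for t in range(len(counts) - 1, 0, -1):
        states.append(back[t][states[-1]])
    return states[::-1]

counts = [2, 1, 3, 2, 9, 11, 10, 12, 3, 2]         # synthetic spike counts per bin
print(viterbi_poisson(counts, rates=[2.0, 10.0]))  # transition appears mid-trial
```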
Affiliation(s)
- Paul Miller, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
11. Alterations in gray matter volume due to unilateral hearing loss. Sci Rep 2016; 6:25811. PMID: 27174521; PMCID: PMC4865827; DOI: 10.1038/srep25811.
Abstract
Although extensive research on neural plasticity resulting from hearing deprivation has been conducted, the direct influence of compromised audition on the auditory cortex and the potential impact of long durations of incomplete sensory stimulation on the adult cortex are still not fully understood. In this study, using voxel-based morphometry, we evaluated gray matter (GM) volume changes that may be associated with reduced hearing ability and the duration of hearing impairment in 42 unilateral hearing loss (UHL) patients with acoustic neuromas compared to 24 normal controls. We found significant GM volume increases in the somatosensory and motor systems and GM volume decreases in the auditory (i.e., Heschl’s gyrus) and visual systems (i.e., the calcarine cortex) in UHL patients. The GM volume decreases in the primary auditory cortex (i.e., superior temporal gyrus and Heschl’s gyrus) correlated with reduced hearing ability. Meanwhile, the GM volume decreases in structures involving high-level cognitive control functions (i.e., dorsolateral prefrontal cortex and anterior cingulate cortex) correlated positively with hearing loss duration. Our findings demonstrated that the severity and duration of UHL may contribute to the dissociated morphology of auditory and high-level neural structures, providing insight into the brain’s plasticity related to chronic, persistent partial sensory loss.
12. Shafiro V, Sheft S, Risley R. The intelligibility of interrupted and temporally altered speech: Effects of context, age, and hearing loss. J Acoust Soc Am 2016; 139:455-65. PMID: 26827039; PMCID: PMC4723407; DOI: 10.1121/1.4939891.
Abstract
Temporal constraints on the perception of interrupted speech were investigated by comparing the intelligibility of speech that was periodically gated (PG) and subsequently either temporally compressed (PGTC) by concatenating remaining speech fragments or temporally expanded (PGTE) by doubling the silent intervals between speech fragments. Experiment 1 examined the effects of PGTC and PGTE at different gating rates (0.5-16 Hz) on the intelligibility of words and sentences for young normal-hearing adults. In Experiment 2, older normal-hearing (ONH) and older hearing-impaired (OHI) adults were tested with sentences only. The results of Experiment 1 indicated that sentences were more intelligible than words. In both experiments, PGTC sentences were less intelligible than either PG or PGTE sentences. Compared with PG sentences, the intelligibility of PGTE sentences was significantly reduced by the same amount for the ONH and OHI groups. Temporal alterations tended to produce a U-shaped rate-intelligibility function with a dip at 2-4 Hz, indicating that temporal alterations interacted with the duration of speech fragments. The present findings demonstrate that both aging and hearing loss negatively affect the overall intelligibility of interrupted and temporally altered speech. However, a mild-to-moderate hearing loss did not exacerbate the negative effects of temporal alterations associated with aging.
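The three conditions (PG, PGTC, PGTE) are simple waveform manipulations. The sketch below shows one plausible implementation with a 50% duty cycle; the study's exact gating and splicing details may differ.

```python
# Periodic gating and its temporal alterations: PG keeps speech-silence
# alternation, PGTC concatenates the kept fragments (compression), and
# PGTE doubles the silent intervals (expansion).
import numpy as np

def gate_speech(x, fs, rate_hz, mode="PG"):
    period = int(fs / rate_hz)
    half = period // 2
    out = []
    for start in range(0, len(x), period):
        fragment = x[start:start + half]              # retained speech
        if mode == "PGTC":
            out.append(fragment)
        elif mode == "PGTE":
            out.extend([fragment, np.zeros(2 * half)])
        else:                                          # plain PG
            out.extend([fragment, np.zeros(half)])
    return np.concatenate(out)

fs = 16000
x = np.random.default_rng(0).standard_normal(fs)       # 1 s stand-in for speech
for mode in ("PG", "PGTC", "PGTE"):
    print(mode, f"{len(gate_speech(x, fs, 4.0, mode)) / fs:.2f} s")
```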
Affiliation(s)
- Valeriy Shafiro, Department of Communication Disorders and Sciences, Rush University Medical Center, 600 South Paulina Street, Suite 1012 AAC, Chicago, Illinois 60612, USA
- Stanley Sheft, Department of Communication Disorders and Sciences, Rush University Medical Center, 600 South Paulina Street, Suite 1012 AAC, Chicago, Illinois 60612, USA
- Robert Risley, Department of Communication Disorders and Sciences, Rush University Medical Center, 600 South Paulina Street, Suite 1012 AAC, Chicago, Illinois 60612, USA
13. Wingfield A, Amichetti NM, Lash A. Cognitive aging and hearing acuity: modeling spoken language comprehension. Front Psychol 2015; 6:684. PMID: 26124724; PMCID: PMC4462993; DOI: 10.3389/fpsyg.2015.00684.
Abstract
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.
Affiliation(s)
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
14. Peelle JE. Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 2014; 8:253. PMID: 25191218; PMCID: PMC4139601; DOI: 10.3389/fnins.2014.00253.
Abstract
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
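The arithmetic behind sparse imaging is simple: when the repetition time (TR) is longer than the acquisition time (TA), each stimulus can be presented in the quiet gap between volumes. An illustrative timing calculation, with made-up values rather than recommended ones:

```python
# Sparse-imaging schedule: acquire for TA seconds, then a silent gap of
# TR - TA seconds in which the auditory stimulus is presented.
TR, TA = 9.0, 2.0     # example values in seconds
for i in range(4):
    t0 = i * TR
    print(f"trial {i}: acquire {t0:.1f}-{t0 + TA:.1f} s, "
          f"quiet window {t0 + TA:.1f}-{t0 + TR:.1f} s for the stimulus")
```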
Affiliation(s)
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
15. Monitoring the capacity of working memory: executive control and effects of listening effort. Mem Cognit 2014; 41:839-49. PMID: 23400826; DOI: 10.3758/s13421-013-0302-0.
Abstract
In two experiments, we used an interruption-and-recall (IAR) task to explore listeners' ability to monitor the capacity of working memory as new information arrived in real time. In this task, listeners heard recorded word lists with instructions to interrupt the input at the maximum point that would still allow for perfect recall. Experiment 1 demonstrated that the most commonly selected segment size closely matched participants' memory span, as measured in a baseline span test. Experiment 2 showed that reducing the sound level of presented word lists to a suprathreshold but effortful listening level disrupted the accuracy of matching selected segment sizes with participants' memory spans. The results are discussed in terms of whether online capacity monitoring may be subsumed under other, already enumerated working memory executive functions (inhibition, set shifting, and memory updating).
16. Cousins KAQ, Dar H, Wingfield A, Miller P. Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall. Mem Cognit 2014; 42:622-38. PMID: 24838269; PMCID: PMC4030694; DOI: 10.3758/s13421-013-0377-7.
Abstract
Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply "recognized" versus "not recognized." More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.
Affiliation(s)
- Katheryn A Q Cousins, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
17. Campbell J, Sharma A. Compensatory changes in cortical resource allocation in adults with hearing loss. Front Syst Neurosci 2013; 7:71. PMID: 24478637; PMCID: PMC3905471; DOI: 10.3389/fnsys.2013.00071.
Abstract
Hearing loss has been linked to many types of cognitive decline in adults, including an association between hearing loss severity and dementia. However, it remains unclear whether cortical re-organization associated with hearing loss occurs in early stages of hearing decline and in early stages of auditory processing. In this study, we examined compensatory plasticity in adults with mild-moderate hearing loss using obligatory, passively-elicited, cortical auditory evoked potentials (CAEP). High-density EEG elicited by speech stimuli was recorded in adults with hearing loss and age-matched normal hearing controls. Latency, amplitude and source localization of the P1, N1, P2 components of the CAEP were analyzed. Adults with mild-moderate hearing loss showed increases in latency and amplitude of the P2 CAEP relative to control subjects. Current density reconstructions revealed decreased activation in temporal cortex and increased activation in frontal cortical areas for hearing-impaired listeners relative to normal hearing listeners. Participants' behavioral performance on a clinical test of speech perception in noise was significantly correlated with the increases in P2 latency. Our results indicate that changes in cortical resource allocation are apparent in early stages of adult hearing loss, and that these passively-elicited cortical changes are related to behavioral speech perception outcome.
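CAEP component measures of this kind are typically read off the averaged waveform as the extremum within a component-specific latency window. A sketch with assumed adult windows, not necessarily those used in the study:

```python
# Peak-picking for P1 (positive), N1 (negative), and P2 (positive) from an
# averaged evoked potential `erp` sampled at `fs`. Windows are assumptions.
import numpy as np

def caep_peaks(erp, fs):
    windows = {"P1": (40, 80), "N1": (80, 150), "P2": (150, 250)}  # ms
    t_ms = np.arange(len(erp)) / fs * 1000.0
    peaks = {}
    for name, (lo, hi) in windows.items():
        idx = np.where((t_ms >= lo) & (t_ms <= hi))[0]
        pick = np.argmin if name.startswith("N") else np.argmax
        j = idx[pick(erp[idx])]
        peaks[name] = (round(t_ms[j], 1), round(erp[j], 2))  # (latency ms, amp uV)
    return peaks

fs = 1000
t = np.arange(500) / fs * 1000.0
erp = (1.5 * np.exp(-((t - 60) / 15) ** 2)     # synthetic P1
       - 3.0 * np.exp(-((t - 110) / 20) ** 2)  # synthetic N1
       + 2.5 * np.exp(-((t - 190) / 30) ** 2)) # synthetic P2
print(caep_peaks(erp, fs))
```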
Affiliation(s)
- Julia Campbell, Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, USA
- Anu Sharma, Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, USA; Institute of Cognitive Science, University of Colorado at Boulder, Boulder, CO, USA
18. Piquado T, Benichov JI, Brownell H, Wingfield A. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening. Int J Audiol 2012; 51:576-83. PMID: 22731919; DOI: 10.3109/14992027.2012.684403.
Abstract
Objective: The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input.
Design: Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech rate of 150 words per minute, and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced).
Study sample: Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary.
Results: When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input.
Conclusion: Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall.
19. Delnooz CCS, Helmich RC, Medendorp WP, Van de Warrenburg BPC, Toni I. Writer's cramp: increased dorsal premotor activity during intended writing. Hum Brain Mapp 2011; 34:613-25. PMID: 22113948; DOI: 10.1002/hbm.21464.
Abstract
Simple writer's cramp (WC) is a task-specific form of dystonia, characterized by abnormal movements and postures of the hand during writing. It is extremely task-specific, since dystonic symptoms can occur when a patient uses a pencil for writing, but not when it is used for sharpening. Maladaptive plasticity, loss of inhibition, and abnormal sensory processing are important pathophysiological elements of WC. However, it remains unclear how those elements can account for its task-specificity. We used fMRI to isolate cerebral alterations associated with the task-specificity of simple WC. Subjects (13 simple WC patients, 20 matched controls) imagined grasping a pencil to either write with it or sharpen it. On each trial, we manipulated the pencil's position and the number of imagined movements, while monitoring variations in motor output with electromyography. We show that simple WC is characterized by abnormally increased activity in the dorsal premotor cortex (PMd) when imagined actions are specifically related to writing. This cerebral effect was independent from the known deficits in dystonia in generating focal motor output and in processing somatosensory feedback. This abnormal activity of the PMd suggests that the task-specific element of simple WC is primarily due to alterations at the planning level, in the computations that transform a desired action outcome into the motor commands leading to that action. These findings open the way for testing the therapeutic value of interventions that take into account the computational substrate of task-specificity in simple WC, e.g. modulations of PMd activity during the planning phase of writing.
Affiliation(s)
- Cathérine C S Delnooz, Department of Neurology, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Centre for Neuroscience, Nijmegen, The Netherlands
20.
Abstract
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
21. Piquado T, Cousins KAQ, Wingfield A, Miller P. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models. Brain Res 2010; 1365:48-65. PMID: 20875801; DOI: 10.1016/j.brainres.2010.09.070.
Abstract
Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory.
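The Linking-Buffer Model's central prediction — that a masked word weakens associative links to the words still held in a short-term buffer while sparing links formed afterwards — can be caricatured in a few lines. Buffer size and disruption strength are illustrative assumptions, not fitted parameters.

```python
# Toy linking-buffer scheme: link i joins adjacent words i and i+1. When
# the masked word arrives, links whose words are still in the buffer are
# weakened; links formed after the masked word are untouched.
import numpy as np

def link_strengths(n_words, masked_idx, buffer_size=3, disruption=0.5):
    links = np.ones(n_words - 1)
    arrival = masked_idx                # link arrival-1 joins the masked word to its predecessor
    start = max(0, arrival - buffer_size)
    links[start:arrival] *= disruption  # links still resident in the buffer
    return links

print(link_strengths(n_words=8, masked_idx=4))
# -> [1.  0.5 0.5 0.5 1.  1.  1. ]  recent (buffered) links weakened, later
#    links spared, and the earliest link already out of the buffer untouched.
```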
Affiliation(s)
- Tepring Piquado, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA