1. Mohammadi Y, Graversen C, Manresa JB, Østergaard J, Andersen OK. Effects of Background Noise and Linguistic Violations on Frontal Theta Oscillations During Effortful Listening. Ear Hear 2024;45:721-729. PMID: 38287477. DOI: 10.1097/aud.0000000000001464.
Abstract
Objectives: Background noise and linguistic violations have been shown to increase listening effort. The present study examines how the interaction between background noise and linguistic violations affects subjective listening effort and frontal theta oscillations during effortful listening.
Design: Thirty-two normal-hearing listeners participated. The linguistic violation was operationalized as sentences versus random word sequences (strings). Behavioral and electroencephalography (EEG) data were collected while participants listened to sentences and strings in background noise at different signal-to-noise ratios (SNRs; -9, -6, -3, and 0 dB), maintained them in memory for about 3 sec in the presence of background noise, and then chose the correct sequence of words from a base matrix of words.
Results: Results showed an interaction effect of SNR and speech type on effort ratings. Although strings were inherently more effortful than sentences, decreasing the SNR from 0 to -9 dB (in 3 dB steps) increased effort ratings more for sentences than for strings at each step, suggesting that noise affects sentence processing more strongly than string processing at low SNRs. Results also showed a significant interaction between SNR and speech type on frontal theta event-related synchronization during the retention interval: strings exhibited higher frontal theta event-related synchronization than sentences at an SNR of 0 dB, suggesting increased verbal working memory demand for strings under challenging listening conditions.
Conclusions: The study demonstrated that the interplay between linguistic violation and background noise shapes perceived effort and cognitive load during speech comprehension under challenging listening conditions. The differential impact of noise on processing sentences versus strings highlights the influential role of context and cognitive resource allocation in speech processing.
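The retention-interval measure above is frontal theta event-related synchronization (ERS), i.e., a theta-band power increase relative to a baseline. As a rough illustration of that general idea only, the sketch below computes a baseline-normalized theta-power change from single-channel EEG epochs; the filter settings, window boundaries, and synthetic data are assumptions, and this is not the authors' actual analysis pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def theta_ers(epochs, fs, baseline=(0.0, 0.5), retention=(1.0, 4.0)):
        """epochs: array (n_trials, n_samples) from one frontal channel (e.g., Fz)."""
        b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)    # theta band, 4-8 Hz
        power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)) ** 2
        t = np.arange(epochs.shape[-1]) / fs
        base = power[:, (t >= baseline[0]) & (t < baseline[1])].mean()
        ret = power[:, (t >= retention[0]) & (t < retention[1])].mean()
        return 100.0 * (ret - base) / base                        # ERS in percent

    # Example with synthetic data: 32 trials of 5 s sampled at 500 Hz
    print(theta_ers(np.random.randn(32, 2500), fs=500))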
Affiliation(s)
- Yousef Mohammadi: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark
- Carina Graversen: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark; Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark
- José Biurrun Manresa: Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark; Institute for Research and Development in Bioengineering and Bioinformatics, National Scientific and Technical Research Council (CONICET) - National University of Entre Ríos (UNER), Oro Verde, Argentina
- Jan Østergaard: Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Ole Kæseler Andersen: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark; Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark
2. Perepelytsia V, Dellwo V. Acoustic compression in Zoom audio does not compromise voice recognition performance. Sci Rep 2023;13:18742. PMID: 37907749. PMCID: PMC10618539. DOI: 10.1038/s41598-023-45971-x.
Abstract
Human voice recognition over telephone channels typically yields lower accuracy than recognition of higher-quality studio recordings. Here, we investigated the extent to which audio in video conferencing, which is subject to various lossy compression mechanisms, affects human voice recognition performance. Voice recognition was tested in an old-new recognition task under three audio conditions (telephone, Zoom, studio) across all matched (familiarization and test in the same audio condition) and mismatched (familiarization and test in different audio conditions) combinations. Participants were familiarized with female voices presented as studio-quality (N = 22), Zoom-quality (N = 21), or telephone-quality (N = 20) stimuli. Subsequently, all listeners performed an identical voice recognition test containing a balanced stimulus set from all three conditions. Results revealed that voice recognition performance (d') with Zoom audio did not differ significantly from studio audio, but listeners performed significantly better with both Zoom and studio audio than with telephone audio. This suggests that the signal processing of the speech codec used by Zoom preserves information that is as relevant for voice recognition as studio audio. Interestingly, listeners familiarized with voices via Zoom audio showed a trend towards better recognition performance in the test (p = 0.056) than listeners familiarized with studio audio. We discuss future directions for examining whether a possible advantage of Zoom audio for voice recognition might be related to some of the speech coding mechanisms used by Zoom.
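Recognition performance above is reported as the sensitivity index d' from signal detection theory, i.e., d' = z(hit rate) - z(false-alarm rate). The short sketch below shows this standard computation with a log-linear correction for extreme rates; the counts are hypothetical and not taken from the study.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))  # ~1.5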
Affiliation(s)
- Valeriia Perepelytsia: Department of Computational Linguistics, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland
- Volker Dellwo: Department of Computational Linguistics, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland
3. Nittrouer S, Lowenstein JH. Recognition of Sentences With Complex Syntax in Speech Babble by Adolescents With Normal Hearing or Cochlear Implants. Journal of Speech, Language, and Hearing Research 2023;66:1110-1135. PMID: 36758200. PMCID: PMC10205108. DOI: 10.1044/2022_jslhr-22-00407.
Abstract
Purpose: The general language abilities of children with cochlear implants have been thoroughly investigated, especially at young ages, but far less is known about how well they process language in real-world settings, especially in higher grades. This study addressed this gap by examining recognition of sentences with complex syntactic structures in backgrounds of speech babble by adolescents with cochlear implants and peers with normal hearing.
Design: Two experiments were conducted. First, new materials were developed using young adults with normal hearing as the normative sample, creating a corpus of sentences with controlled but complex syntactic structures presented in three kinds of babble that varied in voice gender and number of talkers. Second, recognition by adolescents with normal hearing or cochlear implants was examined for these new materials and for sentence materials used with these adolescents at younger ages. Analyses addressed three objectives: (1) to assess the stability of speech recognition across a multiyear age range, (2) to evaluate speech recognition of sentences with complex syntax in babble, and (3) to explore how bottom-up and top-down mechanisms account for performance under these conditions.
Results: (1) Recognition was stable across the ages of 10-14 years for both groups. (2) Adolescents with normal hearing performed similarly to young adults with normal hearing, showing effects of syntactic complexity and background babble; adolescents with cochlear implants showed poorer recognition overall and diminished effects of both factors. (3) Top-down language and working memory primarily explained recognition for adolescents with normal hearing, but the bottom-up process of perceptual organization primarily explained recognition for adolescents with cochlear implants.
Conclusions: Comprehension of language in real-world settings relies on different mechanisms for adolescents with cochlear implants than for adolescents with normal hearing. A novel finding was that perceptual organization is a critical factor.
Supplemental material: https://doi.org/10.23641/asha.21965228
Affiliation(s)
- Susan Nittrouer: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Joanna H. Lowenstein: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
4. van der Hoek-Snieders HEM, Stegeman I, Smit AL, Rhebergen KS. Linguistic Complexity of Speech Recognition Test Sentences and Its Influence on Children's Verbal Repetition Accuracy. Ear Hear 2021;41:1511-1517. PMID: 33136627. DOI: 10.1097/aud.0000000000000868.
Abstract
Objectives: Speech recognition (SR) tests for children have been developed without considering the linguistic complexity of the sentences used. However, linguistic complexity is hypothesized to influence correct sentence repetition. The aim of this study was to identify lexical and grammatical parameters that influence the verbal repetition accuracy of sentences derived from a Dutch SR test when performed by 6-year-old typically developing children.
Design: For this observational, cross-sectional study, 40 typically developing 6-year-old children were recruited at four primary schools in the Netherlands. All children performed a sentence repetition task derived from an SR test for adults. Sentence complexity was described beforehand with one lexical parameter, age of acquisition, and four grammatical parameters: sentence length, prepositions, sentence structure, and verb inflection. A multiple logistic regression analysis was performed.
Results: Sentences with a higher age of acquisition (odds ratio [OR] = 1.59) or greater sentence length (OR = 1.28) had a higher risk of repetition inaccuracy. Sentences including a spatial (OR = 1.25) or other preposition (OR = 1.25) were at increased risk for incorrect repetition, as were complex sentences (OR = 1.69) and sentences in the present perfect (OR = 1.44) or future tense (OR = 2.32).
Conclusions: The variation in verbal repetition accuracy in 6-year-old children is significantly influenced by both lexical and grammatical parameters. Linguistic complexity is an important factor to take into account when assessing speech intelligibility in children.
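The odds ratios above come from a multiple logistic regression on repetition accuracy. As an illustration of that type of analysis only, the sketch below fits such a model and exponentiates the coefficients to obtain odds ratios; the variable names and the toy data are assumptions, not the study's dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "incorrect": rng.integers(0, 2, 200),            # 1 = sentence repeated incorrectly
        "age_of_acquisition": rng.normal(6.0, 1.0, 200), # lexical parameter
        "sentence_length": rng.integers(4, 10, 200),     # grammatical parameters
        "complex_structure": rng.integers(0, 2, 200),
    })
    model = smf.logit("incorrect ~ age_of_acquisition + sentence_length + complex_structure",
                      data=df).fit(disp=False)
    print(np.exp(model.params))                          # odds ratios; OR > 1 = higher error risk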
Affiliation(s)
- Hanneke E M van der Hoek-Snieders: Department of Otorhinolaryngology and Head & Neck Surgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
5. Harmon TG, Dromey C, Nelson B, Chapman K. Effects of Background Noise on Speech and Language in Young Adults. Journal of Speech, Language, and Hearing Research 2021;64:1104-1116. PMID: 33719537. DOI: 10.1044/2020_jslhr-20-00376.
Abstract
Purpose: The aim of this study was to investigate how different types of background noise that differ in their level of linguistic content affect speech acoustics, speech fluency, and language production when young adult speakers perform a monologue discourse task.
Method: Forty young adults produced monologues by responding to open-ended questions in a silent baseline and five background noise conditions (debate, movie dialogue, contemporary music, classical music, and pink noise). Measures related to speech acoustics (intensity and frequency), speech fluency (speech rate, pausing, and disfluencies), and language production (lexical, morphosyntactic, and macrolinguistic structure) were analyzed and compared across conditions. Participants also reported which conditions they perceived as more distracting.
Results: All noise conditions resulted in some change to spoken language compared with the silent baseline. Effects on speech acoustics were consistent with changes expected from the Lombard effect (e.g., increased intensity and fundamental frequency). Effects on speech fluency included decreased pausing and increased disfluencies. Several background noise conditions also seemed to interfere with language production.
Conclusions: Findings suggest that young adults show both compensatory and interference effects when speaking in noise. Several adjustments may facilitate intelligibility when noise is present and help both speaker and listener maintain attention on the production. Other adjustments provide evidence that background noise eliciting linguistic interference has the potential to degrade spoken language even in healthy young adults because of increased cognitive demands.
Affiliation(s)
- Tyson G Harmon: Department of Communication Disorders, Brigham Young University, Provo, UT
- Christopher Dromey: Department of Communication Disorders, Brigham Young University, Provo, UT
- Brenna Nelson: Department of Communication Disorders, Brigham Young University, Provo, UT
- Kacy Chapman: Department of Communication Disorders, Brigham Young University, Provo, UT
6. Vogelzang M, Thiel CM, Rosemann S, Rieger JW, Ruigendijk E. Effects of age-related hearing loss and hearing aid experience on sentence processing. Sci Rep 2021;11:5994. PMID: 33727628. PMCID: PMC7971046. DOI: 10.1038/s41598-021-85349-5.
Abstract
Age-related hearing loss typically affects the hearing of high frequencies in older adults. Such hearing loss influences the processing of spoken language, including higher-level processing such as that of complex sentences. Hearing aids may alleviate some of the speech processing disadvantages associated with hearing loss. However, little is known about the relation between hearing loss, hearing aid use, and their effects on higher-level language processes. This neuroimaging (fMRI) study examined these factors by measuring the comprehension and neural processing of simple and complex spoken sentences in hard-of-hearing older adults (n = 39). Neither hearing loss severity nor hearing aid experience influenced sentence comprehension at the behavioral level. In contrast, hearing loss severity was associated with increased activity in left superior frontal areas and the left anterior insula, but only when processing specific complex sentences (i.e., object-before-subject) compared with simple sentences. Longer hearing aid experience in a subset of participants (n = 19) was associated with recruitment of several areas outside the core speech processing network in the right hemisphere, including the cerebellum, the precentral gyrus, and the cingulate cortex, but only when processing complex sentences. Overall, these results indicate that brain activation for language processing is affected by hearing loss as well as by subsequent hearing aid use. Crucially, they show that these effects become apparent through investigation of complex but not simple sentences.
Affiliation(s)
- Margreet Vogelzang: Institute of Dutch Studies, University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Department of Theoretical and Applied Linguistics, University of Cambridge, Cambridge, UK
- Christiane M Thiel: Cluster of Excellence "Hearing4all", University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany
- Stephanie Rosemann: Cluster of Excellence "Hearing4all", University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany
- Jochem W Rieger: Cluster of Excellence "Hearing4all", University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Applied Neurocognitive Psychology, Department of Psychology, University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany
- Esther Ruigendijk: Institute of Dutch Studies, University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Ammerländer Heerstraße 114-116, 26129 Oldenburg, Germany
7. Vogelzang M, Thiel CM, Rosemann S, Rieger JW, Ruigendijk E. When Hearing Does Not Mean Understanding: On the Neural Processing of Syntactically Complex Sentences by Listeners With Hearing Loss. Journal of Speech, Language, and Hearing Research 2021;64:250-262. PMID: 33400550. DOI: 10.1044/2020_jslhr-20-00262.
Abstract
Purpose: Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have in comprehending and processing syntactically complex sentences arise because the processing of degraded input interferes with the successful processing of complex sentences.
Method: We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject-object order and verb-argument order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested.
Results: The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data.
Conclusions: The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increasing cognitive demands by means of a secondary visual shape processing task influenced neural sentence processing, but no evidence was found that it did so differently for listeners with hearing loss than for normal-hearing listeners.
Affiliation(s)
- Margreet Vogelzang: Institute of Dutch Studies, Carl von Ossietzky University of Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky University of Oldenburg, Germany
- Christiane M Thiel: Cluster of Excellence "Hearing4all", Carl von Ossietzky University of Oldenburg, Germany; Biological Psychology Lab, Department of Psychology, Faculty of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Germany
- Stephanie Rosemann: Cluster of Excellence "Hearing4all", Carl von Ossietzky University of Oldenburg, Germany; Biological Psychology Lab, Department of Psychology, Faculty of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Germany
- Jochem W Rieger: Cluster of Excellence "Hearing4all", Carl von Ossietzky University of Oldenburg, Germany; Applied Neurocognitive Psychology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Germany
- Esther Ruigendijk: Institute of Dutch Studies, Carl von Ossietzky University of Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky University of Oldenburg, Germany
8. Vogelzang M, Thiel CM, Rosemann S, Rieger JW, Ruigendijk E. Neural Mechanisms Underlying the Processing of Complex Sentences: An fMRI Study. Neurobiology of Language 2020;1:226-248. PMID: 37213656. PMCID: PMC10158620. DOI: 10.1162/nol_a_00011.
Abstract
Previous research has shown effects of syntactic complexity on sentence processing. In linguistics, syntactic complexity (caused by different word orders) is traditionally explained by distinct linguistic operations. This study investigates whether different complex word orders indeed result in distinct patterns of neural activity, as would be expected when distinct linguistic operations are applied. Twenty-two older adults performed an auditory sentence processing paradigm in German with and without increased cognitive load. The results show that without increased cognitive load, complex sentences show distinct activation patterns compared with less complex, canonical sentences: complex object-initial sentences show increased activity in the left inferior frontal and temporal regions, whereas complex adjunct-initial sentences show increased activity in occipital and right superior frontal regions. Increased cognitive load seems to affect the processing of different sentence structures differently, increasing neural activity for canonical sentences, but leaving complex sentences relatively unaffected. We discuss these results in the context of the idea that linguistic operations required for processing sentence structures with higher levels of complexity involve distinct brain operations.
Affiliation(s)
- Christiane M. Thiel: Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany; Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Stephanie Rosemann: Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany; Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Jochem W. Rieger: Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany; Applied Neurocognitive Psychology, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Esther Ruigendijk: Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
9. Francis AL, Love J. Listening effort: Are we measuring cognition or affect, or both? Wiley Interdisciplinary Reviews: Cognitive Science 2019;11:e1514. PMID: 31381275. DOI: 10.1002/wcs.1514.
Abstract
Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment, and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple, possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Theory and Methods; Psychology > Attention; Psychology > Emotion and Motivation.
Affiliation(s)
- Alexander L Francis: Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
- Jordan Love: Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
10. Schouwenaars A, Finke M, Hendriks P, Ruigendijk E. Which Questions Do Children With Cochlear Implants Understand? An Eye-Tracking Study. Journal of Speech, Language, and Hearing Research 2019;62:387-409. PMID: 30950684. DOI: 10.1044/2018_jslhr-h-17-0310.
Abstract
Purpose: The purpose of this study was to investigate the processing of morphosyntactic cues (case and verb agreement) by children with cochlear implants (CIs) in German which-questions, where interpretation depends on these morphosyntactic cues. The aim was to examine whether children with CIs who perceive the different cues also make use of them in speech comprehension and processing in the same way as children with normal hearing (NH).
Method: Thirty-three children with CIs (age 7;01-12;04 years;months, M = 9;07, bilaterally implanted before age 3;3) and 36 children with NH (age 7;05-10;09 years, M = 9;01) received a picture selection task with eye tracking to test their comprehension of subject, object, and passive which-questions. Two screening tasks tested their auditory discrimination of case morphology and their perception and comprehension of subject-verb agreement.
Results: Children with CIs who performed well on the screening tests still had more difficulty comprehending object questions than children with NH, whereas they comprehended subject questions and passive questions as well as children with NH did. There was large interindividual variability within the CI group. The gaze patterns of children with NH showed reanalysis effects for object questions disambiguated later in the sentence by verb agreement, but not for object questions disambiguated by case at the first noun phrase. The gaze patterns of children with CIs showed reanalysis effects even for object questions disambiguated at the first noun phrase.
Conclusions: Even when children with CIs perceive case and subject-verb agreement, their ability to use these cues for offline comprehension and online processing still lags behind normal development, which is reflected in lower performance rates and longer processing times. Individual variability within the CI group can partly be explained by working memory and hearing age.
Supplemental material: https://doi.org/10.23641/asha.7728731
Affiliation(s)
- Atty Schouwenaars: Cluster of Excellence "Hearing4all", Department of Dutch, Oldenburg University, Germany
- Mareike Finke: Cluster of Excellence "Hearing4all", Department of Otolaryngology, Hannover Medical School, Germany
- Petra Hendriks: Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Esther Ruigendijk: Cluster of Excellence "Hearing4all", Department of Dutch, Oldenburg University, Germany
11. Coene M, Krijger S, van Knijff E, Meeuws M, De Ceulaer G, Govaerts PJ. LiCoS: A New Linguistically Controlled Sentences Test to Assess Functional Hearing Performance. Folia Phoniatr Logop 2018;70:90-99. PMID: 30041186. DOI: 10.1159/000490050.
Abstract
Purpose: To overcome the potential tension between clinical and ecological validity in speech audiometric assessment by creating a new set of sentence materials with high linguistic validity for the Dutch-speaking area.
Methods: A linguistic "fingerprint" of modern spoken Dutch and Flemish served to generate a set of sentences recorded from one male and one female talker. The sentences were presented to 30 normal-hearing listeners in stationary speech noise at a signal-to-noise ratio (SNR) of -5 dB sound pressure level (SPL). A list design criterion was used to achieve perceptive homogeneity across the test lists by scrambling lists of sentences of different syntactic types while controlling for linguistic complexity. The original set of test materials was narrowed down to 360 sentences, and list equivalency was evaluated at the audiological and linguistic levels. A psychometric curve was generated with a resolution of 2 dB based on a second group of 60 young normal-hearing native speakers of Dutch and Flemish.
Results: Sentence understanding showed an average repetition accuracy of 63.40% (SD 1.01) across the lists at an SNR of -5 dB SPL. No significant differences were found between the lists at the level of the individual listener. At the linguistic level, the sentence lists showed an equal distribution of phonological, morphological, and syntactic features.
Conclusion: LiCoS combines the clinical benefit of acoustic control at the list level with the high ecological validity of linguistically representative test items. The new speech audiometric test is particularly appropriate for assessing sentence understanding in individuals who would otherwise exhibit near-ceiling performance when tested with linguistically more simplified test stimuli. In combination with pure tone audiometric assessment, LiCoS provides valuable complementary information with respect to the functional hearing of patients.
Affiliation(s)
- Martine Coene: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; The Eargroup, Antwerp, Belgium
- Stefanie Krijger: Department of Otorhinolaryngology, Ghent University, Ghent University Hospital, Gent, Belgium
- Eline van Knijff: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Paul J Govaerts: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Department of Otorhinolaryngology, Ghent University, Ghent University Hospital, Gent, Belgium; The Eargroup, Antwerp, Belgium
12. van Knijff EC, Coene M, Govaerts PJ. Speech understanding in noise in elderly adults: the effect of inhibitory control and syntactic complexity. International Journal of Language & Communication Disorders 2018;53:628-642. PMID: 29446191. DOI: 10.1111/1460-6984.12376.
Abstract
Background: Previous research has suggested that speech perception in elderly adults is influenced not only by age-related hearing loss (presbycusis) but also by declines in cognitive abilities, by background noise, and by the syntactic complexity of the message.
Aims: To gain further insight into the influence of these cognitive, acoustic, and linguistic factors on speech perception in elderly adults by investigating inhibitory control as a listener characteristic and background noise type and syntactic complexity as input characteristics.
Methods & Procedures: Phoneme identification was measured in different noise conditions and in different linguistic contexts (single words, sentences with varying syntactic complexity). Additionally, inhibitory control was measured using a visual stimulus-response matching task. Fifty-one adults participated in this study, including elderly adults with age-related hearing loss (n = 9) and with normal hearing (n = 17), and a control group of normal-hearing younger adults (n = 25).
Outcomes & Results: The analysis revealed that elderly adults with normal hearing and with hearing loss were less likely to identify phonemes in single words successfully than younger normal-hearing controls. In the context of sentences, only elderly adults with hearing loss had lower odds of correct phoneme perception than the control group. Additionally, in elderly adults with hearing loss, phoneme-in-sentence perception was linked to age-related declines in inhibitory control. In all participants, phoneme identification in sentences was influenced by both noise type and syntactic complexity.
Conclusions & Implications: Inhibitory control and syntactic complexity might play a significant role in speech perception, especially in elderly listeners. These factors might also influence the results of clinical assessments of speech perception. Testing procedures thus need to be selected, and their results interpreted, carefully with these influences in mind.
Affiliation(s)
- Eline C van Knijff: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, the Netherlands
- Martine Coene: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; The Eargroup, Antwerp, Belgium
- Paul J Govaerts: Language and Hearing Center Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; The Eargroup, Antwerp, Belgium
13. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation. Ear Hear 2018;37:e391-e401. PMID: 27748664. DOI: 10.1097/aud.0000000000000339.
Abstract
Objective: The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., at the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of, or interact with, top-down processing mechanisms associated with grammatical sentence structure.
Design: Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured online at syntactically critical processing points as well as at two control points to capture differences in processing mechanisms. An offline comprehension task served as an additional indicator of sentence (mis)interpretation and enforced syntactic processing.
Results: The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation.
Conclusions: Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the Ease of Language Understanding model.
14.
Abstract
Examination of cognitive functions in the framework of speech perception has recently gained increasing scientific and clinical interest. Especially against the background of age-related hearing impairment and cognitive decline, potential new perspectives in terms of better individualization of auditory diagnosis and rehabilitation might arise. This review addresses the relationships between speech audiometry, speech perception, and cognitive functions. It presents models of speech perception, discusses associations between neuropsychological and audiometric outcomes, and shows recent efforts to consider cognitive functions in speech audiometry.
Affiliation(s)
- H Meister: FB Audiologie, Jean-Uhrmacher-Institut für klinische HNO-Forschung, Universität zu Köln, Geibelstraße 29-31, 50931 Cologne, Germany
15. Strauss DJ, Francis AL. Toward a taxonomic model of attention in effortful listening. Cognitive, Affective & Behavioral Neuroscience 2017;17:809-825. PMID: 28567568. PMCID: PMC5548861. DOI: 10.3758/s13415-017-0513-0.
Abstract
In recent years, there has been increasing interest in studying listening effort. Research on listening effort intersects with the development of active theories of speech perception and contributes to the broader endeavor of understanding speech perception within the context of neuroscientific theories of perception, attention, and effort. Due to the multidisciplinary nature of the problem, researchers vary widely in their precise conceptualization of the catch-all term listening effort. Very recent consensus work stresses the relationship between listening effort and the allocation of cognitive resources, providing a conceptual link to current cognitive neuropsychological theories associating effort with the allocation of selective attention. By linking listening effort to attentional effort, we enable the application of a taxonomy of external and internal attention to the characterization of effortful listening. More specifically, we use a vectorial model to decompose the demand causing listening effort into its mutually orthogonal external and internal components and map the relationship between demanded and exerted effort by means of a resource-limiting term that can represent the influence of motivation as well as vigilance and arousal. Due to its quantitative nature and easy graphical interpretation, this model can be applied to a broad range of problems dealing with listening effort. As such, we conclude that the model provides a good starting point for further research on effortful listening within a more differentiated neuropsychological framework.
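One way to read the vectorial decomposition described above, offered purely as an illustration and not necessarily the authors' exact formulation, is to write the demand underlying listening effort as a sum of mutually orthogonal external and internal components, with exerted effort capped by a resource-limiting term:

    \[ \vec{d} = \vec{d}_{\mathrm{ext}} + \vec{d}_{\mathrm{int}}, \qquad \vec{d}_{\mathrm{ext}} \perp \vec{d}_{\mathrm{int}} \;\Rightarrow\; \lVert\vec{d}\rVert^{2} = \lVert\vec{d}_{\mathrm{ext}}\rVert^{2} + \lVert\vec{d}_{\mathrm{int}}\rVert^{2}, \qquad e = \min\!\bigl(\lVert\vec{d}\rVert,\, r\bigr) \]

where the resource limit r would stand in for the influence of motivation, vigilance, and arousal on how much of the demanded effort is actually exerted.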
Affiliation(s)
- Daniel J Strauss: Systems Neuroscience and Neurotechnology Unit, Neurocenter, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Building 90.5, 66421 Homburg/Saar, Germany; Leibniz-Institute for New Materials, Saarbruecken, Germany; Key Numerics GmbH - Neurocognitive Technologies, Saarbruecken, Germany
- Alexander L Francis: Speech Perception and Cognitive Effort Laboratory, Department of Speech, Language & Hearing Sciences, Purdue University, West Lafayette, IN, USA
16. Linguistic Factors Influencing Speech Audiometric Assessment. BioMed Research International 2016;2016:7249848. PMID: 27830152. PMCID: PMC5088328. DOI: 10.1155/2016/7249848.
Abstract
In speech audiometric testing, hearing performance is typically measured by calculating the number of correct repetitions of a speech stimulus. We investigate to what extent the repetition accuracy of Dutch speech stimuli presented against a background noise is influenced by nonauditory processes. We show that variation in verbal repetition accuracy is partially explained by morpholexical and syntactic features of the target language. Verbs, prepositions, conjunctions, determiners, and pronouns yield significantly lower correct repetitions than nouns, adjectives, or adverbs. The reduced repetition performance for verbs and function words is probably best explained by the similarities in the perceptual nature of verbal morphology and function words in Dutch. For sentences, an overall negative effect of syntactic complexity on speech repetition accuracy was found. The lowest number of correct repetitions was obtained with passive sentences, reflecting the cognitive cost of processing a noncanonical sentence structure. Taken together, these findings may have important implications for the audiological practice. In combination with hearing loss, linguistic complexity may increase the cognitive demands to process sentences in noise, leading to suboptimal functional hearing in day-to-day listening situations. Using test sentences with varying degrees of syntactic complexity may therefore provide useful information to measure functional hearing benefits.
17. Müller JA, Wendt D, Kollmeier B, Brand T. Comparing Eye Tracking with Electrooculography for Measuring Individual Sentence Comprehension Duration. PLoS One 2016;11:e0164627. PMID: 27764125. PMCID: PMC5072642. DOI: 10.1371/journal.pone.0164627.
Abstract
The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations.
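The processing-duration estimates compared above are derived with a bootstrap procedure. As a generic illustration of the bootstrap idea only, and not the specific resampling scheme of Wendt et al. (2015), the sketch below computes a percentile confidence interval for a participant's mean comprehension duration from hypothetical single-trial values.

    import numpy as np

    def bootstrap_ci(durations, n_boot=2000, ci=95, seed=0):
        rng = np.random.default_rng(seed)
        durations = np.asarray(durations, dtype=float)
        # Resample trials with replacement and recompute the mean each time
        boots = np.array([rng.choice(durations, size=durations.size, replace=True).mean()
                          for _ in range(n_boot)])
        half = (100 - ci) / 2
        low, high = np.percentile(boots, [half, 100 - half])
        return durations.mean(), (low, high)

    # Hypothetical single-trial comprehension durations in seconds
    print(bootstrap_ci([2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.3]))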
Affiliation(s)
- Jana Annina Müller: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Dorothea Wendt: Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark; Eriksholm Research Centre, Snekkersten, Denmark
- Birger Kollmeier: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Thomas Brand: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
18.
Abstract
Examination of cognitive functions in the framework of speech perception has recently gained increasing scientific and clinical interest. Especially against the background of age-related hearing impairment and cognitive decline, potential new perspectives in terms of a better individualization of auditory diagnosis and rehabilitation might arise. This review addresses the relationships between speech audiometry, speech perception, and cognitive functions. It presents models of speech perception, discusses associations between neuropsychological and audiometric outcomes, and shows examples of recent efforts undertaken in Germany to consider cognitive functions in speech audiometry.
Affiliation(s)
- H Meister: Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Geibelstr. 29-31, 50931 Cologne, Germany
19. Carroll R, Ruigendijk E. ERP responses to processing prosodic phrasing of sentences in amplitude modulated noise. Neuropsychologia 2016;82:91-103. PMID: 26776233. DOI: 10.1016/j.neuropsychologia.2016.01.014.
Abstract
Intonation phrase boundaries (IPBs) were hypothesized to be especially difficult to process in the presence of an amplitude-modulated noise masker because of a potential rhythmic competition. In an event-related potential study, IPBs were presented in silence, stationary noise, and amplitude-modulated noise. We elicited centro-parietal Closure Positive Shifts (CPS) at IPBs in 23 young adults with normal hearing in all acoustic conditions, albeit with some differences. CPS peak amplitudes were highest in stationary noise, followed by modulated noise, and lowest in silence. Both noise types elicited CPS delays, slightly more so in stationary than in amplitude-modulated noise. These data suggest that amplitude modulation is not tantamount to a rhythmic competitor for prosodic phrasing but rather supports an assumed speech perception benefit due to local release from masking. The duration of CPS time windows was, however, not only longer in noise than in silence, but also longer for amplitude-modulated than for stationary noise. This is interpreted as support for additional processing load associated with amplitude modulation for the CPS component. Taken together, processing prosodic phrasing of sentences in amplitude-modulated noise seems to involve the same issues that have been observed for the perception and processing of segmental information related to lexical items presented in noise: a benefit from local release from masking, even for prosodic cues, and a detrimental additional processing load that is associated with either stream segregation or signal reconstruction.
Affiliation(s)
- Rebecca Carroll: Cluster of Excellence 'Hearing4all', University of Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Ammerländer Heerstraße 114-118, 26111 Oldenburg, Germany
- Esther Ruigendijk: Cluster of Excellence 'Hearing4all', University of Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Ammerländer Heerstraße 114-118, 26111 Oldenburg, Germany
20. Wendt D, Kollmeier B, Brand T. How hearing impairment affects sentence comprehension: using eye fixations to investigate the duration of speech processing. Trends Hear 2015;19:2331216515584149. PMID: 25910503. PMCID: PMC4409940. DOI: 10.1177/2331216515584149.
Abstract
The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications.
Affiliation(s)
- Dorothea Wendt: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark
- Birger Kollmeier: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Thomas Brand: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
21. An eye-tracking paradigm for analyzing the processing time of sentences with different linguistic complexities. PLoS One 2014;9:e100186. PMID: 24950184. PMCID: PMC4065036. DOI: 10.1371/journal.pone.0100186.
Abstract
An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics.
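As a generic sketch of the kind of analysis described above, the code below averages the tendency to fixate the target picture over time and returns the first moment at which that proportion stably exceeds a threshold (a stand-in for the "decision moment"). The threshold, window length, and synthetic gaze data are illustrative assumptions, not the published bootstrap-based procedure.

    import numpy as np

    def decision_moment(target_fix, fs, threshold=0.75, min_duration=0.2):
        """target_fix: (n_trials, n_samples) binary matrix, 1 = gaze on target picture."""
        p_target = target_fix.mean(axis=0)              # fixation proportion over time
        win = int(min_duration * fs)
        above = p_target > threshold
        for i in range(len(above) - win):
            if above[i:i + win].all():                  # first sustained crossing
                return i / fs, p_target
        return None, p_target

    # Example with synthetic gaze data: 40 trials, 3 s sampled at 60 Hz
    rng = np.random.default_rng(1)
    t = np.arange(180) / 60
    prob = 0.5 + 0.45 / (1 + np.exp(-(t - 1.5) * 6))    # drifts from chance toward target
    fix = rng.random((40, 180)) < prob
    moment, curve = decision_moment(fix, fs=60)
    print(moment)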
22. Uslar VN, Carroll R, Hanke M, Hamann C, Ruigendijk E, Brand T, Kollmeier B. Development and evaluation of a linguistically and audiologically controlled sentence intelligibility test. The Journal of the Acoustical Society of America 2013;134:3039-3056. PMID: 24116439. DOI: 10.1121/1.4818760.
Abstract
To allow for a systematic variation of the linguistic complexity of sentences while acoustically controlling for the intelligibility of sentence fragments, a German corpus, Oldenburg linguistically and audiologically controlled sentences (OLACS), was designed, implemented, and evaluated. Sentences were controlled for plausibility with a questionnaire survey. Verification of the speech material was performed in three listening conditions (quiet, stationary noise, and fluctuating noise) by collecting speech reception thresholds (SRTs) and response latencies, as well as individual cognitive measures, for 20 young listeners with normal hearing. Consistent differences in response latencies across sentence types verified the effect of linguistic complexity on processing speed. The addition of noise decreased response latencies, giving evidence for different response strategies for measurements in noise. Linguistic complexity had a significant effect on SRT. In fluctuating noise, this effect was more pronounced, indicating that fluctuating noise is associated with stronger cognitive contributions. SRTs in quiet correlated with hearing thresholds, whereas cognitive measures explained up to 40% of the variance in SRTs in noise. In conclusion, OLACS appears to be a suitable tool for assessing the interaction between aspects of speech understanding (including cognitive processing) and speech intelligibility in German.
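For readers unfamiliar with SRT measurement, the sketch below shows a simple adaptive SNR track of the kind commonly used to estimate a speech reception threshold near 50% intelligibility; the step size, scoring rule, and simulated listener are assumptions for illustration and do not represent the OLACS evaluation procedure.

    import numpy as np

    def simulate_trial(snr_db, srt_true=-6.0, k=0.6):
        """Simulated listener: logistic psychometric function centered on the true SRT."""
        p_correct = 1.0 / (1.0 + np.exp(-k * (snr_db - srt_true)))
        return np.random.random() < p_correct

    def adaptive_srt(n_trials=30, start_snr=0.0, step_db=2.0):
        snr, track = start_snr, []
        for _ in range(n_trials):
            correct = simulate_trial(snr)
            track.append(snr)
            snr += -step_db if correct else step_db   # 1-up/1-down targets ~50% correct
        return float(np.mean(track[10:]))             # crude SRT estimate from later trials

    print(adaptive_srt())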
Affiliation(s)
- Verena N Uslar: Department of Medical Physics and Acoustics, Carl von Ossietzky University, Oldenburg, Germany