1
Gómez-Vicente V, Esquiva G, Lancho C, Benzerdjeb K, Jerez AA, Ausó E. Importance of Visual Support Through Lipreading in the Identification of Words in Spanish Language. Language and Speech 2024:238309241270741. PMID: 39189455. DOI: 10.1177/00238309241270741.
Abstract
We sought to examine the contribution of visual cues, such as lipreading, to the identification of familiar items (words) and unfamiliar items (phonemes) in terms of percent accuracy. For that purpose, in this retrospective study, we presented lists of words and phonemes (recorded by a healthy adult female voice) in auditory (A) and audiovisual (AV) modalities to 65 Spanish normal-hearing male and female listeners classified into four age groups. Our results showed a remarkable benefit of AV information for word and phoneme recognition. Regarding gender, women performed better than men in both A and AV modalities, although the differences were significant for words only, not for phonemes. Concerning age, significant differences in word recognition were detected in the A modality only, between the youngest (18-29 years old) and oldest (⩾50 years old) groups. We conclude that visual information enhances word and phoneme recognition and that women are more influenced by visual signals than men in AV speech perception. By contrast, age overall does not appear to be a limiting factor for word recognition, with no significant differences observed in the AV modality.
Affiliation(s)
- Gema Esquiva
- Department of Optics, Pharmacology and Anatomy, University of Alicante, Spain; Alicante Institute for Health and Biomedical Research (ISABIAL), Spain
- Carmen Lancho
- Data Science Laboratory, University Rey Juan Carlos, Spain
- Eva Ausó
- Department of Optics, Pharmacology and Anatomy, University of Alicante, Spain
2
Krason A, Zhang Y, Man H, Vigliocco G. Mouth and facial informativeness norms for 2276 English words. Behav Res Methods 2024; 56:4786-4801. PMID: 37604959. PMCID: PMC11289175. DOI: 10.3758/s13428-023-02216-z.
Abstract
Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g., by blurring the area of a speaker's lips) or by looking at how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond the informativeness of single phonemes is challenging due to coarticulation and language variations (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2276 English words, varying in length, frequency, and age of acquisition, using the phonological distance between a word and participants' speechreading guesses. The results showed that the MaFI norms capture the dynamic nature of mouth and facial movements per word well: words containing phonemes with roundness and frontness features, as well as visemes characterized by lower lip tuck, lip rounding, and lip closure, are visually more informative. We also showed that the more of these features a word contains, the more visually informative it is. Finally, we demonstrated that the MaFI norms generalize across different variants of English. The norms are freely accessible via the Open Science Framework ( https://osf.io/mna8j/ ) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).
Affiliation(s)
- Anna Krason
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK.
- Ye Zhang
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK.
- Hillarie Man
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK.
- Gabriella Vigliocco
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK.
3
Sekine K, Özyürek A. Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Front Psychol 2024; 14:1305562. PMID: 38303780. PMCID: PMC10832995. DOI: 10.3389/fpsyg.2023.1305562.
Abstract
The present study investigated to what extent children, compared to adults, benefit from gestures when disambiguating degraded speech, by manipulating the speech signal and the manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a level of accuracy comparable to that of adults in the degraded-speech-only condition. Furthermore, for adults the gesture enhancement was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children disambiguate degraded speech, but children need more phonological information than adults to benefit from them. Children's multimodal language integration needs to develop further to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard in environmental noise or through a face mask.
Affiliation(s)
- Kazuki Sekine
- Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
- Aslı Özyürek
- Centre for Language Studies, Radboud University, Nijmegen, Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
4
Windle R, Dillon H, Heinrich A. A review of auditory processing and cognitive change during normal ageing, and the implications for setting hearing aids for older adults. Front Neurol 2023; 14:1122420. PMID: 37409017. PMCID: PMC10318159. DOI: 10.3389/fneur.2023.1122420.
Abstract
Throughout our adult lives there is a decline in peripheral hearing, auditory processing, and the elements of cognition that support listening ability. Audiometry provides no information about the status of auditory processing and cognition, and older adults often struggle in complex listening situations, such as speech-in-noise perception, even if their peripheral hearing appears normal. Hearing aids can address some aspects of peripheral hearing impairment and improve signal-to-noise ratios. However, they cannot directly enhance central processes, and they may introduce distortion to sound that can undermine listening ability. This review paper highlights the need to consider the distortion introduced by hearing aids, specifically for normally ageing older adults. We focus on patients with age-related hearing loss because they represent the vast majority of the population attending audiology clinics. We believe it is important to recognize that the combination of peripheral and central, auditory and cognitive decline makes older adults some of the most complex patients seen in audiology services, so they should not be treated as "standard" despite the high prevalence of age-related hearing loss. We argue that a primary concern should be to avoid hearing aid settings that introduce distortion to speech envelope cues, which is not a new concept. The primary cause of distortion is the speed and range of change to hearing aid amplification (i.e., compression). We argue that slow-acting compression should be considered as a default for some users, and that other advanced features should be reconsidered as they may also introduce distortion that some users may not be able to tolerate. We discuss how this can be incorporated into a pragmatic approach to hearing aid fitting that does not require increased loading on audiology services.
Affiliation(s)
- Richard Windle
- Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Harvey Dillon
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Department of Linguistics, Macquarie University, North Ryde, NSW, Australia
- Antje Heinrich
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Division of Human Communication, Development and Hearing, School of Health Sciences, University of Manchester, Manchester, United Kingdom
5
Williams BT, Viswanathan N, Brouwer S. The effect of visual speech information on linguistic release from masking. J Acoust Soc Am 2023; 153:602. PMID: 36732222. PMCID: PMC10162837. DOI: 10.1121/10.0016865.
Abstract
Listeners often experience challenges understanding a person (target) in the presence of competing talkers (maskers). This difficulty is reduced when visual speech information (VSI; lip movements, degree of mouth opening) is available and during linguistic release from masking (LRM; masking decreases when the masker is in a dissimilar language). We investigated whether and how LRM occurs with VSI. We presented English targets with either Dutch or English maskers in audio-only and audiovisual conditions to 62 American English participants. The signal-to-noise ratio (SNR) was easy in Experiment 1 (0 dB audio-only, -8 dB audiovisual) and hard in Experiment 2 (-8 and -16 dB) to assess the effects of modality on LRM across the same and different SNRs. We found LRM in the audiovisual condition at all SNRs and in the audio-only condition at -8 dB, demonstrating reliable LRM under audiovisual conditions. The results also revealed that LRM is modulated by modality, with larger LRM in audio-only conditions, indicating that introducing VSI weakens LRM. Furthermore, participants performed better with Dutch maskers than with English maskers, both with and without VSI. This establishes that listeners use both VSI and dissimilar-language maskers to overcome masking. Our study shows that LRM persists in the audiovisual modality and that its strength depends on the modality.
Affiliation(s)
- Brittany T Williams
- Department of Communication Sciences and Disorders, The Pennsylvania State University, State College, Pennsylvania 16801, USA
- Navin Viswanathan
- Department of Communication Sciences and Disorders, The Pennsylvania State University, State College, Pennsylvania 16801, USA
- Susanne Brouwer
- Department of Modern Languages and Cultures, Radboud University, Nijmegen, The Netherlands
6
Hilviu D, Gabbatore I, Parola A, Bosco FM. A cross-sectional study to assess pragmatic strengths and weaknesses in healthy ageing. BMC Geriatr 2022; 22:699. PMID: 35999510. PMCID: PMC9400309. DOI: 10.1186/s12877-022-03304-z.
Abstract
Background: Ageing refers to the natural and physiological changes that individuals experience over the years. This process also involves changes in communicative-pragmatics, namely the ability to convey meanings in social contexts and to interact with other people using various expressive means, such as the linguistic, extralinguistic and paralinguistic aspects of communication. Very few studies have provided a complete assessment of communicative-pragmatic performance in healthy ageing.

Methods: The aim of this study was to comprehensively assess communicative-pragmatic ability in three samples of 20 healthy adults (N = 60), each belonging to a different age range (20–40, 65–75, 76–86 years old), and to compare their performance in order to observe any potential changes in their ability to communicate. We also explored the potential role of education and sex in the communicative-pragmatic abilities observed. The three age groups were evaluated with a between-subjects design by means of the Assessment Battery for Communication (ABaCo), a validated assessment tool comprising five scales: linguistic, extralinguistic, paralinguistic, contextual and conversational.

Results: The results indicated that the pragmatic ability assessed by the ABaCo is poorer in older participants than in younger ones (main effect of age group: F(2,56) = 9.097; p < .001). Specifically, significant differences were detected in tasks on the extralinguistic, paralinguistic and contextual scales. Whereas the data highlighted a significant role of education (F(1,56) = 4.713; p = .034), no sex-related differences were detected.

Conclusions: Our results suggest that the ageing process may also affect communicative-pragmatic ability, and a comprehensive assessment of the components of this ability may help to better identify the difficulties often experienced by older individuals in their daily life activities.
Supplementary Information The online version contains supplementary material available at 10.1186/s12877-022-03304-z.
Affiliation(s)
- Dize Hilviu
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy
- Ilaria Gabbatore
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, Oulu, Finland
- Alberto Parola
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy
- Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Francesca M Bosco
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy
- Neuroscience Institute of Turin (NIT), University of Turin, Turin, Italy
7
van Nispen K, Sekine K, van der Meulen I, Preisig BC. Gesture in the eye of the beholder: An eye-tracking study on factors determining the attention for gestures produced by people with aphasia. Neuropsychologia 2022; 174:108315. DOI: 10.1016/j.neuropsychologia.2022.108315.
8
Wilms V, Drijvers L, Brouwer S. The Effects of Iconic Gestures and Babble Language on Word Intelligibility in Sentence Context. J Speech Lang Hear Res 2022; 65:1822-1838. PMID: 35439423. DOI: 10.1044/2022_jslhr-21-00387.
Abstract
Purpose: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves in the presence of iconic co-speech gestures and with foreign compared to native babble.

Method: Thirty-two native Dutch participants performed a Dutch word recognition task in context, in which they were presented with videos of an actress uttering short Dutch sentences (e.g., Ze begint te openen, "She starts to open"). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; they were asked to type out what the actress said. Accurate identification of the action verbs at the end of the target sentences was measured.

Results: Performance on the task was better in the gesture than in the nongesture conditions (i.e., a gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble.

Conclusions: Listeners benefit from iconic co-speech gestures during communication and from foreign compared to native background speech. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication, and especially to those who often work in public places where competing speech is present in the background.
Affiliation(s)
- Veerle Wilms
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Susanne Brouwer
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
9
Cuevas P, He Y, Billino J, Kozasa E, Straube B. Age-related effects on the neural processing of semantic complexity in a continuous narrative: Modulation by gestures already present in young to middle-aged adults. Neuropsychologia 2020; 151:107725. PMID: 33347914. DOI: 10.1016/j.neuropsychologia.2020.107725.
Abstract
The processing of semantically complex speech is a demanding task that can be facilitated by speech-associated arm and hand gestures. However, the role of age in the perception of semantic complexity, and the influence of gestures in this context, remains unclear. The goal of this study was to investigate whether age-related differences are already present in early adulthood during the processing of semantic complexity and gestures. To this end, we analyzed fMRI images from a sample of 38 young and middle-aged participants (age range: 19-55) whose task was to listen to and watch a narrative. The narrative contained segments varying in semantic complexity, measured by idea density, which were spontaneously accompanied by gestures. Consistent with previous findings in young adults, we observed increased activation for passages of lower compared to higher complexity in bilateral temporal areas and the precuneus. During the perception of complex passages, BOLD signal in left frontal and left parietal regions correlated with increasing age. This correlation was reduced for passages presented with gestures. Median-split-based post hoc comparisons confirmed that group differences between younger (19-23 years) and older (24-55 years) adults within the early adult lifespan were significantly reduced for passages with gestures. Our results suggest that older adults within early adulthood adapt to the demands of highly complex passages by activating additional regions when no gesture information is available. Gestures might play a facilitative role with increasing age, especially when speech is complex.
Affiliation(s)
- Paulina Cuevas
- Translational Neuroimaging Marburg, Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, 35039, Marburg, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany.
- Yifei He
- Translational Neuroimaging Marburg, Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, 35039, Marburg, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Jutta Billino
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany; Department of Psychology, Justus Liebig University Giessen, Otto-Behaghel-Straße 10F, 35394, Gießen, Germany
- Elisa Kozasa
- Hospital Israelita Albert Einstein, 05652-900, São Paulo, Brazil
- Benjamin Straube
- Translational Neuroimaging Marburg, Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, 35039, Marburg, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany