1
Wang X, Mo Y, Kong F, Guo W, Zhou H, Zheng N, Schnupp JWH, Zheng Y, Meng Q. Cochlear-implant Mandarin tone recognition with a disyllabic word corpus. Front Psychol 2022; 13:1026116. [DOI: 10.3389/fpsyg.2022.1026116]
Abstract
Although pitch is considered the primary cue for discriminating lexical tones, secondary cues such as loudness contour and duration may allow some cochlear implant (CI) tone discrimination even when pitch cues are severely degraded. To isolate pitch cues from other cues, we developed a new disyllabic word stimulus set (Di) whose primary (pitch) and secondary (loudness) cues vary independently. The Di set consists of 270 disyllabic words, each having a distinct meaning depending on the perceived tone. Thus, listeners who hear the primary pitch cue clearly may hear a different meaning from listeners who struggle with the pitch cue and must rely on the secondary loudness contour. A lexical tone recognition experiment compared Di with a monosyllabic set of natural recordings. Seventeen CI users and eight normal-hearing (NH) listeners took part in the experiment. Results showed that CI users had poorer encoding of pitch cues, and their tone recognition with the Di corpus was significantly influenced by "missing" or "confusing" secondary cues. Pitch-contour-based tone recognition remains far from satisfactory for CI users compared with NH listeners, even though some appear to integrate multiple cues to achieve high scores. This disyllabic corpus could be used to examine CI users' pitch recognition and the effectiveness of Mandarin tone enhancement strategies based on pitch-cue enhancement. The Di corpus is freely available online: https://github.com/BetterCI/DiTone.
2
Firestone GM, McGuire K, Liang C, Zhang N, Blankenship CM, Xiang J, Zhang F. A Preliminary Study of the Effects of Attentive Music Listening on Cochlear Implant Users' Speech Perception, Quality of Life, and Behavioral and Objective Measures of Frequency Change Detection. Front Hum Neurosci 2020; 14:110. [PMID: 32296318] [PMCID: PMC7136537] [DOI: 10.3389/fnhum.2020.00110]
Abstract
Introduction Most cochlear implant (CI) users have difficulty in listening tasks that rely strongly on perception of frequency changes (e.g., speech perception in noise, musical melody perception, etc.). Some previous studies using behavioral or subjective assessments have shown that short-term music training can benefit CI users’ perception of music and speech. Electroencephalographic (EEG) recordings may reveal the neural basis for music training benefits in CI users. Objective To examine the effects of short-term music training on CI hearing outcomes using a comprehensive test battery of subjective evaluation, behavioral tests, and EEG measures. Design Twelve adult CI users were recruited for a home-based music training program that focused on attentive listening to music genres and materials that have an emphasis on melody. The participants used a music streaming program (i.e., Pandora) downloaded onto personal electronic devices for training. The participants attentively listened to music through a direct audio cable or through Bluetooth streaming. The training schedule was 40 min/session/day, 5 days/week, for either 4 or 8 weeks. The pre-training and post-training tests included: hearing thresholds, Speech, Spatial and Qualities of Hearing Scale (SSQ12) questionnaire, psychoacoustic tests of frequency change detection threshold (FCDT), speech recognition tests (CNC words, AzBio sentences, and QuickSIN), and EEG responses to tones that contained different magnitudes of frequency changes. Results All participants except one finished the 4- or 8-week training, resulting in a dropout rate of 8.33%. Eleven participants performed all tests except for two who did not participate in EEG tests. Results showed a significant improvement in the FCDTs as well as performance on CNC and QuickSIN after training (p < 0.05), but no significant improvement in SSQ scores (p > 0.05). 
Results of the EEG tests showed larger post-training cortical auditory evoked potentials (CAEPs) in seven of the nine participants, suggesting a better cortical processing of both stimulus onset and within-stimulus frequency changes. Conclusion These preliminary data suggest that extensive, focused music listening can improve frequency perception and speech perception in CI users. Further studies that include a larger sample size and control groups are warranted to determine the efficacy of short-term music training in CI users.
Affiliation(s)
- Gabrielle M Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Chelsea M Blankenship
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
3
Spitzer ER, Landsberger DM, Friedmann DR, Galvin JJ. Pleasantness Ratings for Harmonic Intervals With Acoustic and Electric Hearing in Unilaterally Deaf Cochlear Implant Patients. Front Neurosci 2019; 13:922. [PMID: 31551686] [PMCID: PMC6733976] [DOI: 10.3389/fnins.2019.00922]
Abstract
Background Harmony is an important part of tonal music that conveys context, form and emotion. Two notes sounded simultaneously form a harmonic interval. In normal-hearing (NH) listeners, some harmonic intervals (e.g., minor 2nd, tritone, major 7th) typically sound more dissonant than others (e.g., octave, major 3rd, 4th). Because of the limited spectro-temporal resolution afforded by cochlear implants (CIs), music perception is generally poor. However, CI users may still be sensitive to relative dissonance across intervals. In this study, dissonance ratings for harmonic intervals were measured in 11 unilaterally deaf CI patients, in whom ratings from the CI could be compared to those from the normal ear. Methods Stimuli consisted of pairs of equal amplitude MIDI piano tones. Intervals spanned a range of two octaves relative to two root notes (F3 or C4). Dissonance was assessed in terms of subjective pleasantness ratings for intervals presented to the NH ear alone, the CI ear alone, and both ears together (NH + CI). Ratings were collected for both root notes for within- and across-octave intervals (1–12 and 13–24 semitones). Participants rated the pleasantness of each interval by clicking on a line anchored with “least pleasant” and “most pleasant.” A follow-up experiment repeated the task with a smaller stimulus set. Results With NH-only listening, within-octave intervals minor 2nd, major 2nd, and major 7th were rated least pleasant; major 3rd, 5th, and octave were rated most pleasant. Across-octave counterparts were similarly rated. With CI-only listening, ratings were consistently lower and showed a reduced range. Mean ratings were highly correlated between NH-only and CI-only listening (r = 0.845, p < 0.001). Ratings were similar between NH-only and NH + CI listening, with no significant binaural enhancement/interference. The follow-up tests showed that ratings were reliable for the least and most pleasant intervals. 
Discussion Although pleasantness ratings were less differentiated for the CI ear than the NH ear, there were similarities between the two listening modes. Given the lack of spectro-temporal detail needed for harmonicity-based distinctions, temporal envelope interactions (within and across channels) associated with a perception of roughness may contribute to dissonance perception for harmonic intervals with CI-only listening.
Affiliation(s)
- Emily R Spitzer
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David M Landsberger
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David R Friedmann
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
4
Categorisation of natural sounds at different stages of auditory recovery in cochlear implant adult deaf patients. Hear Res 2018; 367:182-194. [DOI: 10.1016/j.heares.2018.06.006]
5
Paquette S, Ahmed GD, Goffi-Gomez MV, Hoshino ACH, Peretz I, Lehmann A. Musical and vocal emotion perception for cochlear implants users. Hear Res 2018; 370:272-282. [PMID: 30181063] [DOI: 10.1016/j.heares.2018.08.009]
Abstract
Cochlear implants can successfully restore hearing in profoundly deaf individuals and enable speech comprehension. However, the acoustic signal provided is severely degraded and, as a result, many important acoustic cues for perceiving emotion in voices and music are unavailable. A deficit in auditory emotion processing among cochlear implant users has been clearly established. Yet the extent of this deficit, and which specific cues remain available to cochlear implant users, are unknown because of several confounding factors. Here we assessed recognition of the most basic forms of auditory emotion and aimed to identify which acoustic cues are most relevant for recognizing emotions through cochlear implants. To do so, we used stimuli that allowed vocal and musical auditory emotions to be assessed comparatively while controlling for confounding factors. These stimuli were used to evaluate emotion perception in cochlear implant users (Experiment 1) and to investigate emotion perception in natural versus cochlear implant hearing in the same participants with a validated cochlear implant simulation approach (Experiment 2). Our results showed that vocal and musical fear was not accurately recognized by cochlear implant users. Interestingly, both experiments found that timbral acoustic cues (energy and roughness) correlated with participant ratings for both vocal and musical emotional bursts in the cochlear implant simulation condition. This suggests that specific attention should be given to these cues, especially energy and roughness, in the design of cochlear implant processors and rehabilitation protocols. For instance, music-based interventions focused on timbre could improve emotion perception and regulation, and thus social functioning, in children with cochlear implants during development.
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, MA, USA
- G D Ahmed
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Québec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, SP, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, SP, Brazil
- I Peretz
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Québec, Canada
6
Carcagno S, Micheyl C, Cousineau M, Pressnitzer D, Demany L. Effect of stimulus type and pitch salience on pitch-sequence processing. J Acoust Soc Am 2018; 143:3665. [PMID: 29960504] [DOI: 10.1121/1.5043405]
Abstract
Using a same-different discrimination task, it has been shown that discrimination performance for sequences of complex tones varying just detectably in pitch is less dependent on sequence length (1, 2, or 4 elements) when the tones contain resolved harmonics than when they do not [Cousineau, Demany, and Pressnitzer (2009). J. Acoust. Soc. Am. 126, 3179-3187]. This effect had been attributed to the activation of automatic frequency-shift detectors (FSDs) by the shifts in resolved harmonics. The present study provides evidence against this hypothesis by showing that the sequence-processing advantage found for complex tones with resolved harmonics is not found for pure tones or other sounds supposed to activate FSDs (narrow bands of noise and wide-band noises eliciting pitch sensations due to interaural phase shifts). The present results also indicate that for pitch sequences, processing performance is largely unrelated to pitch salience per se: for a fixed level of discriminability between sequence elements, sequences of elements with salient pitches are not necessarily better processed than sequences of elements with less salient pitches. An ideal-observer model for the same-different binary-sequence discrimination task is also developed in the present study; the model allows d' for this task to be computed using numerical methods.
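The ideal-observer model itself is developed in the paper; as a rough illustration of how performance in a same-different task can be obtained numerically, here is a generic Monte-Carlo sketch of a textbook differencing model. This is not the authors' model: the Gaussian observation assumption, the |x2 - x1| decision rule, and the grid search over criteria are all illustrative assumptions.

```python
import numpy as np

def same_different_pc(d_prime, n_trials=200_000, seed=0):
    """Monte-Carlo percent correct for a single-pair same-different task
    under a differencing rule: respond "different" when the absolute
    difference between the two observations exceeds a criterion.
    Generic textbook model, not the paper's ideal observer."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(0.0, 1.0, n_trials)
    # "Same" trials: both observations drawn from the same distribution.
    same_diff = np.abs(rng.normal(0.0, 1.0, n_trials) - x1)
    # "Different" trials: the second observation is shifted by d'.
    diff_diff = np.abs(rng.normal(d_prime, 1.0, n_trials) - x1)
    # Grid-search the criterion; report the best unbiased percent correct.
    best = 0.0
    for c in np.linspace(0.0, d_prime + 4.0, 200):
        pc = 0.5 * ((same_diff <= c).mean() + (diff_diff > c).mean())
        best = max(best, pc)
    return best
```

With d' = 0 the estimate sits at chance (0.5), and it grows toward 1 as d' increases, which is the qualitative behavior any such numerical model must reproduce.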
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Christophe Micheyl
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704, USA
- Marion Cousineau
- Department of Psychology, International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École Normale Supérieure, PSL Research University, Centre National de la Recherche Scientifique, 29 Rue d'Ulm, 75005 Paris, France
- Laurent Demany
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, Université de Bordeaux and Centre National de la Recherche Scientifique, 146 Rue Leo-Saignat, F-33076 Bordeaux, France
7
A Follow-Up Study on Music and Lexical Tone Perception in Adult Mandarin-Speaking Cochlear Implant Users. Otol Neurotol 2018; 38:e421-e428. [PMID: 28984805] [DOI: 10.1097/mao.0000000000001580]
Abstract
OBJECTIVE The aim was to evaluate the development of music and lexical tone perception in Mandarin-speaking adult cochlear implant (CI) users over a period of 1 year. STUDY DESIGN Prospective patient series. SETTING Tertiary hospital and research institute. PATIENTS Twenty-five adult CI users, aged 19 to 75 years, participated in a year-long follow-up evaluation. Forty normal-hearing adults participated as a control group to provide the normal value range. INTERVENTIONS The Musical Sounds in Cochlear Implants (Mu.S.I.C.) test battery was administered to evaluate music perception ability. The Mandarin Tone Identification in Noise Test (M-TINT) was used to assess lexical tone recognition. The tests for CI users were completed at 1, 3, 6, and 12 months after CI switch-on. MAIN OUTCOME MEASURES Quantitative and statistical analysis of results from the music and tone perception tests. RESULTS Both music perception and tone recognition showed an overall improvement over the 1-year follow-up. The improving trends were most evident in the early period, especially the first 6 months after switch-on. There were significant improvements in melody discrimination (p < 0.01), timbre identification (p < 0.001), and tone recognition in quiet (p < 0.0001) and in noise (p < 0.0001). CONCLUSIONS Adult Mandarin-speaking CI users showed progressively improving music and tone perception during the 1-year follow-up, most prominently in the first 6 months of CI use. It is therefore essential to strengthen rehabilitation training within the first 6 months.
8
Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018; 49:143-151. [PMID: 28958161] [DOI: 10.1177/1550059417733386]
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, to directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while they listened to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition, pointing to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found delayed latencies for musical bursts relative to vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implant's limitations, the auditory cortex can distinguish between vocal and musical stimuli, and it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users could help characterize emotional processing in CI users and ultimately inform optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
Affiliation(s)
- Duha G Ahmed
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
- Sebastian Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Anthony Zeitouni
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Alexandre Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
9
Meng Q, Zheng N, Li X. Loudness Contour Can Influence Mandarin Tone Recognition: Vocoder Simulation and Cochlear Implants. IEEE Trans Neural Syst Rehabil Eng 2016; 25:641-649. [PMID: 27448366] [DOI: 10.1109/tnsre.2016.2593489]
Abstract
Lexical tone recognition with current cochlear implants (CIs) remains unsatisfactory due to significantly degraded pitch-related acoustic cues, which dominate tone recognition by normal-hearing (NH) listeners. Several secondary cues (e.g., amplitude contour, duration, and spectral envelope) that influence tone recognition in NH listeners and CI users have been studied. This work proposes a loudness contour manipulation algorithm, Loudness-Tone (L-Tone), to investigate the effects of loudness contour on Mandarin tone recognition and the effectiveness of using loudness cues to enhance tone recognition for CI users. With L-Tone, the intensity of sound samples is multiplied by gain values determined by instantaneous fundamental frequencies (F0s) and pre-defined gain-F0 mapping functions. Perceptual experiments were conducted with a four-channel noise-band vocoder simulation in NH listeners and with CI users. The results suggested that (1) loudness contour is a useful secondary cue for Mandarin tone recognition, especially when pitch cues are significantly degraded; and (2) L-Tone can improve Mandarin tone recognition in both simulated and actual CI hearing without a significant negative effect on vowel and consonant recognition. L-Tone is a promising algorithm for incorporation into real-time CI processing and off-line CI rehabilitation training software.
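The core L-Tone operation — scaling intensity by a gain read off a gain-F0 mapping — can be sketched as follows. This is a hedged illustration, not the published implementation: the linear mapping, its F0 range, the gain limits, and the frame-based processing are all assumed parameters for demonstration.

```python
import numpy as np

def apply_ltone(signal, f0_track, frame_len, f0_min=80.0, f0_max=400.0,
                gain_min=0.5, gain_max=2.0):
    """Scale each frame's amplitude by a gain derived from its
    instantaneous F0 via a linear gain-F0 mapping (hypothetical mapping;
    the published L-Tone mapping functions may differ)."""
    out = np.asarray(signal, dtype=float).copy()
    for i, f0 in enumerate(f0_track):
        start, stop = i * frame_len, min((i + 1) * frame_len, len(out))
        if f0 <= 0:          # unvoiced frame: leave amplitude unchanged
            continue
        # Map F0 linearly onto [gain_min, gain_max], clipped at the ends,
        # so higher-pitched frames come out louder.
        t = np.clip((f0 - f0_min) / (f0_max - f0_min), 0.0, 1.0)
        out[start:stop] *= gain_min + t * (gain_max - gain_min)
    return out
```

For example, with this made-up mapping a frame at the bottom of the F0 range is attenuated (gain 0.5) while a frame at the top is amplified (gain 2.0), so the loudness contour tracks the pitch contour.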
10
Meister H, Fürsen K, Streicher B, Lang-Roth R, Walger M. The Use of Voice Cues for Speaker Gender Recognition in Cochlear Implant Recipients. J Speech Lang Hear Res 2016; 59:546-556. [PMID: 27135985] [DOI: 10.1044/2015_jslhr-h-15-0128]
Abstract
PURPOSE The focus of this study was to examine the influence of fundamental frequency (F0) and vocal tract length (VTL) modifications on speaker gender recognition in cochlear implant (CI) recipients for different stimulus types. METHOD Single words and sentences were manipulated using isolated or combined F0 and VTL cues. Using an 11-point rating scale, CI recipients and listeners with normal hearing rated the maleness/femaleness of the corresponding voice. RESULTS Speaker gender ratings for combined F0 and VTL modifications were similar across all stimulus types in both CI recipients and listeners with normal hearing, although the CI recipients showed a somewhat larger ambiguity. In contrast to listeners with normal hearing, F0-VTL and F0-only modifications revealed similar ratings in the CI recipients when using words as stimuli. However, when sentences were used, a difference was found between F0-VTL-based and F0-based ratings. Modifying VTL cues alone did not affect ratings in the CI group. CONCLUSIONS Whereas speaker gender ratings by listeners with normal hearing relied on combined VTL and F0 cues, CI recipients made only limited use of VTL cues, which might be one reason behind problems with identifying the speaker on the basis of voice. However, use of the voice cues depended on stimulus type, with the greater information in sentences allowing a more detailed analysis than single words in both listener groups.
11
Collett E, Marx M, Gaillard P, Roby B, Fraysse B, Deguine O, Barone P. Categorization of common sounds by cochlear implanted and normal hearing adults. Hear Res 2016; 335:207-219. [PMID: 27050944] [DOI: 10.1016/j.heares.2016.03.007]
Abstract
Auditory categorization involves grouping acoustic events along one or more shared perceptual dimensions, which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) to compare the categorization strategies of CI users and normal-hearing listeners (NHL), and (II) to investigate whether any characteristics of the raw acoustic signal could explain the results. Sixteen experienced CI users and 20 NHL were tested using a free-sorting task of 16 common sounds divided into three predefined categories of environmental, musical, and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) showed that CI users followed a categorization strategy similar to that of NHL and were able to discriminate between the three types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted average pitch salience and the average autocorrelation peak as important for the perception and categorization of the sounds. The results therefore show that, at a broad level of categorization, CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging.
Affiliation(s)
- E Collett
- Université de Toulouse, CerCo UMR 5549 CNRS, Université Paul Sabatier, Toulouse, France; Université de Toulouse, CerCo UMR 5549 CNRS, Faculté de Médecine de Purpan, Toulouse, France; Advanced Bionics SARL, France
- M Marx
- Université de Toulouse, CerCo UMR 5549 CNRS, Université Paul Sabatier, Toulouse, France; Université de Toulouse, CerCo UMR 5549 CNRS, Faculté de Médecine de Purpan, Toulouse, France; Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, France
- P Gaillard
- Université de Toulouse, CLLE UMR 5263, CNRS, UT2J, Université de Toulouse Jean-Jaurès, Toulouse, France
- B Roby
- Université de Toulouse, CerCo UMR 5549 CNRS, Université Paul Sabatier, Toulouse, France; Université de Toulouse, CerCo UMR 5549 CNRS, Faculté de Médecine de Purpan, Toulouse, France; Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, France
- B Fraysse
- Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, France
- O Deguine
- Université de Toulouse, CerCo UMR 5549 CNRS, Université Paul Sabatier, Toulouse, France; Université de Toulouse, CerCo UMR 5549 CNRS, Faculté de Médecine de Purpan, Toulouse, France; Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, France
- P Barone
- Université de Toulouse, CerCo UMR 5549 CNRS, Université Paul Sabatier, Toulouse, France; Université de Toulouse, CerCo UMR 5549 CNRS, Faculté de Médecine de Purpan, Toulouse, France
12
Cousineau M, Carcagno S, Demany L, Pressnitzer D. What is a melody? On the relationship between pitch and brightness of timbre. Front Syst Neurosci 2014; 7:127. [PMID: 24478638] [PMCID: PMC3894522] [DOI: 10.3389/fnsys.2013.00127]
Abstract
Previous studies showed that the perceptual processing of sound sequences is more efficient when the sounds vary in pitch than when they vary in loudness. We show here that sequences of sounds varying in brightness of timbre are processed with the same efficiency as pitch sequences. The sounds used consisted of two simultaneous pure tones one octave apart, and the listeners’ task was to make same/different judgments on pairs of sequences varying in length (one, two, or four sounds). In one condition, brightness of timbre was varied within the sequences by changing the relative level of the two pure tones. In other conditions, pitch was varied by changing fundamental frequency, or loudness was varied by changing the overall level. In all conditions, only two possible sounds could be used in a given sequence, and these two sounds were equally discriminable. When sequence length increased from one to four, discrimination performance decreased substantially for loudness sequences, but to a smaller extent for brightness sequences and pitch sequences. In the latter two conditions, sequence length had a similar effect on performance. These results suggest that the processes dedicated to pitch and brightness analysis, when probed with a sequence-discrimination task, share unexpected similarities.
Affiliation(s)
- Marion Cousineau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Montreal, QC, Canada
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, Paris, France
13
Luo X, Masterson ME, Wu CC. Contour identification with pitch and loudness cues using cochlear implants. J Acoust Soc Am 2014; 135:EL8-EL14. [PMID: 24437857] [PMCID: PMC3874060] [DOI: 10.1121/1.4832915]
Abstract
Unlike in speech, pitch and loudness cues may or may not co-vary in music. Cochlear implant (CI) users with poor pitch perception may rely on loudness contour cues more than normal-hearing (NH) listeners do. Contour identification was tested in CI users and NH listeners; the five-note contours contained pitch cues alone, loudness cues alone, or both. Results showed that NH listeners' contour identification was better with pitch cues than with loudness cues, whereas CI users performed similarly with either cue. When pitch and loudness cues were co-varied, CI performance improved significantly, suggesting that CI users were able to integrate the two cues.
Affiliation(s)
- Xin Luo
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
- Megan E Masterson
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
- Ching-Chih Wu
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
14
Massida Z, Marx M, Belin P, James C, Fraysse B, Barone P, Deguine O. Gender categorization in cochlear implant users. J Speech Lang Hear Res 2013; 56:1389-1401. [PMID: 24023381 DOI: 10.1044/1092-4388(2013/12-0132)] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE: In this study, the authors examined the ability of subjects with cochlear implants (CIs) to discriminate voice gender and how this ability evolved as a function of CI experience. METHOD: The authors presented a continuum of voice samples created by voice morphing, with 9 intermediate acoustic parameter steps between a typical male and a typical female voice. This method allowed for the evaluation of gender categorization not only when acoustical features were specific to gender but also for more ambiguous cases, when fundamental frequency or formant distribution were located between typical values. RESULTS: Results showed a global, though variable, deficit for voice gender categorization in CI recipients compared with subjects with normal hearing. This deficit was stronger for ambiguous stimuli in the voice continuum: Average performance scores for CI users were 58% lower than average scores for subjects with normal hearing in cases of ambiguous stimuli and 19% lower for typical male and female voices. The authors found no significant improvement in voice gender categorization with CI experience. CONCLUSIONS: These results emphasize the dissociation between recovery of speech recognition and voice feature perception after cochlear implantation. This large and durable deficit may be related to spectral and temporal degradation induced by CI sound coding, or it may be related to central voice processing deficits.
15
Cousineau M, Demany L, Pressnitzer D. The role of peripheral resolvability in pitch-sequence processing. J Acoust Soc Am 2010; 128:EL236-EL241. [PMID: 21110532 DOI: 10.1121/1.3499701] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
The authors previously reported that same/different judgments on pitch sequences were more accurate for tones with resolved (low-rank) harmonics compared to unresolved (high-rank) harmonics, even when discriminability between tones was equated [Cousineau et al. (2009). J. Acoust. Soc. Am. 126, 3179-3187]. Here, peripheral resolvability, defined by the number of harmonics per cochlear filter, was contrasted with harmonic number. Tones were presented either diotically or dichotically. In the latter case, even and odd harmonics were presented to different ears, thus halving the number of harmonics per cochlear filter. Performance was better for dichotic than for diotic presentations. This indicates that peripheral resolvability is necessary and sufficient for efficient pitch-sequence processing.
Affiliation(s)
- Marion Cousineau
- Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, 45 rue des Saints-Pères, F-75006 Paris, France.