1.
Kondo HM, Hasegawa R, Ezaki T, Sakata H, Ho HT. Functional coupling between auditory memory and verbal transformations. Sci Rep 2024; 14:3480. PMID: 38347058; PMCID: PMC10861569; DOI: 10.1038/s41598-024-54013-z.
Abstract
The ability to parse sound mixtures into coherent auditory objects is fundamental to cognitive functions such as speech comprehension and language acquisition. Yet we still lack a clear understanding of how auditory objects are formed. To address this question, we studied a speech-specific case of perceptual multistability, called verbal transformations (VTs), in which a variety of verbal forms is induced by continuous repetition of a physically unchanging word. Here, we investigated the degree to which auditory memory, through sensory adaptation, influences VTs. Specifically, we hypothesized that when memory persistence is longer, participants can retain the current verbal form longer, resulting in sensory adaptation, which in turn affects auditory perception. Participants performed VT and auditory memory tasks on different days. In the VT task, Japanese participants continuously reported their perception while listening to a Japanese word (2 or 3 morae in length) played repeatedly for 5 min. In the auditory memory task, a different sequence of three morae, e.g., /ka/, /hi/, and /su/, was presented to each ear simultaneously. After a delay (0-4 s), participants were visually cued to recall the sequence presented to one ear, i.e., the left or the right. We found that delayed recall accuracy was negatively correlated with the number of VTs, particularly under 2-mora conditions. This suggests that memory persistence is important for the formation and selection of perceptual objects.
Affiliation(s)
- Hirohito M Kondo
- School of Psychology, Chukyo University, 101-2 Yagoto Honmachi, Showa, Nagoya, Aichi, 466-8666, Japan
- Ryuju Hasegawa
- School of Psychology, Chukyo University, 101-2 Yagoto Honmachi, Showa, Nagoya, Aichi, 466-8666, Japan
- Takahiro Ezaki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Honami Sakata
- School of Psychology, Chukyo University, 101-2 Yagoto Honmachi, Showa, Nagoya, Aichi, 466-8666, Japan
- Hao Tam Ho
- School of Psychology, Chukyo University, 101-2 Yagoto Honmachi, Showa, Nagoya, Aichi, 466-8666, Japan
- Département d'études Cognitives, École Normale Supérieure, Paris, France
2.
Körner A, Strack F. Articulation posture influences pitch during singing imagery. Psychon Bull Rev 2023; 30:2187-2195. PMID: 37221280; PMCID: PMC10728233; DOI: 10.3758/s13423-023-02306-1.
Abstract
Facial muscle activity contributes to singing and to articulation: in articulation, mouth shape can alter vowel identity; and in singing, facial movement correlates with pitch changes. Here, we examine whether mouth posture causally influences pitch during singing imagery. Based on perception-action theories and embodied cognition theories, we predict that mouth posture influences pitch judgments even when no overt utterances are produced. In two experiments (total N = 160), mouth posture was manipulated to resemble the articulation of either /i/ (as in English meet; retracted lips) or /o/ (as in French rose; protruded lips). Holding this mouth posture, participants were instructed to mentally "sing" given songs (which were all positive in valence) while listening with their inner ear and, afterwards, to assess the pitch of their mental chant. As predicted, compared to the o-posture, the i-posture led to higher pitch in mental singing. Thus, bodily states can shape experiential qualities, such as pitch, during imagery. This extends embodied music cognition and demonstrates a new link between language and music.
Affiliation(s)
- Anita Körner
- Department of Psychology, University of Kassel, Holländische Straße 36-38, 34127, Kassel, Germany
- Fritz Strack
- Department of Psychology, University of Würzburg, Würzburg, Germany
3.
Irrelevant speech impairs serial recall of verbal but not spatial items in children and adults. Mem Cognit 2023; 51:307-320. PMID: 36190658; PMCID: PMC9950248; DOI: 10.3758/s13421-022-01359-2.
Abstract
Immediate serial recall of visually presented items is reliably impaired by task-irrelevant speech that the participants are instructed to ignore ("irrelevant speech effect," ISE). The ISE is stronger with changing speech tokens (words or syllables) when compared to repetitions of single tokens ("changing-state effect," CSE). These phenomena have been attributed to sound-induced diversions of attention away from the focal task (attention capture account), or to specific interference of obligatory, involuntary sound processing with either the integrity of phonological traces in a phonological short-term store (phonological loop account), or the efficiency of a domain-general rehearsal process employed for serial order retention (changing-state account). Aiming to further explore the role of attention, phonological coding, and serial order retention in the ISE, we analyzed the effects of steady-state and changing-state speech on serial order reconstruction of visually presented verbal and spatial items in children (n = 81) and adults (n = 80). In the verbal task, both age groups performed worse with changing-state speech (sequences of different syllables) when compared with steady-state speech (one syllable repeated) and silence. Children were more impaired than adults by both speech sounds. In the spatial task, no disruptive effect of irrelevant speech was found in either group. These results indicate that irrelevant speech evokes similarity-based interference, and thus pose difficulties for the attention-capture and the changing-state account of the ISE.
4.
Marsh JE, Threadgold E, Barker ME, Litchfield D, Degno F, Ball LJ. The susceptibility of compound remote associate problems to disruption by irrelevant sound: a window onto the component processes underpinning creative cognition? J Cogn Psychol 2021. DOI: 10.1080/20445911.2021.1900201.
Affiliation(s)
- John E. Marsh
- School of Psychology, University of Central Lancashire, Preston, UK
- Engineering Psychology, Humans and Technology, Department of Business Administration, Technology and Social Sciences, Luleå University of Technology, Luleå, Sweden
- Emma Threadgold
- School of Psychology, University of Central Lancashire, Preston, UK
- Melissa E. Barker
- School of Psychology, University of Central Lancashire, Preston, UK
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Federica Degno
- School of Psychology, University of Central Lancashire, Preston, UK
- Linden J. Ball
- School of Psychology, University of Central Lancashire, Preston, UK
5.
Helou LB, Welch B, Wang W, Rosen CA, Verdolini Abbott K. Intrinsic Laryngeal Muscle Activity During Subvocalization. J Voice 2021; 37:426-432. PMID: 33612369; DOI: 10.1016/j.jvoice.2021.01.021.
Abstract
PURPOSE: Subvocalization, the low-grade activity of speech articulator muscles while thinking or reading, may mediate phonological representations of verbal material. However, no literature exists that directly measures whether intrinsic laryngeal muscles (ILMs) are active during subvocalization. The possibility of ILM activation during subvocalization has implications for establishing appropriate baselines when experimental conditions involve linguistic features. METHOD: In two separate studies, forty-five cisgender women completed one or two silent tasks (two in the first study, Experiments 1a and 1b, and one in the second, Experiment 2). Fine-wire electromyography was used to directly measure ILM activity during an at-rest baseline and during silent tasks used to determine whether subvocalization occurred (referred to hereafter as "subvocalization tasks"). Other muscles were measured via surface electromyography: the submental muscle in Experiments 1a and 1b, the anterior tibialis in Experiment 2, and the upper trapezius in all experiments. RESULTS: Interrupted time-series analysis was used to directly measure changes in ILM activity from baseline to the subvocalization tasks. A paired two-tailed t-test was used to measure mean differences in ILM activity across conditions for each participant. Some individuals displayed statistically significant increases from baseline during subvocalization tasks, whereas others displayed decreases. Cohen's d was used to calculate the effect size for each muscle across the three subvocalization conditions. Of the 21 muscles measured across the three experiments, five yielded a small mean effect size, and the effect sizes for the remaining 16 muscles were negligible. At the group level, only the right cricothyroid showed statistically significant changes (Experiment 1b). CONCLUSION: The ILM responses during subvocalization vary in both magnitude and direction. Most, but not all, changes can be described as negligible. For future studies of ILM activity during conditions that involve linguistic processing, investigators should consider this idiosyncratic variation during subvocalization when determining the most appropriate baseline task.
Affiliation(s)
- Leah B Helou
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Brett Welch
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Wei Wang
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri; Barnes-Jewish Hospital, St. Louis, Missouri
- Clark A Rosen
- Voice and Swallowing Center, Division of Laryngology, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, California
6.
Van Dyck E, Buhmann J, Lorenzoni V. Instructed versus spontaneous entrainment of running cadence to music tempo. Ann N Y Acad Sci 2020; 1489:91-102. PMID: 33210323; PMCID: PMC8048782; DOI: 10.1111/nyas.14528.
Abstract
Matching exercise behavior to musical beats has been shown to favorably affect repetitive endurance tasks. In this study, our aim was to explore the role of spontaneous versus instructed entrainment, focusing on self‐paced exercise of healthy, recreational runners. For three 4‐min running tasks, 33 recreational participants were either running in silence or with music; when running with music, either no instructions were given to entrain to the music, or participants were instructed to match their running cadence with the tempo of the music. The results indicated that less entrainment occurred when no instruction to match the exercise with the musical tempo was provided. In addition, similar to the condition without music, lower speeds and shorter step lengths were observed when runners were instructed to match their running behavior to the musical tempo when compared with the condition without such instruction. Our findings demonstrate the impact of instruction on running performance and stress the importance of intention to entrain running behavior to musical beats.
7.
Endestad T, Godøy RI, Sneve MH, Hagen T, Bochynska A, Laeng B. Mental Effort When Playing, Listening, and Imagining Music in One Pianist's Eyes and Brain. Front Hum Neurosci 2020; 14:576888. PMID: 33192407; PMCID: PMC7593683; DOI: 10.3389/fnhum.2020.576888.
Abstract
We investigated "musical effort" with an internationally renowned classical pianist while playing, listening to, and imagining music. We used pupillometry as an objective measure of mental effort and fMRI as an exploratory measure of effort with the same musical pieces. We also compared a group of non-professional pianists and non-musicians using pupillometry, and a small group of non-musicians using fMRI. This combined approach of psychophysiology and neuroimaging revealed the cognitive work involved in different musical activities. We found that pupil diameters were largest when "playing" (regardless of whether sound was produced) compared with conditions involving no movement (i.e., "listening" and "imagery"). We found positive correlations between the professional pianist's pupil diameters across different conditions with the same piano piece (i.e., normal playing, silenced playing, listening, imagining), which might indicate similar degrees of load on cognitive resources, as well as an intimate link between the motor imagery of sound-producing body motions and gestures. We also confirmed that musical imagery had a strong commonality with music listening in both pianists and musically naïve individuals. Neuroimaging provided evidence for a relationship between noradrenergic (NE) activity and mental workload or attentional intensity within the domain of music cognition. We found effort-related activity in the superior part of the locus coeruleus (LC) and, as with the pupil, listening and imagery engaged the LC-NE network less than the motor condition did. The pianists attended more intensively to the most difficult piece than the non-musicians did, as shown by their larger pupils for that piece. Non-musicians were the most engaged by the music listening task, suggesting that the amount of attention allocated to the same task may follow a hierarchy of expertise, demanding less attentional effort in experts or performers than in novices. In the professional pianist, we found only weak evidence for a commonality between subjective effort (as rated measure by measure) and the objective effort gauged with pupil diameter during listening. We suggest that psychophysiological methods like pupillometry can index mental effort in a manner that is not available to subjective awareness or introspection.
Affiliation(s)
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Helgelandssykehuset, Mosjøen, Norway
- Rolf Inge Godøy
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Thomas Hagen
- Department of Psychology, University of Oslo, Oslo, Norway
- Agata Bochynska
- Department of Psychology, University of Oslo, Oslo, Norway
- Department of Psychology, New York University, New York, NY, United States
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
8.
Grandchamp R, Rapin L, Perrone-Bertolotti M, Pichat C, Haldin C, Cousin E, Lachaux JP, Dohen M, Perrier P, Garnier M, Baciu M, Lœvenbruck H. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Front Psychol 2019; 10:2019. PMID: 31620039; PMCID: PMC6759632; DOI: 10.3389/fpsyg.2019.02019.
Abstract
Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed forms of inner speech have been described that are supposedly deprived of acoustic, phonological, and even syntactic qualities; expanded forms, at the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as the voices of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation, or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality, to examine the validity of the neuroanatomical correlates posited in ConDialInt; condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation. Switching from a first-person to a third-person perspective resulted in activations in the precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.
Affiliation(s)
- Romain Grandchamp
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Lucile Rapin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Cédric Pichat
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Célise Haldin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Emilie Cousin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Jean-Philippe Lachaux
- INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neurosciences Research Center, Bron, France
- Marion Dohen
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Maëva Garnier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Monica Baciu
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Hélène Lœvenbruck
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
9.
Pitch-specific contributions of auditory imagery and auditory memory in vocal pitch imitation. Atten Percept Psychophys 2019; 81:2473-2481. PMID: 31286436; DOI: 10.3758/s13414-019-01799-0.
Abstract
Vocal imitation guides both music and language development. Despite the developmental significance of this behavior, a sizable minority of individuals are inaccurate at vocal pitch imitation. Although previous research suggested that inaccurate pitch imitation results from deficient sensorimotor associations between pitch perception and vocal motor planning, the cognitive processes involved in sensorimotor translation are not clearly defined. In the present research, we investigated the roles of basic cognitive processes in the vocal imitation of pitch, as well as the degree to which these processes rely on pitch-specific resources. Participants completed a battery of pitch and verbal tasks measuring pitch perception, pitch and verbal auditory imagery, pitch and verbal auditory short-term memory, and pitch imitation ability; information on participants' music backgrounds was also collected. Pitch imagery, pitch short-term memory, pitch discrimination ability, and musical experience were unique predictors of pitch imitation ability. Furthermore, pitch imagery was a partial mediator of the relationship between pitch short-term memory and pitch imitation ability. These results indicate that vocal imitation recruits cognitive processes that rely on at least partially separate neural resources for pitch and verbal representations.
10.
Beaman CP. The Literary and Recent Scientific History of the Earworm: A Review and Theoretical Framework. Auditory Perception & Cognition 2018. DOI: 10.1080/25742442.2018.1533735.
Affiliation(s)
- C. Philip Beaman
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
11.
Pruitt TA, Halpern AR, Pfordresher PQ. Covert singing in anticipatory auditory imagery. Psychophysiology 2018; 56:e13297. PMID: 30368823; DOI: 10.1111/psyp.13297.
Abstract
To date, several fMRI studies have revealed activation in motor planning areas during musical auditory imagery. We addressed whether such activations may give rise to peripheral motor activity, termed subvocalization or covert singing, using surface electromyography. Sensors placed on extrinsic laryngeal muscles, facial muscles, and a control site on the bicep measured muscle activity during auditory imagery that preceded singing, as well as during a visual imagery task. Greater activation was found in laryngeal and lip muscles for auditory than for visual imagery tasks, whereas no differences across tasks were found for the other sensors. Furthermore, less accurate singers exhibited greater laryngeal activity during auditory imagery than did more accurate singers. This suggests that subvocalization may be used as a strategy to facilitate auditory imagery, which appears to be degraded in inaccurate singers. Taken together, these results suggest that subvocalization may play a role in anticipatory auditory imagery, possibly as a way of supplementing motor associations with auditory imagery.
Affiliation(s)
- Tim A Pruitt
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, Pennsylvania
- Peter Q Pfordresher
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York
12.
Heaton P, Tsang WF, Jakubowski K, Mullensiefen D, Allen R. Discriminating autism and language impairment and specific language impairment through acuity of musical imagery. Res Dev Disabil 2018; 80:52-63. PMID: 29913330; DOI: 10.1016/j.ridd.2018.06.001.
Abstract
Deficits in auditory short-term memory have been widely reported in children with Specific Language Impairment (SLI), and recent evidence suggests that children with Autism Spectrum Disorder and co-morbid language impairment (ALI) experience similar difficulties. Music, like language, relies on auditory memory, and the aim of the study was to extend work investigating the impact of auditory short-term memory impairments to musical perception in children with neurodevelopmental disorders. Groups of children with SLI and ALI were matched on chronological age (CA), receptive vocabulary, non-verbal intelligence, and digit span, and compared with CA-matched typically developing (TD) controls on tests of pitch and temporal acuity within a voluntary musical imagery paradigm. The SLI participants performed at significantly lower levels than the ALI and TD groups on both conditions of the task, and their musical imagery and digit span scores were positively correlated. In contrast, ALI participants performed as well as TD controls on the tempo condition and better than TD controls on the pitch condition of the task. Whilst auditory short-term memory and receptive vocabulary impairments were similar across the ALI and SLI groups, these were not associated with a deficit in voluntary musical imagery performance in the ALI group.
Affiliation(s)
- Pamela Heaton
- Psychology, Goldsmiths University of London, New Cross, London, SE14 6NW, United Kingdom
- Wai Fung Tsang
- Psychology, Goldsmiths University of London, New Cross, London, SE14 6NW, United Kingdom
- Kelly Jakubowski
- Music, University of Durham, Palace Green, Durham, DH1 3RL, United Kingdom
- Daniel Mullensiefen
- Psychology, Goldsmiths University of London, New Cross, London, SE14 6NW, United Kingdom
- Rory Allen
- Psychology, Goldsmiths University of London, New Cross, London, SE14 6NW, United Kingdom
13.
Maintenance of memory for melodies: Articulation or attentional refreshing? Psychon Bull Rev 2017; 24:1964-1970. PMID: 28337645; DOI: 10.3758/s13423-017-1269-9.
14.
Prete G, Marzoli D, Brancucci A, Tommasi L. Hearing it right: Evidence of hemispheric lateralization in auditory imagery. Hear Res 2015; 332:80-86. PMID: 26706706; DOI: 10.1016/j.heares.2015.12.011.
Abstract
An advantage of the right ear (REA) in auditory processing (especially for verbal content) has been firmly established by decades of behavioral, electrophysiological, and neuroimaging research. The laterality of auditory imagery, however, has received little attention, despite its potential relevance for understanding auditory hallucinations and related phenomena. In Experiments 1-4, we found that right-handed participants required to imagine hearing a voice or a sound unilaterally showed a strong population bias to localize the self-generated auditory image at their right ear, likely the result of left-hemispheric dominance in auditory processing. In Experiments 5-8, using the same paradigm, it was also ascertained that the right-ear bias for hearing imagined voices depends just on auditory attention mechanisms, as biases due to other factors (i.e., lateralized movements) were controlled. These results, suggesting a central role of the left hemisphere in auditory imagery, demonstrate that brain asymmetries can drive strong lateral biases in mental imagery.
Affiliation(s)
- Giulia Prete
- Department of Psychological Science, Health and Territory, 'G. d'Annunzio' University of Chieti-Pescara, Italy
- Daniele Marzoli
- Department of Psychological Science, Health and Territory, 'G. d'Annunzio' University of Chieti-Pescara, Italy
- Alfredo Brancucci
- Department of Psychological Science, Health and Territory, 'G. d'Annunzio' University of Chieti-Pescara, Italy
- Luca Tommasi
- Department of Psychological Science, Health and Territory, 'G. d'Annunzio' University of Chieti-Pescara, Italy
15.
Neumann N, Lotze M, Eickhoff SB. Cognitive Expertise: An ALE Meta-Analysis. Hum Brain Mapp 2015; 37:262-272. PMID: 26467981; DOI: 10.1002/hbm.23028.
Abstract
Expert performance constitutes the endpoint of skill acquisition and is accompanied by widespread neuroplastic changes. To reveal common mechanisms of reorganization associated with long-term expertise in a cognitive domain (mental calculation, chess, language, memory, music without motor involvement), we used activation likelihood estimation meta-analysis and compared brain activation of experts to nonexperts. Twenty-six studies matched inclusion criteria, most of which reported an increase and not a decrease of activation foci in experts. Increased activation occurred in the left rolandic operculum (OP 4) and left primary auditory cortex and in bilateral premotor cortex in studies that used auditory stimulation. In studies with visual stimulation, experts showed enhanced activation in the right inferior parietal cortex (area PGp) and the right lingual gyrus. Experts' brain activation patterns seem to be characterized by enhanced or additional activity in domain-specific primary, association, and motor structures, confirming that learning is localized and very specialized.
Affiliation(s)
- Nicola Neumann
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
- Martin Lotze
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
- Simon B Eickhoff
- Cognitive Neuroscience Group, Institute of Clinical Neuroscience and Medical Psychology, Heinrich-Heine University, Düsseldorf, Germany
- Brain Network Modeling Group, Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, Jülich, Germany
Collapse
|
16
|
Beaman CP, Powell K, Rapley E. Rapid Communication: Want to block earworms from conscious awareness? B(u)y gum! Q J Exp Psychol (Hove) 2015; 68:1049-57. [DOI: 10.1080/17470218.2015.1034142]
Abstract
Three experiments examine the role of articulatory motor planning in experiencing an involuntary musical recollection (an “earworm”). Experiment 1 shows that interfering with articulatory motor programming by chewing gum reduces both the number of voluntary and the number of involuntary—unwanted—musical thoughts. This is consistent with other findings that chewing gum interferes with voluntary processes such as recollections from verbal memory, the interpretation of ambiguous auditory images, and the scanning of familiar melodies, but is not predicted by theories of thought suppression, which assume that suppression is made more difficult by concurrent tasks or cognitive loads. Experiment 2 shows that chewing the gum affects the experience of “hearing” the music and cannot be ascribed to a general effect on thinking about a tune only in abstract terms. Experiment 3 confirms that the reduction of musical recollections by chewing gum is not the consequence of a general attentional or dual-task demand. The data support a link between articulatory motor programming and the appearance in consciousness of both voluntary and unwanted musical recollections.
Affiliation(s)
- C. Philip Beaman
- Centre for Cognition Research, University of Reading, Reading, UK
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
- Kitty Powell
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
- Ellie Rapley
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
|
17
|
Bernal B, Ardila A, Rosselli M. Broca's area network in language function: a pooling-data connectivity study. Front Psychol 2015; 6:687. [PMID: 26074842] [PMCID: PMC4440904] [DOI: 10.3389/fpsyg.2015.00687]
Abstract
Background and Objective: Modern neuroimaging developments have demonstrated that cognitive functions correlate with brain networks rather than specific areas. The purpose of this paper was to analyze the connectivity of Broca’s area based on language tasks. Methods: A connectivity modeling study was performed by pooling data of Broca’s activation in language tasks. Fifty-seven papers that included 883 subjects in 84 experiments were analyzed. Analysis of Likelihood Estimates of pooled data was utilized to generate the map; thresholds at p < 0.01 were corrected for multiple comparisons and false discovery rate. Resulting images were co-registered into MNI standard space. Results: A network consisting of 16 clusters of activation was obtained. Main clusters were located in the frontal operculum, left posterior temporal region, supplementary motor area, and the parietal lobe. Less common clusters were seen in the sub-cortical structures including the left thalamus, left putamen, secondary visual areas, and the right cerebellum. Conclusion: Broca’s area-44-related networks involved in language processing were demonstrated utilizing a pooling-data connectivity study. Significance, interpretation, and limitations of the results are discussed.
Affiliation(s)
- Byron Bernal
- Brain Institute-Department of Radiology, fMRI and Neuroconnectivity, Miami Children's Hospital, Miami, FL, USA
- Alfredo Ardila
- Department of Communication Sciences and Disorders, Florida International University, Miami, FL, USA
|
18
|
Sound representation in higher language areas during language generation. Proc Natl Acad Sci U S A 2015; 112:1868-73. [PMID: 25624479] [DOI: 10.1073/pnas.1418162112]
Abstract
How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca's area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca's area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca's area when the participants were executing a repetitive motor task, which did not include any linguistic content, with their dominant hand. The distribution of suprathreshold correlations across frequencies of cortical activities varied depending on whether the sound envelope derived from words or sentences. Our results suggest that the activity of language areas is organized by sound when language is generated, before any utterance is produced or heard.
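The envelope-correlation measure described above can be sketched in a few lines: rectify the audio, smooth it into an amplitude envelope, and correlate that envelope with the neural trace. This toy version uses synthetic signals and a rectify-and-average envelope rather than the Hilbert transform typically used in ECoG work, so it is illustrative only:

```python
import math

def envelope(signal, win=50):
    """Amplitude envelope: full-wave rectify, then moving-average smooth."""
    rect = [abs(s) for s in signal]
    half = win // 2
    out = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half]
        out.append(sum(seg) / len(seg))
    return out

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Toy trial: a 10 Hz carrier amplitude-modulated at 2 Hz stands in for speech;
# the simulated "neural" trace ideally tracks the same 2 Hz modulation.
fs = 1000
t = [i / fs for i in range(2 * fs)]
mod = [0.5 * (1 + math.sin(2 * math.pi * 2 * ti)) for ti in t]
audio = [m * math.sin(2 * math.pi * 10 * ti) for m, ti in zip(mod, t)]
neural = mod

r = pearson_r(envelope(audio), neural)
```

With the idealized envelope-tracking response above, `r` comes out near 1; the study's point is that such correlations appear in Broca's area even before, or without, any overt sound.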
|
19
|
Perham N, Lewis A, Turner J, Hodgetts HM. The Sound of Silence: Can Imagining Music Improve Spatial Rotation Performance? Curr Psychol 2014. [DOI: 10.1007/s12144-014-9232-7]
|
20
|
Rychlowska M, Cañadas E, Wood A, Krumhuber EG, Fischer A, Niedenthal PM. Blocking mimicry makes true and false smiles look the same. PLoS One 2014; 9:e90876. [PMID: 24670316] [PMCID: PMC3966726] [DOI: 10.1371/journal.pone.0090876]
Abstract
Recent research suggests that facial mimicry underlies accurate interpretation of subtle facial expressions. In three experiments, we manipulated mimicry and tested its role in judgments of the genuineness of true and false smiles. Experiment 1 used facial EMG to show that a new mouthguard technique for blocking mimicry modifies both the amount and the time course of facial reactions. In Experiments 2 and 3, participants rated true and false smiles either while wearing mouthguards or when allowed to freely mimic the smiles with or without additional distraction, namely holding a squeeze ball or wearing a finger-cuff heart rate monitor. Results showed that blocking mimicry compromised the decoding of true and false smiles such that they were judged as equally genuine. Together the experiments highlight the role of facial mimicry in judging subtle meanings of facial expressions.
Affiliation(s)
- Magdalena Rychlowska
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Université Blaise Pascal, Clermont-Ferrand, France
- Elena Cañadas
- Institut de Psychologie du Travail et des Organisations, Université de Neuchâtel, Neuchâtel, Switzerland
- Adrienne Wood
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Eva G. Krumhuber
- Division of Psychology and Language Sciences, University College London, London, United Kingdom
- Agneta Fischer
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Paula M. Niedenthal
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
|
21
|
Pecenka N, Engel A, Keller PE. Neural correlates of auditory temporal predictions during sensorimotor synchronization. Front Hum Neurosci 2013; 7:380. [PMID: 23970857] [PMCID: PMC3748321] [DOI: 10.3389/fnhum.2013.00380]
Abstract
Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
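The prediction measure mentioned above compares cross-correlations between inter-tap intervals (ITIs) and pacing inter-onset intervals (IOIs) at lag 0 (anticipating the current interval) versus lag 1 (tracking the previous one). Below is a toy simulation under stated assumptions: invented noise levels, a sinusoidally modulated tempo rather than the study's actual sequences, and a simple lag-0 minus lag-1 difference standing in for the published index:

```python
import math
import random

def xcorr_at_lag(x, y, lag):
    """Pearson correlation of x[i] with y[i - lag] over the overlapping range."""
    pairs = [(x[i], y[i - lag]) for i in range(len(x)) if 0 <= i - lag < len(y)]
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))
    return num / den

random.seed(1)
# Pacing sequence whose tempo waxes and wanes (inter-onset intervals in ms).
ioi = [600 + 50 * math.sin(2 * math.pi * i / 20) for i in range(60)]
# A "predictor" anticipates the current interval; a "tracker" copies the previous one.
predictor_iti = [d + random.gauss(0, 3) for d in ioi]
tracker_iti = [ioi[0]] + [ioi[i - 1] + random.gauss(0, 3) for i in range(1, len(ioi))]

# Positive index: lag-0 correlation dominates (prediction); negative: lag-1 (tracking).
predictor_index = xcorr_at_lag(predictor_iti, ioi, 0) - xcorr_at_lag(predictor_iti, ioi, 1)
tracker_index = xcorr_at_lag(tracker_iti, ioi, 0) - xcorr_at_lag(tracker_iti, ioi, 1)
```

The sign of the index separates the two strategies: the simulated predictor scores positive, the tracker negative, which is the behavioral quantity the fMRI analysis parametrized.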
Affiliation(s)
- Nadine Pecenka
- Music Cognition and Action Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
22
|
How silent is silent reading? Intracerebral evidence for top-down activation of temporal voice areas during reading. J Neurosci 2012; 32:17554-62. [PMID: 23223279] [DOI: 10.1523/jneurosci.2982-12.2012]
Abstract
As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.
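High-frequency "gamma" activity of the kind measured above is typically obtained by band-limiting the signal and taking its instantaneous amplitude. The sketch below is a minimal stand-in on synthetic data, using windowed quadrature demodulation at a single assumed frequency (80 Hz); real intracranial analyses use filter banks or wavelets across the whole 50-150 Hz band:

```python
import math

def band_amplitude(x, fs, freq, win):
    """Instantaneous amplitude at one frequency: correlate a sliding window
    with sine and cosine references and take the magnitude (quadrature demod)."""
    half = win // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half)
        c = sum(x[j] * math.cos(2 * math.pi * freq * j / fs) for j in range(lo, hi))
        s = sum(x[j] * math.sin(2 * math.pi * freq * j / fs) for j in range(lo, hi))
        out.append(2.0 * math.hypot(c, s) / (hi - lo))
    return out

# Toy trial: one second at 500 Hz with an 80 Hz "gamma" burst from 0.4-0.6 s
# riding on a 10 Hz background oscillation.
fs = 500
x = []
for i in range(fs):
    t = i / fs
    burst = math.sin(2 * math.pi * 80 * t) if 0.4 <= t < 0.6 else 0.0
    x.append(0.5 * math.sin(2 * math.pi * 10 * t) + burst)

amp = band_amplitude(x, fs, 80, win=50)
burst_amp = sum(amp[225:275]) / 50      # centered inside the burst
baseline_amp = sum(amp[:150]) / 150     # well before the burst
```

The demodulated amplitude rises sharply during the burst and stays near zero over the low-frequency background, which is why such band-limited envelopes can serve as a proxy for population spiking.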
|
23
|
Hyman IE, Burland NK, Duskin HM, Cook MC, Roy CM, McGrath JC, Roundhill RF. Going Gaga: Investigating, Creating, and Manipulating the Song Stuck in My Head. Appl Cogn Psychol 2012. [DOI: 10.1002/acp.2897]
Affiliation(s)
- Ira E. Hyman
- Psychology Department, Western Washington University, Bellingham, USA
- Naomi K. Burland
- Psychology Department, Western Washington University, Bellingham, USA
- Megan C. Cook
- Psychology Department, Western Washington University, Bellingham, USA
- Christina M. Roy
- Psychology Department, Western Washington University, Bellingham, USA
- Jessie C. McGrath
- Psychology Department, Western Washington University, Bellingham, USA
|
24
|
Bailes F, Bishop L, Stevens CJ, Dean RT. Mental imagery for musical changes in loudness. Front Psychol 2012; 3:525. [PMID: 23227014] [PMCID: PMC3512351] [DOI: 10.3389/fpsyg.2012.00525]
Abstract
Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e., pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perceptions of loudness change relate to our perceptions of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task, comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. There followed an empty interval of 8 s (nil distractor control), or the presentation of a series of four sine tones, or four visual letters or three conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task of the tone, letter, or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair loudness change scale recall, while observing and remembering tone or letter string stimuli would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise.
Affiliation(s)
- Freya Bailes
- MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
|
25
|
Navarro Cebrian A, Janata P. Influences of multiple memory systems on auditory mental image acuity. J Acoust Soc Am 2010; 127:3189-202. [PMID: 21117767] [DOI: 10.1121/1.3372729]
Abstract
The influence of different memory systems and associated attentional processes on the acuity of auditory images, formed for the purpose of making intonation judgments, was examined across three experiments using three different task types (cued-attention, imagery, and two-tone discrimination). In experiment 1 the influence of implicit long-term memory for musical scale structure was manipulated by varying the scale degree (leading tone versus tonic) of the probe note about which a judgment had to be made. In experiments 2 and 3 the ability of short-term absolute pitch knowledge to develop was manipulated by presenting blocks of trials in the same key or in seven different keys. The acuity of auditory images depended on all of these manipulations. Within individual listeners, thresholds in the two-tone discrimination and cued-attention conditions were closely related. In many listeners, cued-attention thresholds were similar to thresholds in the imagery condition, and depended on the amount of training individual listeners had in playing a musical instrument. The results indicate that mental images formed at a sensory/cognitive interface for the purpose of making perceptual decisions are highly malleable.
Affiliation(s)
- Ana Navarro Cebrian
- Department of Psychology, Center for Mind and Brain, University of California, Davis, 267 Cousteau Place, Davis, California 95618, USA
|
26
|
Wu J, Mai X, Yu Z, Qin S, Luo YJ. Effects of discrepancy between imagined and perceived sounds on the N2 component of the event-related potential. Psychophysiology 2010; 47:289-98. [PMID: 20003146] [DOI: 10.1111/j.1469-8986.2009.00936.x]
Abstract
Two experiments were conducted to examine whether the N2 component of the event-related potential (ERP), typically elicited in a S1-S2 matching task and considered to reflect a mismatch process, can still be elicited when the S1 was imagined instead of perceived, and to investigate how N2 amplitude varied with the degree of S1-S2 discrepancy. Three levels of discrepancy were defined by the degree of separation between the heard (S2) and imagined (S1) sounds. It was found that the N2 was reliably elicited when the perceived S2 differed from the imagined S1, but whether N2 amplitude increased with the degree of discrepancy depended in part on the S1-S2 discriminability (as evidenced by reaction time). Specifically, the effect of increasing discrepancy was attenuated as discriminability increased from hard to easy. These results, together with the dynamic ERP topography observed within the N2 window, suggest that the N2 effect reflects two sequential but overlapping processes: automatic mismatch and controlled detection.
Affiliation(s)
- Jianhui Wu
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China
|
27
|
Parieto-frontal gamma band activity during the perceptual emergence of speech forms. Neuroimage 2008; 42:404-13. [PMID: 18524627] [DOI: 10.1016/j.neuroimage.2008.03.063]
Abstract
The multistable perception of speech refers to the perceptual changes experienced while listening to a speech form cycled in rapid and continuous repetition, the so-called Verbal Transformation Effect. Because distinct interpretations of the same repeated stimulus alternate spontaneously, this effect provides an invaluable tool to examine how speech percepts are formed in the listener's mind. In order to track the temporal dynamics of brain activity specifically linked to perceptual changes, intracerebral EEG activity was recorded from two implanted epileptic patients while performing a verbal transformation task. To this aim, they were asked to carefully listen to a speech sequence played repeatedly and to press a button whenever they perceived a change in the repeated utterance. For both patients, 300-800 ms prior to the reported perceptual transitions, high frequency activity in the gamma band range (>40 Hz) was observed within the left inferior frontal and supramarginal gyri. An additional auditory decision task was used to rule out the possibility that the increased gamma band activity was due to the patients' motor responses. These results suggest that articulatory-based representations play a key part in the endogenously driven emergence of auditory speech percepts. The findings are interpreted in relation to theories assuming a link between perception and action in the human speech processing system.
|
28
|
|
29
|
Mainy N, Kahane P, Minotti L, Hoffmann D, Bertrand O, Lachaux JP. Neural correlates of consolidation in working memory. Hum Brain Mapp 2007; 28:183-93. [PMID: 16767775] [PMCID: PMC6871297] [DOI: 10.1002/hbm.20264]
Abstract
Many of our daily activities rely on a brain system called working memory, which implements our ability to encode information for short-term maintenance, possible manipulation, and retrieval. A recent intracranial study of patients performing a paradigmatic working memory task revealed that the maintenance of information involves a distributed network of oscillations in the gamma band (>40 Hz). Using a similar task, we focused on the encoding stage and targeted a process referred to as short-term consolidation, which corresponds to the encoding of novel items in working memory. The paradigm was designed to manipulate the subjects' intention to encode: series of 10 letters were presented, among which only five had to be remembered, as indicated by visual cues preceding or following each letter. During this task we recorded the intracerebral EEG of nine epileptic patients implanted in mesiotemporal structures, perisylvian regions, and prefrontal areas and used time-frequency analysis to search for neural activities simultaneous with the encoding of the letters into working memory. We found such activities in the form of increases of gamma band activity in a set of regions associated with the phonological loop, including the Broca area and the auditory cortex, and in the prefrontal cortex, the pre- and postcentral gyri, the hippocampus, and the fusiform gyrus.
|
30
|
Wu J, Mai X, Chan CCH, Zheng Y, Luo Y. Event-related potentials during mental imagery of animal sounds. Psychophysiology 2006; 43:592-7. [PMID: 17076815] [DOI: 10.1111/j.1469-8986.2006.00464.x]
Abstract
To investigate the neural correlates of imagined animal sounds, event-related potentials (ERPs) were recorded while subjects were presented with (1) animal pictures without any imagery instruction (control) or (2) animal pictures with instructions to imagine the corresponding sounds (imagery). The results revealed imagery effects starting with an enhancement of the P2, possibly indexing the top-down allocation of attention to the imagery task, and continuing into a more positive-going deflection in the time window of 350-600 ms poststimulus, probably reflecting the formation of auditory imagery. A centro-parietally distributed late positive complex (LPC) was identified in the difference waveform (imagery minus control) and might reflect two subprocesses of imagery formation: sound retrieval from stored information and representation in working memory.
Affiliation(s)
- Jianhui Wu
- Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
|
31
|
Sato M, Schwartz JL, Abry C, Cathiard MA, Loevenbruck H. Multistable syllables as enacted percepts: a source of an asymmetric bias in the verbal transformation effect. Percept Psychophys 2006; 68:458-74. [PMID: 16900837] [DOI: 10.3758/bf03193690]
Abstract
Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether or not speech production constraints may participate in the emergence of verbal representations. With this goal in mind, we examined whether variations in the articulatory cohesion of repeated nonsense words--specifically, temporal relationships between articulatory events--could lead to perceptual asymmetries in verbal transformations. The first experiment displayed variations in timing relations between two consonantal gestures embedded in various nonsense syllables in a repetitive speech production task. In the second experiment, French participants repeatedly uttered these syllables while searching for verbal transformation. Syllable transformation frequencies followed the temporal clustering between consonantal gestures: The more synchronized the gestures, the more stable and attractive the syllable. In the third experiment, which involved a covert repetition mode, the pattern was maintained without external speech movements. However, when a purely perceptual condition was used in a fourth experiment, the previously observed perceptual asymmetries of verbal transformations disappeared. These experiments demonstrate the existence of an asymmetric bias in the verbal transformation effect linked to articulatory control constraints. The persistence of this effect from an overt to a covert repetition procedure provides evidence that articulatory stability constraints originating from the action system may be involved in auditory imagery. The absence of the asymmetric bias during a purely auditory procedure rules out perceptual mechanisms as a possible explanation of the observed asymmetries.
Affiliation(s)
- Marc Sato
- CNRS UMR 5009, Institut National Polytechnique de Grenoble, France
|
32
|
Sato M, Baciu M, Loevenbruck H, Schwartz JL, Cathiard MA, Segebarth C, Abry C. Multistable representation of speech forms: a functional MRI study of verbal transformations. Neuroimage 2004; 23:1143-51. [PMID: 15528113] [DOI: 10.1016/j.neuroimage.2004.07.055]
Abstract
We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.
Affiliation(s)
- Marc Sato
- Institut de la Communication Parlée, CNRS UMR 5009, Institut National Polytechnique de Grenoble, Université Stendhal, 46 Avenue Félix Viallet, 38031 Grenoble Cedex 01, France
|
33
|
Aleman A, Wout MV. Subvocalization in auditory-verbal imagery: just a form of motor imagery? Cogn Process 2004. [DOI: 10.1007/s10339-004-0034-y]
|
34
|
Abstract
INTRODUCTION: The cognitive neuropsychiatric approach to auditory verbal hallucinations (AVHs) attempts to explain the phenomena in cognitive or information-processing terms and ultimately their brain bases. METHODS: A narrative review of the literature and an overview of this special issue of Cognitive Neuropsychiatry. RESULTS: First, an operational definition of AVHs is offered. Next, clues to etiology are derived from a detailed consideration of the clinical phenomenology of "voices", their form and content. Functional and structural neuroimaging studies suggest the importance of left-side language areas in the generation/perception of AVHs. CONCLUSIONS: Existing cognitive neuropsychiatric models provide a useful framework for the understanding of AVHs. However, data need to be applied more specifically to these models so that they may be refined.
Affiliation(s)
- Anthony S David
- Section of Cognitive Neuropsychiatry, Institute of Psychiatry, London, UK
|
35
|
Duffau H, Gatignol P, Denvil D, Lopes M, Capelle L. The articulatory loop: study of the subcortical connectivity by electrostimulation. Neuroreport 2003; 14:2005-8. [PMID: 14561939] [DOI: 10.1097/00001756-200310270-00026]
Abstract
Although a cortical network involving Broca's area and the supramarginal gyrus (SMG) has been widely studied using neurofunctional imaging, the functional connectivity underlying this so-called articulatory loop remains poorly documented. We describe a patient operated on for a glioma invading the left parietal operculum, using intraoperative electrical functional mapping under local anesthesia. Following the identification of cortical language sites within Broca's area and the SMG, the subcortical pathways connecting these regions were detected and preserved during the resection. Postoperatively, the patient presented a slight dysarthria, then recovered. This is the first report of direct tracking of the subcortical connectivity underlying the fronto-parietal articulatory loop, allowing a better understanding of the pathophysiology of this network and the consequences of its damage.
Affiliation(s)
- Hugues Duffau
- Department of Neurosurgery, Hôpital de la Salpêtrière, Paris, Cedex 13, France.
36
Abstract
We used event-related fMRI methodology to investigate human brain activity during auditory imagery. A series of susceptibility-weighted MR images covering the whole brain was acquired to obtain blood oxygenation level-dependent (BOLD) signal changes associated with the imagery event of hearing a simple monotone. Group analysis across the 12 right-handed subjects revealed activations in the medial and inferior frontal gyri, precuneus, middle frontal gyri, superior temporal gyri, and anterior cingulate gyri. Bilateral primary and secondary auditory areas in the superior temporal gyri also exhibited event-related MR signal changes. The proposed method allowed for the analysis of brain areas responsive to the event of auditory imagery, and our results suggest that auditory imagery and actual audition share common neural substrates.
Affiliation(s)
- S S Yoo
- Department of Radiology, Kangnam St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 505 Banpo-Dong, Seocho-Ku, Seoul 137-701, Korea
37
Abstract
For more than a century, psychologists have been intrigued by the idea that mental representations of perceived human actions are closely connected with mental representations of performing those same actions. In this article, connections between input and output representations are considered in terms of the potential for imitation. A broad range of evidence suggests that, for imitatible stimuli, input and output representations are isomorphic to one another, allowing mutual influence between perception and motoric planning that is rapid, effortless, and possibly obligatory. Thus, the cognitive consequences of imitatibility may underlie such diverse phenomena as phoneme perception; imitation in neonates; echoic memory; stimulus-response compatibility; conduction aphasia; maintenance rehearsal; and a variety of developmental and social activities such as language acquisition, social learning, empathy, and monitoring one's own behavior.
Affiliation(s)
- M Wilson
- Department of Psychology, North Dakota State University, USA.
38
Abstract
Musical imagery refers to the experience of "replaying" music by imagining it inside the head. Whereas visual imagery has been extensively studied, few people have investigated imagery in the auditory domain. This article reviews a program of research that has tried to characterize auditory imagery for music using both behavioral and cognitive neuroscientific tools. I begin by describing some of my behavioral studies of the mental analogues of musical tempo, pitch, and temporal extent. I then describe four studies using three techniques that examine the correspondence of brain involvement in actually perceiving vs. imagining familiar music. These involve one lesion study with epilepsy surgery patients, two positron emission tomography (PET) studies, and one study using transcranial magnetic stimulation (TMS). The studies converge on the importance of the right temporal neocortex and other right-hemisphere structures in the processing of both perceived and imagined nonverbal music. Perceiving and imagining songs that have words also involve structures in the left hemisphere. The supplementary motor area (SMA) is activated during musical imagery; it may mediate rehearsal that involves motor programs, such as imagined humming. Future studies are suggested that would involve imagery of sounds that cannot be produced by the vocal tract to clarify the role of the SMA in auditory imagery.
Affiliation(s)
- A R Halpern
- Psychology Department, Bucknell University, Lewisburg, PA 17837, USA.
39
Abstract
The highly influential Baddeley and Hitch model of working memory (Baddeley & Hitch, 1974; see also Baddeley, 1986) posited analogical forms of representation that can be broadly characterized as sensorimotor, both for verbal and for visuospatial material. However, difficulties with the model of verbal working memory in particular have led investigators to develop alternative models that avoid appealing either to sensory coding or to motoric coding, or to both. This paper examines the evidence for sensorimotor coding in working memory, including evidence from neuropsychology and from sign language research, as well as from standard working memory paradigms, and concludes that only a sensorimotor model can accommodate the broad range of effects that characterize verbal working memory. In addition, several findings that have been considered to speak against sensorimotor involvement are reexamined and are argued to be in fact compatible with sensorimotor coding. These conclusions have broad implications, in that they support the emerging theoretical viewpoint of embodied cognition.
Affiliation(s)
- M Wilson
- North Dakota State University, Fargo, North Dakota, USA.
40
Ingham RJ, Fox PT, Costello Ingham J, Zamarripa F. Is overt stuttered speech a prerequisite for the neural activations associated with chronic developmental stuttering? Brain Lang 2000; 75:163-194. [PMID: 11049665 DOI: 10.1006/brln.2000.2351] [Citation(s) in RCA: 48] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Four adult right-handed chronic stutterers and four age-matched controls completed H₂¹⁵O PET scans involving overt and imagined oral reading tasks. During overt stuttered speech, prominent activations occurred in the SMA (medial), BA 46 (right), anterior insula (bilateral), and cerebellum (bilateral), plus deactivations in right A2 (BA 21/22). These activations and deactivations also occurred when the same stutterers imagined that they were stuttering. Some parietal regions were significantly activated during imagined stuttering, but not during overt stuttering. Most regional activations changed in the same direction when overt stuttering ceased (during chorus reading) and when subjects imagined that they were not stuttering (also during chorus reading). Controls displayed fewer similarities between regional activations and deactivations during actual and imagined oral reading. Thus, overt stuttering appears not to be a prerequisite for the prominent regional activations and deactivations associated with stuttering.
Affiliation(s)
- R J Ingham
- University of California, Santa Barbara 93106, USA.
41
Pich J. The role of subvocalization in rehearsal and maintenance of rhythmic patterns. Span J Psychol 2000; 3:63-7. [PMID: 11761742 DOI: 10.1017/s1138741600005552] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This experiment analyzed the influence of subvocal activity on the retention of rhythmic auditory patterns. Retention of sixteen percussion sequences was studied. Each sequence (a 4-s "door-knocking" pattern) was followed by one of six retention conditions: silence; unattended music intended to block the "inner ear" (Gregorian chant); unattended music (rock-and-roll); articulatory suppression, blocking the "inner voice"; tracing circles on the table with the index finger (spatial task); and tapping (motor control). After silence, unattended chant, or the spatial task, participants successfully reproduced most patterns. Errors increased with unattended rock-and-roll, but significant disruptions occurred only with tapping and articulatory suppression. Whereas the latter supports the role of an articulatory loop in retention, the production of successive taps or syllables in both interference conditions probably relies on a general rhythm module, whose engagement disrupted retention of the patterns.
Affiliation(s)
- J Pich
- Departament de Psicologia, Universitat de les Illes Balears, Carretera de Valldemossa, Km 7,5. 07071 Palma de Mallorca, Spain.
42
Gross HM, Heinze A, Seiler T, Stephan V. Generative character of perception: a neural architecture for sensorimotor anticipation. Neural Netw 1999; 12:1101-1129. [PMID: 12662648 DOI: 10.1016/s0893-6080(99)00047-7] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
The basic idea of our anticipatory approach to perception is to avoid the common separation of perception and generation of behavior and to fuse both aspects into a consistent neural process. Our approach tries to explain the phenomenon of perception, in particular perception at the level of sensorimotor intelligence, from a behavior-oriented point of view. Perception is assumed to be a generative process of anticipating the course of events resulting from alternative sequences of hypothetically executed actions. By means of this sensorimotor anticipation, it is possible to characterize a visual scene immediately in categories of behavior, i.e. by a set of actions which describe possible ways of interacting with the objects in the environment. Thus, the competence to perceive a complex situation can be understood as the capability to anticipate the course of events caused by different action sequences. Starting from an abstract description of anticipatory perception and the essential biological evidence for internal simulation, we present two biologically motivated computational models that are able to anticipate and evaluate hypothetical sensorimotor sequences. Both models consider functional aspects of those cortical and subcortical systems that are assumed to be involved in sensory prediction and sensorimotor control. Our first approach, the Model for Anticipation based on Sensory IMagination (MASIM), realizes a sequential search in sensorimotor space using a simple model of the lateral cerebellum as a sensory predictor. We demonstrate the efficiency of this approach for visually guided local navigation behaviors of a mobile system. The second approach, the Model for Anticipation based on Cortical Representations (MACOR), is still at a conceptual level of realization. We postulate that this model allows a completely parallel search at the neocortical level, using assemblies of spiking neurons for grouping, separation, and selection of sensorimotor sequences. Both models are intended as general schemes for anticipation-based perception at the level of sensorimotor intelligence.
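The MASIM idea of a sequential search in sensorimotor space with a sensory predictor can be sketched in a few lines. This is only a minimal illustration of rollout-based anticipation, not the authors' cerebellar model: the forward model, the evaluation function, and the action set below are hypothetical stand-ins.

```python
import math

def predict_next(state, action):
    """Hypothetical forward model: predicted next sensory state."""
    return (state[0] + action[0], state[1] + action[1])

def evaluate(state, goal):
    """Score a predicted state by its negative distance to a goal."""
    return -math.dist(state, goal)

def anticipate(state, goal, action_set, depth):
    """Sequential search in sensorimotor space: internally simulate every
    action sequence of the given depth and return the first action of the
    sequence whose predicted outcome scores best."""
    best_score, best_first = -math.inf, None

    def rollout(s, seq):
        nonlocal best_score, best_first
        if len(seq) == depth:
            score = evaluate(s, goal)
            if score > best_score:
                best_score, best_first = score, seq[0]
            return
        for a in action_set:
            rollout(predict_next(s, a), seq + [a])

    rollout(state, [])
    return best_first

actions = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
first = anticipate((0.0, 0.0), (3.0, 0.0), actions, depth=2)
print(first)  # (1.0, 0.0): the action that steps toward the goal
```

The exhaustive recursion mirrors the "sequential search" framing of MASIM; MACOR's contribution, by contrast, is to evaluate such sequences in parallel, which a sketch like this cannot capture.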
Affiliation(s)
- H.-M. Gross
- Department of Neuroinformatics, Technical University Ilmenau, D-98684, Ilmenau, Germany
43
Rosin FM, Sylwan RP, Galera C. Effect of training on the ability of dual-task coordination. Braz J Med Biol Res 1999; 32:1249-61. [PMID: 10510263 DOI: 10.1590/s0100-879x1999001000012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Within the framework of the working memory model proposed by A. Baddeley and G. Hitch, a dual-task paradigm has been suggested to evaluate the capacity to perform two concurrent tasks simultaneously. This capacity is assumed to reflect the functioning of the central executive component, which appears to be impaired in patients with dysexecutive syndrome. The present study extends the investigation of an index ("mu"), which is supposed to indicate the capacity to coordinate concurrent auditory digit span and tracking tasks, by testing the influence of training on dual-task performance. Presenting the same digit-sequence lists or always-different lists did not affect performance differently. Span length affected the mu values. The improved performance under the dual condition closely resembled the improvement in single-task performance. Thus, although training improved performance in the single and dual conditions, especially for the tracking component, the mu values remained stable across sessions when the single tasks were performed first. Conversely, training improved the capacity for dual-task coordination across sessions when the dual task was performed first, pointing to the contribution of within-session practice to the mu index.
Affiliation(s)
- F M Rosin
- Laboratório de Psicologia Experimental Humana, Departamento de Psicologia e Educação, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP, Brasil
44
Masutani T, Tsujino H, Koerner E. A cortical-type modular neural network for hypothetical reasoning. Neural Netw 1997; 10:791-814. [PMID: 12662871 DOI: 10.1016/s0893-6080(96)00126-8] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
We propose a multilayer neural network architecture that can implement the kind of hypothetical reasoning that the cortex seems to perform in making sense of sensory input. The elementary processing nodes of each homogeneous sheet are not single formal neurons but complex modules abstracted from the functional organization of neocortical columns. As an example, we simulate face recognition in this neocortical architecture. A holistic but coarse initial hypothesis is generated by an express forward input description and the description is subsequently refined under the constraints of this hypothesis. Separating the forward input description from the feedback-generated hypothesis, while using the difference between the two descriptions at each modular unit to control the refinement, enables robust recognition and has the potential for autonomous learning.
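The coarse-to-fine interplay of a forward input description and a feedback-generated hypothesis can be illustrated with a toy refinement loop. This is only a sketch of the difference-driven idea, not the authors' columnar architecture; the vectors, learning rate, and step count below are arbitrary choices for the illustration.

```python
def refine(input_desc, hypothesis, steps=20, rate=0.5):
    """Iteratively refine a coarse hypothesis: at each step, the difference
    between the forward input description and the fed-back hypothesis
    drives the update (difference-controlled refinement)."""
    h = list(hypothesis)
    for _ in range(steps):
        h = [hi + rate * (xi - hi) for xi, hi in zip(input_desc, h)]
    return h

x = [0.9, 0.1, 0.8]            # detailed forward input description
h0 = [round(v) for v in x]     # holistic but coarse initial hypothesis
refined = refine(x, h0)
print(all(abs(a - b) < 1e-4 for a, b in zip(refined, x)))  # True
```

Each pass shrinks the mismatch by a constant factor, so the hypothesis converges toward the input description while remaining anchored to its coarse starting point early in the process.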