1. Yu D, Bao L, Yin B. Emotional contagion in rodents: A comprehensive exploration of mechanisms and multimodal perspectives. Behav Processes 2024;216:105008. [PMID: 38373472] [DOI: 10.1016/j.beproc.2024.105008]
Abstract
Emotional contagion, a fundamental aspect of empathy, is an automatic and unconscious process in which individuals mimic and synchronize with the emotions of others. Extensively studied in rodents, this phenomenon is mediated through a range of sensory pathways, each contributing distinct insights. The olfactory pathway, marked by two types of pheromones modulated by oxytocin, plays a crucial role in transmitting emotional states. The auditory pathway, involving both squeaks and specific ultrasonic vocalizations, correlates with various emotional states and is essential for expression and communication in rodents. The visual pathway, though less relied upon, encompasses observational motions and facial expressions. The tactile pathway, a more recent focus, underscores the significance of physical interactions such as allogrooming and socio-affective touch in modulating emotional states. This comprehensive review not only highlights plausible neural mechanisms but also poses key questions for future research. It underscores the complexity of multimodal integration in emotional contagion, offering valuable insights for human psychology, neuroscience, animal welfare, and the burgeoning field of animal-human-AI interactions, thereby contributing to the development of a more empathetic intelligent future.
Affiliation(s)
- Delin Yu: School of Psychology, Fujian Normal University, Fuzhou, Fujian 350117, China; Key Laboratory for Learning and Behavioral Sciences, Fujian Normal University, Fuzhou, Fujian 350117, China
- Lili Bao: School of Psychology, Fujian Normal University, Fuzhou, Fujian 350117, China; Key Laboratory for Learning and Behavioral Sciences, Fujian Normal University, Fuzhou, Fujian 350117, China
- Bin Yin: School of Psychology, Fujian Normal University, Fuzhou, Fujian 350117, China; Key Laboratory for Learning and Behavioral Sciences, Fujian Normal University, Fuzhou, Fujian 350117, China
2. Vicentin S, Cona G, Arcara G, Bisiacchi P. Sensory modality affects the spatiotemporal dynamics of alpha and theta oscillations associated with prospective memory. Int J Psychophysiol 2024;196:112284. [PMID: 38110002] [DOI: 10.1016/j.ijpsycho.2023.112284]
Abstract
BACKGROUND The maintenance of an intention in memory (Prospective Memory, PM) while performing a task is associated with a cost in terms of both performance (longer response times and lower accuracy) and neurophysiological modulations, the extent of which depends on several features of the stimuli. AIM This study explores the neural patterns associated with PM in different sensory modalities, to identify differences depending on modality and to discuss their functional meaning. METHOD Data were collected with high-density EEG during a baseline and a PM condition, each administered in a visual and an auditory version. Theta and alpha oscillations were compared between the two conditions within each modality using a cluster-based permutation approach. RESULTS PM conditions were associated with clusters of decreased alpha and theta activity in both modalities. However, different spatiotemporal dynamics were elicited as a function of sensory modality: alpha decreases displayed an overlapping onset between modalities but different durations, lasting longer in the auditory modality. Conversely, the clusters of decreased theta activity presented similar durations between modalities but different temporal and spatial onsets, appearing at different moments over the respective sensory areas. CONCLUSIONS The similar spatiotemporal properties of alpha suppression between modalities indicate that these oscillations may represent a supramodal, top-down process, presumably reflecting the external direction of attention needed to successfully detect the prospective cue (strategic monitoring). The theta clusters showed more modality-specific differences, whose temporal and spatial properties correspond to those required to perform the ongoing task, suggesting a shift in resource allocation in favor of the PM task.
Affiliation(s)
- Stefano Vicentin: Department of General Psychology, University of Padua, Italy; Padova Neuroscience Center, Padua, Italy
- Giorgia Cona: Department of General Psychology, University of Padua, Italy; Padova Neuroscience Center, Padua, Italy; Department of Neuroscience, University of Padua, Italy
- Patrizia Bisiacchi: Department of General Psychology, University of Padua, Italy; Padova Neuroscience Center, Padua, Italy
3. Ishida T, Nittono H. Effects of sensory modality and task relevance on omitted stimulus potentials. Exp Brain Res 2024;242:47-57. [PMID: 37947851] [DOI: 10.1007/s00221-023-06726-2]
Abstract
Omitted stimulus potentials (OSPs) occur when a sensory stimulus is unexpectedly omitted. They are thought to reflect predictions about upcoming sensory events. The present study examined how OSPs differ across the sensory modalities of the predicted stimuli. Twenty-nine university students were asked to press a mouse button at a regular interval of 1-2 s; in different blocks, each press was immediately followed by either a visual or an auditory stimulus. The stimuli were sometimes omitted (p = 0.2), and event-related potentials (ERPs) to these omissions were recorded. The results showed that stimulus omissions in both modalities elicited ERP waveforms consisting of three components: oN1, oN2, and oP3. The peak latencies of these components were shorter in the auditory modality than in the visual modality. OSP amplitudes were larger when participants were told that an omission indicated poor performance (i.e., that they had pressed the button at an irregular interval) than when omissions were irrelevant to their performance. These findings suggest that OSPs arise from around 100 ms in a modality-specific manner and increase in amplitude depending on the task relevance of stimulus omissions.
Affiliation(s)
- Tomomi Ishida: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
4. Magliacano A, Catalano L, Sagliano L, Estraneo A, Trojano L. Spontaneous eye blinking during an auditory, an interoceptive and a visual task: The role of the sensory modality and the attentional focus. Cortex 2023;168:49-61. [PMID: 37659289] [DOI: 10.1016/j.cortex.2023.07.006]
Abstract
Previous evidence suggested that spontaneous eye blinking changes as a function of the attentional focus. In particular, eye blink rate (EBR) tends to increase when attention is directed to internal rather than environmental processing. Most studies on this issue compared eye blinking during visual and mental imagery tasks, and interpreted the increase in EBR as a mechanism for focusing cognitive resources on internal processing by disengaging attention from interfering information. However, since eye blinking also depends on the sensory modality of the task, these findings might be influenced by a modality-specific effect. In the present Registered Report, we investigated whether an environmental versus internal attentional focus also affects spontaneous blinking behaviour in non-visual tasks, under conditions in which visual stimuli are not relevant. In a within-subject design, healthy participants performed an interoceptive task (i.e., heartbeat counting) and an auditory task in which pre-recorded heartbeats were presented aurally; during both tasks, irrelevant visual stimuli were also presented. In a further control condition with the same auditory and visual stimuli, participants were required to focus their attention on the visual stimuli. Participants' EBR was recorded during each task with an eye-tracking system. We found that, although the interoceptive task was more difficult than the auditory and visual tasks, participants' EBR decreased to a comparable degree in all tasks relative to a rest condition, with no differences between internal and environmental conditions. The present findings do not support the idea that EBR is modulated by an internal versus external focus of attention, at least in the presence of controlled visual stimulation.
Affiliation(s)
- Laura Catalano: Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Laura Sagliano: Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Luigi Trojano: Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
5. Abreu R, Postarnak S, Vulchanov V, Baggio G, Vulchanova M. The association between statistical learning and language development during childhood: A scoping review. Heliyon 2023;9:e18693. [PMID: 37554804] [PMCID: PMC10405008] [DOI: 10.1016/j.heliyon.2023.e18693]
Abstract
The statistical account of language acquisition asserts that language is learned through computations on the statistical regularities present in natural languages. On this account, variability in language development measures can arise from individual differences in extracting this statistical information. Given that statistical learning has been attested across different domains and modalities, a central question is which modality is most tightly yoked to language skills. This scoping review is the first to identify the evidence of an association between statistical learning skills and language outcomes in typically developing infants and children. Its results provide preliminary support for the statistical learning account of language acquisition, mostly in the domain of lexical outcomes: typically developing infants and children with stronger auditory and audio-visual statistical learning skills perform better on lexical competence tasks. The results also suggest that the relevance of statistical learning skills for language development depends on sensory modality.
Affiliation(s)
- Regina Abreu: Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Trondheim, Norway
- Valentin Vulchanov: Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Trondheim, Norway
- Giosuè Baggio: Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Trondheim, Norway
- Mila Vulchanova: Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Trondheim, Norway
6. Zimmer U, Wendt M, Pacharra M. Enhancing allocation of visual attention with emotional cues presented in two sensory modalities. Behav Brain Funct 2022;18:10. [PMID: 36138461] [PMCID: PMC9494825] [DOI: 10.1186/s12993-022-00195-3]
Abstract
Background Responses to a visual target stimulus in an exogenous spatial cueing paradigm are usually faster if cue and target occur in the same rather than in different locations (i.e., valid vs. invalid), although perceptual conditions for cue and target processing are otherwise equivalent. This cueing validity effect can be increased by adding emotional (task-unrelated) content to the cue. In contrast, adding a secondary non-emotional sensory modality to the cue (making it bimodal) has not consistently yielded increased cueing effects in previous studies. Here, we examined the interplay of bimodally presented cue content (i.e., emotional vs. neutral) using combined visual-auditory cues. Specifically, the current ERP study investigated whether bimodal presentation of fear-related content amplifies deployment of spatial attention to the cued location. Results A behavioral cueing validity effect occurred selectively in trials in which both aspects of the cue (i.e., face and voice) were related to fear. Likewise, the posterior contra-ipsilateral P1 activity in valid trials was significantly larger when both cues were fear-related than in all other cue conditions. Although the P3a component appeared uniformly increased in invalidly cued trials regardless of cue content, a positive LPC deflection, starting about 450 ms after target onset, was again maximal for the validity contrast in trials with bimodal presentation of fear-related cues. Conclusions Simultaneous presentation of fear-related stimulus information in the visual and auditory modalities appears to increase sustained visual attention (impairing disengagement of attention from the cued location) and to affect relatively late stages of target processing.
Affiliation(s)
- Ulrike Zimmer: Faculty of Human Sciences, Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany; ICAN Institute of Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Mike Wendt: Faculty of Human Sciences, Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany; ICAN Institute of Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Marlene Pacharra: Faculty of Psychology, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
7. Kowalski J, Styła R. Visual worry in patients with schizophrenia. J Psychiatr Res 2022;153:116-124. [PMID: 35810601] [DOI: 10.1016/j.jpsychires.2022.07.007]
Abstract
OBJECTIVE Worrying is a pervasive transdiagnostic symptom in schizophrenia. In the literature it is most often associated with the verbal modality, owing to the many studies of its presence in generalised anxiety disorder. The current study aimed to elucidate worry in two sensory modalities, visual and verbal, in individuals with schizophrenia. METHOD We tested persons with schizophrenia (n = 92) and healthy controls (n = 138) in a cross-sectional design, using questionnaires of visual and verbal worry (an original Worry Modality Questionnaire), trait worry (Penn State Worry Questionnaire) and general psychopathology symptoms (General Functioning Questionnaire-58 and Brief Psychiatric Rating Scale). RESULTS Both visual and verbal worry were associated with psychotic, anxiety and general symptoms of psychopathology in both groups, with medium to large effect sizes. Regression analyses indicated that visual worry was the single significant predictor of positive psychotic symptoms in a model including verbal and trait worry, in both the clinical and control groups (βs of 0.49 and 0.38, respectively). In the schizophrenia group, controlling for trait worry, visual worry was also a stronger predictor of anxiety and general psychopathology severity (βs of 0.34 and 0.37, respectively) than verbal worry (βs of 0.03 and -0.02, respectively). We also propose two indices of worry-modality dominance and analyse profiles of the dominating worry modality in both groups. CONCLUSIONS Our study is the first to demonstrate that visual worry might be of specific importance for understanding psychotic and general psychopathology symptoms in persons with schizophrenia.
Affiliation(s)
- Joachim Kowalski: Experimental Psychopathology Laboratory, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Rafał Styła: Faculty of Psychology, University of Warsaw, Warsaw, Poland
8. Weinberger AB, Gallagher NM, Colaizzi G, Liu N, Parrott N, Fearon E, Shaikh N, Green AE. Analogical mapping across sensory modalities and evidence for a general analogy factor. Cognition 2022;223:105029. [PMID: 35091260] [DOI: 10.1016/j.cognition.2022.105029]
Abstract
Analogy is a central component of human cognition. Analogical "mapping" of similarities between pieces of information present in our experiences supports cognitive and social development, classroom learning, and creative insight and innovation. To date, analogical mapping has primarily been studied within separate modalities of information (e.g., verbal analogies between words, visuo-spatial analogies between objects). However, human experience, in development and adulthood, includes highly variegated information (e.g., words, sounds, objects) received via multiple sensory and information-processing pathways (e.g., visual vs. auditory pathways). Whereas cross-modal correspondences (e.g., between pitch and height) have been observed, these correspondences were between individual items rather than between relations. Thus, analogical mapping (characterized by second-order relations between relations) has not been directly tested as a basis for cross-modal correspondence. Here, we devised novel cross-modality analogical stimuli (lines-to-sounds, lines-to-words, words-to-sounds) that explicated second-order comparisons between relations. In four samples across three studies, participants demonstrated well-above-chance identification of cross-modal second-order relations, providing robust evidence of analogy across modalities. Further, performance across all analogy types was explained by a single factor, indicating a modality-general analogical ability (i.e., an "analo-g" factor). Analo-g explained performance over and above fluid intelligence as well as verbal and spatial abilities, though a stronger relationship to verbal than to visuo-spatial ability emerged, consistent with verbal/semantic contributions to analogy. The present data suggest novel questions about our ability to find and learn second-order relations among the diverse information sources that populate human experience, and about cross-modal human and AI analogical mapping in developmental, educational, and creative contexts.
Affiliation(s)
- Adam B Weinberger: Department of Psychology, Georgetown University, USA; Penn Center for Neuroaesthetics, University of Pennsylvania, USA
- Natalie M Gallagher: Department of Psychology, Georgetown University, USA; Department of Psychology, Princeton University, USA
- Nathaniel Liu: Department of Psychology, Georgetown University, USA
- Edward Fearon: Department of Psychology, Georgetown University, USA
- Adam E Green: Department of Psychology, Georgetown University, USA
9. Appelqvist-Dalton M, Wilmott JP, He M, Simmons AM. Time perception in film is modulated by sensory modality and arousal. Atten Percept Psychophys 2022;84:926-942. [PMID: 35304701] [DOI: 10.3758/s13414-022-02464-9]
Abstract
Considerable research has shown that the perception of time can be distorted subjectively, but little empirical work has examined what factors affect time perception in film, a naturalistic multimodal stimulus. Here, we explore the effect of sensory modality, arousal, and valence on how participants estimate durations in film. Using behavioral ratings combined with pupillometry in a within-participants design, we analyzed responses to and duration estimates of film clips in three experimental conditions: audiovisual (containing music and sound effects), visual (without music and sound effects), and auditory (music and sound effects without a visual scene). Participants viewed clips from little-known nature documentaries, fiction, animation, and experimental films. They were asked to judge clip duration and to report subjective arousal and valence while their pupil sizes were recorded. Data were analyzed using linear mixed-effects models. Results reveal that duration estimates varied between experimental conditions. Clip durations were judged to be shorter than actual durations in all three conditions, with visual-only clips perceived as longer (i.e., less distorted in time) than auditory-only and audiovisual clips. High levels of Composite Arousal (an average of self-reported arousal and pupil-size changes) were correlated with longer (more accurate) duration estimates, particularly in the audiovisual modality. This effect may reflect stimulus complexity or greater cognitive engagement. Higher valence ratings were also correlated with longer duration estimates. The use of naturalistic, complex stimuli such as film can enhance our understanding of the psychology of time perception.
10. Montalvo A, Azevedo E, de Mendonça A. Shift of musical hallucinations to visual hallucinations after correction of the hearing deficit in a patient with Lewy body dementia: a case report. J Med Case Rep 2021;15:449. [PMID: 34496966] [PMCID: PMC8428060] [DOI: 10.1186/s13256-021-03039-2]
Abstract
Background Musical hallucinations are a particular type of auditory hallucination in which the patient perceives instrumental music, musical sounds, or songs. Musical hallucinations are associated with acquired hearing loss, particularly among the elderly. Under conditions of reduced auditory sensory input, perception-bearing circuits are disinhibited and perceptual traces are released, implying an interaction between peripheral sensory deficits and central factors related to brain dysfunction. Case presentation A 71-year-old Caucasian man with hearing loss complained of memory difficulties and resting tremor of the right upper limb over the previous 2 years. He already had difficulties in instrumental activities of daily living. Neurological examination showed Parkinsonian signs and hypoacusia. Neuropsychological examination identified deficits in executive function and memory tests. Brain computerized tomography and nuclear magnetic resonance scans showed mild cortical and subcortical atrophy. A clinical diagnosis of possible dementia with Lewy bodies was established. Five years later, the patient began complaining of musical hallucinations. There had been no previous change in medication. An otorhinolaryngologist diagnosed age-related hearing loss and prescribed bilateral hearing aids. After using the hearing aids, the patient no longer heard the songs, only some tinnitus, described as a whistle. At the same time, however, the patient started experiencing visual hallucinations he had never had before. Discussion To our knowledge, an immediate shift of hallucinations from one sensory modality to another when perception is improved has not been previously described. This report emphasizes the interaction between brain pathology and sensory deficits in the genesis of hallucinations. It reinforces the theory that attention and control networks must couple properly to the default mode network, and must adequately integrate and select peripheral signals to the sensory cortices, in order to maintain a clear state of mind. Conclusion The clinician should bear in mind, and let the patient know, that improving one sensory modality to ameliorate hallucinations may sometimes paradoxically lead to hallucinations in a different sensory modality.
Affiliation(s)
- Eryco Azevedo: Faculdade de Ciências Médicas, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
11. Lin HY, Chang WD, Hsieh HC, Yu WH, Lee P. Relationship between intraindividual auditory and visual attention in children with ADHD. Res Dev Disabil 2021;108:103808. [PMID: 33242747] [DOI: 10.1016/j.ridd.2020.103808]
Abstract
BACKGROUND AND AIM Most previous attention-deficit/hyperactivity disorder (ADHD) studies have used only a single sensory modality (usually vision) to investigate attentional problems, although patients with ADHD may display deficits of auditory attention similar to those of visual attention. This study explored intraindividual auditory and visual attention in children with and without ADHD to examine the relationship between these two dimensions of attention. METHODS Attentional performance of 140 children (70 children with ADHD and 70 typically developing peers) was measured with the Test of Variables of Attention (TOVA). RESULTS For both groups, most attentional indices showed significant differences between the two modalities (d ranging from 0.32 to 0.72). The correlation coefficients of most attentional variables were lower in children with ADHD than in their typically developing peers. The proportions of abnormal attentional indices in children with ADHD (ranging from 12.8% to 55.7%) were much higher than those of their typically developing peers (ranging from 1.4% to 8.6%). CONCLUSION These results not only indicate that typically developing children display more consistent attentional performance, but also support the view that children with ADHD may show an attention deficit in one modality but not necessarily in the other.
Affiliation(s)
- Hung-Yu Lin: Department of Occupational Therapy, Asia University, Taichung, Taiwan
- Wen-Dien Chang: Department of Sport Performance, National Taiwan University of Sport, Taichung, Taiwan
- Hsieh-Chun Hsieh: Department of Special Education, National Tsing Hua University, Hsinchu, Taiwan
- Wan-Hui Yu: Department of Occupational Therapy, Asia University, Taichung, Taiwan
- Posen Lee: Department of Occupational Therapy, I-Shou University, Kaohsiung, Taiwan
12. Shinoda Y, Yamada Y, Yoshida E, Takahashi T, Tsuneoka Y, Eto K, Kaji T, Fujiwara Y. Hypoalgesia and recovery in methylmercury-exposed rats. J Toxicol Sci 2021;46:303-309. [PMID: 34078837] [DOI: 10.2131/jts.46.303]
Abstract
Methylmercury (MeHg), the causative agent of Minamata disease, can lead to severe and chronic neurological disorders. The main symptom of Minamata disease is sensory impairment in the four extremities; however, the sensitivity of individual sensory modalities to MeHg has not been investigated extensively. In the present study, we performed stimulus-response behavioral experiments in MeHg-exposed rats to compare sensitivity to pain, heat, cold, and mechanical stimulation. MeHg (6.7 mg/kg/day) was orally administered to 9-week-old Wistar rats for 5 days, discontinued for 2 days, and then administered daily for another 5 days. The four behavioral tests were performed daily on each rat for 68 days from the beginning of MeHg treatment. Pain sensation decreased significantly from day 11 onwards but recovered to control levels by day 48. The other sensory modalities were not affected by MeHg exposure. These findings suggest that pain is the sensory modality most susceptible to MeHg toxicity and that this impairment is reversible following discontinuation of exposure.
Affiliation(s)
- Yo Shinoda: Department of Environmental Health, School of Pharmacy, Tokyo University of Pharmacy and Life Sciences
- Yuta Yamada: Department of Environmental Health, School of Pharmacy, Tokyo University of Pharmacy and Life Sciences
- Eiko Yoshida: Department of Environmental Health, Faculty of Pharmaceutical Sciences, Tokyo University of Science
- Tsutomu Takahashi: Department of Environmental Health, School of Pharmacy, Tokyo University of Pharmacy and Life Sciences
- Yayoi Tsuneoka: Department of Environmental Health, School of Pharmacy, Tokyo University of Pharmacy and Life Sciences
- Komyo Eto: Health and Nursing Facilities for the Aged, Jushindai, Shinwakai
- Toshiyuki Kaji: Department of Environmental Health, Faculty of Pharmaceutical Sciences, Tokyo University of Science
- Yasuyuki Fujiwara: Department of Environmental Health, School of Pharmacy, Tokyo University of Pharmacy and Life Sciences
13.
Abstract
BACKGROUND An accumulating body of evidence highlights the contribution of general cognitive processes, such as attention, to language-related skills. OBJECTIVE The purpose of the present study was to explore how interference control (a subcomponent of selective attention) is affected in developmental dyslexia (DD) by means of control over simple stimulus-response mappings. Furthermore, we aimed to examine interference control in adults with DD across sensory modalities. METHODS The performance of 14 dyslexic adults and 14 matched controls was compared on visual and auditory Simon tasks, in which conflict was introduced through an incongruent mapping between the location of a visual/auditory stimulus and the appropriate motor response. RESULTS In the auditory task, dyslexic participants exhibited larger Simon-effect costs; that is, they showed disproportionately larger reaction time (RT)/error costs when the auditory stimulus and response were incongruent, relative to the RT/error costs of non-impaired readers. In the visual Simon task, both groups showed Simon-effect costs of the same magnitude. CONCLUSION These results indicate that auditory selective attention is controlled less effectively in those with DD than visually controlled processing. The implications of this impaired process for the language-related skills of individuals with DD are discussed.
14
Allé MC, Berna F, Danion JM, Berntsen D. Seeing or hearing one's memories: Manipulating autobiographical memory imagery in schizophrenia. Psychiatry Res 2020; 286:112835. [PMID: 32062523 DOI: 10.1016/j.psychres.2020.112835] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Revised: 01/29/2020] [Accepted: 01/29/2020] [Indexed: 11/15/2022]
Abstract
The prevalence of auditory hallucinations in schizophrenia, and theories suggesting a link between autobiographical memory and hallucination, raise the possibility of a dominant role of auditory imagery in autobiographical remembering in patients with schizophrenia, whereas visual imagery is dominant in autobiographical memory of healthy adults. The present study explored this possibility by comparing autobiographical memory characteristics, according to sensory modality, in patients with schizophrenia versus healthy controls. Twenty-eight patients and 28 matched controls were asked to retrieve autobiographical memories that were dominated by auditory, visual, gustatory-olfactory, or tactile imagery. ANOVA analysis showed that patients rated their memories lower on specificity, contextual information, feeling of reliving, overall vividness, coherence and autobiographical me-ness (i.e. whether an autobiographical memory is experienced as belonging to the self), ps < 0.03, compared with control participants. The effects of sensory modality imagery were largely similar for patients and controls, as no interaction effects were observed. The findings did not support a dominance of auditory imagery in patients' autobiographical memory. In the patient group, reduced autobiographical me-ness was predicted by lower ratings of contextual information related to the setting of the event. Future research should examine whether these effects extend to involuntary autobiographical memory in schizophrenia.
Affiliation(s)
- Mélissa C Allé
- Center on Autobiographical Memory Research, Department of Psychology and Behavioural Sciences, Aarhus University, Denmark
- Fabrice Berna
- Inserm U1114, Strasbourg University, University Hospital of Strasbourg, France
- Jean-Marie Danion
- Inserm U1114, Strasbourg University, University Hospital of Strasbourg, France
- Dorthe Berntsen
- Center on Autobiographical Memory Research, Department of Psychology and Behavioural Sciences, Aarhus University, Denmark
15
Ruggirello S, Campioni L, Piermanni S, Sebastiani L, Santarcangelo EL. Does hypnotic assessment predict the functional equivalence between motor imagery and action? Brain Cogn 2019; 136:103598. [PMID: 31472426 DOI: 10.1016/j.bandc.2019.103598] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Revised: 08/22/2019] [Accepted: 08/22/2019] [Indexed: 01/01/2023]
Abstract
Motor imagery is influenced by individual and contextual factors. We investigated whether the psychophysiological trait of hypnotisability modulates its subjective experience and cortical correlates, similarly to what was previously shown for mental images of head postures. EEG was acquired in 18 high (highs) and 15 low (lows) hypnotisable subjects (Stanford Hypnotic Susceptibility Scale, Form A). The experimental conditions were: baseline, a complex arm/hand movement, and visual (first-person) and kinesthetic imagery of the movement. After each imagery condition, participants scored the vividness and ease of their performance and their ability to maintain the requested modality of imagery. Subjective reports, chronometric visual/kinesthetic indices, and absolute beta and fronto-central midline alpha powers were analyzed. Findings confirmed earlier reports of better kinesthetic imagery ability in highs than in lows, better visual than kinesthetic imagery in lows, and smaller restructuring of cortical activity in highs than in lows during all tasks. They also show that hypnotisability accounts for most of the correlations between brain regions for both alpha and beta changes. Thus, imagined and actual movements were less demanding processes in highs at both the subjective and cortical levels. Finally, hypnotic assessment may help plan personalized mental training for neuro-rehabilitation and sports and predict its efficacy.
Affiliation(s)
- Simona Ruggirello
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Lisa Campioni
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Samuele Piermanni
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Laura Sebastiani
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Enrica L Santarcangelo
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
16
Rienäcker F, Jacobs HIL, Van Heugten CM, Van Gerven PWM. Practice makes perfect: High performance gains in older adults engaged in selective attention within and across sensory modalities. Acta Psychol (Amst) 2018; 191:101-11. [PMID: 30240890 DOI: 10.1016/j.actpsy.2018.09.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2018] [Revised: 08/06/2018] [Accepted: 09/10/2018] [Indexed: 11/22/2022] Open
Abstract
Selective attention has been found to decline with aging, possibly depending on the sensory modality through which targets and distractors are presented. We investigated the capacity of older adults to improve performance on visual and auditory selective attention tasks. Thirty-one younger participants (mean age = 22.8 years, SD = 2.1) and 29 older participants (mean age = 69.5 years, SD = 5.8) performed visual and auditory tasks with and without unimodal and cross-modal distraction across five practice sessions. Reaction time decreased with practice in both age groups. Strikingly, this performance improvement was similar across the age groups. Moreover, distractor modality did not affect performance gains in either age group. Older adults were disproportionately affected by cross-modal visual distraction, however, corroborating previous studies. This age-related effect was mitigated during the practice sessions. Finally, there was no transfer of practice to neuropsychological test performance. These results suggest a high capacity of older individuals to improve selective attention functions within and across sensory modalities.
17
Castro-Meneses LJ, Sowman PF. Stop signals delay synchrony more for finger tapping than vocalization: a dual modality study of rhythmic synchronization in the stop signal task. PeerJ 2018; 6:e5242. [PMID: 30013856 PMCID: PMC6046193 DOI: 10.7717/peerj.5242] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2017] [Accepted: 06/26/2018] [Indexed: 11/20/2022] Open
Abstract
BACKGROUND A robust feature of sensorimotor synchronization (SMS) performance in finger tapping to an auditory pacing signal is the negative asynchrony of the tap with respect to the pacing signal. The Paillard-Fraisse hypothesis suggests that negative asynchrony is a result of inter-modal integration, in which the brain compares sensory information across two modalities (auditory and tactile). The current study compared the asynchronies of vocalizations and finger tapping in time to an auditory pacing signal. Our first hypothesis was that vocalizations have less negative asynchrony than finger tapping due to the requirement for sensory integration within only a single (auditory) modality (intra-modal integration). However, due to the different measurements for vocalizations and finger responses, interpreting the comparison between these two response modalities is problematic. To address this problem, we included stop signals in the synchronization task. The rationale for this manipulation was that stop signals would perturb synchronization more in the inter-modal than in the intra-modal task. We hypothesized that the inclusion of stop signals induces proactive inhibition, which reduces negative asynchrony. We further hypothesized that any reduction in negative asynchrony occurs to a lesser degree for vocalization than for finger tapping. METHOD A total of 30 participants took part in this study. We compared SMS in a single sensory modality (vocalizations (or auditory) to auditory pacing signal) to a dual sensory modality (fingers (or tactile) to auditory pacing signal). The task was combined with a stop signal task in which stop signals were relevant in some blocks and irrelevant in others. Response-to-pacing signal asynchronies and stop signal reaction times were compared across modalities and across the two types of stop signal blocks.
RESULTS In the blocks where stopping was irrelevant, we found that vocalization (-61.47 ms) was more synchronous with the auditory pacing signal than finger tapping (-128.29 ms). In the blocks where stopping was relevant, stop signals induced proactive inhibition, shifting the response times later. However, proactive inhibition (26.11 ms) was less evident for vocalizations than for finger tapping (58.06 ms). DISCUSSION These results support the interpretation that the relatively large negative asynchrony in finger tapping is a consequence of inter-modal integration, whereas smaller asynchrony is associated with intra-modal integration. This study also supports the interpretation that intra-modal integration is more sensitive to synchronization discrepancies than inter-modal integration.
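The asynchronies reported above are simple signed offsets between each response and its nearest pacing onset, averaged over trials. A minimal sketch of how such a measure can be computed (the function name and example values are illustrative, not the study's analysis code):

```python
import numpy as np

def mean_asynchrony(response_times, pacing_onsets):
    """Mean signed response-to-pacing-signal asynchrony.

    Each response is paired with its nearest pacing onset; negative
    values mean responses anticipate the pacing signal, as in the
    negative asynchrony typical of finger tapping.
    """
    r = np.asarray(response_times, dtype=float)
    p = np.asarray(pacing_onsets, dtype=float)
    # For each response, pick the pacing onset with the smallest
    # absolute time difference (broadcasted outer difference).
    nearest = p[np.abs(r[:, None] - p[None, :]).argmin(axis=1)]
    return float(np.mean(r - nearest))

# Taps that consistently lead a 500 ms metronome by 60 ms:
beats = np.arange(0.0, 5000.0, 500.0)
taps = beats - 60.0
print(mean_asynchrony(taps, beats))  # -60.0
```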
Affiliation(s)
- Leidy J. Castro-Meneses
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Macquarie University, North Ryde, NSW, Australia
- Australian Research Council Centre of Excellence in Cognition and its Disorders (CCD), Macquarie University, North Ryde, NSW, Australia
- The MARCS Institute for Brain, Behaviour and Development, University of Western Sydney, Bankstown, NSW, Australia
- Paul F. Sowman
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Macquarie University, North Ryde, NSW, Australia
- Australian Research Council Centre of Excellence in Cognition and its Disorders (CCD), Macquarie University, North Ryde, NSW, Australia
18
Toth PG, Marsalek P, Pokora O. Ergodicity and parameter estimates in auditory neural circuits. Biol Cybern 2018; 112:41-55. [PMID: 29082437 PMCID: PMC5908860 DOI: 10.1007/s00422-017-0739-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2017] [Accepted: 10/12/2017] [Indexed: 06/07/2023]
Abstract
This paper discusses ergodic properties and circular statistical characteristics of neuronal spike trains. Ergodicity means that an average taken over a long time period from a small population should equal an average taken over a shorter time period from a larger population. The objective is to show simple examples of the design and validation of a neuronal model in which the ergodicity assumption helps find the correspondence between variables and parameters. The methods used are analytical and numerical computations and numerical models of phenomenological spiking neurons and neuronal circuits. The results are: a formula for the vector strength of neural spike timing as a function of spike train parameters; a description of the parameters of spike train variability; and a model of output spiking density based on the computation assumed to be realized by the sound localization neural circuit. Theoretical results are illustrated by references to experimental data. Examples of neurons whose spike trains do and do not have the ergodic property are then discussed.
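The vector strength mentioned above is a standard circular statistic: each spike time is mapped to a phase on the stimulus cycle, and the statistic is the length of the mean resultant vector of those phases. A minimal sketch (the function name and example values are illustrative, not the paper's formula for specific spike train parameters):

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength of spike timing relative to a periodic stimulus.

    Each spike time is mapped to a phase on the stimulus cycle; the
    result is the length of the mean resultant vector (0 = no phase
    locking, 1 = perfect phase locking).
    """
    phases = 2.0 * np.pi * (np.asarray(spike_times, dtype=float) % period) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

# One spike per cycle, always at the same phase: perfect locking.
print(vector_strength([0.0, 2.0, 4.0, 6.0], 2.0))           # 1.0
# Spikes at opposite phases cancel out.
print(round(vector_strength([0.0, 1.0], 2.0), 12))          # 0.0
```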
Affiliation(s)
- Peter G. Toth
- Institute of Pathological Physiology, First Medical Faculty, Charles University, U Nemocnice 5, 12853 Prague 2, Czech Republic
- Petr Marsalek
- Max Planck Institute for the Physics of Complex Systems, Noethnitzer Strasse 38, 01187 Dresden, Germany
- Czech Technical University in Prague, Zikova 1903/4, 16636 Prague 6, Czech Republic
- Ondrej Pokora
- Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Kotlarska 2, 61137 Brno, Czech Republic
19
Sakurada T, Hirai M, Watanabe E. Optimization of a motor learning attention-directing strategy based on an individual's motor imagery ability. Exp Brain Res 2015; 234:301-11. [PMID: 26466828 DOI: 10.1007/s00221-015-4464-9] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2015] [Accepted: 10/06/2015] [Indexed: 11/30/2022]
Abstract
Motor learning performance has been shown to be affected by various cognitive factors such as the focus of attention and motor imagery ability. Most previous studies on motor learning have shown that directing the attention of participants externally, such as on the outcome of an assigned body movement, can be more effective than directing their attention internally, such as on body movement itself. However, to the best of our knowledge, no findings have been reported on the effect of the focus of attention selected according to the motor imagery ability of an individual on motor learning performance. We measured individual motor imagery ability assessed by the Movement Imagery Questionnaire and classified the participants into kinesthetic-dominant (n = 12) and visual-dominant (n = 8) groups based on the questionnaire score. Subsequently, the participants performed a motor learning task such as tracing a trajectory using visuomotor rotation. When the participants were required to direct their attention internally, the after-effects of the learning task in the kinesthetic-dominant group were significantly greater than those in the visual-dominant group. Conversely, when the participants were required to direct their attention externally, the after-effects of the visual-dominant group were significantly greater than those of the kinesthetic-dominant group. Furthermore, we found a significant positive correlation between the size of after-effects and the modality-dominance of motor imagery. These results suggest that a suitable attention strategy based on the intrinsic motor imagery ability of an individual can improve performance during motor learning tasks.
Affiliation(s)
- Takeshi Sakurada
- Functional Brain Science Laboratory, Center for Development of Advanced Medical Technology, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi, 329-0498, Japan
- Applied Cognitive Neuroscience Laboratory, Chuo University, 1-13-27 Kasuga, Bunkyo, Tokyo, 112-8551, Japan
- Masahiro Hirai
- Functional Brain Science Laboratory, Center for Development of Advanced Medical Technology, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi, 329-0498, Japan
- Eiju Watanabe
- Functional Brain Science Laboratory, Center for Development of Advanced Medical Technology, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi, 329-0498, Japan
- Department of Neurosurgery, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi, 329-0498, Japan
20
Martin K, Johnstone P, Hedrick M. Auditory and visual localization accuracy in young children and adults. Int J Pediatr Otorhinolaryngol 2015; 79:844-851. [PMID: 25841637 DOI: 10.1016/j.ijporl.2015.03.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/15/2015] [Revised: 03/16/2015] [Accepted: 03/17/2015] [Indexed: 11/30/2022]
Abstract
OBJECTIVE This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affect sound localization accuracy. METHODS Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed repeated-measures design was used to determine the effects of age and stimulus type on localization accuracy in children and adults, and to compare the effects of stimulus order (light first or last) and of varying versus fixed sound intensity. RESULTS Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children showed significantly greater sound localization errors than adults. Three-year-old children had significantly greater sound localization errors than 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first.
CONCLUSIONS Young children can understand and attend to localization tasks, but show poorer sound localization accuracy than adults. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.
Affiliation(s)
- Karen Martin
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Patti Johnstone
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Mark Hedrick
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
21
Abstract
Although many salticid spiders have been shown to have corneas that transmit ultraviolet (UV) light, whether the corneas of non-salticid spiders transmit UV has not been previously investigated. In this study, we determined the spectral corneal transmission properties of 38 species belonging to 13 non-salticid families. We used these data to estimate the T50 transmission cut-off value, the wavelength corresponding to 50% maximal transmission for each species. The corneas of almost all species from the families Deinopidae, Lycosidae, Oxyopidae, Pisauridae, Sparassidae and Thomisidae, all of which have been reported to rely to a substantial extent on vision, transmitted short wavelength light below 400 nm, ranging from 306 to 381 nm. However, species from the families Atypidae and Ctenizidae are not known to rely substantially on vision, and the corneas of these species tended to absorb light of wavelengths below 380 nm, which may not allow UV sensitivity in these spiders. Liphistiidae, the family widely regarded as most basal among spiders, is of particular interest. The species in this family are not known to make substantial use of vision, and yet we found that liphistiid corneas transmitted UV light with a low T50 value (359 nm). T50 values of non-salticid spider corneas also varied with light habitat. Species living in dim environments tended to have UV-opaque corneas, but species inhabiting open areas had UV-transmitting corneas. However, there was no evidence of corneal transmission properties being related to whether a species is diurnal or nocturnal.
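The T50 value used above (the wavelength at which transmission reaches 50% of its maximum) can be estimated from a measured spectrum by linear interpolation between the two samples that bracket the half-maximum. A minimal sketch, assuming a transmission curve that rises monotonically through the cut-off region (names and values are illustrative, not the study's data):

```python
import numpy as np

def t50(wavelengths_nm, transmission):
    """Estimate T50: the wavelength at 50% of maximal transmission.

    Assumes transmission rises monotonically through the cut-off
    region; linearly interpolates between the two samples that
    bracket the half-maximum.
    """
    w = np.asarray(wavelengths_nm, dtype=float)
    t = np.asarray(transmission, dtype=float)
    half = 0.5 * t.max()
    i = int(np.argmax(t >= half))  # first sample at or above half-max
    if i == 0:
        return float(w[0])
    frac = (half - t[i - 1]) / (t[i] - t[i - 1])
    return float(w[i - 1] + frac * (w[i] - w[i - 1]))

# A cornea whose transmission climbs from UV-opaque to transparent:
print(t50([300, 340, 360, 380, 420], [0, 10, 30, 90, 100]))  # ~366.7 nm
```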
Affiliation(s)
- Zhiyong Hu
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Xin Xu
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Zhanqi Chen
- Department of Biological Sciences, National University of Singapore, 14 Science Drive 4, 117543 Singapore
- Hongze Li
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Xiaoyan Wang
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Lingbing Wu
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Fengxiang Liu
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Jian Chen
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Daiqin Li
- Centre for Behavioural Ecology & Evolution, College of Life Sciences, Hubei University, Wuhan 430062, Hubei, China
- Department of Biological Sciences, National University of Singapore, 14 Science Drive 4, 117543 Singapore
22
Abstract
Terrain slope can be used to encode the location of a goal. However, this directional information may be encoded using a conceptual north (i.e., invariantly with respect to the environment), or in an observer-relative fashion (i.e., varying depending on the direction one faces when learning the goal). This study examines which representation is used, whether the sensory modality in which slope is encoded (visual, kinaesthetic, or both) influences representations, and whether use of slope varies for men and women. In a square room, with a sloped floor explicitly pointed out as the only useful cue, participants encoded the corner in which a goal was hidden. Without direct sensory access to slope cues, participants used a dial to point to the goal. For each trial, the goal was hidden uphill or downhill, and the participants were informed whether they faced uphill or downhill when pointing. In support of observer-relative representations, participants pointed more accurately and quickly when facing concordantly with the hiding position. There was no effect of sensory modality, providing support for functional equivalence. Sex did not interact with the findings on modality or reference frame, but spatial measures correlated with success on the slope task differently for each sex.
Affiliation(s)
- Steven M Weisberg
- Department of Psychology, Spatial Intelligence and Learning Center, Temple University, Philadelphia, PA, USA
23
Jasinska AJ, Stein EA, Kaiser J, Naumer MJ, Yalachkov Y. Factors modulating neural reactivity to drug cues in addiction: a survey of human neuroimaging studies. Neurosci Biobehav Rev 2013; 38:1-16. [PMID: 24211373 DOI: 10.1016/j.neubiorev.2013.10.013] [Citation(s) in RCA: 340] [Impact Index Per Article: 30.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2013] [Revised: 10/04/2013] [Accepted: 10/29/2013] [Indexed: 12/16/2022]
Abstract
Human neuroimaging studies suggest that neural cue reactivity is strongly associated with indices of drug use, including addiction severity and treatment success. However, little is known about factors that modulate cue reactivity. The goal of this review, in which we survey published fMRI and PET studies on drug cue reactivity in cocaine, alcohol, and tobacco cigarette users, is to highlight major factors that modulate brain reactivity to drug cues. First, we describe cue reactivity paradigms used in neuroimaging research and outline the brain circuits that underlie cue reactivity. We then discuss major factors that have been shown to modulate cue reactivity and review specific evidence as well as outstanding questions related to each factor. Building on previous model-building reviews on the topic, we then outline a simplified model that includes the key modulatory factors and a tentative ranking of their relative impact. We conclude with a discussion of outstanding challenges and future research directions, which can inform future neuroimaging studies as well as the design of treatment and prevention programs.
Affiliation(s)
- Agnes J Jasinska
- Neuroimaging Research Branch, National Institute on Drug Abuse, Intramural Research Program, Baltimore, MD, USA
- Elliot A Stein
- Neuroimaging Research Branch, National Institute on Drug Abuse, Intramural Research Program, Baltimore, MD, USA
- Jochen Kaiser
- Institute of Medical Psychology, Goethe-University, Frankfurt am Main, Germany
- Marcus J Naumer
- Institute of Medical Psychology, Goethe-University, Frankfurt am Main, Germany
- Yavor Yalachkov
- Institute of Medical Psychology, Goethe-University, Frankfurt am Main, Germany