1. Chen L. Synesthetic Correspondence: An Overview. Adv Exp Med Biol 2024;1437:101-119. [PMID: 38270856; DOI: 10.1007/978-981-99-7611-9_7]
Abstract
Intramodal and cross-modal perceptual grouping, based on the spatial proximity and temporal closeness of multiple sensory stimuli, serves as an operational principle for building a coherent and meaningful representation of a multisensory event or object. To implement and investigate cross-modal perceptual grouping, researchers have employed paradigms such as spatial/temporal ventriloquism and cross-modal dynamic capture, revealing both the conditional constraints on, and the functional facilitation of, various correspondences between sensory properties, supported by behavioral evidence, computational frameworks, and brain oscillation patterns. Synesthetic correspondence, a special type of cross-modal correspondence, can shape the efficiency and effect size of cross-modal interaction. For example, pitch and loudness in the auditory dimension, paired with size and brightness in the visual dimension, can modulate the strength of cross-modal temporal capture. This review summarizes the empirical behavioral findings, along with psychophysical and neurophysiological evidence, that address cross-modal perceptual grouping and synesthetic correspondence. Finally, it discusses potential applications (such as artificial synesthesia devices), how synesthetic correspondence interfaces with semantics (sensory linguistics), and promising research questions in this field.
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China.
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China.
- National Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China.
- National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China.
2. Chen L, Liao HI. Microsaccadic Eye Movements but not Pupillary Dilation Response Characterizes the Crossmodal Freezing Effect. Cereb Cortex Commun 2020;1:tgaa072. [PMID: 34296132; PMCID: PMC8153075; DOI: 10.1093/texcom/tgaa072]
Abstract
In typical spatial orienting tasks, the perception of crossmodal (e.g., audiovisual) stimuli evokes greater pupil dilation and microsaccade inhibition than unisensory stimuli (e.g., visual). This characteristic pupil dilation and microsaccade inhibition have been observed in response to "salient" events/stimuli. Although the "saliency" account is appealing in the spatial domain, whether it holds in the temporal context remains largely unknown. Here, on a brief temporal scale (within 1 s) engaging involuntary temporal attention, we investigated how eye-metric characteristics reflect the temporal dynamics of perceptual organization, with and without multisensory integration. We adopted the crossmodal freezing paradigm using the classical Ternus apparent motion. Results showed that synchronous beeps biased the perceptual report toward group motion and triggered prolonged sound-induced oculomotor inhibition (OMI), whereas sound-induced OMI was not obvious in a crossmodal task-free scenario (visual localization without audiovisual integration). A general pupil dilation response was observed in the presence of sounds in both the visual Ternus motion categorization and visual localization tasks. This study provides the first empirical account of crossmodal integration by capturing microsaccades on a brief temporal scale: OMI, but not the pupillary dilation response, characterizes task-specific audiovisual integration (shown by the crossmodal freezing effect).
Affiliation(s)
- Lihan Chen
- Department of Brain and Cognitive Sciences, Schools of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100871, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Hsin-I Liao
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0198, Japan
3. Zheng W, Chen L. The Roles of Attentional Shifts and Attentional Reengagement in Resolving the Spatial Compatibility Effect in Tactile Simon-like Tasks. Sci Rep 2018;8:8760. [PMID: 29884800; PMCID: PMC5993732; DOI: 10.1038/s41598-018-27114-9]
Abstract
The Simon effect refers to the acceleration of choice responses when the target position and response location are consistent compared with scenarios in which they are inconsistent, even if the target position is not relevant to the response. Here, we provide the first demonstration that the tactile Simon-like effect operates in an attention-shifting manner. In unimodal scenarios (Experiments 1-4), for the tactile direction task, the spatial compatibility effect was absent in the focused-attention condition but maintained in the divided-attention condition. For the tactile localization task, this pattern was reversed: the spatial compatibility effect occurred for the focused-attention condition but was reduced/absent in the divided-attention condition. In the audiotactile interaction scenario (Experiment 5), the reaction times (RTs) for discriminating the tactile motion direction were prolonged; however, a spatial compatibility effect was not observed. We propose that the temporal course of resolving conflicts between spatial codes during attentional shifts, including attentional reengagement, may account for the tactile Simon-like effect.
Affiliation(s)
- Wanting Zheng
- School of Ophthalmology & Optometry, School of Biomedical Engineering, Wenzhou Medical University, Wenzhou, 325035, China.
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100871, China.
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China.
4. Wan Y, Chen L. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation. Front Comput Neurosci 2018;12:39. [PMID: 29922143; PMCID: PMC5996128; DOI: 10.3389/fncom.2018.00039]
Abstract
The crossmodal assimilation effect refers to the phenomenon by which the ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events (such as a visual interval) in another sensory modality. In the current experiments, using a visual Ternus display, we examined the role of temporal reference, operationalized as the time information accumulated before the onset of the target event, as well as attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean of the auditory inter-intervals, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence contribute to biasing the percept of visual motion: a longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) auditory interval gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and reveal a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.
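The ensemble (geometric) mean invoked above is the exponential of the mean log-interval, which weights proportional rather than absolute differences between intervals. A minimal sketch (the interval values are hypothetical, for illustration only):

```python
import math

def geometric_mean(intervals_ms):
    """Ensemble (geometric) mean of a sequence of time intervals (ms):
    exp of the arithmetic mean of the log-intervals."""
    logs = [math.log(i) for i in intervals_ms]
    return math.exp(sum(logs) / len(logs))

# A hypothetical auditory distractor sequence (ms); the geometric mean
# (~165.5 ms) is smaller than the arithmetic mean (175 ms), and it is
# this ensemble statistic that biases the subsequent Ternus percept.
g = geometric_mean([100, 150, 200, 250])
```

Because the log compresses long intervals, the geometric mean always sits at or below the arithmetic mean of the same sequence.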
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
5. Guo L, Bao M, Guan L, Chen L. Cognitive Styles Differentiate Crossmodal Correspondences Between Pitch Glide and Visual Apparent Motion. Multisens Res 2017;30:363-385. [PMID: 31287072; DOI: 10.1163/22134808-00002556]
Abstract
Crossmodal correspondences are the automatic associations that most people make between different basic sensory stimulus attributes, dimensions, or features. For instance, people often show a systematic tendency to associate moving objects with changing pitches. Cognitive styles are defined as an individual's consistent approach to thinking about, perceiving, and remembering information, and they reflect qualitative rather than quantitative differences between individuals' thinking processes. Here we asked whether cognitive styles play a role in modulating crossmodal interaction. We used the visual Ternus display, since it elicits two distinct apparent motion percepts: element motion (with a shorter interval between the two Ternus frames) and group motion (with a longer interval between the two frames). We examined the audiovisual correspondences between the visual Ternus movement directions (upward or downward) and the changing pitches of concurrent glides (ascending or descending frequency). Moreover, we measured each participant's cognitive style with the Embedded Figures Test. The results showed that congruent correspondence between pitch-ascending (descending) glides and upward (downward) visual motion led to a more dominant percept of 'element motion', and this effect was typically observed in the field-independent group. Importantly, field-independent participants demonstrated high efficiency in identifying the properties of audiovisual events and applying the crossmodal correspondence in crossmodal interaction. These results suggest that cognitive styles can differentiate crossmodal correspondences in crossmodal interaction.
Affiliation(s)
- Lu Guo
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Ming Bao
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Luyang Guan
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception, Peking University, Beijing 100871, China
6. Wang Q, Guo L, Bao M, Chen L. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory. Front Psychol 2015;6:564. [PMID: 26042055; PMCID: PMC4436805; DOI: 10.3389/fpsyg.2015.00564]
Abstract
Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influences the perceptual decision about otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' (EM) or 'group motion' (GM). In EM, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in GM, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from a long glide to a short continuous glide when the two glides cross at the temporal midpoint. In our experiments, observers were required to discriminate Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide had a remarkable effect in separating visual events, leading to a dominant perception of GM; the auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigations showed that a visual interval coinciding with the gap interval (50-230 ms) in the long glide was perceived as shorter than one coinciding with the same physical gap in either the short glide or the 'gap-transfer' auditory configuration. The results indicate that auditory temporal perceptual grouping takes priority over cross-modal interaction in determining the final readout of visual perception, and that selective attention to auditory events also plays a role.
Affiliation(s)
- Qingcui Wang
- Hangzhou Applied Acoustics Research Institute, Key Laboratory of Science and Technology, Hangzhou, China; Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- Lu Guo
- Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- Ming Bao
- Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- Lihan Chen
- Department of Psychology and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
7.
Abstract
Spatial ventriloquism refers to the phenomenon that a visual stimulus such as a flash can attract the perceived location of a spatially discordant but temporally synchronous sound. An analogous example of mutual attraction between audition and vision has been found in the temporal domain, where temporal aspects of a visual event, such as its onset, frequency, or duration, can be biased by a slightly asynchronous sound. In this review, we examine various manifestations of spatial and temporal attraction between the senses (both direct effects and aftereffects), and we discuss important constraints on the occurrence of these effects. Factors that potentially modulate ventriloquism-such as attention, synesthetic correspondence, and other cognitive factors-are described. We trace theories and models of spatial and temporal ventriloquism, from the traditional unity assumption and modality appropriateness hypothesis to more recent Bayesian and neural network approaches. Finally, we summarize recent evidence probing the underlying neural mechanisms of spatial and temporal ventriloquism.
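Among the Bayesian models surveyed above, the simplest forced-fusion account reduces to reliability-weighted averaging of the unisensory location estimates, which is why the (usually more precise) visual cue captures the sound in space. A minimal sketch, with illustrative noise values that are assumptions, not figures from the review:

```python
import numpy as np

def mle_fused_estimate(x_v, sigma_v, x_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and an
    auditory location estimate; the cue with the smaller noise sigma
    receives the larger weight."""
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)   # visual weight
    w_a = 1.0 - w_v                                # auditory weight
    x_hat = w_v * x_v + w_a * x_a                  # fused location
    # The fused estimate is more precise than either cue alone:
    sigma_hat = np.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))
    return x_hat, sigma_hat

# Precise vision (sigma 1 deg) vs. coarse audition (sigma 8 deg): a sound
# 10 deg away from a flash is perceived almost at the flash's location.
x_hat, sigma_hat = mle_fused_estimate(x_v=0.0, sigma_v=1.0, x_a=10.0, sigma_a=8.0)
```

With these values the fused location lands at about 0.15 deg, i.e., nearly fully captured by vision; swapping the two sigmas would reverse the direction of capture.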
8. Fast transfer of crossmodal time interval training. Exp Brain Res 2014;232:1855-64. [PMID: 24570386; DOI: 10.1007/s00221-014-3877-1]
Abstract
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
9. Shi Z, Müller HJ. Multisensory perception and action: development, decision-making, and neural mechanisms. Front Integr Neurosci 2013;7:81. [PMID: 24319414; PMCID: PMC3836185; DOI: 10.3389/fnint.2013.00081]
Affiliation(s)
- Zhuanghua Shi
- Department of Psychology, Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
10. Wang Q, Bao M, Chen L. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display. Exp Brain Res 2013;232:273-82. [PMID: 24141518; DOI: 10.1007/s00221-013-3738-3]
Abstract
Previous studies using auditory sequences with rapid repetitions of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, that perceptual grouping was partially driven by cognitive processing of the periodicity cues of the long sequence. Here, we investigated whether perceptual grouping (spatiotemporal grouping vs. frequency grouping) also applies to short auditory sequences, for which auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B, and C), with each speaker consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors: we implemented three 'within-frame intervals' (WFIs, the intervals between A and B and between B and C), seven 'inter-frame intervals' (IFIs, the intervals between AB and BC), and two speaker layouts (inter-speaker distance: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners made two-alternative forced choices (2AFC) to report their perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by perceptual decisions about the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played less of a role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
Affiliation(s)
- Qingcui Wang
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
11. Chiang TC, Liang KC, Chen JH, Hsieh CH, Huang YA. Brain deactivation in the outperformance in bimodal tasks: an FMRI study. PLoS One 2013;8:e77408. [PMID: 24155952; PMCID: PMC3796455; DOI: 10.1371/journal.pone.0077408]
Abstract
While it is known that some individuals can effectively perform two tasks simultaneously, others cannot. How the brain deals with performing simultaneous tasks remains unclear. In the present study, we aimed to assess which brain areas corresponded to various phenomena in task performance. Nineteen subjects were requested to sequentially perform three blocks of tasks: two unimodal tasks and one bimodal task. The unimodal tasks measured either visual feature binding or auditory pitch comparison, while the bimodal task required performing the two tasks simultaneously. The functional magnetic resonance imaging (fMRI) results are compatible with previous studies showing that distinct brain areas, such as the visual cortices, frontal eye field (FEF), lateral parietal lobe (BA7), and medial and inferior frontal lobe, are involved in the processing of visual unimodal tasks, while the temporal lobes and Brodmann area 43 (BA43) are involved in the processing of auditory unimodal tasks. These results lend support to concepts of modality-specific attention. Compared to the unimodal tasks, the bimodal task required activation of additional brain areas. Furthermore, while deactivated brain areas were related to good performance in the bimodal task, these areas were not deactivated when the subject performed well in only one of the two simultaneous tasks. These results indicate that efficient information processing does not require some brain areas to be overly active; rather, specific brain areas need to be relatively deactivated for a person to remain alert and perform well on two tasks simultaneously. These findings may also offer a neural basis for biofeedback in training courses, such as courses on how to perform multiple tasks simultaneously.
Affiliation(s)
- Tzu-Ching Chiang
- Department of Psychology, National Chung Cheng University, Min-Hsiung Township, Chia-Yi County, Taiwan
- Department of Psychology, Ohio State University, Columbus, Ohio, United States of America
- Keng-Chen Liang
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Jyh-Horng Chen
- Electrical Engineering, Interdisciplinary MRI Laboratory, National Taiwan University, Taipei, Taiwan
- Chao-Hsien Hsieh
- Electrical Engineering, Interdisciplinary MRI Laboratory, National Taiwan University, Taipei, Taiwan
- Yun-An Huang
- Electrical Engineering, Interdisciplinary MRI Laboratory, National Taiwan University, Taipei, Taiwan
12. Botta F, Lupiáñez J, Sanabria D. Visual unimodal grouping mediates auditory attentional bias in visuo-spatial working memory. Acta Psychol (Amst) 2013;144:104-11. [PMID: 23792666; DOI: 10.1016/j.actpsy.2013.05.010]
Abstract
Audiovisual links in spatial attention have been reported in many previous studies. However, the effectiveness of auditory spatial cues in biasing the information encoding into visuo-spatial working memory (VSWM) is still relatively unknown. In this study, we addressed this issue by combining a cuing paradigm with a change detection task in VSWM. Moreover, we manipulated the perceptual organization of the to-be-remembered visual stimuli. We hypothesized that the auditory effect on VSWM would depend on the perceptual association between the auditory cue and the visual probe. Results showed, for the first time, a significant auditory attentional bias in VSWM. However, the effect was observed only when the to-be-remembered visual stimuli were organized in two distinctive visual objects. We propose that these results shed new light on audio-visual crossmodal links in spatial attention suggesting that, apart from the spatio-temporal contingency, the likelihood of perceptual association between the auditory cue and the visual target can have a large impact on crossmodal attentional biases.
Affiliation(s)
- Fabiano Botta
- Department of Experimental Psychology, University of Granada, Spain.
13. Zhang H, Chen L, Zhou X. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion. Front Integr Neurosci 2012;6:100. [PMID: 23133408; PMCID: PMC3488759; DOI: 10.3389/fnint.2012.00100]
Abstract
It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). Using a temporal adaptation task, we investigated whether adapting to different time intervals, conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps), would affect the subsequent implicit perception of visual timing, i.e., the inter-stimulus interval (ISI) between the two frames of a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: "element motion" for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; and "group motion" for longer ISIs, in which both discs appear to move laterally as a whole. In Experiment 1, participants adapted to either typical "element motion" (ISI = 50 ms) or typical "group motion" (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms by observing a series of paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of paired beeps with a pitch of 1000 Hz (Experiment 3). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold between the two types of motion, as determined by a pretest. Results showed that adapting to the short time interval in all situations led to more reports of "group motion" in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for auditory adaptation. These findings, which suggest an amodal representation for sub-second timing across modalities, are interpreted in the framework of the temporal pacemaker model.
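The "transitional threshold" used for the Ternus probe above can be estimated by fitting a psychometric function to the proportion of "group motion" reports across ISIs. A sketch with a simulated observer (the logistic form, parameter values, and grid ranges are assumptions for illustration, not the authors' procedure):

```python
import numpy as np

def p_group(isi, threshold, slope):
    """Probability of reporting 'group motion' for a Ternus display,
    modeled as a logistic function of the inter-stimulus interval (ms)."""
    return 1.0 / (1.0 + np.exp(-(isi - threshold) / slope))

# Simulated observer whose element/group transition sits at 120 ms.
isis = np.array([50.0, 80.0, 110.0, 140.0, 170.0, 200.0, 230.0])
observed = p_group(isis, threshold=120.0, slope=20.0)

# Grid-search fit: choose (threshold, slope) minimizing squared error;
# the fitted threshold is the ISI yielding 50% 'group motion' reports.
grid = [(np.sum((p_group(isis, t, s) - observed) ** 2), t, s)
        for t in np.arange(50.0, 250.0, 1.0)
        for s in np.arange(5.0, 50.0, 1.0)]
_, threshold_hat, slope_hat = min(grid)
```

With noiseless simulated data the grid search recovers the generating parameters exactly; with real binary responses one would fit by maximum likelihood rather than squared error.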
Affiliation(s)
- Huihui Zhang
- Department of Psychology, Center for Brain and Cognitive Sciences, Peking University, Beijing, China
- Lihan Chen
- Department of Psychology, Center for Brain and Cognitive Sciences, Peking University, Beijing, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
- Xiaolin Zhou
- Department of Psychology, Center for Brain and Cognitive Sciences, Peking University, Beijing, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
Collapse
|
14. Shi Z, Jia L, Müller HJ. Modulation of tactile duration judgments by emotional pictures. Front Integr Neurosci 2012;6:24. [PMID: 22654742; PMCID: PMC3358720; DOI: 10.3389/fnint.2012.00024]
Abstract
Judging the duration of emotional stimuli is known to be influenced by their valence and arousal values. However, whether and how perceiving emotion in one modality affects time perception in another modality is still unclear. To investigate this, we compared the influence of different types of emotional pictures (a picture of threat, a picture of disgust, or a neutral picture, presented at the start of a trial) on temporal bisection judgments of the duration of a subsequently presented vibrotactile stimulus. We found an overestimation of tactile duration following exposure to pictures of threat, but not pictures of disgust (even though these scored equally high on arousal), in a short-range temporal bisection task (range 300/900 ms). Follow-up experiments revealed that this duration-lengthening effect was abolished when the range to be bisected was increased (1000/1900 ms). However, duration overestimation was maintained in the short-range bisection task regardless of whether the interval between the visual and tactile events was short or long. This pattern is inconsistent with a general-arousal interpretation of duration distortion and suggests that crossmodal linkages in the processing of emotions and emotional regulation are two main factors underlying the manifestation of crossmodal duration modulation.
Affiliation(s)
- Zhuanghua Shi
- Department of Psychology, Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
15. Visual apparent motion can be modulated by task-irrelevant lexical information. Atten Percept Psychophys 2011;73:1010-5. [PMID: 21264743; DOI: 10.3758/s13414-010-0083-5]
Abstract
Previous studies have repeatedly demonstrated the impact of Gestalt structural grouping principles upon the parsing of motion correspondence in ambiguous apparent motion. Here, by embedding Chinese characters in a visual Ternus display that comprised two stimulus frames, we showed that the perception of visual apparent motion can be modulated by activation of task-irrelevant lexical representations. Each frame had two disks, with the second disk of the first frame and the first disk of the second frame being presented at the same location. Observers could perceive either "element motion," in which the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary, or "group motion," in which both disks appear to move laterally as a whole. More reports of group motion, as opposed to element motion, were obtained when the embedded characters formed two-character compound words than when they formed nonwords, although this lexicality effect appeared to be attenuated by the use of the same characters at the overlapping position across the two frames. Thus, grouping of visual elements in a changing world can be guided by both structural principles and prior world knowledge, including lexical information.