1. Jagini KK, Sunny MM. No reliable effect of task-irrelevant cross-modal statistical regularities on distractor suppression. Cortex 2023;161:77-92. PMID: 36913824. DOI: 10.1016/j.cortex.2023.02.001.
Abstract
Our sensory systems are known to extract and utilize statistical regularities in sensory inputs across space and time for efficient perceptual processing. Past research has shown that, within a modality, participants can exploit statistical regularities of target and distractor stimuli independently, either to enhance target processing or to suppress distractor processing. Statistical regularities of task-irrelevant stimuli from a different modality can likewise enhance target processing. However, it is not known whether distractor processing can also be suppressed on the basis of statistical regularities of a task-irrelevant stimulus from a different modality. In the present study, we investigated whether spatial (Experiment 1) and non-spatial (Experiment 2) statistical regularities of a task-irrelevant auditory stimulus could support suppression of a salient visual distractor. We used an additional-singleton visual search task with two high-probability colour-singleton distractor locations. Critically, the location of the high-probability distractor was either predicted (valid trials) or not predicted (invalid trials) by the statistical regularities of the task-irrelevant auditory stimulus. The results replicated earlier findings of distractor suppression at high-probability locations relative to locations where distractors appeared with lower probability. However, in neither experiment did the results show a reaction-time (RT) advantage for valid over invalid distractor-location trials. When tested on whether they could express awareness of the relationship between specific auditory stimuli and distractor locations, participants showed explicit awareness only in Experiment 1, although an exploratory analysis suggested possible response biases in the awareness-testing phase of that experiment. Overall, the results indicate that, irrespective of awareness of the relationship between auditory-stimulus and distractor-location regularities, task-irrelevant auditory regularities had no reliable influence on distractor suppression. A sketch of the cueing logic of such a design follows below.
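For readers who want the cueing logic spelled out, the following sketch generates an illustrative trial list for a design of this kind. It is our reconstruction, not the authors' code: the numbers of trials and locations and all probabilities are assumptions, and distractor-absent trials are omitted for brevity.

```python
import random

def make_trials(n_trials=240, n_locations=8, high_prob_locs=(0, 4),
                p_high=0.8, p_valid=0.8, seed=1):
    """Build an illustrative trial list for a cross-modal probability-
    cueing design: the distractor appears mostly at two high-probability
    locations, and an auditory cue (tone A or B) predicts which of the
    two it will be on valid trials. All parameter values are assumptions.
    """
    rng = random.Random(seed)
    low_prob_locs = [loc for loc in range(n_locations)
                     if loc not in high_prob_locs]
    trials = []
    for _ in range(n_trials):
        tone = rng.choice("AB")
        # Tone A is arbitrarily mapped to the first high-probability
        # location, tone B to the second.
        predicted = high_prob_locs[0] if tone == "A" else high_prob_locs[1]
        other = high_prob_locs[1] if tone == "A" else high_prob_locs[0]
        if rng.random() < p_high:            # high-probability trial
            if rng.random() < p_valid:       # cue predicts the location
                distractor, validity = predicted, "valid"
            else:                            # cue points to the other one
                distractor, validity = other, "invalid"
        else:                                # rare low-probability trial
            distractor, validity = rng.choice(low_prob_locs), "low-prob"
        trials.append({"tone": tone, "distractor": distractor,
                       "validity": validity})
    return trials

print(make_trials()[:2])  # e.g. two dicts with tone, location, validity
```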
Affiliation(s)
- Kishore Kumar Jagini
- Centre for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar, Gandhinagar, India
- Meera Mary Sunny
- Centre for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar, Gandhinagar, India
2. Chen S, Geyer T, Zinchenko A, Müller HJ, Shi Z. Multisensory rather than unisensory representations contribute to statistical context learning in tactile search. J Cogn Neurosci 2022;34:1702-1717. PMID: 35704553. DOI: 10.1162/jocn_a_01880.
Abstract
Using combined behavioral and EEG measures in a tactile odd-one-out search task with collocated visual items, we investigated the mechanisms underlying the facilitation of search by repeated (vs. non-repeated) spatial distractor-target configurations ("contextual cueing") when either the tactile (same-modality) or the visual (different-modality) array context was predictive of the location of the tactile singleton target. Importantly, the stimulation was multisensory in both conditions, consisting of tactile plus visual items, although the target was singled out in the tactile modality, making the visual items task-irrelevant. When the predictive context was tactile, the facilitation of search RTs by repeated configurations was accompanied by, and correlated with, enhanced lateralized ERP markers of pre-attentive (N1, N2) and focal-attentional (contralateral delay activity, CDA) processing, not only over central ("somatosensory") but also over posterior ("visual") electrode sites, although the ERP effects were less marked over visual cortex. A similar pattern of facilitated RTs and enhanced lateralized ERP components (N2 and CDA) was found when the predictive context was visual, although here the ERP effects were less marked over somatosensory cortex. These findings indicate that both somatosensory and visual cortical regions contribute to the more efficient processing of the tactile target in repeated stimulus arrays, with their involvement weighted differentially depending on the sensory modality carrying the predictive information.
Affiliation(s)
- Siyi Chen
- Ludwig-Maximilians-Universität München, Germany
3. Chen S, Shi Z, Zinchenko A, Müller HJ, Geyer T. Cross-modal contextual memory guides selective attention in visual-search tasks. Psychophysiology 2022;59:e14025. PMID: 35141899. DOI: 10.1111/psyp.14025.
Abstract
Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and non-repeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus non-repeated configurations, with comparable facilitation between visual (unimodal) and tactile (cross-modal) context cues. Further, for repeated configurations, ERPs indexing attentional allocation (PCN) and post-selective analysis of the target (CDA) showed enhanced amplitudes and reduced latencies, respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and cross-modal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and cross-modal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.
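The lateralized components named above (PCN, CDA) are obtained as contralateral-minus-ipsilateral difference waves relative to the target side. The following sketch shows that computation on synthetic data; it is our illustration, not the authors' pipeline, and the electrode pairing and array shapes are assumptions.

```python
import numpy as np

def lateralized_wave(epochs, target_side, left_ch, right_ch):
    """Contralateral-minus-ipsilateral difference wave.

    epochs      : (n_trials, n_channels, n_times) EEG epoch array
    target_side : (n_trials,) array of "left"/"right" target positions
    left_ch, right_ch : indices of one homologous electrode pair
                        (e.g., PO7/PO8 for the PCN -- an assumption here)
    """
    target_left = (target_side == "left")[:, None]  # broadcasts over time
    # For a left-side target, the right-hemisphere channel is contralateral.
    contra = np.where(target_left, epochs[:, right_ch, :], epochs[:, left_ch, :])
    ipsi = np.where(target_left, epochs[:, left_ch, :], epochs[:, right_ch, :])
    return (contra - ipsi).mean(axis=0)  # mean difference per time point

# Synthetic demo: 100 trials, 2 channels, 300 time samples
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, 2, 300))
sides = rng.choice(["left", "right"], size=100)
diff_wave = lateralized_wave(epochs, sides, left_ch=0, right_ch=1)
print(diff_wave.shape)  # (300,)
```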
Affiliation(s)
- Siyi Chen
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Zhuanghua Shi
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Artyom Zinchenko
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Hermann J Müller
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Thomas Geyer
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
4. Chen S, Shi Z, Müller HJ, Geyer T. Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search. Sci Rep 2021;11:9439. PMID: 33941832. PMCID: PMC8093296. DOI: 10.1038/s41598-021-88946-6.
Abstract
Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed in orientation from three homogeneous distractors; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the tactile distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session, importantly even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
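The drift-diffusion result can be made concrete with a small simulation. The sketch below is our illustration, not the authors' fitting procedure: all parameter values are assumed, and it simply shows how a higher drift rate and a lower decision boundary, the two changes reported above, each shorten simulated response times.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, dt=0.001,
                 noise_sd=1.0, non_decision=0.3, max_t=3.0, seed=0):
    """Simulate first-passage times of a simple symmetric drift-diffusion
    process: evidence starts at 0 and accumulates with Gaussian noise
    until it crosses +boundary (correct) or -boundary (error).
    Trials that never cross within max_t are discarded."""
    rng = np.random.default_rng(seed)
    n_steps = int(max_t / dt)
    rts, correct = [], []
    for _ in range(n_trials):
        x = 0.0
        for step in range(1, n_steps + 1):
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            if abs(x) >= boundary:
                rts.append(non_decision + step * dt)
                correct.append(x > 0)
                break
    return np.mean(rts), np.mean(correct)

# Illustrative parameters only: 'repeated context' gets a higher drift
# rate (faster evidence accumulation) and a lower boundary (less
# evidence required), mirroring the two effects reported in the abstract.
for label, drift, boundary in [("novel context   ", 1.0, 1.2),
                               ("repeated context", 1.4, 1.0)]:
    mean_rt, accuracy = simulate_ddm(drift, boundary)
    print(f"{label}: mean RT ~ {mean_rt:.3f} s, accuracy ~ {accuracy:.2f}")
```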
Affiliation(s)
- Siyi Chen
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Zhuanghua Shi
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Hermann J Müller
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Thomas Geyer
- Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
5. Geyer T, Seitz W, Zinchenko A, Müller HJ, Conci M. Why are acquired search-guiding context memories resistant to updating? Front Psychol 2021;12:650245. PMID: 33732200. PMCID: PMC7956950. DOI: 10.3389/fpsyg.2021.650245.
Abstract
Looking for goal-relevant objects in our various environments is one of the most ubiquitous tasks the human visual system has to accomplish (Wolfe, 1998). Visual search is guided by a number of separable selective-attention mechanisms that can be categorized as bottom-up driven (guidance by salient physical properties of the current stimuli) or top-down controlled (guidance by observers' "online" knowledge of search-critical object properties; e.g., Liesefeld and Müller, 2019). In addition, observers' expectations based on past experience play a significant role in goal-directed visual selection. Because sensory environments are typically stable, it is beneficial for the visual system to extract and learn the environmental regularities that are predictive of (the location of) the target stimulus. This perspective article is concerned with one of these predictive mechanisms: statistical context learning of consistent spatial patterns of target and distractor items in visual search. We review recent studies on context learning and its adaptability to incorporate consistent changes, with the aim of providing new directions for the study of the processes involved in the acquisition of search-guiding context memories and their adaptation to consistent contextual changes, from a three-pronged psychological, computational, and neurobiological perspective.
Affiliation(s)
- Thomas Geyer
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Werner Seitz
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Artyom Zinchenko
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Hermann J. Müller
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Markus Conci
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany