1. Britt N, Sun HJ. Spatial attention in three-dimensional space: A meta-analysis for the near advantage in target detection and localization. Neurosci Biobehav Rev 2024; 165:105869. PMID: 39214342. DOI: 10.1016/j.neubiorev.2024.105869.
Abstract
Studies have explored how human spatial attention is allocated in three-dimensional (3D) space. It has been demonstrated that target distance from the viewer can modulate performance in target detection and localization tasks: reaction times are shorter when targets appear nearer to the observer than when they appear farther away (i.e., a near advantage). The time has come to analyze this literature quantitatively. In the current meta-analysis, 29 studies (n = 1260 participants) examining target detection and localization across 3D space were evaluated. Moderator analyses included: detection vs. localization tasks, spatial cueing vs. uncued tasks, control of retinal size across depth, central vs. peripheral targets, real-space vs. stereoscopic vs. monocular depth environments, and inclusion of in-trial motion. The analyses revealed a near advantage for spatial attention that was affected by three moderating variables: controlling for retinal size across depth, the use of spatial cueing tasks, and the inclusion of in-trial motion. Overall, these results provide an up-to-date quantification of the effect of depth and insight into methodological differences in evaluating spatial attention.
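A rough illustration of the pooling step such a meta-analysis rests on: the sketch below computes a random-effects summary effect with the DerSimonian-Laird estimator. The per-study effect sizes are hypothetical placeholders, not values from the paper, and the choice of estimator is an assumption; the authors' actual model may differ.

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect via the DerSimonian-Laird estimator.
    d: per-study standardized effects (e.g., positive = near advantage);
    v: per-study sampling variances."""
    w = 1.0 / v                                  # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fe) ** 2)              # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_re, se, tau2

# Hypothetical per-study effects; not data from the meta-analysis itself.
d = np.array([0.42, 0.15, 0.61, 0.08, 0.33])
v = np.array([0.02, 0.05, 0.04, 0.03, 0.06])
est, se, tau2 = dersimonian_laird(d, v)
print(f"pooled d = {est:.2f} +/- {1.96 * se:.2f} (95% CI), tau^2 = {tau2:.3f}")
```

A moderator analysis would repeat this pooling within subgroups (e.g., cued vs. uncued tasks) and compare the subgroup estimates.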
Affiliation(s)
- Noah Britt
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Ontario, Canada.
- Hong-Jin Sun
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Ontario, Canada
2. Krzyś KJ, Man LLY, Wammes JD, Castelhano MS. Foreground bias: Semantic consistency effects modulated when searching across depth. Psychon Bull Rev 2024. PMID: 38806789. DOI: 10.3758/s13423-024-02515-2.
Abstract
When processing visual scenes, we tend to prioritize information in the foreground, often at the expense of background information. This foreground bias is supported by data demonstrating more fixations to the foreground, and faster and more accurate detection of targets embedded in it. However, it is also known that semantic consistency is associated with more efficient search. Here, we examined whether semantic context interacts with foreground prioritization, either amplifying or mitigating the effect of target semantic consistency. For each scene, targets were placed in the foreground or background and were either semantically consistent or inconsistent with the context of the immediately surrounding depth region. Results indicated faster response times (RTs) for foreground and for semantically consistent targets, replicating established effects. More importantly, we found that the magnitude of the semantic consistency effect was significantly smaller in the foreground than in the background region. To examine the robustness of this effect, in Experiment 2 we strengthened the reliability of semantics by increasing the proportion of targets consistent with the scene region to 80%. The overall pattern of results replicated the differential effect of semantic consistency across depth observed in Experiment 1. This suggests that foreground bias modulates the effects of semantics, such that performance is less impacted by them in near space.
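The key result is an interaction: the semantic-consistency effect (inconsistent minus consistent RT) is smaller in the foreground than in the background. A minimal sketch of how that contrast could be computed from trial-level RTs, using simulated placeholder data rather than the authors' dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial-level RTs (ms) for the 2x2 design (depth region x
# semantic consistency); values are illustrative only.
rts = {
    ("foreground", "consistent"):   rng.normal(820, 90, 200),
    ("foreground", "inconsistent"): rng.normal(850, 90, 200),  # small effect
    ("background", "consistent"):   rng.normal(900, 90, 200),
    ("background", "inconsistent"): rng.normal(980, 90, 200),  # larger effect
}

def consistency_effect(region):
    """Semantic-consistency effect: inconsistent minus consistent mean RT."""
    return rts[(region, "inconsistent")].mean() - rts[(region, "consistent")].mean()

fg, bg = consistency_effect("foreground"), consistency_effect("background")
print(f"foreground: {fg:.0f} ms, background: {bg:.0f} ms")
print(f"interaction (background - foreground): {bg - fg:.0f} ms")
```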
Affiliation(s)
- Karolina J Krzyś
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada.
- Louisa L Y Man
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
- Jeffrey D Wammes
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
- Monica S Castelhano
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
3. Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. PMID: 37164028. DOI: 10.1146/annurev-vision-100720-124033.
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes for the study of the visual and eye movement cortex are presented, with emphasis on the new insights this method provides beyond what is known about these cortical regions from conventional methods.
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
4. Peripheral target detection can be modulated by target distance but not attended distance in 3D space simulated by monocular depth cues. Vision Res 2023; 204:108160. PMID: 36529047. DOI: 10.1016/j.visres.2022.108160.
Abstract
Most studies of visuo-spatial attention present stimuli on a 2D plane, and less is known about how attention varies in 3D space. Previous studies found better peripheral detection performance for targets at a near compared to a far depth, simulated by pictorial cues and optical flow. The current study examined whether target detectability is monotonically related to distance along the depth axis, and whether the attended distance modulates the effect of target distance. We investigated these questions in two experiments that measured how apparent distance and target eccentricity affect peripheral target detection, performed either alone during passive simulated self-motion or during a simultaneous, active central car-following task. Experiment 1 found that targets at an apparent distance of 18.5 virtual meters were detected faster and more accurately than targets at 9.25 and 37 virtual meters, and that detectability declined with eccentricity. Experiment 2 examined the effect of the attended location by varying the distance between the viewer and the lead car on which participants were instructed to fixate (i.e., the headway) while equating target distances across headway conditions. Experiment 2 replicated the effects found in Experiment 1, and headway did not modulate the effect of target distance. These results are consistent with the hypothesis that target detection depends non-monotonically on the distance between the viewer and the target, and is not affected by the distance between the target and the attended location. However, target detection may also have been affected by stimulus characteristics that co-varied with apparent depth, rather than depth per se.
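One way to summarize the reported non-monotonic relation is to fit a quadratic in log-distance to the condition means and locate its vertex. The sketch below does this with illustrative RT values, not the study's actual means:

```python
import numpy as np

# Hypothetical mean detection RTs (ms) at the three simulated target
# distances (virtual meters); illustrative values only.
distances = np.array([9.25, 18.5, 37.0])
mean_rt = np.array([512.0, 488.0, 530.0])

# Quadratic fit in log-distance: an upward-opening parabola with an
# interior vertex is one signature of a non-monotonic depth effect.
a, b, _ = np.polyfit(np.log(distances), mean_rt, deg=2)
best_log_d = -b / (2 * a)  # vertex of the fitted parabola
print(f"fastest detection near {np.exp(best_log_d):.1f} virtual meters"
      if a > 0 else "no interior minimum: relation looks monotonic")
```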
5. Zou B, Liu Y, Wolfe JM. Top-down control of attention by stereoscopic depth. Vision Res 2022; 198:108061. PMID: 35576843. PMCID: PMC9665310. DOI: 10.1016/j.visres.2022.108061.
Abstract
Stereoscopic depth has a mixed record as a guiding attribute in visual attention. Visual search can be efficient if the target lies at a unique depth, whereas automatic segmentation of search arrays into different depth planes does not appear to be pre-attentive. These prior findings describe bottom-up, stimulus-driven depth guidance. Here, we ask about top-down selection of depth information. To assess the ability to direct attention to specific depth planes, Experiment 1 used the centroid judgment paradigm, which permits quantitative measures of the selective processing of items at different depths or of different colors. Experiment 1 showed that a subset of observers could deploy a specific attention filter for each of eight depth planes, suggesting that at least some observers can direct attention to a specific depth plane quite precisely. Experiment 2 used eight depth planes in a visual search experiment. Observers were encouraged to guide their attention to far or near depth planes with an informative but imperfect cue. The benefits of this probabilistic cue were small. However, this may not be a problem specific to guidance by stereoscopic depth: equivalently poor results were obtained with color. To confirm that depth guidance in search is possible, Experiment 3 presented items in only two depth planes. In this case, information about the target depth plane allowed observers to search more efficiently, replicating earlier work. We conclude that top-down guidance by stereoscopic depth is possible, but that it is hard to apply the full range of our stereoscopic ability in search.
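In the centroid paradigm, the reported centroid is modeled as an attention-weighted average of item positions, so per-plane attention weights can be recovered by regression. A simplified sketch under that assumption, with a made-up eight-plane filter standing in for real observer data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_planes, n_trials = 8, 400

# Hypothetical "true" attention filter: one weight per depth plane.
w_true = np.array([0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.06, 0.04])

# Each trial places one item per depth plane at a random position.
X = rng.uniform(-10, 10, size=(n_trials, n_planes))
# Reported centroid = attention-weighted mean of positions + motor noise.
y = X @ w_true + rng.normal(0, 0.5, n_trials)

# Least-squares filter estimate, normalized to unit sum so filters are
# comparable across observers.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat /= w_hat.sum()
print(np.round(w_hat, 2))  # should approximate w_true
```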
Affiliation(s)
- Bochao Zou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, China
- Yue Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display and School of Optoelectronics, Beijing Institute of Technology, China
- Jeremy M Wolfe
- Visual Attention Lab, Harvard Medical School and Brigham & Women's Hospital, United States
6. Seinfeld S, Feuchtner T, Pinzek J, Muller J. Impact of Information Placement and User Representations in VR on Performance and Embodiment. IEEE Trans Vis Comput Graph 2022; 28:1545-1556. PMID: 32877336. DOI: 10.1109/tvcg.2020.3021342.
Abstract
Human sensory processing is sensitive to the proximity of stimuli to the body. It is therefore plausible that these perceptual mechanisms also modulate the detectability of content in VR, depending on its location. We evaluate this in a user study and further explore the impact of the user's representation during interaction. We also analyze how embodiment and motor performance are influenced by these factors. In a dual-task paradigm, participants executed a motor task, either through virtual hands, virtual controllers, or a keyboard. Simultaneously, they detected visual stimuli appearing in different locations. We found that, while actively performing a motor task in the virtual environment, performance in detecting additional visual stimuli is higher when they are presented near the user's body. This effect is independent of how the user is represented and occurs only when the user is also engaged in a secondary task. We further found improved motor performance and increased embodiment when interacting through virtual tools and hands in VR, compared to interacting with a keyboard. This article contributes to a better understanding of the detectability of visual content in VR, depending on its location in the virtual environment, as well as the impact of different user representations on information processing, embodiment, and motor performance.
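Detection performance in dual-task designs like this is commonly summarized as d' per condition. A minimal sketch with hypothetical hit and false-alarm counts; the study's actual measures and values may differ:

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Detection sensitivity from response counts, with a standard
    log-linear correction to avoid infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for stimuli near vs. far from the body, in the
# dual-task and single-task conditions; illustrative only.
print("dual, near:  ", round(d_prime(88, 12, 10, 90), 2))
print("dual, far:   ", round(d_prime(72, 28, 10, 90), 2))
print("single, near:", round(d_prime(90, 10, 8, 92), 2))
print("single, far: ", round(d_prime(89, 11, 8, 92), 2))
```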
7. Duan Y, Thatte J, Yakovleva A, Norcia AM. Disparity in Context: Understanding how monocular image content interacts with disparity processing in human visual cortex. Neuroimage 2021; 237:118139. PMID: 33964460. PMCID: PMC10786599. DOI: 10.1016/j.neuroimage.2021.118139.
Abstract
Horizontal disparities between the two eyes' retinal images are the primary cue for depth. Commonly used random-dot stereograms (RDS) intentionally camouflage the disparity cue, breaking the correlations between monocular image structure and the depth map that are present in natural images. Because of the nonlinear nature of visual processing, it is unlikely that simple computational rules derived from RDS will be sufficient to explain binocular vision in natural environments. In order to understand the interplay between natural scene structure and disparity encoding, we used a depth-image-based-rendering technique and a library of natural 3D stereo pairs to synthesize two novel stereogram types in which monocular scene content was manipulated independently of scene depth information. The half-images of the novel stereograms comprised either random dots or scrambled natural scenes, each with the same depth maps as the corresponding natural scene stereograms. Using these stereograms in a simultaneous event-related potential (ERP) and behavioral discrimination task, we identified multiple disparity-contingent encoding stages between ~100 and 500 msec. The first disparity-sensitive evoked potential was observed at ~100 msec, after an earlier evoked potential (between ~50 and 100 msec) that was sensitive to the structure of the monocular half-images but blind to disparity. Starting at ~150 msec, disparity responses were stereogram-specific and predictive of perceptual depth. Complex features associated with natural scene content are thus at least partially coded prior to disparity information, but these features, and possibly others associated with natural scene content, interact with disparity information only after an intermediate, 2D scene-independent disparity processing stage.
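The disparity-contingent response here is operationalized as the difference potential between the 3D and 2D versions of a stereogram. A toy sketch of that contrast on simulated single-channel data, with an ad hoc onset criterion standing in for the authors' actual statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_times = 100, 600      # 600 samples = 0-600 ms at 1 kHz (assumed)
t = np.arange(n_times)            # time in ms

# Hypothetical ERPs: the 3D condition carries an extra disparity-driven
# deflection from ~100 ms onward; purely illustrative.
signal_3d = np.where(t > 100, -2.0 * np.exp(-(t - 180) ** 2 / 5000.0), 0.0)
erp_2d = rng.normal(0, 4, (n_trials, n_times))
erp_3d = signal_3d + rng.normal(0, 4, (n_trials, n_times))

# Disparity-specific response = 3D minus 2D difference potential.
diff_wave = erp_3d.mean(axis=0) - erp_2d.mean(axis=0)

# Flag samples where the difference exceeds 2 standard errors (a real
# analysis would correct for multiple comparisons across time).
se = np.sqrt(erp_3d.var(axis=0) / n_trials + erp_2d.var(axis=0) / n_trials)
onset = np.argmax(np.abs(diff_wave) > 2 * se)
print(f"first disparity-sensitive sample: ~{onset} ms")
```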
Affiliation(s)
- Yiran Duan
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305
- Jayant Thatte
- Department of Electrical Engineering, David Packard Building, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305
- Anthony M Norcia
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305
8. Salsano I, Santangelo V, Macaluso E. The lateral intraparietal sulcus takes viewpoint changes into account during memory-guided attention in natural scenes. Brain Struct Funct 2021; 226:989-1006. PMID: 33533985. PMCID: PMC8036207. DOI: 10.1007/s00429-021-02221-y.
Abstract
Previous studies demonstrated that long-term memory for object position in natural scenes guides visuo-spatial attention during subsequent search. Memory-guided attention has been associated with the activation of memory regions (the medial-temporal cortex) and of the fronto-parietal attention network. Notably, these circuits represent external locations with different frames of reference: egocentric (i.e., eyes/head-centered) in the dorsal attention network vs. allocentric (i.e., world/scene-centered) in the medial temporal cortex. Here we used behavioral measures and fMRI to assess the contribution of egocentric and allocentric spatial information during memory-guided attention. At encoding, participants were presented with real-world scenes and asked to search for and memorize the location of a high-contrast target superimposed in half of the scenes. At retrieval, participants viewed the same scenes again, now all including a low-contrast target. In scenes that had included the target at encoding, the target was presented at the same scene location. Critically, scenes were now shown either from the same or from a different viewpoint compared with encoding. This resulted in a memory-by-view design (target seen/unseen x same/different view), which allowed us to tease apart the roles of allocentric vs. egocentric signals during memory-guided attention. Retrieval-related results showed greater search accuracy for seen than unseen targets, in both the same and different views, indicating that memory contributes to visual search notwithstanding perspective changes. This view-change-independent effect was associated with activation of the left lateral intraparietal sulcus. Our results demonstrate that this parietal region mediates memory-guided attention by taking into account allocentric/scene-centered information about objects' positions in the external world.
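The logic of the memory-by-view design is that a seen-minus-unseen accuracy benefit of similar size in same-view and different-view conditions points to allocentric (scene-centered) coding. A minimal sketch of that contrast with placeholder accuracies:

```python
# Hypothetical search-accuracy proportions for the 2x2 design
# (target seen/unseen at encoding x same/different viewpoint).
acc = {
    ("seen", "same"): 0.84,   ("seen", "different"): 0.79,
    ("unseen", "same"): 0.66, ("unseen", "different"): 0.64,
}

# Memory benefit per viewpoint: comparable benefits across views would
# indicate view-independent, allocentric memory guidance.
for view in ("same", "different"):
    benefit = acc[("seen", view)] - acc[("unseen", view)]
    print(f"{view} view: memory benefit = {benefit:.2f}")
```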
Affiliation(s)
- Ilenia Salsano
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy.
- PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy.
- Valerio Santangelo
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Philosophy, Social Sciences and Education, University of Perugia, Perugia, Italy
- Emiliano Macaluso
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- ImpAct Team, Lyon Neuroscience Research Center, Lyon, France
9. Working memory for stereoscopic depth is limited and imprecise: evidence from a change detection task. Psychon Bull Rev 2020; 26:1657-1665. PMID: 31388836. DOI: 10.3758/s13423-019-01640-7.
Abstract
Most studies of visual working memory (VWM) and spatial working memory (SWM) have employed visual stimuli presented in the fronto-parallel plane, and few have involved depth perception. VWM is often considered a memory buffer for temporarily holding and manipulating visual information relating to the visual features of an object, and SWM one for holding and manipulating information about an object's spatial location. Although previous research has investigated the effect of stereoscopic depth on VWM, the question of how depth positions are stored in working memory has not been systematically investigated, leaving gaps in the existing literature. Here, we explore working memory for depth using a change detection task. Memory items were presented at various stereoscopic depth planes perpendicular to the line of sight, with one item per depth plane. Participants judged whether the depth position of the target (one of the memory items) had changed. The results showed a conservative response bias: observers tended to make 'no change' responses when detecting changes in depth. In addition, we found that, as in VWM, change detection accuracy degraded with the number of memory items presented, but accuracy was much lower than that reported for VWM, suggesting that the storage of depth information is severely limited and less precise than that of visual features. Detection sensitivity was higher for the nearest and farthest depths, and was better when the probe was presented along with the other items originally in the memory array, indicating that how well a given depth position can be stored in working memory depends on its relation to the other depth positions.
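The conservative bias and limited sensitivity reported here are standard signal-detection quantities (criterion c and d'). A small sketch computing both from hypothetical hit and false-alarm rates at several memory set sizes:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf

def sdt(hit_rate, fa_rate):
    """Sensitivity d' and criterion c; c > 0 marks a conservative bias,
    i.e., a tendency to respond 'no change'."""
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical rates as set size grows; illustrative of the reported
# pattern (accuracy falls with load, bias stays conservative).
for set_size, (hr, far) in {2: (0.75, 0.15), 4: (0.60, 0.20), 6: (0.50, 0.25)}.items():
    d, c = sdt(hr, far)
    print(f"set size {set_size}: d' = {d:.2f}, c = {c:.2f}")
```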
10. Pedale T, Macaluso E, Santangelo V. Enhanced insular/prefrontal connectivity when resisting from emotional distraction during visual search. Brain Struct Funct 2019; 224:2009-2026. PMID: 31111208. PMCID: PMC6591190. DOI: 10.1007/s00429-019-01873-1.
Abstract
Previous literature demonstrated that the processing of emotional stimuli can interfere with goal-directed behavior. This has been shown primarily in the context of working memory tasks, but "emotional distraction" may also affect other processes, such as the orienting of visuo-spatial attention. During fMRI, we presented human subjects with emotional stimuli embedded within complex everyday-life visual scenes. Emotional stimuli could be either the current target to be searched for or task-irrelevant distractors. Behavioral and eye-movement data revealed faster detection of emotional than neutral targets. Emotional distractors were fixated later and for a shorter duration than emotional targets, suggesting efficient top-down control in avoiding emotional distraction. The fMRI data demonstrated that negative (but not positive) stimuli were mandatorily processed by limbic/para-limbic regions (namely, the right amygdala and the left insula), irrespective of current task relevance: these regions activated for both emotional targets and distractors. However, analyses of inter-regional connectivity revealed a functional coupling between the left insula and the right prefrontal cortex that increased specifically during search in the presence of emotional distractors. This indicates that increased functional coupling between affective limbic/para-limbic regions and control regions in the frontal cortex can attenuate emotional distraction, permitting the allocation of spatial attentional resources toward task-relevant neutral targets in the presence of distracting emotional signals.
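Task-dependent coupling of this kind is typically tested with a psychophysiological interaction (PPI) style regression: target ~ seed + task + seed x task. The sketch below illustrates the idea on simulated time series; the regressor construction is a generic assumption, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
n_scans = 200

seed = rng.normal(size=n_scans)                      # seed-region signal (e.g., insula)
task = (np.arange(n_scans) % 40 < 20).astype(float)  # 1 = emotional-distractor blocks
ppi = seed * (task - task.mean())                    # interaction regressor
# Simulated target-region signal with genuine task-dependent coupling.
target = 0.2 * seed + 0.5 * ppi + rng.normal(size=n_scans)

# GLM with intercept, seed, task, and interaction terms: a reliable
# interaction weight means seed-target coupling increases during the
# distractor blocks specifically.
X = np.column_stack([np.ones(n_scans), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI (coupling-change) estimate: {beta[3]:.2f}")
```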
Affiliation(s)
- Tiziana Pedale
- Department of Psychology, Sapienza University of Rome, Via dei Marsi, 78, 00158, Rome, Italy
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina, 306, 00179, Rome, Italy
- Umeå Center for Functional Brain Imaging (UFBI), Department of Integrative Medical Biology, Umeå University, 901 87, Umeå, Sweden
- Emiliano Macaluso
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina, 306, 00179, Rome, Italy
- ImpAct Team, Lyon Neuroscience Research Center, 16, av. du Doyen Lépine, 69676, Bron Cedex, France
- Valerio Santangelo
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina, 306, 00179, Rome, Italy
- Department of Philosophy, Social Sciences and Education, University of Perugia, Piazza G. Ermini, 1, 06123, Perugia, Italy
11. Duan Y, Yakovleva A, Norcia AM. Determinants of neural responses to disparity in natural scenes. J Vis 2018; 18:21. PMID: 29677337. PMCID: PMC6097643. DOI: 10.1167/18.3.21.
Abstract
We studied disparity-evoked responses in natural scenes using high-density electroencephalography (EEG) in an event-related design. Thirty natural scenes that mainly included outdoor settings with trees and buildings were used. Twenty-four subjects viewed a series of trials composed of sequential two-alternative temporal forced-choice presentation of two different versions (two-dimensional [2D] vs. three-dimensional [3D]) of the same scene interleaved by a scrambled image with the same power spectrum. Scenes were viewed orthostereoscopically at 3 m through a pair of shutter glasses. After each trial, participants indicated with a key press which version of the scene was 3D. Performance on the discrimination was >90%. Participants who were more accurate also tended to respond faster; scenes that were reported more accurately as 3D also led to faster reaction times. We compared visual evoked potentials elicited by scrambled, 2D, and 3D scenes using reliable component analysis to reduce dimensionality. The disparity-evoked response to natural scene stimuli, measured from the difference potential between 2D and 3D scenes, comprised a sustained relative negativity in the dominant response component. The magnitude of the disparity-specific response was correlated with the observer's stereoacuity. Scenes with more homogeneous depth maps also tended to elicit large disparity-specific responses. Finally, the magnitude of the disparity-specific response was correlated with the magnitude of the differential response between scrambled and 2D scenes, suggesting that monocular higher-order scene statistics modulate disparity-specific responses.
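Reliable component analysis finds spatial filters that emphasize signal repeating across trials over trial-unique noise, via a generalized eigenproblem. A deliberately simplified sketch on synthetic EEG, using split-half cross-covariance in place of the full pairwise computation; treat it as an assumption-laden illustration, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n_trials, n_chan, n_times = 60, 32, 300

# Synthetic EEG: one reliable spatial pattern repeats across trials,
# buried in independent channel noise.
pattern = rng.normal(size=n_chan)
latent = rng.normal(size=(n_trials, 1, n_times))
eeg = latent * pattern[None, :, None] + 3 * rng.normal(size=(n_trials, n_chan, n_times))

xa, xb = eeg[0::2].mean(0), eeg[1::2].mean(0)   # split-half trial averages
r_xy = (xa @ xb.T + xb @ xa.T) / 2              # symmetrized cross-covariance
r_xx = sum(x @ x.T for x in eeg) / n_trials     # pooled within-trial covariance

# Generalized eigenproblem: maximize reliable (cross-trial) covariance
# relative to total covariance; the top eigenvector is the most
# reliable component's spatial filter.
vals, vecs = eigh(r_xy, r_xx)
w = vecs[:, -1]
print(f"|corr| with planted pattern: {abs(np.corrcoef(w, pattern)[0, 1]):.2f}")
```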
Affiliation(s)
- Yiran Duan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, CA, USA
12. Nardo D, Console P, Reverberi C, Macaluso E. Competition between Visual Events Modulates the Influence of Salience during Free-Viewing of Naturalistic Videos. Front Hum Neurosci 2016; 10:320. PMID: 27445760. PMCID: PMC4923118. DOI: 10.3389/fnhum.2016.00320.
Abstract
In daily life the brain is exposed to a large amount of external signals that compete for processing resources. The attentional system can select relevant information based on many possible combinations of goal-directed and stimulus-driven control signals. Here, we investigate the behavioral and physiological effects of competition between distinctive visual events during free viewing of naturalistic videos. Nineteen healthy subjects underwent functional magnetic resonance imaging (fMRI) while viewing short video clips of everyday-life situations, without any explicit goal-directed task. Each video contained either a single semantically relevant event in the left or right hemifield (Lat-trials), or multiple distinctive events in both hemifields (Multi-trials). For each video, we computed a salience index to quantify the lateralization bias due to stimulus-driven signals, and a gaze index (based on eye-tracking data) to quantify the efficacy of the stimuli in capturing attention to either side. Behaviorally, our results showed that stimulus-driven salience influenced spatial orienting only in the presence of multiple competing events (Multi-trials). fMRI results showed that the processing of competing events engaged the ventral attention network, including the right temporoparietal junction (R TPJ) and the right inferior frontal cortex. Salience modulated activity in the visual cortex, but only in the presence of competing events, while the orienting efficacy of Multi-trials affected activity in both the visual cortex and the posterior parietal cortex (PPC). We conclude that in the presence of multiple competing events, the ventral attention system detects semantically relevant events, while regions of the dorsal system use saliency signals to select relevant locations and guide spatial orienting.
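The salience and gaze indices described here are lateralization measures. The sketch below is a plausible reconstruction of such indices; the definitions and data are assumptions for illustration, since the abstract does not give the exact formulas:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical saliency map (rows x cols) for one video frame, and
# horizontal gaze samples (pixels) from eye tracking.
saliency = rng.random((72, 128))
fix_x = rng.normal(70, 20, 500)

mid = saliency.shape[1] // 2
right, left = saliency[:, mid:].sum(), saliency[:, :mid].sum()
salience_index = (right - left) / (right + left)  # +1 = all salience on the right

gaze_index = 2 * np.mean(fix_x >= mid) - 1        # +1 = all fixations on the right

print(f"salience index: {salience_index:+.2f}, gaze index: {gaze_index:+.2f}")
```

The behavioral claim then amounts to the two indices correlating across videos for Multi-trials but not for Lat-trials.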
Affiliation(s)
- Davide Nardo
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
- Institute of Cognitive Neuroscience, University College London, London, UK
- Paola Console
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
- Carlo Reverberi
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- NeuroMi-Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- Emiliano Macaluso
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
- ImpAct Team, Lyon Neuroscience Research Center, Lyon, France