1
Towler JR, Morgan D, Davies-Thompson J. Impaired Face Feature-to-Location Statistical Learning and Single-Feature Discrimination in Developmental Prosopagnosia. Brain Sci 2024; 14:815. [PMID: 39199506] [PMCID: PMC11352419] [DOI: 10.3390/brainsci14080815]
Abstract
Individuals with developmental prosopagnosia (DP) experience severe face memory deficits that are often accompanied by impairments in face perception. Human facial features are discriminated more accurately when they are presented at the visual-field locations in which they typically appear during everyday face viewing than at locations in which they do not (i.e., better performance for eyes in the upper visual field and for mouths in the lower visual field). These feature-to-location tuning effects (FLEs) can be explained by a retinotopically organised visual statistical learning mechanism. We had a large group of DP participants (N = 64), a control group (N = 74), and a group of individuals with a mild form of DP (N = 58) complete a single-feature discrimination task to determine whether face perception deficits in DP can be accounted for by an impairment in face feature-to-location tuning. Individuals with DP did not show significant FLEs, suggesting a marked impairment in the underlying visual statistical learning mechanism. In contrast, the mild DP group showed normal FLEs that did not differ from those of the control group. Both DP groups were impaired at single-feature processing (SFP) compared with the control group. We also examined the effects of age on FLEs and SFP.
Affiliation(s)
- John R. Towler
- School of Psychology, Faculty of Medicine, Human & Health Sciences, Swansea University, Swansea SA2 8PP, UK (J.D.-T.)
2
Broda MD, Borovska P, de Haas B. Individual differences in face salience and rapid face saccades. J Vis 2024; 24:16. [PMID: 38913016] [PMCID: PMC11204136] [DOI: 10.1167/jov.24.6.16]
Abstract
Humans saccade to faces in their periphery faster than to other types of objects. Previous research has highlighted the potential importance of the upper face region in this phenomenon, but it remains unclear whether this is driven by the eye region. Similarly, it remains unclear whether such rapid saccades are exclusive to faces or generalize to other semantically salient stimuli. Furthermore, it is unknown whether individuals differ in their face-specific saccadic reaction times and, if so, whether such differences could be linked to differences in face fixations during free viewing. To explore these open questions, we invited 77 participants to perform a saccadic choice task in which we contrasted faces as well as other salient objects, particularly isolated face features and text, with cars. Additionally, participants freely viewed 700 images of complex natural scenes in a separate session, which allowed us to determine the individual proportion of first fixations falling on faces. For the saccadic choice task, we found advantages for all categories of interest over cars. However, this effect was most pronounced for images of full faces. Full faces also elicited faster saccades compared with eyes, showing that isolated eye regions are not sufficient to elicit face-like responses. Additionally, we found consistent individual differences in saccadic reaction times toward faces that weakly correlated with face salience during free viewing. Our results suggest a link between semantic salience and rapid detection, but underscore the unique status of faces. Further research is needed to resolve the mechanisms underlying rapid face saccades.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Petra Borovska
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
3
Chakravarthula PN, Eckstein MP. A preference to look closer to the eyes is associated with a position-invariant face neural code. Psychon Bull Rev 2024; 31:1268-1279. [PMID: 37930609] [PMCID: PMC11192658] [DOI: 10.3758/s13423-023-02412-0]
Abstract
When looking at faces, humans invariably move their eyes to a consistent preferred first fixation location on the face. While most people have the preferred fixation location just below the eyes, a minority have it between the nose-tip and mouth. Little is known about whether these long-term differences in the preferred fixation location are associated with distinct neural representations of faces. To study this, we used a gaze-contingent face adaptation aftereffect (FAE) paradigm to test two groups of observers, one with the mean preferred fixation location closer to the eyes (upper lookers) and the other closer to the mouth (lower lookers). Participants were required to maintain their gaze at either their own group's mean preferred fixation location or that of the other group during adaptation and testing. The two possible fixation locations were 3.6° apart on the face. We measured the FAE when adaptation and testing occurred while participants maintained fixation at either the same or different locations on the face. Both groups showed equally strong adaptation effects when adaptation and testing occurred at the same fixation location. Crucially, only the upper lookers showed a partial transfer of the FAE across the two fixation locations, when adaptation occurred at the eyes. Lower lookers showed no spatial transfer of the FAE irrespective of the adaptation position. Given the classic finding that neural tuning becomes increasingly position invariant at higher stages of the visual hierarchy, this result suggests that differences in the preferred fixation location are associated with distinct neural representations of faces.
Affiliation(s)
- Puneeth N Chakravarthula
- Psychological and Brain Science, University of California, Santa Barbara, CA, USA.
- Department of Radiology, Washington University in St. Louis, 4525 Scott Ave, St. Louis, MO 63110, USA.
- Miguel P Eckstein
- Psychological and Brain Science, University of California, Santa Barbara, CA, USA.
4
Puce A. From Motion to Emotion: Visual Pathways and Potential Interconnections. J Cogn Neurosci 2024:1-24. [PMID: 38527078] [PMCID: PMC11416577] [DOI: 10.1162/jocn_a_02141]
Abstract
The two visual pathway description of [Ungerleider, L. G., & Mishkin, M. Two cortical visual systems. In D. J. Dingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549-586). Cambridge, MA: MIT, 1982] changed the course of late 20th century systems and cognitive neuroscience. Here, I try to reexamine our laboratory's work through the lens of the [Pitcher, D., & Ungerleider, L. G. Evidence for a third visual pathway specialized for social perception. Trends in Cognitive Sciences, 25, 100-110, 2021] new third visual pathway. I also briefly review the literature related to brain responses to static and dynamic visual displays, visual stimulation involving multiple individuals, and compare existing models of social information processing for the face and body. In this context, I examine how the posterior STS might generate unique social information relative to other brain regions that also respond to social stimuli. I discuss some of the existing challenges we face with assessing how information flow progresses between structures in the proposed functional pathways and how some stimulus types and experimental designs may have complicated our data interpretation and model generation. I also note a series of outstanding questions for the field. Finally, I examine the idea of a potential expansion of the third visual pathway, to include aspects of previously proposed "lateral" visual pathways. Doing this would yield a more general entity for processing motion/action (i.e., "[inter]action") that deals with interactions between people, as well as people and objects. In this framework, a brief discussion of potential hemispheric biases for function, and different forms of neuropsychological impairments created by focal lesions in the posterior brain is highlighted to help situate various brain regions into an expanded [inter]action pathway.
5
Broda MD, de Haas B. Individual differences in human gaze behavior generalize from faces to objects. Proc Natl Acad Sci U S A 2024; 121:e2322149121. [PMID: 38470925] [PMCID: PMC10963009] [DOI: 10.1073/pnas.2322149121]
Abstract
Individuals differ in where they fixate on a face, with some looking closer to the eyes while others prefer the mouth region. These individual biases are highly robust, generalize from the lab to the outside world, and have been associated with social cognition and associated disorders. However, it is unclear whether these biases are specific to faces or influenced by domain-general mechanisms of vision. Here, we juxtaposed these hypotheses by testing whether individual face fixation biases generalize to inanimate objects. We analyzed >1.8 million fixations toward faces and objects in complex natural scenes from 405 participants tested in multiple labs. Consistent interindividual differences in fixation positions were highly intercorrelated across faces and objects in all samples. Observers who fixated closer to the eye region also fixated higher on inanimate objects, and vice versa. Furthermore, the interindividual spread of fixation positions scaled with target size in precisely the same, non-linear manner for faces and objects. These findings contradict a purely domain-specific account of individual face gaze. Instead, they suggest significant domain-general contributions to the individual way we look at faces, a finding with potential relevance for basic vision, face perception, social cognition, and associated clinical conditions.
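The core analysis this abstract describes, correlating each observer's vertical fixation bias on faces with the same observer's bias on objects, can be sketched on simulated data; all numbers below are hypothetical and only illustrate the logic, not the study's actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-observer mean vertical fixation positions (0 = bottom
# of the target, 1 = top), one value for faces and one for inanimate
# objects; a shared individual bias drives both, plus measurement noise.
n_observers = 405
shared_bias = rng.normal(0.6, 0.1, n_observers)
face_fix = shared_bias + rng.normal(0, 0.03, n_observers)
object_fix = shared_bias + rng.normal(0, 0.03, n_observers)

# Pearson correlation across observers: a high positive value means that
# observers who fixate higher on faces also fixate higher on objects.
r = np.corrcoef(face_fix, object_fix)[0, 1]
print(round(r, 3))
```

A domain-specific account would predict `r` near zero here; a shared, domain-general bias produces a strong positive correlation.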
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, 35032 Marburg, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, 35032 Marburg, Germany
6
Singh G, Ramanathan M. Repurposing Artificial Intelligence Tools for Disease Modeling: Case Study of Face Recognition Deficits in Neurodegenerative Diseases. Clin Pharmacol Ther 2023; 114:862-873. [PMID: 37394678] [DOI: 10.1002/cpt.2987]
Abstract
Face recognition deficits occur in diseases such as prosopagnosia, autism, Alzheimer's disease, and dementias. The objective of this study was to evaluate whether degrading the architecture of artificial intelligence (AI) face recognition algorithms can model deficits in diseases. Two established face recognition models, a convolutional-classification neural network (C-CNN) and a Siamese network (SN), were trained on the FEI faces data set (~14 images/person for 200 persons). The trained networks were perturbed by reducing weights (weakening) and node count (lesioning) to emulate brain tissue dysfunction and lesions, respectively. Accuracy assessments were used as surrogates for face recognition deficits. The findings were compared with clinical outcomes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) data set. Face recognition accuracy decreased gradually for weakening factors less than 0.55 for C-CNN and 0.85 for SN; rapid accuracy loss occurred at higher values. C-CNN accuracy was similarly affected by weakening any convolutional layer, whereas SN accuracy was more sensitive to weakening of the first convolutional layer. SN accuracy declined gradually, with a rapid drop when nearly all nodes were lesioned. C-CNN accuracy declined rapidly when as few as 10% of nodes were lesioned. Both C-CNN and SN were more sensitive to lesioning of the first convolutional layer. Overall, SN was more robust than C-CNN, and the findings from SN experiments were concordant with ADNI results. As predicted from modeling, the brain network failure quotient was related to key clinical outcome measures for cognition and functioning. Perturbation of AI networks is a promising method for modeling disease progression effects on complex cognitive outcomes.
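The two perturbations described here, weakening (scaling weights toward zero) and lesioning (zeroing weights outright), can be illustrated on a toy linear classifier. This is a minimal sketch under invented parameters, not the paper's C-CNN or SN setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained recognition network: class templates act as
# learned weights, plus a fixed per-class bias term.
n_classes, n_features, n_samples = 10, 64, 500
templates = rng.normal(0, 1, (n_classes, n_features))
bias = rng.normal(0, 2, n_classes)

# Synthetic evaluation set: noisy instances of each class template.
y = rng.integers(0, n_classes, n_samples)
X = templates[y] + rng.normal(0, 0.5, (n_samples, n_features))

def accuracy(W, b):
    """Fraction of samples whose highest-scoring class matches the label."""
    return float(np.mean(np.argmax(X @ W.T + b, axis=1) == y))

intact = accuracy(templates, bias)

# "Weakening": scale all weights down, emulating diffuse tissue dysfunction.
weakened = accuracy(0.05 * templates, bias)

# "Lesioning": zero out ~90% of the weights, emulating focal damage.
mask = rng.random(templates.shape) < 0.9
lesioned = accuracy(np.where(mask, 0.0, templates), bias)

print(intact, weakened, lesioned)
```

Both perturbations leave the architecture in place but degrade accuracy, which is the surrogate measure the study tracks against clinical outcomes.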
Affiliation(s)
- Gargi Singh
- Department of Pharmaceutical Sciences, University at Buffalo, The State University of New York, Buffalo, New York, USA
- Murali Ramanathan
- Department of Pharmaceutical Sciences, University at Buffalo, The State University of New York, Buffalo, New York, USA
7
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254] [DOI: 10.1146/annurev-vision-100120-025301]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
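The basic decoding logic this review surveys, training a classifier to separate stimulus conditions from multivariate neural patterns, reduces to something like the nearest-centroid sketch below. The data are simulated and every parameter is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated multivariate "neural" responses: 40 trials per condition over
# 100 channels, with a condition-specific pattern buried in trial noise.
n_trials, n_channels = 40, 100
pattern = rng.normal(0, 1, n_channels)
cond_a = 0.5 * pattern + rng.normal(0, 1, (n_trials, n_channels))
cond_b = -0.5 * pattern + rng.normal(0, 1, (n_trials, n_channels))

# Split trials into independent train and test halves, then fit one
# centroid (mean pattern) per condition on the training half.
train_a, test_a = cond_a[:20], cond_a[20:]
train_b, test_b = cond_b[:20], cond_b[20:]
mu_a, mu_b = train_a.mean(axis=0), train_b.mean(axis=0)

def predict_is_a(x):
    # Assign each test pattern to the nearer training centroid.
    return np.linalg.norm(x - mu_a, axis=1) < np.linalg.norm(x - mu_b, axis=1)

# Decoding accuracy above chance (0.5) implies the conditions are
# represented differently in the simulated activity patterns.
acc = (predict_is_a(test_a).mean() + (~predict_is_a(test_b)).mean()) / 2
print(acc)
```

Real decoding pipelines differ mainly in scale (cross-validation folds, time-resolved analyses, more powerful classifiers), not in this core inference.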
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
8
Broda MD, de Haas B. Reading the mind in the nose. Iperception 2023; 14:20416695231163449. [PMID: 36960407] [PMCID: PMC10028657] [DOI: 10.1177/20416695231163449]
Abstract
Humans infer mental states and traits from faces and their expressions. Previous research focused on the role of eyes and mouths in this process, even though most observers fixate somewhere in between. Here, we report that ratings of the nose region are surprisingly consistent with those for the full face and even with subjective feelings of the nose bearer. We propose the nose as central to faces and their perception.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Maximilian Davide Broda, Department of Psychology, Justus Liebig University, Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany.
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
9
Broda MD, Haddad T, de Haas B. Quick, eyes! Isolated upper face regions but not artificial features elicit rapid saccades. J Vis 2023; 23:5. [PMID: 36749582] [PMCID: PMC9919614] [DOI: 10.1167/jov.23.2.5]
Abstract
Human faces elicit faster saccades than objects or animals, resonating with the great importance of faces for our species. The underlying mechanisms are largely unclear. Here, we test two hypotheses based on previous findings. First, ultra-rapid saccades toward faces may not depend on the presence of the whole face, but on the upper face region containing the eyes. Second, ultra-rapid saccades toward faces (and possibly face parts) may emerge from our extensive experience with this stimulus and thus extend to glasses and masks, artificial features frequently encountered as part of a face. To test these hypotheses, we asked 43 participants to complete a saccadic choice task, which contrasted images of whole, upper, and lower faces, face masks, and glasses with car images. The resulting data confirmed ultra-rapid saccades for isolated upper face regions, but not for artificial facial features.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Theresa Haddad
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
10
Preference for horizontal information in faces predicts typical variations in face recognition but is not impaired in developmental prosopagnosia. Psychon Bull Rev 2023; 30:261-268. [PMID: 36002717] [PMCID: PMC9971097] [DOI: 10.3758/s13423-022-02163-4]
Abstract
Face recognition is strongly influenced by the processing of orientation structure in the face image. Faces are much easier to recognize when they are filtered to include only horizontally oriented information compared with vertically oriented information. Here, we investigate whether preferences for horizontal information in faces are related to face recognition abilities in a typical sample (Experiment 1), and whether such preferences are lacking in people with developmental prosopagnosia (DP; Experiment 2). Experiment 1 shows that preferences for horizontal face information are linked to face recognition abilities in a typical sample, with weak evidence of face-selective contributions. Experiment 2 shows that preferences for horizontal face information are comparable in control and DP groups. Our study suggests that preferences for horizontal face information are related to variations in face recognition abilities in the typical range, and that these preferences are not aberrant in DP.
11
Kleiser R, Raffelsberger T, Trenkler J, Meckel S, Seitz R. What influence do face masks have on reading emotions in faces? Neuroimage Rep 2022. [DOI: 10.1016/j.ynirp.2022.100141]
12
Linka M, Broda MD, Alsheimer T, de Haas B, Ramon M. Characteristic fixation biases in Super-Recognizers. J Vis 2022; 22:17. [PMID: 35900724] [PMCID: PMC9344214] [DOI: 10.1167/jov.22.8.17]
Abstract
Neurotypical observers show large and reliable individual differences in gaze behavior along several semantic object dimensions. Individual gaze behavior toward faces has been linked to face identity processing ability, including in neurotypical observers. Here, we investigated potential gaze biases in Super-Recognizers (SRs), individuals with exceptional face identity processing skills. Ten SRs, identified with a novel conservative diagnostic framework, and 43 controls freely viewed 700 complex scenes depicting more than 5000 objects. First, we tested whether SRs and controls differ in fixation biases along four semantic dimensions: faces, text, objects being touched, and bodies. Second, we tested potential group differences in fixation biases toward eyes and mouths. Finally, we tested whether SRs fixate closer to the theoretical optimal fixation point for face identification. SRs showed a stronger gaze bias toward faces and away from text and touched objects, starting from the first fixation onward. Further, SRs spent a significantly smaller proportion of first fixations and dwell time toward faces on mouths but did not differ in dwell time or first fixations devoted to eyes. Face fixations of SRs also fell significantly closer to the theoretical optimal fixation point for identification, just below the eyes. Our findings suggest that reliable superiority for face identity processing is accompanied by early fixation biases toward faces and preferred saccadic landing positions close to the theoretical optimum for face identification. We discuss future directions to investigate the functional basis of individual fixation behavior and face identity processing ability.
Affiliation(s)
- Marcel Linka
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Tamara Alsheimer
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Applied Face Cognition Lab, University of Lausanne, Institute of Psychology, Lausanne, Switzerland
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Meike Ramon
- Applied Face Cognition Lab, University of Lausanne, Institute of Psychology, Lausanne, Switzerland
13
One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021; 227:1423-1438. [PMID: 34792643] [DOI: 10.1007/s00429-021-02420-7]
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct parts of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are parts of the same object, and their presence inevitably covaries in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
14
Abstract
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
Affiliation(s)
- Daniel Kaiser
- Justus-Liebig-Universität Gießen, Germany; Philipps-Universität Marburg, Germany; University of York, United Kingdom
- Radoslaw M Cichy
- Freie Universität Berlin, Germany; Humboldt-Universität zu Berlin, Germany; Bernstein Centre for Computational Neuroscience Berlin, Germany
15
Poltoratski S, Kay K, Finzi D, Grill-Spector K. Holistic face recognition is an emergent phenomenon of spatial processing in face-selective regions. Nat Commun 2021; 12:4745. [PMID: 34362883] [PMCID: PMC8346587] [DOI: 10.1038/s41467-021-24806-1]
Abstract
Spatial processing by receptive fields is a core property of the visual system. However, it is unknown how spatial processing in high-level regions contributes to recognition behavior. As face inversion is thought to disrupt typical holistic processing of information in faces, we mapped population receptive fields (pRFs) with upright and inverted faces in the human visual system. Here we show that in face-selective regions, but not primary visual cortex, pRFs and overall visual field coverage are smaller and shifted downward in response to face inversion. From these measurements, we successfully predict the relative behavioral detriment of face inversion at different positions in the visual field. This correspondence between neural measurements and behavior demonstrates how spatial processing in face-selective regions may enable holistic perception. These results not only show that spatial processing in high-level visual regions is dynamically used towards recognition, but also suggest a powerful approach for bridging neural computations by receptive fields to behavior.
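The pRF logic in this abstract can be sketched with the standard 2D Gaussian pRF model, in which the response to a binary stimulus aperture is the aperture's overlap with the Gaussian. The parameters below are purely illustrative, not the paper's estimates; they only show how a smaller, downward-shifted pRF (as reported for inverted faces) covers less of the upper visual field:

```python
import numpy as np

# Visual-field grid in degrees of visual angle.
xs = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(xs, xs)

def prf_response(aperture, x0, y0, sigma):
    """Normalized overlap between a binary aperture and a Gaussian pRF."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return float((aperture * g).sum() / g.sum())

# Stimulus confined to the upper visual field (y > 0).
upper_stim = (Y > 0).astype(float)

# Illustrative "upright-face" pRF vs. a smaller, downward-shifted
# "inverted-face" pRF in a face-selective region.
r_upright = prf_response(upper_stim, 0.0, 1.0, 3.0)
r_inverted = prf_response(upper_stim, 0.0, -1.0, 2.0)
print(r_upright, r_inverted)
```

Summing such coverage differences across a map of pRFs is, in spirit, how field-of-view changes can be translated into predicted position-specific behavioral costs of inversion.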
Affiliation(s)
| | - Kendrick Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA
| | - Dawn Finzi
- Department of Psychology, Stanford University, Stanford, CA, USA
| | - Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| |
16
de Haas B, Sereno MI, Schwarzkopf DS. Inferior Occipital Gyrus Is Organized along Common Gradients of Spatial and Face-Part Selectivity. J Neurosci 2021; 41:5511-5521. [PMID: 34016715] [PMCID: PMC8221599] [DOI: 10.1523/jneurosci.2415-20.2021]
Abstract
The ventral visual stream of the human brain is subdivided into patches with categorical stimulus preferences, like faces or scenes. However, the functional organization within these areas is less clear. Here, we used functional magnetic resonance imaging and vertex-wise tuning models to independently probe spatial and face-part preferences in the inferior occipital gyrus (IOG) of healthy adult males and females. The majority of responses were well explained by Gaussian population tuning curves for both retinotopic location and the preferred relative position within a face. Parameter maps revealed a common gradient of spatial and face-part selectivity, with the width of tuning curves drastically increasing from posterior to anterior IOG. Tuning peaks clustered more idiosyncratically but were also correlated across maps of visual and face space. Preferences for the upper visual field went along with significantly increased coverage of the upper half of the face, matching recently discovered biases in human perception. Our findings reveal a broad range of neural face-part selectivity in IOG, ranging from narrow to "holistic." IOG is functionally organized along this gradient, which in turn is correlated with retinotopy.

SIGNIFICANCE STATEMENT: Brain imaging has revealed a lot about the large-scale organization of the human brain and visual system. For example, occipital cortex contains map-like representations of the visual field, while neurons in ventral areas cluster into patches with categorical preferences, like faces or scenes. Much less is known about the functional organization within these areas. Here, we focused on a well-established face-preferring area, the inferior occipital gyrus (IOG). A novel neuroimaging paradigm allowed us to map the retinotopic and face-part tuning of many recording sites in IOG independently. We found a steep posterior-anterior gradient of decreasing face-part selectivity, which correlated with retinotopy. This suggests the functional role of ventral areas is not uniform and may follow retinotopic "protomaps."
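For readers unfamiliar with tuning-model fits, the core idea behind the vertex-wise analysis can be sketched in a few lines. The following is an illustrative grid-search fit of a 1-D Gaussian tuning curve to synthetic responses; the function names, grids, and noise level are assumptions made for this demo, not the authors' actual fitting pipeline.

```python
import numpy as np

def gaussian_tuning(x, peak, width):
    # Gaussian tuning curve over a 1-D stimulus axis
    # (e.g., relative position within a face)
    return np.exp(-0.5 * ((x - peak) / width) ** 2)

def fit_tuning(x, responses, peak_grid, width_grid):
    # Grid search: return the (peak, width) whose predicted curve
    # best correlates with the observed responses
    best, best_r = (None, None), -np.inf
    for p in peak_grid:
        for w in width_grid:
            r = np.corrcoef(gaussian_tuning(x, p, w), responses)[0, 1]
            if r > best_r:
                best_r, best = r, (p, w)
    return best

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 41)          # 0 = bottom of face, 1 = top
# Simulated recording site preferring the upper face, narrowly tuned
responses = gaussian_tuning(x, 0.8, 0.2) + rng.normal(0, 0.05, x.size)

peak, width = fit_tuning(x, responses,
                         peak_grid=np.linspace(0.0, 1.0, 21),
                         width_grid=np.linspace(0.05, 0.5, 10))
print(peak, width)
```

Repeating such a fit per vertex yields the peak and width maps whose gradients the study describes; narrow widths correspond to part-selective sites, broad widths to more "holistic" ones.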
Affiliation(s)
- Benjamin de Haas: Department of Psychology, Justus Liebig Universität, 35394 Giessen, Germany; Experimental Psychology, University College London, London WC1E 6BT, United Kingdom
- Martin I Sereno: Experimental Psychology, University College London, London WC1E 6BT, United Kingdom; SDSU Imaging Center, San Diego State University, San Diego, California 92182
- D Samuel Schwarzkopf: Experimental Psychology, University College London, London WC1E 6BT, United Kingdom; School of Optometry and Vision Science, University of Auckland, Auckland 1142, New Zealand

17
Kauffmann L, Khazaz S, Peyrin C, Guyader N. Isolated face features are sufficient to elicit ultra-rapid and involuntary orienting responses toward faces. J Vis 2021; 21:4. PMID: 33544121; PMCID: PMC7873494; DOI: 10.1167/jov.21.2.4.
Abstract
Previous studies have shown that face stimuli influence the programming of eye movements by eliciting involuntary and extremely fast saccades toward them. The present study examined whether holistic processing of faces mediates these effects. We used a saccadic choice task in which participants were simultaneously presented with two images and had to perform a saccade toward the one containing a target stimulus (e.g., a face). Across three experiments, stimuli were altered via upside-down inversion (Experiment 1) or scrambling of thumbnails within the images (Experiments 2 and 3) in order to disrupt holistic processing. We found that disruption of holistic processing had only a limited impact on the latency of saccades toward face targets, which remained extremely short (minimum saccadic reaction times of only ∼120–130 ms), and did not affect the proportion of error saccades toward face distractors, which captured attention more than other distractor categories. It did, however, increase the error rate of saccades toward face targets. These results suggest that the processing of isolated face features is sufficient to elicit extremely fast and involuntary saccadic responses toward them. Holistic representations of faces may, however, be used as a search template to accurately detect faces.
Affiliation(s)
- Louise Kauffmann: CNRS, LPNC, University of Grenoble Alpes, University of Savoie Mont Blanc, Grenoble, France; CNRS, Grenoble INP, GIPSA-lab, University of Grenoble Alpes, Grenoble, France
- Sarah Khazaz: CNRS, LPNC, University of Grenoble Alpes, University of Savoie Mont Blanc, Grenoble, France
- Carole Peyrin: CNRS, LPNC, University of Grenoble Alpes, University of Savoie Mont Blanc, Grenoble, France
- Nathalie Guyader: CNRS, Grenoble INP, GIPSA-lab, University of Grenoble Alpes, Grenoble, France

18
Little Z, Jenkins D, Susilo T. Fast saccades towards faces are robust to orientation inversion and contrast negation. Vision Res 2021; 185:9-16. PMID: 33866144; DOI: 10.1016/j.visres.2021.03.009.
Abstract
Eye movement studies show that humans can make very fast saccades towards faces in natural scenes, but the visual mechanisms behind this process remain unclear. Here we investigate whether fast saccades towards faces rely on mechanisms that are sensitive to the orientation or contrast of the face image. We present participants with pairs of images, each containing a face and a car in the left and right visual fields (or the reverse), and we ask them to saccade to faces or cars as targets in different blocks. We assign participants to one of three image conditions: normal images, orientation-inverted images, or contrast-negated images. We report three main results that hold regardless of image condition. First, reliable saccades towards faces are fast: they can occur at 120-130 ms. Second, fast saccades towards faces are selective: they are more accurate and faster by about 60-70 ms than saccades towards cars. Third, saccades towards faces are reflexive: early saccades in the interval of 120-160 ms tend to go to faces, even when cars are the target. These findings suggest that the speed, selectivity, and reflexivity of saccades towards faces do not depend on the orientation or contrast of the face image. Our results accord with studies suggesting that fast saccades towards faces are mainly driven by low-level image properties, such as amplitude spectrum and spatial frequency.
Affiliation(s)
- Zoë Little: School of Psychology, Victoria University of Wellington, New Zealand
- Daniel Jenkins: School of Psychology, Victoria University of Wellington, New Zealand
- Tirta Susilo: School of Psychology, Victoria University of Wellington, New Zealand

19
Parker TC, Crowley MJ, Naples AJ, Rolison MJ, Wu J, Trapani JA, McPartland JC. The N170 event-related potential reflects delayed neural response to faces when visual attention is directed to the eyes in youths with ASD. Autism Res 2021; 14:1347-1356. PMID: 33749161; DOI: 10.1002/aur.2505.
Abstract
Atypical neural response to faces is thought to contribute to social deficits in autism spectrum disorder (ASD). Compared to typically developing (TD) controls, individuals with ASD exhibit delayed brain responses to upright faces at a face-sensitive event-related potential (ERP), the N170. Given observed differences in patterns of visual attention to faces, it is not known whether slowed neural processing may simply reflect atypical looking to faces. The present study manipulated visual attention to facial features to examine whether directed attention to the eyes normalizes N170 latency in ASD. ERPs were recorded in 30 children and adolescents with ASD as well as 26 TD children and adolescents. Results replicated prior findings of shorter N170 latency to the eye region of the face in TD individuals. In contrast, those with ASD did not demonstrate modulation of N170 latency by point of regard to the face. Group differences in latency were most pronounced when attention was directed to the eyes. Results suggest that well-replicated findings of N170 delays in ASD do not simply reflect atypical patterns of visual engagement with experimental stimuli. These findings add to a body of evidence indicating that N170 delays are a promising marker of atypical neural response to social information in ASD.

LAY SUMMARY: This study looks at how children's and adolescents' brains respond when looking at different parts of a face. Typically developing children and adolescents processed eyes faster than other parts of the face, whereas this pattern was not seen in ASD. Children and adolescents with ASD processed eyes more slowly than typically developing children. These findings suggest that observed inefficiencies in face processing in ASD are not simply reflective of failure to attend to the eyes.
Affiliation(s)
- Termara C Parker: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA
- Michael J Crowley: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA
- Adam J Naples: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA
- Max J Rolison: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA
- Jia Wu: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA
- Julie A Trapani: Department of Psychology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- James C McPartland: Child Study Center, Yale School of Medicine, New Haven, Connecticut, USA

20
Kaiser D, Inciuraite G, Cichy RM. Rapid contextualization of fragmented scene information in the human visual system. Neuroimage 2020; 219:117045. PMID: 32540354; DOI: 10.1016/j.neuroimage.2020.117045.
Abstract
Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
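The representational-similarity logic described above can be sketched in a few lines. This is a minimal illustration on synthetic single-time-point "EEG" patterns, using Pearson correlation throughout; all names, dimensions, and noise levels are invented for the demo and do not reproduce the study's analysis code.

```python
import numpy as np

def rdm(patterns):
    # Representational dissimilarity matrix:
    # 1 - Pearson r between condition patterns
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(1)
n_fragments, n_channels = 20, 64
location = np.repeat([0, 1], n_fragments // 2)   # 0 = top of scene, 1 = bottom

# Synthetic patterns: fragments sharing a vertical location also share
# a signal component, plus independent sensor noise
components = rng.normal(size=(2, n_channels))
patterns = components[location] + rng.normal(0, 0.5, (n_fragments, n_channels))

neural_rdm = rdm(patterns)
# Model RDM: a pair is dissimilar iff its fragments differ in vertical location
model_rdm = (location[:, None] != location[None, :]).astype(float)

# Compare the upper triangles (each fragment pair counted once)
iu = np.triu_indices(n_fragments, k=1)
r_model = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
print(r_model)
```

Repeating the comparison at every time point of the evoked response yields the time course of schema-based organization; rank correlation is also common in RSA practice, but plain Pearson suffices for the sketch.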
Affiliation(s)
- Daniel Kaiser: Department of Psychology, University of York, York, UK
- Gabriele Inciuraite: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany

21
Finlayson NJ, Neacsu V, Schwarzkopf DS. Spatial Heterogeneity in Bistable Figure-Ground Perception. Iperception 2020; 11:2041669520961120. PMID: 33194167; PMCID: PMC7594238; DOI: 10.1177/2041669520961120.
Abstract
The appearance of visual objects varies substantially across the visual field. Could such spatial heterogeneity be due to undersampling of the visual field by neurons selective for stimulus categories? Here, we show that which parts of a bistable vase-face image observers perceive as figure and ground depends on the retinal location where the image appears. The spatial patterns of these perceptual biases were similar regardless of whether the images were upright or inverted. Undersampling by neurons tuned to an object class (e.g., faces) or variability in general local versus global processing cannot readily explain this spatial heterogeneity. Rather, these biases could result from idiosyncrasies in low-level sensitivity across the visual field.
Affiliation(s)
- Nonie J. Finlayson: Department of Experimental Psychology, University College London, London, UK; School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
- Victorita Neacsu: Department of Experimental Psychology, University College London, London, UK; School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
- D. S. Schwarzkopf: Department of Experimental Psychology, University College London, London, UK; School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand

22
Kaiser D, Häberle G, Cichy RM. Real-world structure facilitates the rapid emergence of scene category information in visual brain signals. J Neurophysiol 2020; 124:145-151. PMID: 32519577; DOI: 10.1152/jn.00164.2020.
Abstract
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.

NEW & NOTEWORTHY: Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.
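The time-resolved multivariate decoding step can be illustrated with a toy analysis. The sketch below uses a simple nearest-centroid classifier with split-half cross-validation on synthetic data; the classifier choice, dimensions, and effect size are assumptions made for the demo, not the authors' pipeline.

```python
import numpy as np

def nearest_centroid_acc(train_x, train_y, test_x, test_y):
    # Train a nearest-centroid classifier and return test accuracy
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    dists = ((test_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return (dists.argmin(axis=1) == test_y).mean()

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 80, 32, 50
y = np.repeat([0, 1], n_trials // 2)     # scene category per trial

# Synthetic EEG: category information present only from time point 10 onward
x = rng.normal(0.0, 1.0, (n_trials, n_channels, n_times))
effect = rng.normal(size=n_channels)
sign = np.where(y == 1, 1.0, -1.0)
x[:, :, 10:] += 0.5 * sign[:, None, None] * effect[None, :, None]

# Decode category separately at each time point (split-half cross-validation)
acc = np.array([nearest_centroid_acc(x[::2, :, t], y[::2],
                                     x[1::2, :, t], y[1::2])
                for t in range(n_times)])
print(acc[:10].mean(), acc[10:].mean())  # near chance before onset, higher after
```

Contrasting such accuracy time courses between conditions (intact vs. jumbled) is the logic behind the study's comparison, with a more powerful classifier and proper permutation statistics in practice.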
Affiliation(s)
- Daniel Kaiser: Department of Psychology, University of York, York, United Kingdom
- Greta Häberle: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany

23
Peterson MF, Zaun I, Hoke H, Jiahui G, Duchaine B, Kanwisher N. Eye movements and retinotopic tuning in developmental prosopagnosia. J Vis 2019; 19:7. PMID: 31426085; DOI: 10.1167/19.9.7.
Abstract
Despite extensive investigation, the causes and nature of developmental prosopagnosia (DP)-a severe face identification impairment in the absence of acquired brain injury-remain poorly understood. Drawing on previous work showing that individuals identified as being neurotypical (NT) show robust individual differences in where they fixate on faces, and recognize faces best when the faces are presented at this location, we defined and tested four novel hypotheses for how atypical face-looking behavior and/or retinotopic face encoding could impair face recognition in DP: (a) fixating regions of poor information, (b) inconsistent saccadic targeting, (c) weak retinotopic tuning, and (d) fixating locations not matched to the individual's own face tuning. We found no support for the first three hypotheses, with NTs and DPs consistently fixating similar locations and showing similar retinotopic tuning of their face perception performance. However, in testing the fourth hypothesis, we found preliminary evidence for two distinct phenotypes of DP: (a) Subjects characterized by impaired face memory, typical face perception, and a preference to look high on the face, and (b) Subjects characterized by profound impairments to both face memory and perception and a preference to look very low on the face. Further, while all NTs and upper-looking DPs performed best when faces were presented near their preferred fixation location, this was not true for lower-looking DPs. These results suggest that face recognition deficits in a substantial proportion of people with DP may arise not from aberrant face gaze or compromised retinotopic tuning, but from the suboptimal matching of gaze to tuning.
Affiliation(s)
- Matthew F Peterson: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ian Zaun: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Harris Hoke: Center for Brain Science, Harvard University, Cambridge, MA, USA
- Guo Jiahui: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Brad Duchaine: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Nancy Kanwisher: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA

24
Kaiser D, Quek GL, Cichy RM, Peelen MV. Object Vision in a Structured World. Trends Cogn Sci 2019; 23:672-685. PMID: 31147151; PMCID: PMC7612023; DOI: 10.1016/j.tics.2019.04.013.
Abstract
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
Affiliation(s)
- Daniel Kaiser: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Genevieve L Quek: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Marius V Peelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands

25
Arizpe JM, Noles DL, Tsao JW, Chan AWY. Eye Movement Dynamics Differ between Encoding and Recognition of Faces. Vision (Basel) 2019; 3:vision3010009. PMID: 31735810; PMCID: PMC6802769; DOI: 10.3390/vision3010009.
Abstract
Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be rapidly achieved within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having their eye movements tracked, participants completed an encoding ("study") phase and a subsequent recognition ("test") phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye movements. Most importantly, in behavior, only the longer study-phase presentation time improved recognition performance (i.e., longer time at recognition did not improve performance), revealing that encoding is not as rapid as recognition: longer sequences of eye movements are required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations.
Affiliation(s)
- Joseph M. Arizpe: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Science Applications International Corporation (SAIC), Fort Sam Houston, TX 78234, USA
- Danielle L. Noles: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; School of Medicine, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Jack W. Tsao: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Department of Anatomy & Neurobiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Memphis Veterans Affairs Medical Center, Memphis, TN 38104, USA
- Annie W.-Y. Chan: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Department of Radiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Department of Life Sciences, Centre for Cognitive Neuroscience, Division of Psychology, Brunel University London, London UB8 3PH, UK

26
Kamps FS, Morris EJ, Dilks DD. A face is more than just the eyes, nose, and mouth: fMRI evidence that face-selective cortex represents external features. Neuroimage 2019; 184:90-100. PMID: 30217542; PMCID: PMC6230492; DOI: 10.1016/j.neuroimage.2018.09.027.
Abstract
What is a face? Intuition, along with abundant behavioral and neural evidence, indicates that internal features (e.g., eyes, nose, mouth) are critical for face recognition, yet some behavioral and neural findings suggest that external features (e.g., hair, head outline, neck and shoulders) may likewise be processed as a face. Here we directly test this hypothesis by investigating how external (and internal) features are represented in the brain. Using fMRI, we found highly selective responses to external features (relative to objects and scenes) within the face processing system in particular, rivaling that observed for internal features. We then further asked how external and internal features are represented in regions of the cortical face processing system, and found a similar division of labor for both kinds of features, with the occipital face area and posterior superior temporal sulcus representing the parts of both internal and external features, and the fusiform face area representing the coherent arrangement of both internal and external features. Taken together, these results provide strong neural evidence that a "face" is composed of both internal and external features.
Affiliation(s)
- Frederik S Kamps: Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Ethan J Morris: Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Daniel D Dilks: Department of Psychology, Emory University, Atlanta, GA 30322, USA

27
Davidenko N, Kopalle H, Bridgeman B. The Upper Eye Bias: Rotated Faces Draw Fixations to the Upper Eye. Perception 2018; 48:162-174. PMID: 30588863; DOI: 10.1177/0301006618819628.
Abstract
There is a consistent left-gaze bias when observers fixate upright faces, but it is unknown how this bias manifests in rotated faces, where the two eyes appear at different heights on the face. In two eye-tracking experiments, we measured participants' first and second fixations, while they judged the expressions of upright and rotated faces. We hypothesized that rotated faces might elicit a bias to fixate the upper eye. Our results strongly confirmed this hypothesis, with the upper eye bias completely dominating the left-gaze bias in ±45° faces in Experiment 1, and across a range of face orientations (±11.25°, ±22.5°, ±33.75°, ±45°, and ±90°) in Experiment 2. In addition, rotated faces elicited more overall eye-directed fixations than upright faces. We consider potential mechanisms of the upper eye bias in rotated faces and discuss some implications for research in social cognition.
Affiliation(s)
- Nicolas Davidenko: Department of Psychology, University of California, Santa Cruz, CA, USA
- Hema Kopalle: Department of Neurosciences, University of California, San Diego, CA, USA
- Bruce Bridgeman: Department of Psychology, University of California, Santa Cruz, CA, USA

28
de Haas B. How to Enhance the Power to Detect Brain-Behavior Correlations With Limited Resources. Front Hum Neurosci 2018; 12:421. PMID: 30386224; PMCID: PMC6198725; DOI: 10.3389/fnhum.2018.00421.
Abstract
Neuroscience has been diagnosed with a pervasive lack of statistical power and, in turn, reliability. One remedy proposed is a massive increase of typical sample sizes. Parts of the neuroimaging community have embraced this recommendation and actively push for a reallocation of resources toward fewer but larger studies. This is especially true for neuroimaging studies focusing on individual differences to test brain-behavior correlations. Here, I argue for a more efficient solution. Ad hoc simulations show that statistical power crucially depends on the choice of behavioral and neural measures, as well as on sampling strategy. Specifically, behavioral prescreening and the selection of extreme groups can ascertain a high degree of robust in-sample variance. Due to the low cost of behavioral testing compared to neuroimaging, this is a more efficient way of increasing power. For example, prescreening can achieve the power boost afforded by an increase of sample sizes from n = 30 to n = 100 at ∼5% of the cost. This perspective article briefly presents simulations yielding these results, discusses the strengths and limitations of prescreening and addresses some potential counter-arguments. Researchers can use the accompanying online code to simulate the expected power boost of prescreening for their own studies.
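The variance argument behind prescreening can be reproduced with a short Monte Carlo simulation. The sketch below is illustrative only (the article provides its own online code, which this does not reproduce): it compares the power to detect a true correlation of r = 0.3 with a small sample, a large sample, and a small extreme-group sample selected from a cheap behavioral pool.

```python
import numpy as np

def power_sim(n, r=0.3, alpha_z=1.96, n_sims=2000, pool=None, seed=3):
    # Monte Carlo power to detect a true brain-behavior correlation r
    # with n scanned participants. If pool is given, behavioral scores are
    # prescreened first and only the n most extreme participants
    # (top and bottom n/2) are scanned.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        m = pool or n
        behavior = rng.normal(size=m)
        brain = r * behavior + np.sqrt(1 - r**2) * rng.normal(size=m)
        if pool:
            order = np.argsort(behavior)
            keep = np.concatenate([order[:n // 2], order[-(n // 2):]])
            behavior, brain = behavior[keep], brain[keep]
        obs = np.corrcoef(behavior, brain)[0, 1]
        z = np.arctanh(obs) * np.sqrt(n - 3)   # Fisher z-test against zero
        hits += abs(z) > alpha_z
    return hits / n_sims

p_small = power_sim(30)                 # n = 30 scans, random sampling
p_large = power_sim(100)                # n = 100 scans, random sampling
p_extreme = power_sim(30, pool=300)     # n = 30 scans after prescreening 300
print(p_small, p_large, p_extreme)
```

The extreme-group condition inflates in-sample behavioral variance, so 30 scans approach the power of a much larger random sample; note that a significance test on a selected sample is a simplification here, whereas the underlying point concerns the robust variance gained from prescreening.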
Affiliation(s)
- Benjamin de Haas: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany

29
Idiosyncratic, Retinotopic Bias in Face Identification Modulated by Familiarity. eNeuro 2018; 5:eN-NWR-0054-18. PMID: 30294669; PMCID: PMC6171739; DOI: 10.1523/eneuro.0054-18.2018.
Abstract
The perception of gender and age of unfamiliar faces is reported to vary idiosyncratically across retinal locations such that, for example, the same androgynous face may appear to be male at one location but female at another. Here, we test spatial heterogeneity for the recognition of the identity of personally familiar faces in human participants. We found idiosyncratic biases that were stable within participants and that varied more across locations for less familiar as compared to highly familiar faces. These data suggest that, like face gender and age, face identity is processed, in part, by independent populations of neurons monitoring restricted spatial regions, and that recognition responses vary for the same face across these different locations. Moreover, repeated and varied social interactions appear to lead to adjustments of these independent face recognition neurons so that the same familiar face is eventually more likely to elicit the same recognition response across widely separated visual field locations. We provide a mechanistic account of this reduced retinotopic bias based on computational simulations.
30
Typical retinotopic locations impact the time course of object coding. Neuroimage 2018; 176:372-379. DOI: 10.1016/j.neuroimage.2018.05.006.
|
31
|
Kaiser D, Cichy RM. Typical visual-field locations enhance processing in object-selective channels of human occipital cortex. J Neurophysiol 2018; 120:848-853. [DOI: 10.1152/jn.00229.2018] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Natural environments consist of multiple objects, many of which repeatedly occupy similar locations within a scene. For example, hats are seen on people’s heads, while shoes are most often seen close to the ground. Such positional regularities bias the distribution of objects across the visual field: hats are more often encountered in the upper visual field, while shoes are more often encountered in the lower visual field. Here we tested the hypothesis that typical visual field locations of objects facilitate cortical processing. We recorded functional MRI while participants viewed images of objects that were associated with upper or lower visual field locations. Using multivariate classification, we show that object information can be more successfully decoded from response patterns in object-selective lateral occipital cortex (LO) when the objects are presented in their typical location (e.g., shoe in the lower visual field) than when they are presented in an atypical location (e.g., shoe in the upper visual field). In a functional connectivity analysis, we relate this benefit to increased coupling between LO and early visual cortex, suggesting that typical object positioning facilitates information propagation across the visual hierarchy. Together these results suggest that object representations in occipital visual cortex are tuned to the structure of natural environments. This tuning may support object perception in spatially structured environments. NEW & NOTEWORTHY In the real world, objects appear in predictable spatial locations. Hats, commonly appearing on people’s heads, often fall into the upper visual field. Shoes, mostly appearing on people’s feet, often fall into the lower visual field. Here we used functional MRI to demonstrate that such regularities facilitate cortical processing: Objects encountered in their typical locations are coded more efficiently, which may allow us to effortlessly recognize objects in natural environments.
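The location-dependent decoding benefit described above can be illustrated with a toy multivariate classification sketch. This is not the study's analysis pipeline; the voxel patterns are synthetic, and the stronger class separation in the "typical" condition is an assumed stand-in for the reported effect:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def simulate_patterns(n_trials, n_voxels, signal):
    """Simulate LO voxel patterns for two object classes (e.g., hat vs. shoe).

    `signal` scales how separable the two classes are; here it stands in
    for the typical- vs. atypical-location processing benefit.
    """
    labels = np.repeat([0, 1], n_trials // 2)  # 0 = hat, 1 = shoe
    means = np.where(labels[:, None] == 0, signal, -signal)
    patterns = means + rng.standard_normal((n_trials, n_voxels))
    return patterns, labels

def decode(patterns, labels):
    """Cross-validated classification accuracy, as in MVPA decoding."""
    return cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()

# Assumed effect: better class separation when objects sit in typical locations.
typ_X, typ_y = simulate_patterns(100, 50, signal=0.4)
atyp_X, atyp_y = simulate_patterns(100, 50, signal=0.1)

typ_acc = decode(typ_X, typ_y)
atyp_acc = decode(atyp_X, atyp_y)
print(typ_acc, atyp_acc)
```

With these synthetic parameters, decoding accuracy is higher for the typical-location patterns, mirroring the qualitative result reported for object-selective lateral occipital cortex.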
Collapse
Affiliation(s)
- Daniel Kaiser
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Radoslaw M. Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| |
Collapse
|
32
|
Abstract
Perceptual bias is inherent to all our senses, particularly in the form of visual illusions and aftereffects. However, many experiments measuring perceptual biases may be susceptible to nonperceptual factors, such as response bias and decision criteria. Here, we quantify how robust multiple alternative perceptual search (MAPS) is for disentangling estimates of perceptual biases from these confounding factors. First, our results show that while there are considerable response biases in our four-alternative forced-choice design, these are unrelated to perceptual bias estimates, and these response biases are not produced by the response modality (keyboard vs. mouse). We also show that perceptual bias estimates are reduced when feedback is given on each trial, likely because feedback enables observers to partially (and actively) correct for perceptual biases. However, this does not impact the reliability with which MAPS detects the presence of perceptual biases. Finally, our results show that MAPS detects actual perceptual biases rather than a decisional bias towards choosing the target in the middle of the candidate stimulus distribution. In summary, researchers conducting a MAPS experiment should use a constant reference stimulus but consider varying the mean of the candidate distribution. Ideally, they should not employ trial-wise feedback if the magnitude of perceptual biases is of interest.
Collapse
|
33
|
Typical visual-field locations facilitate access to awareness for everyday objects. Cognition 2018; 180:118-122. [PMID: 30029067 DOI: 10.1016/j.cognition.2018.07.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2018] [Revised: 07/10/2018] [Accepted: 07/12/2018] [Indexed: 11/20/2022]
Abstract
In real-world vision, humans are constantly confronted with complex environments that contain a multitude of objects. These environments are spatially structured, so that objects have different likelihoods of appearing in specific parts of the visual space. Our massive experience with such positional regularities prompts the hypothesis that the processing of individual objects varies in efficiency across the visual field: when objects are encountered in their typical locations (e.g., we are used to seeing lamps in the upper visual field and carpets in the lower visual field), they should be more efficiently perceived than when they are encountered in atypical locations (e.g., a lamp in the lower visual field and a carpet in the upper visual field). Here, we provide evidence for this hypothesis by showing that typical positioning facilitates an object's access to awareness. In two continuous flash suppression experiments, objects more efficiently overcame inter-ocular suppression when they were presented in visual-field locations that matched their typical locations in the environment, as compared to non-typical locations. This finding suggests that through extensive experience the visual system has adapted to the statistics of the environment. This adaptation may be particularly useful for rapid object individuation in natural scenes.
Collapse
|
34
|
Facing a Regular World: How Spatial Object Structure Shapes Visual Processing. J Neurosci 2018; 37:1965-1967. [PMID: 28228518 DOI: 10.1523/jneurosci.3441-16.2017] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Revised: 12/26/2016] [Accepted: 01/04/2017] [Indexed: 11/21/2022] Open
|
35
|
Sarkheil P, Kilian-Hütten N, Mickartz K, Vornholt T, Mathiak K. Variation of temporal order reveals deficits in categorisation of facial expressions in patients afflicted with depression. Cogn Neuropsychiatry 2018; 23:154-164. [PMID: 29502459 DOI: 10.1080/13546805.2018.1444596] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
INTRODUCTION It is well established that depressive disorders are associated with abnormalities in the processing of affective information. However, the type of stimuli, perceptual complexity, and cognitive demand are important factors in evaluating these findings. In particular, the processing mechanisms of perceptual boundaries in ecologically valid face stimuli are largely unknown in depression. METHODS In the current study, intensity-ordered frame sequences provided a dynamic visualisation of happy or sad facial expressions fading from or to neutral expressions. Patients (n = 20) with major depressive disorder (MD) and controls (n = 20) indicated their perceptual boundaries between neutral and emotional faces depending on direction and emotion. The averaged time of the perceptual boundary was entered into a group × condition ANOVA and a regression analysis. RESULTS The MD group did not systematically shift perceptual boundaries in the dynamic emotional faces but showed altered statistics of information processing. The Gaussian distribution of boundary judgements was disturbed in depression, increasing goodness-of-fit errors for disappearing emotions. Goodness-of-fit correlated with the depression symptom score (Beck Depression Inventory-II (BDI-II)) in the MD group during the disappearing sad (r(18) = .46, p = 0.04) and happy (r(18) = .51, p = 0.02) conditions. CONCLUSION We evaluated the detection of appearing and disappearing emotions in dynamic faces. A deviant distribution of categorisation responses emerged in the MD group, which was not emotion-specific. Such perceptual uncertainty can impede individuals' functioning in interpersonal interaction.
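The r(18) notation above reports a Pearson correlation with 18 degrees of freedom (n − 2 for the n = 20 patients). A minimal sketch of that computation, using made-up scores rather than the study's data (variable names and values are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical BDI-II symptom scores and goodness-of-fit errors for n = 20.
bdi = rng.uniform(10, 40, size=20)
fit_error = 0.02 * bdi + rng.normal(0, 0.2, size=20)

r, p = pearsonr(bdi, fit_error)
df = len(bdi) - 2  # degrees of freedom, reported as r(18) for n = 20

print(df)  # 18
```

Reporting the degrees of freedom alongside r lets readers recover the sample size and check the p value against the t distribution.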
Collapse
Affiliation(s)
- Pegah Sarkheil
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
| | - Niclas Kilian-Hütten
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Kristina Mickartz
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Thomas Vornholt
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
| |
Collapse
|
36
|
Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018; 103:199-210. [PMID: 29655043 DOI: 10.1016/j.cortex.2018.03.009] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Revised: 01/26/2018] [Accepted: 03/13/2018] [Indexed: 11/30/2022]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
Collapse
Affiliation(s)
- Katja Weibert
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Tessa R Flack
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Timothy J Andrews
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom.
| |
Collapse
|
37
|
Revealing the mechanisms of human face perception using dynamic apertures. Cognition 2017; 169:25-35. [DOI: 10.1016/j.cognition.2017.08.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2017] [Revised: 08/02/2017] [Accepted: 08/02/2017] [Indexed: 11/24/2022]
|
38
|
Attention Priority Map of Face Images in Human Early Visual Cortex. J Neurosci 2017; 38:149-157. [PMID: 29133433 DOI: 10.1523/jneurosci.1206-17.2017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Revised: 10/28/2017] [Accepted: 11/06/2017] [Indexed: 11/21/2022] Open
Abstract
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. 
Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex, in addition to physical salience and task-goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli, and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli can be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings significantly extend our understanding of attention priority maps by showing that they are modulated not only by physical salience and task-goal relevance, but also by the configuration of stimulus images.
Collapse
|
39
|
Domain Specificity of Oculomotor Learning after Changes in Sensory Processing. J Neurosci 2017; 37:11469-11484. [PMID: 29054879 DOI: 10.1523/jneurosci.1208-17.2017] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2017] [Revised: 08/28/2017] [Accepted: 09/26/2017] [Indexed: 11/21/2022] Open
Abstract
Humans visually process the world with varying spatial resolution and can program their eye movements optimally to maximize information acquisition for a variety of everyday tasks. Diseases such as macular degeneration can change visual sensory processing, introducing central vision loss (a scotoma). However, humans can learn to direct a new preferred retinal location to regions of interest for simple visual tasks. Whether such learned compensatory saccades are optimal and generalize to more complex tasks, which require integrating information across a large area of the visual field, is not well understood. Here, we explore the possible effects of central vision loss on the optimal saccades during a face identification task, using a gaze-contingent simulated scotoma. We show that a new foveated ideal observer with a central scotoma correctly predicts that the human optimal point of fixation to identify faces shifts from just below the eyes to one that is at the tip of the nose and another at the top of the forehead. However, even after 5000 trials, humans of both sexes surprisingly do not change their initial fixations to adapt to the new optimal fixation points to faces. In contrast, saccades do change for tasks such as object following and to a lesser extent during search. Our findings argue against a central brain motor-compensatory mechanism that generalizes across tasks. They instead suggest task specificity in the learning of oculomotor plans in response to changes in front-end sensory processing and the possibility of separate domain-specific representations of learned oculomotor plans in the brain. SIGNIFICANCE STATEMENT The mechanism by which humans adapt eye movements in response to central vision loss is still not well understood and carries importance for gaining a fundamental understanding of brain plasticity.
We show that although humans adapt their eye movements for simpler tasks such as object following and search, these adaptations do not generalize to more complex tasks such as face identification. We provide the first computational model to predict where humans with central vision loss should direct their eye movements in face identification tasks, which could become a critical tool in making patient-specific recommendations. Based on these results, we suggest a novel theory for oculomotor learning: a distributed representation of learned eye-movement plans represented in domain-specific areas of the brain.
Collapse
|
40
|
Bracci S, Ritchie JB, de Beeck HO. On the partnership between neural representations of object categories and visual features in the ventral visual pathway. Neuropsychologia 2017; 105:153-164. [PMID: 28619529 PMCID: PMC5680697 DOI: 10.1016/j.neuropsychologia.2017.06.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Revised: 06/04/2017] [Accepted: 06/12/2017] [Indexed: 11/05/2022]
Abstract
A dominant view in the cognitive neuroscience of object vision is that regions of the ventral visual pathway exhibit some degree of category selectivity. However, recent findings obtained with multivariate pattern analyses (MVPA) suggest that apparent category selectivity in these regions is dependent on more basic visual features of stimuli, in which case a rethinking of the function and organization of the ventral pathway may be in order. We suggest that addressing this issue of functional specificity requires clear coding hypotheses, about object category and visual features, which make contrasting predictions about neuroimaging results in ventral pathway regions. One way to differentiate between categorical and featural coding hypotheses is to test for residual categorical effects: effects of category selectivity that cannot be accounted for by the visual features of stimuli. A strong method for testing these effects, we argue, is to make object category and target visual features orthogonal in stimulus design. Recent studies that adopt this approach support a feature-based categorical coding hypothesis, according to which regions of the ventral stream do indeed code for object category, but in a format at least partially based on the visual features of stimuli.
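One common way to test for the residual categorical effects described here is a partial correlation: relate a neural representational dissimilarity matrix (RDM) to a category model after regressing out a visual-feature model. The sketch below uses synthetic RDM vectors, not data from any cited study; the mixing weights are assumptions chosen only to make the logic visible:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing z out of both."""
    def residualize(v, z):
        beta = np.polyfit(z, v, 1)
        return v - np.polyval(beta, z)
    return pearsonr(residualize(x, z), residualize(y, z))[0]

n_pairs = 190  # lower triangle of a 20-condition RDM

feature_rdm = rng.random(n_pairs)   # visual-feature dissimilarity model
category_rdm = rng.random(n_pairs)  # category dissimilarity model

# Hypothetical neural RDM driven by both features and category membership.
neural_rdm = 0.5 * feature_rdm + 0.5 * category_rdm + rng.normal(0, 0.1, n_pairs)

residual_effect = partial_corr(neural_rdm, category_rdm, feature_rdm)
print(residual_effect > 0)
```

A positive residual correlation is the signature of category coding that low-level features alone cannot explain; the orthogonal stimulus designs discussed above make this inference cleaner by decorrelating the two model RDMs at the source.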
Collapse
|
41
|
Arcaro MJ, Schade PF, Vincent JL, Ponce CR, Livingstone MS. Seeing faces is necessary for face-domain formation. Nat Neurosci 2017; 20:1404-1412. [PMID: 28869581 PMCID: PMC5679243 DOI: 10.1038/nn.4635] [Citation(s) in RCA: 115] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Accepted: 08/03/2017] [Indexed: 11/08/2022]
Abstract
Here we report that monkeys raised without exposure to faces did not develop face domains, but did develop domains for other categories and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore, experience must be necessary for the formation (or maintenance) of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face domains, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses toward particular retinotopic representations, thereby leading to domain formation in stereotyped locations in inferotemporal cortex, without requiring category-specific templates or biases. Thus, we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation.
Collapse
Affiliation(s)
- Michael J Arcaro
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Peter F Schade
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Justin L Vincent
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Carlos R Ponce
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, USA
| | | |
Collapse
|
42
|
Abstract
Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.
Collapse
Affiliation(s)
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, California 94305;
- Stanford Neurosciences Institute, Stanford University, Stanford, California 94305
| | - Kevin S Weiner
- Department of Psychology, Stanford University, Stanford, California 94305;
| | - Kendrick Kay
- Department of Radiology, University of Minnesota, Minneapolis, Minnesota 55455
| | - Jesse Gomez
- Neurosciences Program, Stanford University School of Medicine, Stanford, California 94305
| |
Collapse
|
43
|
Liu R, Salisbury JP, Vahabzadeh A, Sahin NT. Feasibility of an Autism-Focused Augmented Reality Smartglasses System for Social Communication and Behavioral Coaching. Front Pediatr 2017; 5:145. [PMID: 28695116 PMCID: PMC5483849 DOI: 10.3389/fped.2017.00145] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/26/2017] [Accepted: 06/08/2017] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Autism spectrum disorder (ASD) is a childhood-onset neurodevelopmental disorder with a rapidly rising prevalence, currently affecting 1 in 68 children and over 3.5 million people in the United States. Current ASD interventions are primarily based on in-person behavioral therapies that are both costly and difficult to access. These interventions aim to address some of the fundamental deficits that clinically characterize ASD, including deficits in social communication and the presence of stereotypies and other autism-related behaviors. Current diagnostic and therapeutic approaches seldom rely on quantitative data measures of symptomatology, severity, or condition trajectory. METHODS Given this situation, we report on the Brain Power System (BPS), a digital behavioral aid with quantitative data gathering and reporting features. The BPS includes customized smartglasses, providing targeted personalized coaching experiences through a family of gamified augmented-reality applications utilizing artificial intelligence. These applications provide children and adults with coaching for emotion recognition, face-directed gaze, eye contact, and behavioral self-regulation. This preliminary case report, part of a larger set of upcoming research reports, explores the feasibility of the BPS to provide coaching in two boys with clinically diagnosed ASD, aged 8 and 9 years. RESULTS The coaching intervention was found to be well tolerated and was rated as being both engaging and fun. Both boys could easily use the system, and no technical problems were noted. During the intervention, caregivers reported improved non-verbal communication, eye contact, and social engagement. Both boys demonstrated decreased symptoms of ASD, as measured by the Aberrant Behavior Checklist at 24 h post-intervention. Specifically, both cases demonstrated improvements in irritability, lethargy, stereotypy, hyperactivity/non-compliance, and inappropriate speech.
CONCLUSION Smartglasses using augmented reality may have an important future role in helping address the therapeutic needs of children with ASD. Quantitative data gathering from such sensor-rich systems may allow for digital phenotyping and the refinement of social communication constructs of the research domain criteria. This report provides evidence for the feasibility, usability, and tolerability of one such specialized smartglasses system.
Collapse
Affiliation(s)
- Runpeng Liu
- Brain Power, Cambridge, MA, United States; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, United States
| | | | - Arshya Vahabzadeh
- Brain Power, Cambridge, MA, United States; Department of Psychiatry, Harvard Medical School, Boston, MA, United States
| | - Ned T Sahin
- Brain Power, Cambridge, MA, United States; Department of Psychology, Harvard University, Cambridge, MA, United States
| |
Collapse
|