1. Bayne T, Frohlich J, Cusack R, Moser J, Naci L. Consciousness in the cradle: on the emergence of infant experience. Trends Cogn Sci 2023;27:1135-1149. PMID: 37838614; PMCID: PMC10660191; DOI: 10.1016/j.tics.2023.08.018.
Abstract
Although each of us was once a baby, infant consciousness remains mysterious and there is no received view about when, and in what form, consciousness first emerges. Some theorists defend a 'late-onset' view, suggesting that consciousness requires cognitive capacities which are unlikely to be in place before the child's first birthday at the very earliest. Other theorists defend an 'early-onset' account, suggesting that consciousness is likely to be in place at birth (or shortly after) and may even arise during the third trimester. Progress in this field has been difficult, not just because of the challenges associated with procuring the relevant behavioral and neural data, but also because of uncertainty about how best to study consciousness in the absence of the capacity for verbal report or intentional behavior. This review examines both the empirical and methodological progress in this field, arguing that recent research points in favor of early-onset accounts of the emergence of consciousness.
Affiliations:
- Tim Bayne: Monash University, Melbourne, VIC, Australia; Brain, Mind, and Consciousness Program, Canadian Institute for Advanced Research, Toronto, Canada.
- Joel Frohlich: Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany; Institute for Advanced Consciousness Studies, Santa Monica, CA, USA.
- Rhodri Cusack: Thomas Mitchell Professor of Cognitive Neuroscience, Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
- Julia Moser: Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA.
- Lorina Naci: Trinity College Institute of Neuroscience and Global Brain Health Institute, Trinity College, Dublin, Ireland.
2. Gaze perception from head and pupil rotations in 2D and 3D: Typical development and the impact of autism spectrum disorder. PLoS One 2022;17:e0275281. PMID: 36301975; PMCID: PMC9612464; DOI: 10.1371/journal.pone.0275281.
Abstract
The study of gaze perception has largely focused on a single cue (the eyes) in two-dimensional settings. While this literature suggests that 2D gaze perception is shaped by atypical development, as in Autism Spectrum Disorder (ASD), gaze perception is in reality contextually sensitive, perceived as an emergent feature conveyed by the rotation of the pupils and head. We examined gaze perception in this integrative context, across development, among children and adolescents developing typically or with ASD, using both 2D and 3D stimuli. We found that both groups utilized head and pupil rotations to judge gaze on a 2D face. But when evaluating the gaze of a physically present, 3D robot, the same ASD observers used eye cues less than their typically-developing peers. This demonstrates that emergent gaze perception is a slowly developing process that is surprisingly intact, albeit weakened, in ASD, and illustrates how new technology can bridge visual and clinical science.
3. Canas-Bajo T, Whitney D. Relative tuning of holistic face processing towards the fovea. Vision Res 2022;197:108049. PMID: 35461170; PMCID: PMC10101769; DOI: 10.1016/j.visres.2022.108049.
Abstract
Humans quickly detect and gaze at faces in the world, which reflects their importance in cognition and may lead to tuning of face recognition toward the central visual field. Although sometimes reported, foveal selectivity in face processing is debated: brain imaging studies have found evidence for a central field bias specific to faces, but behavioral studies have found little foveal selectivity in face recognition. These conflicting results are difficult to reconcile, but they could arise from stimulus-specific differences. Recent studies, for example, suggest that individual faces vary in the degree to which they require holistic processing. Holistic processing is the perception of faces as a whole rather than as a set of separate features. We hypothesized that the dissociation between behavioral and neuroimaging studies arises because of this stimulus-specific dependence on holistic processing. Specifically, the central bias found in neuroimaging studies may be specific to holistic processing. Here, we tested whether the eccentricity-dependence of face perception is determined by the degree to which faces require holistic processing. We first measured the holistic-ness of individual Mooney faces (two-tone shadow images readily perceived as faces). In a group of independent observers, we then used a gender discrimination task to measure recognition of these Mooney faces as a function of their eccentricity. Face gender was recognized across the visual field, even at substantial eccentricities, replicating prior work. Importantly, however, holistic face gender recognition was relatively tuned: slightly, but reliably, stronger in the central visual field. Our results may reconcile the debate on the eccentricity-dependence of face perception and reveal a spatial inhomogeneity specifically in the holistic representations of faces.
Affiliations:
- Teresa Canas-Bajo: Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA.
- David Whitney: Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA.
4. Perche O, Lesne F, Patat A, Raab S, Twyman R, Ring RH, Briault S. Electroretinography and contrast sensitivity, complementary translational biomarkers of sensory deficits in the visual system of individuals with fragile X syndrome. J Neurodev Disord 2021;13:45. PMID: 34625026; PMCID: PMC8501595; DOI: 10.1186/s11689-021-09375-0.
Abstract
BACKGROUND Disturbances in sensory function are an important clinical feature of neurodevelopmental disorders such as fragile X syndrome (FXS). Evidence also directly connects sensory abnormalities with the clinical expression of behavioral impairments in individuals with FXS, thus positioning sensory function as a potential clinical target for the development of new therapeutics. Using electroretinography (ERG) and contrast sensitivity (CS), we previously reported the presence of sensory deficits in the visual system of the Fmr1-/y genetic mouse model of FXS. The goals of the current study were twofold: (1) to assess the feasibility of measuring ERG and CS as biomarkers of sensory deficits in individuals with FXS, and (2) to investigate whether the deficits revealed by ERG and CS in Fmr1-/y mice translate to humans with FXS. METHODS Both ERG and CS were measured in a cohort of male individuals with FXS (n = 20, 18-45 years) and age-matched healthy controls (n = 20, 18-45 years). Under light-adapted conditions, and using both single-flash and flicker (repeated train of flashes) stimulation protocols, retinal function was recorded from individual subjects using a portable, handheld, full-field flash ERG device (RETeval®, LKC Technologies Inc., Gaithersburg, MD, USA). CS was assessed in each subject using the LEA SYMBOLS® low-contrast test (Good-Lite, Elgin, IL, USA). RESULTS Data recording was successfully completed for ERG and assessment of CS in most individuals from both cohorts, demonstrating the feasibility of these methods for use in the FXS population. Similar to previously reported findings from the Fmr1-/y genetic mouse model, individuals with FXS were found to exhibit reduced b-wave and flicker amplitude in ERG and an impaired ability to discriminate contrasts compared to healthy controls.
CONCLUSIONS This study demonstrates the feasibility of using ERG and CS for assessing visual deficits in FXS and establishes the translational validity of the Fmr1-/y mouse phenotype to individuals with FXS. By including electrophysiological and functional readouts, the results of this study suggest the utility of ERG and CS (ERG-CS) as complementary translational biomarkers for characterizing sensory abnormalities found in FXS, with potential applications to the clinical development of novel therapeutics that target sensory function abnormalities to treat core symptomatology in FXS. TRIAL REGISTRATION ID-RCB number 2019-A01015-52, registered on 17 May 2019.
Affiliations:
- Olivier Perche: Genetic Department, Centre Hospitalier Régional d'Orléans, Orléans, France; UMR7355, Centre National de la Recherche Scientifique (CNRS), Orléans, France; Experimental and Molecular Immunology and Neurogenetics, University of Orléans, Orléans, France; Kaerus Bioscience Ltd., London, EC1Y 4YX, UK.
- Alain Patat: Kaerus Bioscience Ltd., London, EC1Y 4YX, UK.
- Robert H Ring: Kaerus Bioscience Ltd., London, EC1Y 4YX, UK; Department of Pharmacology and Physiology, Drexel University College of Medicine, Philadelphia, PA, USA.
- Sylvain Briault: Genetic Department, Centre Hospitalier Régional d'Orléans, Orléans, France; UMR7355, Centre National de la Recherche Scientifique (CNRS), Orléans, France; Experimental and Molecular Immunology and Neurogenetics, University of Orléans, Orléans, France; Kaerus Bioscience Ltd., London, EC1Y 4YX, UK.
5.
Abstract
When referring to objects, adults package words, sentences, and gestures in ways that shape children's learning. Here, to understand how continuity of reference shapes word learning, an adult taught new words to 4-year-old children (N = 120) using either clusters of references to the same object or no sequential references to each object. In three experiments, the adult used a combination of labels and other object references, which provided informative discourse (e.g., This is small and green), neutral discourse (e.g., This is really great), or no verbal discourse. Switching verbal references from one object to another interfered with learning relative to providing clustered references to a particular object, revealing that discontinuity in discourse hinders children's encoding of new words.
6. Mihalache D, Feng H, Askari F, Sokol-Hessner P, Moody EJ, Mahoor MH, Sweeny TD. Perceiving gaze from head and eye rotations: An integrative challenge for children and adults. Dev Sci 2019;23:e12886. PMID: 31271685; DOI: 10.1111/desc.12886.
Abstract
Gaze is an emergent visual feature. A person's gaze direction is perceived not just based on the rotation of their eyes, but also their head. At least among adults, this integrative process appears to be flexible, such that one feature can be weighted more heavily than the other depending on the circumstances. Yet it is unclear how this weighting might vary across individuals or across development. When children engage emergent gaze, do they prioritize cues from the head and eyes similarly to adults? Is the perception of gaze among individuals with autism spectrum disorder (ASD) emergent, or is it reliant on a single feature? Sixty adults (M = 29.86 years of age), thirty-seven typically developing children and adolescents (M = 9.3 years of age; range = 7-15), and eighteen children with ASD (M = 9.72 years of age; range = 7-15) viewed faces with leftward, rightward, or direct head rotations in conjunction with leftward or rightward pupil rotations, and then indicated whether the face was looking leftward or rightward. All individuals, across development and ASD status, used head rotation to infer gaze direction, albeit with some individual differences. However, the use of pupil rotation was heavily dependent on age. Finally, children with ASD used pupil rotation significantly less than typically developing (TD) children when inferring gaze direction, even after accounting for age. Our approach provides a novel framework for understanding individual and group differences in gaze as it is actually perceived: as an emergent feature. Furthermore, this study begins to address an important gap in the ASD literature, taking the first look at emergent gaze perception in this population.
Affiliations:
- Diana Mihalache: Department of Psychology, University of Denver, Denver, Colorado.
- Huanghao Feng: Department of Electrical and Computer Engineering, University of Denver, Denver, Colorado.
- Farzaneh Askari: Department of Electrical and Computer Engineering, University of Denver, Denver, Colorado.
- Eric J Moody: Wyoming Institute for Disabilities, University of Wyoming, Laramie, Wyoming.
- Mohammad H Mahoor: Department of Electrical and Computer Engineering, University of Denver, Denver, Colorado.
- Timothy D Sweeny: Department of Psychology, University of Denver, Denver, Colorado.
7. Smith LB, Slone LK. A Developmental Approach to Machine Learning? Front Psychol 2017;8:2124. PMID: 29259573; PMCID: PMC5723343; DOI: 10.3389/fpsyg.2017.02124.
Abstract
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order, with slow, smooth visual changes from moment to moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines.
Affiliations:
- Linda B. Smith: Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, United States.
8. Sugden NA, Moulson MC. Hey Baby, what's "up"? One- and 3-Month-Olds Experience Faces Primarily Upright but Non-Upright Faces Offer the Best Views. Q J Exp Psychol (Hove) 2017;70:959-969. DOI: 10.1080/17470218.2016.1154581.
Abstract
Experience has been theorized to shape how we process faces. Frequent face types are better discriminated and processed using expert-level holistic strategies, while less frequent types are less well discriminated and processed using less mature featural strategies. Although experience likely influences the development of face processing, it is unclear which aspects of experience are most influential. The current study utilized infant-perspective head-mounted cameras to capture infants' daily lives at 1 and 3 months of age to measure the perceptual qualities of frequent and infrequent face types. We examined experience with upright (i.e., frequently experienced) and inverted (i.e., infrequently experienced) faces. A large majority (88%) of all face exposure was to upright faces. Most faces, regardless of orientation, were viewed near to the infant, alone in the field of view, and in a frontal viewpoint (i.e., an "ideal view"). Although they were less frequent than upright faces, proportionally more non-upright faces were viewed in an "ideal view". At this young age, nearly all faces, even non-upright faces, are seen in ways that facilitate processing.
Affiliations:
- Nicole A. Sugden: Department of Psychology, Ryerson University, Toronto, ON, Canada.
9. Clerkin EM, Hart E, Rehg JM, Yu C, Smith LB. Real-world visual statistics and infants' first-learned object names. Philos Trans R Soc Lond B Biol Sci 2017;372:20160055. PMID: 27872373; PMCID: PMC5124080; DOI: 10.1098/rstb.2016.0055.
Abstract
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8.5- to 10.5-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered, with many different objects in view. However, the frequency distribution of object categories was extremely right skewed, such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.
Affiliations:
- Elizabeth M Clerkin: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47203, USA.
- Elizabeth Hart: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47203, USA.
- James M Rehg: Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA.
- Chen Yu: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47203, USA.
- Linda B Smith: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47203, USA.
10. Brockhoff A, Papenmeier F, Wolf K, Pfeiffer T, Jahn G, Huff M. Viewpoint matters: Exploring the involvement of reference frames in multiple object tracking from a developmental perspective. Cognitive Development 2016. DOI: 10.1016/j.cogdev.2015.10.004.
11. Pratesi A, Cecchi F, Beani E, Sgandurra G, Cioni G, Laschi C, Dario P. A new system for quantitative evaluation of infant gaze capabilities in a wide visual field. Biomed Eng Online 2015;14:83. PMID: 26346053; PMCID: PMC4562110; DOI: 10.1186/s12938-015-0076-7.
Abstract
BACKGROUND The visual assessment of infants poses specific challenges: many techniques used on adults rely on the patient's response and are not suitable for infants. Significant advances in eye-tracking have made the assessment of infant visual capabilities easier; however, eye-tracking still requires the subject's collaboration in most cases, which limits its application in infant research. Moreover, existing methods transfer poorly to clinical practice, so a new tool is needed to administer visual paradigms and explore the most common visual competences across a wide visual field. This work presents the design, development, and preliminary testing of a new system for measuring infant gaze in a wide visual field, called CareToy C: CareToy for Clinics. METHODS The system is based on a commercial eye tracker (SmartEye) with six cameras running at 60 Hz, suitable for measuring an infant's gaze. To stimulate the infant visually and audibly, a mechanical structure was designed to support five speakers and five screens at a specific distance (60 cm) and angle: one in the centre, two on the right-hand side, and two on the left (at 30° and 60°, respectively). Different tasks were designed to evaluate the system's ability to assess the infant's gaze movements under different conditions (such as gap, overlap, or audio-visual paradigms). Nine healthy infants aged 4-10 months were assessed as they performed the visual tasks in random order. RESULTS We developed a system able to measure infant gaze in a wide visual field, covering a total visual range of ±60° from the centre with an intermediate evaluation at ±30°. Moreover, thanks to different integrated software, the same system was able to deliver different visual paradigms (gap, overlap, and audio-visual), assessing and comparing different visual and multisensory sub-competencies.
The proposed system integrated a commercial eye tracker into a purpose-built setup in a smart and innovative way. CONCLUSIONS The proposed system is suitable for measuring and evaluating infant gaze capabilities in a wide visual field, providing quantitative data that can enrich the clinical assessment.
Affiliations:
- Andrea Pratesi: The BioRobotics Institute, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025, Pontedera, Pisa, Italy.
- Francesca Cecchi: The BioRobotics Institute, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025, Pontedera, Pisa, Italy.
- Elena Beani: Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Viale del Tirreno 331, 56128, Calambrone, Pisa, Italy; Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy.
- Giuseppina Sgandurra: Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Viale del Tirreno 331, 56128, Calambrone, Pisa, Italy.
- Giovanni Cioni: Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Viale del Tirreno 331, 56128, Calambrone, Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Via Roma 67, 56125, Pisa, Italy.
- Cecilia Laschi: The BioRobotics Institute, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025, Pontedera, Pisa, Italy.
- Paolo Dario: The BioRobotics Institute, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025, Pontedera, Pisa, Italy.
12. Smith L, Yu C, Yoshida H, Fausey CM. Contributions of head-mounted cameras to studying the visual environments of infants and young children. Journal of Cognition and Development 2015;16:407-419. PMID: 26257584; PMCID: PMC4527180; DOI: 10.1080/15248372.2014.933430.
Abstract
Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the match between what these cameras measure and the research question. Head cameras record the scene in front of faces and thus answer questions about those head-centered scenes. In this "tools of the trade" article, we consider the unique contributions provided by head-centered video, the limitations and open questions that remain for head-camera methods, and the practical issues of placing head-cameras on infants and analyzing the generated video.
Affiliations:
- Linda Smith: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405.
- Chen Yu: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405.
- Hanako Yoshida: Department of Psychology, University of Houston, Houston, TX 77204.
- Caitlin M Fausey: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405.
13. Sweeny TD, Wurnitsch N, Gopnik A, Whitney D. Ensemble perception of size in 4-5-year-old children. Dev Sci 2015;18:556-68. PMID: 25442844; PMCID: PMC5282927; DOI: 10.1111/desc.12239.
Abstract
Groups of objects are nearly everywhere we look. Adults can perceive and understand the 'gist' of multiple objects at once, engaging ensemble-coding mechanisms that summarize a group's overall appearance. Are these group-perception mechanisms in place early in childhood? Here, we provide the first evidence that 4-5-year-old children use ensemble coding to perceive the average size of a group of objects. Children viewed a pair of trees, with each containing a group of differently sized oranges. We found that, in order to determine which tree had the larger oranges overall, children integrated the sizes of multiple oranges into ensemble representations. This pooling occurred rapidly, and it occurred despite conflicting information from numerosity, continuous extent, density, and contrast. An ideal observer analysis showed that although children's integration mechanisms are sensitive, they are not yet as efficient as adults'. Overall, our results provide a new insight into the way children see and understand the environment, and they illustrate the fundamental nature of ensemble coding in visual perception.
Affiliations:
- Alison Gopnik: Department of Psychology, University of California – Berkeley.
- David Whitney: Department of Psychology, University of California – Berkeley; Vision Science Group, University of California – Berkeley.
14.
15. Benitez VL, Smith LB. Predictable locations aid early object name learning. Cognition 2012;125:339-52. PMID: 22989872; PMCID: PMC3472129; DOI: 10.1016/j.cognition.2012.08.006.
Abstract
Expectancy-based localized attention has been shown to promote the formation and retrieval of multisensory memories in adults. Three experiments show that these processes also characterize attention and learning in 16- to 18-month-old infants and, moreover, that these processes may play a critical role in supporting early object name learning. The three experiments show that infants learn names for objects when those objects have predictable rather than varied locations, that infants who anticipate the location of named objects better learn those object names, and that infants integrate experiences that are separated in time but share a common location. Taken together, these results suggest that localized attention, cued attention, and spatial indexing are an inter-related set of processes in young children that aid in the early building of coherent object representations. The relevance of the experimental results and of spatial attention for everyday word learning is discussed.
Affiliations:
- Viridiana L Benitez: Department of Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, United States.
16. Yu C, Smith LB. Embodied attention and word learning by toddlers. Cognition 2012;125:244-62. PMID: 22878116; PMCID: PMC3829203; DOI: 10.1016/j.cognition.2012.06.016.
Abstract
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither the adult theorist's view nor the mature partner's view, but rather the learner's personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name; they did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant's forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning.
Affiliations:
- Chen Yu: Department of Psychological and Brain Sciences, Cognitive Science Program, Indiana University, USA.
17. What Grasps and Holds 8-Month-Old Infants' Looking Attention? The Effects of Object Size and Depth Cues. 2012. DOI: 10.1155/2012/439618.
Abstract
The current eye-tracking study explored the relative impact of object size and depth cues on 8-month-old infants' visual attention processes. A series of slides containing three objects of either different or the same size were displayed on backgrounds with varying depth cues. The distribution of infants' first looks (a measure of initial attention switching) and infants' looking durations (a measure of sustained attention) at the objects were analyzed. Results revealed that the large objects captured infants' attention first; that is, most of the time infants directed their visual attention first to the largest object in the scene, regardless of depth cues. For sustained attention, infants also preferred to maintain their attention on the largest object, but only when depth cues were present. These findings suggest that infants' initial attention response is driven mainly by object size, while their sustained attention is more the product of combined figure and background processing, in which object sizes are perceived as a function of depth cues.
|
18
|
Abstract
This paper presents two methods that we applied in our research to record infant gaze in the context of goal-oriented actions, using different eye-tracking devices: head-mounted and remote eye-tracking. For each type of eye-tracking system, we discuss its advantages and disadvantages, describe the particular experimental setups we used to study infant looking and reaching, and explain how we used and synchronized these systems with other sources of data collection (video recordings and motion capture) in order to analyze gaze and movements directed toward 3D objects within a common time frame. Finally, for each method, we briefly present some results from our studies to illustrate the different levels of analysis that may be carried out with these different types of eye-tracking devices. These examples aim to highlight some of the novel questions that may be addressed using eye-tracking in the context of goal-directed actions.
|
19
|
|
20
|
|
21
|
Farzin F, Rivera SM, Whitney D. Resolution of spatial and temporal visual attention in infants with fragile X syndrome. Brain 2011; 134:3355-68. [PMID: 22075522 PMCID: PMC3212718 DOI: 10.1093/brain/awr249] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2011] [Revised: 07/14/2011] [Accepted: 07/28/2011] [Indexed: 11/15/2022] Open
Abstract
Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal-parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual deficits related to fragile X syndrome. Eye tracking was used to psychophysically measure the limits of spatial and temporal attention in infants with fragile X syndrome and age-matched neurotypically developing infants. Results from these experiments revealed that infants with fragile X syndrome experience drastically reduced resolution of temporal attention in a genetic dose-sensitive manner, but have a spatial resolution of attention that is not impaired. Coarse temporal attention could have significant knock-on effects for the development of perceptual, cognitive and motor abilities in individuals with the disorder.
Affiliation(s)
- Faraz Farzin
- Department of Psychology, University of California, Davis, CA 95616, USA.
|
22
|
Abstract
Visual cognition, high-level vision, mid-level vision, and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world, and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events.
Affiliation(s)
- Patrick Cavanagh
- Centre Attention & Vision, LPP CNRS UMR 8158, Université Paris Descartes, Paris, France.
|
23
|
Whitney D, Levi DM. Visual crowding: a fundamental limit on conscious perception and object recognition. Trends Cogn Sci 2011; 15:160-8. [PMID: 21420894 DOI: 10.1016/j.tics.2011.02.005] [Citation(s) in RCA: 466] [Impact Index Per Article: 35.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2011] [Revised: 02/14/2011] [Accepted: 02/14/2011] [Indexed: 11/19/2022]
Abstract
Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.
Affiliation(s)
- David Whitney
- Department of Psychology, University of California, Berkeley, CA 94720-1650, USA
|
24
|
Oakes LM, Hurley KB, Ross-Sheehy S, Luck SJ. Developmental changes in infants' visual short-term memory for location. Cognition 2010; 118:293-305. [PMID: 21168832 DOI: 10.1016/j.cognition.2010.11.007] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2010] [Revised: 11/05/2010] [Accepted: 11/08/2010] [Indexed: 10/18/2022]
Abstract
To examine the development of visual short-term memory (VSTM) for location, we presented 6- to 12-month-old infants (N=199) with two side-by-side stimulus streams. In each stream, arrays of colored circles continually appeared, disappeared, and reappeared. In the changing stream, the location of one or more items changed in each cycle; in the non-changing stream, the locations did not change. Eight- and 12.5-month-old infants showed evidence of memory for multiple locations, whereas 6.5-month-old infants showed evidence of memory only for a single location, and only when that location was easily identified by salient landmarks. In the absence of such landmarks, 6.5-month-old infants showed evidence of memory for the overall configuration or shape. This developmental trajectory for spatial VSTM is similar to that previously observed for color VSTM. These results further show that infants' ability to detect changes in location depends on their developing sensitivity to spatial reference frames.
Affiliation(s)
- Lisa M Oakes
- Center for Mind and Brain, The University of California, Davis, California, United States.
|