1
Franchak JM, Adolph KE. An update of the development of motor behavior. Wiley Interdiscip Rev Cogn Sci 2024:e1682. [PMID: 38831670] [DOI: 10.1002/wcs.1682]
Abstract
This primer describes research on the development of motor behavior. We focus on infancy, when the basic action systems are acquired (posture, locomotion, manual actions, and facial actions), and we adopt a developmental systems perspective to understand the causes and consequences of developmental change. Experience facilitates improvements in motor behavior, and infants accumulate immense amounts of varied everyday experience with all the basic action systems. At every point in development, perception guides behavior by providing feedback about the results of just-prior movements and information about what to do next. Across development, new motor behaviors provide new inputs for perception. Thus, motor development opens up new opportunities for acquiring knowledge and acting on the world, instigating cascades of developmental change in perceptual, cognitive, and social domains. This article is categorized under: Cognitive Biology > Cognitive Development; Psychology > Motor Skill and Performance; Neuroscience > Development.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside, California, USA
- Karen E Adolph
- Department of Psychology, Center for Neural Science, New York University, New York, USA
2
Long B, Goodin S, Kachergis G, Marchman VA, Radwan SF, Sparks RZ, Xiang V, Zhuang C, Hsu O, Newman B, Yamins DLK, Frank MC. The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behav Res Methods 2024; 56:3523-3534. [PMID: 37656342] [DOI: 10.3758/s13428-023-02206-1]
Abstract
Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and across labs. Further, the video data captured by these cameras to date have been relatively low-resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented field of view that encompasses both children's interactions with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments. We also provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.
Affiliation(s)
- Bria Long
- Department of Psychology, Stanford University, Stanford, CA, USA.
- George Kachergis
- Department of Psychology, Stanford University, Stanford, CA, USA
- Samaher F Radwan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Graduate School of Education, Stanford University, Stanford, CA, USA
- Robert Z Sparks
- Department of Psychology, Stanford University, Stanford, CA, USA
- Violet Xiang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Chengxu Zhuang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Oliver Hsu
- Daylight Design, LLC, San Francisco, CA, USA
- Daniel L K Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Michael C Frank
- Department of Psychology, Stanford University, Stanford, CA, USA
3
Sun L, Francis DJ, Nagai Y, Yoshida H. Early development of saliency-driven attention through object manipulation. Acta Psychol (Amst) 2024; 243:104124. [PMID: 38232506] [DOI: 10.1016/j.actpsy.2024.104124]
Abstract
In the first years of life, infants progressively develop attention-selection skills to gather information from visually cluttered environments. Even as newborns, infants are sensitive to differences in color, orientation, and luminance, the components of visual saliency. However, we know little about how saliency-driven attention emerges and develops socially through everyday free-viewing experiences. The present work assessed changes in the saliency of infants' egocentric scenes and investigated the impact of manual engagement on infants' object looking in the interactive context of object play. Thirty parent-infant dyads, with infants in two age groups (younger: 3- to 6-month-olds; older: 9- to 12-month-olds), completed a brief session of object play. Infants' looking behaviors were recorded with head-mounted eye-tracking gear, and parents' and infants' manual actions on objects were annotated separately for analysis. The findings reveal distinct attention mechanisms underlying the hand-eye coordination between parents and infants and within infants during object play: younger infants were predominantly biased toward the visual saliency accompanying the parent's handling actions on the objects, whereas older infants gradually directed more attention to the object itself, regardless of its saliency in view, as they gained more self-generated manual actions. Taken together, the present work highlights the tight coordination between visual experience and sensorimotor competence and proposes a novel dyadic pathway to sustained attention, in which social sensitivity to parents' hands emerges through saliency-driven attention, preparing infants to focus on, follow, and steadily track moving targets in free-flowing viewing activities.
Affiliation(s)
- Lichao Sun
- Department of Psychology, University of Houston, TX, United States.
- David J Francis
- Texas Institute for Measurement, Evaluation, and Statistics, University of Houston, TX, United States.
- Yukie Nagai
- International Research Center for Neurointelligence, University of Tokyo, Tokyo, Japan.
- Hanako Yoshida
- Department of Psychology, University of Houston, TX, United States.
4
Mendez AH, Yu C, Smith LB. Controlling the input: How one-year-old infants sustain visual attention. Dev Sci 2024; 27:e13445. [PMID: 37665124] [DOI: 10.1111/desc.13445]
Abstract
Traditionally, the exogenous control of gaze by external saliencies and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top-down control over the vagaries of the input. Here we found that one-year-old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one-year-old infants (n = 45) wore head-mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well-documented salience, varied naturally with the infant's moment-to-moment posture and head movements. Sustained attention to an object was characterized by tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
Affiliation(s)
- Andres H Mendez
- CICEA, Universidad de la República, Montevideo, Uruguay
- Institut de Neurociencies, Universitat de Barcelona, Barcelona, Spain
- Chen Yu
- Department of Psychology, University of Texas, Austin, Texas, USA
- Linda B Smith
- Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
5
Wedasingha N, Samarasinghe P, Senevirathna L, Papandrea M, Puiatti A, Rankin D. Automated anomalous child repetitive head movement identification through transformer networks. Phys Eng Sci Med 2023; 46:1427-1445. [PMID: 37814077] [DOI: 10.1007/s13246-023-01309-5]
Abstract
The increasing prevalence of behavioral disorders in children is of growing concern within the medical community. Early identification of and intervention for atypical behaviors are widely recognised as pivotal to improving outcomes. Due to inadequate facilities and a shortage of medical professionals with specialized expertise, traditional diagnostic methods have been unable to keep pace with the rising incidence of behavioral disorders. Hence, there is a need for automated approaches to diagnosing behavioral disorders in children that overcome the challenges of traditional methods. The purpose of this study is to develop an automated model capable of analyzing videos to differentiate between typical and atypical repetitive head movements in children. To address the limited availability of child datasets, various learning methods are employed. In this work, we present a fusion of transformer networks and non-deterministic finite automata (NFA) techniques, which classifies a child's repetitive head movements as typical or atypical based on an analysis of gender, age, and type of repetitive head movement, along with the count, duration, and frequency of each repetitive head movement. Experiments with different transfer learning methods were carried out to enhance the performance of the model. The experimental results on five datasets (the NIR face dataset, the Bosphorus 3D face dataset, the ASD dataset, the SSBD dataset, and the Head Movements in the Wild dataset) indicate that our proposed model outperforms many state-of-the-art frameworks in distinguishing typical and atypical repetitive head movements in children.
Affiliation(s)
- Nushara Wedasingha
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka.
- Pradeepa Samarasinghe
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka
- Lasantha Senevirathna
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka
- Michela Papandrea
- Information Systems and Networking Institute (ISIN), University of Applied Sciences and Arts of Southern Switzerland, Via Pobiette, Manno, 6928, Switzerland
- Alessandro Puiatti
- Institute of Digital Technologies for Personalized Healthcare (MeDiTech), University of Applied Sciences and Arts of Southern Switzerland, Via Pobiette, Manno, 6928, Switzerland
- Debbie Rankin
- School of Computing, Engineering and Intelligent Systems, Ulster University, Northland Road, Derry-Londonderry, BT48 7JL, Northern Ireland, UK
6
Real-world statistics at two timescales and a mechanism for infant learning of object names. Proc Natl Acad Sci U S A 2022; 119:e2123239119. [PMID: 35482916] [PMCID: PMC9170168] [DOI: 10.1073/pnas.2123239119]
Abstract
Infants learn mappings between heard names and seen things before their first birthday and before they produce spoken language. Two challenges to explaining this early learning are the immaturity of infant memory systems and the infrequency of any individual object name in the heard language input. We quantified the frequency of visual referents, heard names, and the cooccurrences of referents and names in infant everyday experiences. We discovered statistical patterns at two timescales that align with a cortical mechanism of associative memory formation that supports the rapid formation of durable associative memories from very few experienced cooccurrences. Infants begin learning the visual referents of nouns before their first birthday. Despite considerable empirical and theoretical effort, little is known about the statistics of the experiences that enable infants to break into object–name learning. We used wearable sensors to collect infant experiences of visual objects and their heard names for 40 early-learned categories. The analyzed data were from one context that occurs multiple times a day and includes objects with early-learned names: mealtime. The statistics reveal two distinct timescales of experience. At the timescale of many mealtime episodes (n = 87), the visual categories were pervasively present, but naming of the objects in each of those categories was very rare. At the timescale of single mealtime episodes, names and referents did cooccur, but each name–referent pair appeared in very few of the mealtime episodes. The statistics are consistent with incremental learning of visual categories across many episodes and the rapid learning of name–object mappings within individual episodes. The two timescales are also consistent with a known cortical learning mechanism for one-episode learning of associations: new information, the heard name, is incorporated into well-established memories, the seen object category, when the new information cooccurs with the reactivation of that slowly established memory.
7
Perkovich E, Sun L, Mire S, Laakman A, Sakhuja U, Yoshida H. What children with and without ASD see: Similar visual experiences with different pathways through parental attention strategies. Autism Dev Lang Impair 2022; 7:23969415221137293. [PMID: 36518657] [PMCID: PMC9742584] [DOI: 10.1177/23969415221137293]
Abstract
BACKGROUND AND AIMS Although young children's gaze behaviors in experimental task contexts have been shown to be potential biobehavioral markers relevant to autism spectrum disorder (ASD), we know little about their everyday gaze behaviors. The present study aims (1) to document early gaze behaviors that occur within a live, social interactive context among children with and without ASD and their parents, and (2) to examine how children's and parents' gaze behaviors are related for ASD and typically developing (TD) groups. A head-mounted eye-tracking system was used to record the frequency and duration of a set of gaze behaviors (such as sustained attention [SA] and joint attention [JA]) that are relevant to early cognitive and language development. METHODS Twenty-six parent-child dyads (ASD group = 13, TD group = 13) participated. Children were between 3 and 8 years of age. We placed head-mounted eye trackers on parents and children to record their parent- and child-centered views, and we also recorded the interactive parent-child object play scene from both wall- and ceiling-mounted cameras. We then annotated the frequency and duration of gaze behaviors (saccades, fixation, SA, and JA) for different regions of interest (object, face, and hands), as well as attention shifting. Independent group t-tests and ANOVAs were used for group comparisons, and linear regression was used to test the predictiveness of parent gaze behaviors for JA. RESULTS The present study found no differences in visual experiences between children with and without ASD. Interestingly, however, significant group differences were found for parent gaze behaviors. Compared to parents of ASD children, parents of TD children focused on objects and shifted their attention between objects and their children's faces more. In contrast, parents of ASD children were more likely to shift their attention between their own hands and their children. JA experiences were also predicted differently, depending on the group: among parents of TD children, attention to objects predicted JA, but among parents of ASD children, attention to their children predicted JA. CONCLUSION Although no differences were found between the gaze behaviors of autistic and TD children in this study, there were significant group differences in parents' looking behaviors. This suggests potentially differential pathways for the scaffolding effect of parental gaze for ASD children compared with TD children. IMPLICATIONS The present study reveals the impact of everyday, socially interactive contexts on early visual experiences and points to potentially different pathways by which parental looking behaviors guide the looking behaviors of children with and without ASD. Identifying parental social input relevant to early attention development (e.g., JA) among autistic children has implications for mechanisms that could support socially mediated attention behaviors documented to facilitate early cognitive and language development, and for the development of parent-mediated interventions for young children with or at risk for ASD.
Note: This paper uses a combination of person-first and identity-first language, an intentional decision aligning with comments put forth by Vivanti (2020), recognizing the complexities of known and unknown preferences of those in the larger autism community.
Affiliation(s)
- Elizabeth Perkovich
- Department of Psychology, University of Houston, Houston, TX 77204, USA.
- Lichao Sun
- Department of Psychology, University of Houston, Houston, TX, USA
- Sarah Mire
- Educational Psychology Department, Baylor University, Waco, TX, USA
- Anna Laakman
- Department of Psychological Health and Learning Sciences, University of Houston, Houston, TX, USA
- Urvi Sakhuja
- Department of Psychology, University of Houston, Houston, TX, USA
- Hanako Yoshida
- Department of Psychology, University of Houston, Houston, TX, USA
8
The infant's view redefines the problem of referential uncertainty in early word learning. Proc Natl Acad Sci U S A 2021; 118:e2107019118. [PMID: 34933998] [PMCID: PMC8719889] [DOI: 10.1073/pnas.2107019118]
Abstract
The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent-infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.