701
Mahlberg M, Conklin K, Bisson MJ. Reading Dickens's characters: Employing psycholinguistic methods to investigate the cognitive reality of patterns in texts. Lang Lit (Harlow) 2014; 23:369-388. [PMID: 30262970] [PMCID: PMC5897890] [DOI: 10.1177/0963947014543887]
Abstract
This article reports the findings of an empirical study that uses eye-tracking and follow-up interviews to investigate how participants read body language clusters in novels by Charles Dickens. The study builds on previous corpus stylistic work that has identified patterns of body language presentation as techniques of characterisation in Dickens (Mahlberg, 2013). The article focuses on the reading of 'clusters', that is, repeated sequences of words. It is set in a research context that brings together observations from both corpus linguistics and psycholinguistics on the processing of repeated patterns. The results show that the body language clusters are read significantly faster than the overall sample extracts, which suggests that the clusters are stored as units in the brain. This finding is complemented by the results of the follow-up questions, which indicate that readers do not seem to refer to the clusters when talking about character information, although they are able to refer to clusters when biased prompts are used to elicit information. Beyond the specific results of the study, this article contributes to the development of complementary methods in literary stylistics and points to directions for further subclassifications of clusters that could not be achieved on the basis of corpus data alone.
702
Jones PR, Kalwarowsky S, Atkinson J, Braddick OJ, Nardini M. Automated measurement of resolution acuity in infants using remote eye-tracking. Invest Ophthalmol Vis Sci 2014; 55:8102-10. [PMID: 25352118] [DOI: 10.1167/iovs.14-15108]
Abstract
PURPOSE To validate a novel, automated test of infant resolution acuity based on remote eye-tracking. METHODS Infants aged 2 to 12 months were tested binocularly using a new adaptive computerized test of infant vision using eye tracking (ACTIVE), and Keeler infant acuity cards (KIAC). The ACTIVE test ran automatically, using remote eye-tracking to assess whether the infant fixated a black-and-white grating of variable spatial frequency. Test-retest reliability was assessed by performing each test twice. Accuracy was assessed by comparing acuity measures across tests and with established age-norms, and by comparing low-contrast acuity estimates in adults with data reported previously. RESULTS All infants completed the ACTIVE test at least once. Median test duration was 101 seconds. Measured visual acuity increased with age (P < 0.001), and 90% of mean acuity estimates were within previously published 90% tolerance limits (based on acuity-card age norms). Acuity estimates were also correlated, within-subjects, with results from the KIAC (P = 0.004). In terms of reliability, 86% of acuity estimates deviated by ≤1 octave, with no significant difference in test-retest reliability between the ACTIVE and KIAC procedures (P = 0.461). In adults, acuity estimates from the ACTIVE test did not differ significantly from values reported by previous authors (P > 0.183). CONCLUSIONS An adaptive computerized test of infant vision using eye-tracking provides a rapid, automated measure of resolution acuity in preverbal infants. The ACTIVE performed comparably to the current clinical gold standard (acuity cards) in terms of testability, reliability, and accuracy, and its principles can be extended to measure other visual functions.
Affiliation(s)
- Pete R Jones
- Institute of Ophthalmology, University College London (UCL), United Kingdom
- Sarah Kalwarowsky
- Institute of Ophthalmology, University College London (UCL), United Kingdom
- Janette Atkinson
- Department of Developmental Science, University College London (UCL), United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
- Oliver J Braddick
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Marko Nardini
- Institute of Ophthalmology, University College London (UCL), United Kingdom; Department of Psychology, Durham University, United Kingdom
703
Ganushchak LY, Konopka AE, Chen Y. What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Front Psychol 2014; 5:1124. [PMID: 25324820] [PMCID: PMC4183096] [DOI: 10.3389/fpsyg.2014.01124]
Abstract
This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard “What is happening here?” In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question could be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the time course of linguistic formulation in simple sentences and that these effects can be modulated by language-specific structures such as parallels in the syntax of questions and declarative sentences.
Affiliation(s)
- Lesya Y Ganushchak
- Leiden University Centre for Linguistics, Leiden, Netherlands; Education and Child Studies, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Yiya Chen
- Leiden University Centre for Linguistics, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
704
Tanaka T, Sugimoto M, Tanida Y, Saito S. The influences of working memory representations on long-range regression in text reading: an eye-tracking study. Front Hum Neurosci 2014; 8:765. [PMID: 25324760] [PMCID: PMC4179682] [DOI: 10.3389/fnhum.2014.00765]
Abstract
The present study investigated the relationship between verbal and visuospatial working memory (WM) capacity and long-range regression (i.e., word relocation) processes in reading. We analyzed eye movements during a "whodunit task", in which readers were asked to answer a content question while original text was being presented. The eye movements were more efficient in relocating a target word when the target was at recency positions within the text than when it was at primacy positions. Furthermore, both verbal and visuospatial WM capacity partly predicted the efficiency of the initial long-range regression. The results indicate that WM representations have a strong influence at the first stage of long-range regression by driving the first saccade movement toward the correct target position, suggesting that there is a dynamic interaction between internal WM representations and external actions during text reading.
Affiliation(s)
- Teppei Tanaka
- Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Japan
- Masashi Sugimoto
- Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Japan; The Japan Society for the Promotion of Science, Tokyo, Japan
- Yuki Tanida
- Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Japan
- Satoru Saito
- Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Japan
705
Debue N, van de Leemput C. What does germane load mean? An empirical contribution to the cognitive load theory. Front Psychol 2014; 5:1099. [PMID: 25324806] [PMCID: PMC4181236] [DOI: 10.3389/fpsyg.2014.01099]
Abstract
While much attention has been paid over recent decades to mental workload in the field of human-computer interaction, there is still a lack of consensus concerning the factors that generate it and the measurement methods that could reflect workload variations. Based on the multifactorial Cognitive Load Theory (CLT), our study aims to provide some food for thought about the subjective and objective measurements that can be used to disentangle intrinsic, extraneous, and germane load. The purpose is to provide insight into the way cognitive load can explain how users' cognitive resources are allocated in the use of hypermedia, such as an online newspaper. A two-phase experiment was conducted on information retention from online news stories. Phase 1 (92 participants) examined the influence of multimedia content on performance as well as the relationships between cognitive loads and cognitive absorption. In Phase 2 (36 participants), eye-tracking data were collected in order to provide reliable and objective measures. Results confirmed that performance in information retention was affected by the presence of multimedia content such as animations and pictures. The higher number of fixations on these animations suggests that they may have attracted users' attention. Results showed the expected opposite relationship between germane and extraneous load, a positive association between germane load and cognitive absorption, and a non-linear association between intrinsic and germane load. The trends based on eye-tracking data analysis provide some interesting findings about the relationship between longer fixations, shorter saccades, and cognitive load. Some issues are raised about the respective contributions of mean pupil diameter and the Index of Cognitive Activity.
Affiliation(s)
- Nicolas Debue
- Faculty of Psychological Science and Education, Research Center for Work and Consumer Psychology, Université Libre de Bruxelles, Brussels, Belgium
- National Fund for Scientific Research (FRS-FNRS), Brussels, Belgium
- Cécile van de Leemput
- Faculty of Psychological Science and Education, Research Center for Work and Consumer Psychology, Université Libre de Bruxelles, Brussels, Belgium
706
Abstract
The present study used eye-tracking technology to assess whether individuals who report chronic pain direct more attention to sensory pain-related words than do pain-free individuals. A total of 113 participants (51 with chronic pain, 62 pain-free) were recruited. Participants completed a dot-probe task, viewing neutral and sensory pain-related words while their reaction time and eye movements were recorded. Eye-tracking data were analyzed by mixed-design analysis of variance with group (chronic pain versus pain-free) as the between-subjects factor, and word type (sensory pain versus neutral) as the within-subjects factor. Results showed a significant main effect for word type: all participants attended to pain-related words more than neutral words on several eye-tracking parameters. The group main effect was significant for number of fixations, which was greater in the chronic pain group. Finally, the group by word type interaction effect was significant for average visit duration, number of fixations, and total late-phase duration, all greater for sensory pain versus neutral words in the chronic pain group. As well, participants with chronic pain fixated significantly more frequently on pain words than did pain-free participants. In contrast, none of the effects for reaction time were significant. The results support the hypothesis that individuals with chronic pain display specific attentional biases toward pain-related stimuli and demonstrate the value of eye-tracking technology in measuring differences in visual attention variables.
Affiliation(s)
- Joel Katz
- Department of Psychology, York University, Toronto, ON, Canada
707
Fujisawa TX, Tanaka S, Saito DN, Kosaka H, Tomoda A. Visual attention for social information and salivary oxytocin levels in preschool children with autism spectrum disorders: an eye-tracking study. Front Neurosci 2014; 8:295. [PMID: 25278829] [PMCID: PMC4166357] [DOI: 10.3389/fnins.2014.00295]
Abstract
This study was designed to ascertain the relationship between visual attention to social information and oxytocin (OT) levels in Japanese preschool children with autism spectrum disorder (ASD). We hypothesized that poor visual attention to social information and low OT levels are crucially important risk factors associated with ASD. We measured patterns of gaze fixation on social information using an eye-tracking system, and salivary OT levels by enzyme-linked immunosorbent assay (ELISA). There was a positive association between salivary OT levels and fixation duration on the indicated object area in a finger-pointing movie in typically developing (TD) children. However, no association was found between these variables in children with ASD. Moreover, age decreased attention to moving people and pointed-at objects, but increased attention to mouth-in-the-face recognition, geometric patterns, and biological motion. Thus, the relationship between OT levels and visual attention to social information appears to differ between TD children and those with ASD. Further, age has a considerable effect on visual attention to social information in preschool children.
Affiliation(s)
- Takashi X Fujisawa
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, University of Fukui, Fukui, Japan
- Shiho Tanaka
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan
- Daisuke N Saito
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, University of Fukui, Fukui, Japan; Biomedical Imaging Research Center, University of Fukui, Fukui, Japan
- Hirotaka Kosaka
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, University of Fukui, Fukui, Japan; Department of Neuropsychiatry, Faculty of Medical Sciences, University of Fukui, Fukui, Japan
- Akemi Tomoda
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, University of Fukui, Fukui, Japan
708
Marks KR, Roberts W, Stoops WW, Pike E, Fillmore MT, Rush CR. Fixation time is a sensitive measure of cocaine cue attentional bias. Addiction 2014; 109:1501-8. [PMID: 24894879] [PMCID: PMC4612370] [DOI: 10.1111/add.12635]
Abstract
BACKGROUND AND AIMS Attentional bias has been demonstrated for a variety of substances. Evidence suggests that fixation time is a more direct measure of attentional bias than response time. The aims of this experiment were to demonstrate that fixation time during the visual probe task is a sensitive and stable measure of cocaine cue attentional bias in cocaine-using adults compared to controls. DESIGN A between-subjects, repeated-measures experiment. SETTING An out-patient research unit. PARTICIPANTS Fifteen cocaine-using and 15 non-cocaine-using adults recruited from the community. MEASUREMENTS Participants completed a visual probe task with eye tracking and a modified Stroop task during two experimental sessions. FINDINGS A significant interaction between cue type and group (F = 13.5; P < 0.05) indicated that cocaine users, but not controls, displayed an attentional bias to cocaine-related images as measured by fixation time. There were no changes in the magnitude of attentional bias across sessions (F = 3.4; P > 0.05), and attentional bias correlated with self-reported lifetime cocaine use (r = 0.64, P < 0.05). Response time on the visual probe task (F = 1.1; P > 0.05) as well as on the modified Stroop task (F = 0.1; P > 0.05) failed to detect an attentional bias. CONCLUSIONS Fixation time on cocaine-related stimuli (propensity to remain focused on the stimulus) is a sensitive and stable measure of cocaine cue attentional bias in cocaine-using adults.
Affiliation(s)
- Katherine R. Marks
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044
- Walter Roberts
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044
- William W. Stoops
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044; University of Kentucky College of Medicine, Department of Behavioral Science, 140 Medical Behavioral Science Building, Lexington, KY 40536-0086
- Erika Pike
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044
- Mark T. Fillmore
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044
- Craig R. Rush
- University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044; University of Kentucky College of Medicine, Department of Behavioral Science, 140 Medical Behavioral Science Building, Lexington, KY 40536-0086; University of Kentucky College of Medicine, Department of Psychiatry, 3470 Blazer Parkway, Lexington, KY 40509. Address correspondence to: Craig R. Rush, University of Kentucky College of Arts and Sciences, Department of Psychology, 110 Kastle Hall, Lexington, KY 40506-0044. Telephone: +1 (859) 257-5388. Facsimile: +1 (859) 257-7684.
709
Clackson K, Heyer V. Reflexive anaphor resolution in spoken language comprehension: structural constraints and beyond. Front Psychol 2014; 5:904. [PMID: 25191290] [PMCID: PMC4137754] [DOI: 10.3389/fpsyg.2014.00904]
Abstract
We report results from an eye-tracking during listening study examining English-speaking adults’ online processing of reflexive pronouns, and specifically whether the search for an antecedent is restricted to syntactically appropriate positions. Participants listened to a short story where the recipient of an object was introduced with a reflexive, and were asked to identify the object recipient as quickly as possible. This allowed for the recording of participants’ offline interpretation of the reflexive, response times, and eye movements on hearing the reflexive. Whilst our offline results show that the ultimate interpretation for reflexives was constrained by binding principles, the response time, and eye-movement data revealed that during processing participants were temporarily distracted by a structurally inappropriate competitor antecedent when this was prominent in the discourse. These results indicate that in addition to binding principles, online referential decisions are also affected by discourse-level information.
Affiliation(s)
- Kaili Clackson
- Department of Language and Linguistics, University of Essex, Colchester, UK
- Vera Heyer
- Potsdam Research Institute for Multilingualism, University of Potsdam, Potsdam, Germany
710
Abstract
Real-time interpretation of pronouns is sometimes sensitive to the presence of grammatically illicit antecedents and sometimes not. This occasional sensitivity has been taken as evidence that structural constraints do not immediately impact the initial antecedent retrieval for pronoun interpretation. We argue that it is important to separate effects that reflect the initial antecedent retrieval process from those that reflect later processes. We present results from five reading comprehension experiments. Both the current results and previous evidence support the hypothesis that agreement features and structural constraints immediately constrain the antecedent retrieval process for pronoun interpretation. Occasional sensitivity to grammatically illicit antecedents may be due to repair processes triggered when the initial retrieval fails to return a grammatical antecedent.
Affiliation(s)
- Wing-Yee Chow
- Department of Linguistics, University of Maryland, College Park, MD, USA; Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Shevaun Lewis
- Department of Linguistics, University of Maryland, College Park, MD, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Colin Phillips
- Department of Linguistics, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
711
Dey JK, Ishii LE, Byrne PJ, Boahene KDO, Ishii M. Seeing is believing: objectively evaluating the impact of facial reanimation surgery on social perception. Laryngoscope 2014; 124:2489-97. [PMID: 24966145] [DOI: 10.1002/lary.24801]
Abstract
OBJECTIVES/HYPOTHESIS Objectively measure the ability of facial reanimation surgery to normalize the appearance of facial paralysis using eye-tracking technology. STUDY DESIGN Prospective randomized controlled experiment. METHODS An eye-tracker system was used to record the eye-movement patterns, called scanpaths, of 86 naïve observers gazing at pictures of paralyzed faces (House-Brackmann IV-VI), smiling and in repose; before and after facial reanimation surgery; as well as normal, nonparalyzed faces. Observers gazed at each face for 10 seconds. Fixation durations for all predefined facial areas of interest were analyzed using mixed-effects linear regression. RESULTS Observers spent the majority of time (6.6 of 10 seconds) gazing in the central triangle region (eyes, nose, and mouth) of normal faces and paralyzed faces. There were significant deviations in fixation within the central triangle of paralyzed faces as compared to normal faces. Total fixation on the eyes remained conserved. However, total nose fixation decreased and mouth fixation increased on paralyzed faces. Facial reanimation surgery normalized many of the hemifacial gaze asymmetries caused by unilateral facial paralysis, and restored a normal distribution of gaze between the functional and paralyzed sides of the face and mouth. CONCLUSION There were objective differences in the way observers directed their attention to facial features when viewing normal and paralyzed faces. After facial reanimation surgery, the attentional distraction caused by facial feature irregularities was reduced. These findings are important additions to the emerging body of objective evidence indicating the effectiveness of reanimation surgery; they also suggest opportunities to optimize reconstruction. LEVEL OF EVIDENCE N/A.
Affiliation(s)
- Jacob K Dey
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, U.S.A
712
Corbetta D, Thurman SL, Wiener RF, Guan Y, Williams JL. Mapping the feel of the arm with the sight of the object: on the embodied origins of infant reaching. Front Psychol 2014; 5:576. [PMID: 24966847] [PMCID: PMC4052117] [DOI: 10.3389/fpsyg.2014.00576]
Abstract
For decades, the emergence and progression of infant reaching was assumed to be largely under the control of vision. More recently, however, the guiding role of vision in the emergence of reaching has been downplayed. Studies found that young infants can reach in the dark without seeing their hand and that corrections in infants' initial hand trajectories are not the result of visual guidance of the hand, but rather the product of poor movement speed calibration to the goal. As a result, it has been proposed that learning to reach is an embodied process requiring infants to explore proprioceptively different movement solutions, before they can accurately map their actions onto the intended goal. Such an account, however, could still assume a preponderant (or prospective) role of vision, where the movement is being monitored with the aim of approximating a future goal location defined visually. At reach onset, it is unknown if infants map their action onto their vision, vision onto their action, or both. To examine how infants learn to map the feel of their hand with the sight of the object, we tracked the object-directed looking behavior (via eye-tracking) of three infants followed weekly over an 11-week period throughout the transition to reaching. We also examined where they contacted the object. We find that with some objects, infants do not learn to align their reach to where they look, but rather learn to align their look to where they reach. We propose that the emergence of reaching is the product of a deeply embodied process, in which infants first learn how to direct their movement in space using proprioceptive and haptic feedback from self-produced movement contingencies with the environment. As they do so, they learn to map visual attention onto these bodily centered experiences, not the reverse. We suggest that this early visuo-motor mapping is critical for the formation of visually-elicited, prospective movement control.
Affiliation(s)
- Daniela Corbetta
- Director, Infant Perception-Action Laboratory, Department of Psychology, The University of Tennessee, Knoxville, TN, USA
- Rebecca F. Wiener
- Department of Psychology, The University of Tennessee, Knoxville, TN, USA
- Yu Guan
- Department of Psychology, The University of Tennessee, Knoxville, TN, USA
713
Poellmann K, Mitterer H, McQueen JM. Use what you can: storage, abstraction processes, and perceptual adjustments help listeners recognize reduced forms. Front Psychol 2014; 5:437. [PMID: 24910622] [PMCID: PMC4038950] [DOI: 10.3389/fpsyg.2014.00437]
Abstract
Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, “book binder”) and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, “ready”), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 and 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
Affiliation(s)
- Katja Poellmann
- Language Comprehension Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, Netherlands
- Holger Mitterer
- Language Comprehension Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- James M McQueen
- Language Comprehension Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Behavioural Science Institute and Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
|
714
|
Yorzinski JL, Penkunas MJ, Platt ML, Coss RG. Dangerous animals capture and maintain attention in humans. Evol Psychol 2014; 12:534-48. [PMID: 25299991 PMCID: PMC10480850]
Abstract
Predation is a major source of natural selection on primates and may have shaped attentional processes that allow primates to rapidly detect dangerous animals. Because ancestral humans were subjected to predation, a process that continues at very low frequencies, we examined the visual processes by which men and women detect dangerous animals (snakes and lions). We recorded the eye movements of participants as they detected images of a dangerous animal (target) among arrays of nondangerous animals (distractors) as well as detected images of a nondangerous animal (target) among arrays of dangerous animals (distractors). We found that participants were quicker to locate targets when the targets were dangerous animals compared with nondangerous animals, even when spatial frequency and luminance were controlled. The participants were slower to locate nondangerous targets because they spent more time looking at dangerous distractors, a process known as delayed disengagement, and looked at a larger number of dangerous distractors. These results indicate that dangerous animals capture and maintain attention in humans, suggesting that historical predation has shaped some facets of visual orienting and its underlying neural architecture in modern humans.
Affiliation(s)
- Michael L. Platt
- Department of Psychology, University of California, Davis, CA, USA
- Richard G. Coss
- Department of Psychology, University of California, Davis, CA, USA
|
715
|
Corbett EA, Sachs NA, Körding KP, Perreault EJ. Multimodal decoding and congruent sensory information enhance reaching performance in subjects with cervical spinal cord injury. Front Neurosci 2014; 8:123. [PMID: 24904265 PMCID: PMC4033069 DOI: 10.3389/fnins.2014.00123]
Abstract
Cervical spinal cord injury (SCI) paralyzes muscles of the hand and arm, making it difficult to perform activities of daily living. Restoring the ability to reach can dramatically improve quality of life for people with cervical SCI. Any reaching system requires a user interface to decode parameters of an intended reach, such as trajectory and target. A challenge in developing such decoders is that often few physiological signals related to the intended reach remain under voluntary control, especially in patients with high cervical injuries. Furthermore, the decoding problem changes when the user is controlling the motion of their limb, as opposed to an external device. The purpose of this study was to investigate the benefits of combining disparate signal sources to control reach in people with a range of impairments, and to consider the effect of two feedback approaches. Subjects with cervical SCI performed robot-assisted reaching, controlling trajectories with either shoulder electromyograms (EMGs) or EMGs combined with gaze. We then evaluated how reaching performance was influenced by task-related sensory feedback, testing the EMG-only decoder in two conditions. The first involved moving the arm with the robot, providing congruent sensory feedback through their remaining sense of proprioception. In the second, the subjects moved the robot without the arm attached, as in applications that control external devices. We found that the multimodal-decoding algorithm worked well for all subjects, enabling them to perform straight, accurate reaches. The inclusion of gaze information, used to estimate target location, was especially important for the most impaired subjects. In the absence of gaze information, congruent sensory feedback improved performance. These results highlight the importance of proprioceptive feedback, and suggest that multi-modal decoders are likely to be most beneficial for highly impaired subjects and in tasks where such feedback is unavailable.
Affiliation(s)
- Elaine A. Corbett
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, VIC, Australia
- Nicholas A. Sachs
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Konrad P. Körding
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Eric J. Perreault
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
|
716
|
Borgi M, Cogliati-Dezza I, Brelsford V, Meints K, Cirulli F. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children. Front Psychol 2014; 5:411. [PMID: 24847305 PMCID: PMC4019884 DOI: 10.3389/fpsyg.2014.00411]
Abstract
The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs, and cats. We analyzed responses of 3- to 6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only in the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall, our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).
Affiliation(s)
- Marta Borgi
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
- Irene Cogliati-Dezza
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
- Francesca Cirulli
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
|
717
|
Gillespie-Smith K, Riby DM, Hancock PJB, Doherty-Sneddon G. Children with autism spectrum disorder (ASD) attend typically to faces and objects presented within their picture communication systems. J Intellect Disabil Res 2014; 58:459-470. [PMID: 23600472 DOI: 10.1111/jir.12043]
Abstract
BACKGROUND Children with autism spectrum disorder (ASD) may require interventions for communication difficulties. One type of intervention is picture communication symbols which are proposed to improve comprehension of linguistic input for children with ASD. However, atypical attention to faces and objects is widely reported across the autism spectrum for several types of stimuli. METHOD In this study we used eye-tracking methodology to explore fixation duration and time taken to fixate on the object and face areas within picture communication symbols. Twenty-one children with ASD were compared with typically developing matched groups. RESULTS Children with ASD were shown to have similar fixation patterns on face and object areas compared with typically developing matched groups. CONCLUSIONS It is proposed that children with ASD attend to the images in a manner that does not differentiate them from typically developing individuals. Therefore children with and without autism have the same opportunity to encode the available information. We discuss what this may imply for interventions using picture symbols.
Affiliation(s)
- K Gillespie-Smith
- Department of Psychology, School of Natural Sciences, University of Stirling, Stirling, Scotland, UK
|
718
|
Wang S, Tsuchiya N, New J, Hurlemann R, Adolphs R. Preferential attention to animals and people is independent of the amygdala. Soc Cogn Affect Neurosci 2014; 10:371-80. [PMID: 24795434 DOI: 10.1093/scan/nsu065]
Abstract
The amygdala is thought to play a critical role in detecting salient stimuli. Several studies have taken ecological approaches to investigating such saliency, and argue for domain-specific effects for processing certain natural stimulus categories, in particular faces and animals. Linking this to the amygdala, neurons in the human amygdala have been found to respond strongly to faces and also to animals. However, the amygdala's necessary role for such category-specific effects at the behavioral level remains untested. Here we tested four rare patients with bilateral amygdala lesions on an established change-detection protocol. Consistent with prior published studies, healthy controls showed reliably faster and more accurate detection of people and animals, as compared with artifacts and plants. So did all four amygdala patients: there were no differences in phenomenal change blindness, in behavioral reaction time to detect changes or in eye-tracking measures. The findings provide decisive evidence against a critical participation of the amygdala in rapid initial processing of attention to animate stimuli, suggesting that the necessary neural substrates for this phenomenon arise either in other subcortical structures (such as the pulvinar) or within the cortex itself.
Affiliation(s)
- Shuo Wang
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA, Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo 102-0076, Japan, School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia, Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA, Department of Psychology, Barnard College, Columbia University, New York, NY 10027, USA, and Department of Psychiatry, University of Bonn, 53105 Bonn, Germany
- Naotsugu Tsuchiya
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA, Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo 102-0076, Japan, School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia, Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA, Department of Psychology, Barnard College, Columbia University, New York, NY 10027, USA, and Department of Psychiatry, University of Bonn, 53105 Bonn, Germany
- Joshua New
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA, Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo 102-0076, Japan, School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia, Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA, Department of Psychology, Barnard College, Columbia University, New York, NY 10027, USA, and Department of Psychiatry, University of Bonn, 53105 Bonn, Germany
- Rene Hurlemann
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA, Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo 102-0076, Japan, School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia, Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA, Department of Psychology, Barnard College, Columbia University, New York, NY 10027, USA, and Department of Psychiatry, University of Bonn, 53105 Bonn, Germany
- Ralph Adolphs
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA, Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo 102-0076, Japan, School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia, Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA, Department of Psychology, Barnard College, Columbia University, New York, NY 10027, USA, and Department of Psychiatry, University of Bonn, 53105 Bonn, Germany
|
719
|
Vivanti G, Dissanayake C. Propensity to imitate in autism is not modulated by the model's gaze direction: an eye-tracking study. Autism Res 2014; 7:392-9. [PMID: 24740914 DOI: 10.1002/aur.1376]
Abstract
Individuals with Autism Spectrum Disorder (ASD) show a diminished propensity to imitate others' actions, as well as a diminished sensitivity and responsivity to others' communicative cues, such as a direct gaze. However, it is not known whether failure to appreciate the communicative value of a direct gaze is associated with imitation abnormalities in this population. In this eye-tracking study, we investigated how 25 preschoolers with ASD, compared with 25 developmental and chronological age-matched children, imitate actions that are associated with a model's direct gaze versus averted gaze. We found that the model's direct gaze immediately prior to the demonstration increased the attention to the model and the propensity to imitate the demonstrated action in children without ASD. In contrast, preschoolers with ASD showed a similar propensity to look at the model's face and to imitate the demonstrated actions across the direct gaze and the averted gaze conditions. These data indicate that atypical imitation in ASD might be linked to abnormal processing of the model's communicative signals (such as a direct gaze) that modulate imitative behaviours in individuals without ASD. Autism Res 2014, 7: 392-399. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Giacomo Vivanti
- Olga Tennison Autism Research Centre, School of Psychological Science, La Trobe University, Melbourne, Victoria; Victorian Autism Specific Early Learning and Care Centre: The Margot Prior Wing, La Trobe University, Melbourne, Victoria
|
720
|
Wu R, Tummeltshammer KS, Gliga T, Kirkham NZ. Ostensive signals support learning from novel attention cues during infancy. Front Psychol 2014; 5:251. [PMID: 24723902 PMCID: PMC3971204 DOI: 10.3389/fpsyg.2014.00251]
Abstract
Social attention cues (e.g., head turning, gaze direction) highlight which events young infants should attend to in a busy environment and, recently, have been shown to shape infants' likelihood of learning about objects and events. Although studies have documented which social cues guide attention and learning during early infancy, few have investigated how infants learn to learn from attention cues. Ostensive signals, such as a face addressing the infant, often precede social attention cues. Therefore, it is possible that infants can use ostensive signals to learn from other novel attention cues. In this training study, 8-month-olds were cued to the location of an event by a novel non-social attention cue (i.e., flashing square) that was preceded by an ostensive signal (i.e., a face addressing the infant). At test, infants predicted the appearance of specific multimodal events cued by the flashing squares, which were previously shown to guide attention to but not inform specific predictions about the multimodal events (Wu and Kirkham, 2010). Importantly, during the generalization phase, the attention cue continued to guide learning of these events in the absence of the ostensive signal. Subsequent experiments showed that learning was less successful when the ostensive signal was absent even if an interesting but non-ostensive social stimulus preceded the same cued events.
Affiliation(s)
- Rachel Wu
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Kristen S. Tummeltshammer
- Department of Psychological Sciences, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Teodora Gliga
- Department of Psychological Sciences, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Natasha Z. Kirkham
- Department of Psychological Sciences, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
|
721
|
Abstract
Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.
Affiliation(s)
- Brenda M. Stoesz
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Lorna S. Jakobson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
|
722
|
Molitor RJ, Ko PC, Hussey EP, Ally BA. Memory-related eye movements challenge behavioral measures of pattern completion and pattern separation. Hippocampus 2014; 24:666-72. [PMID: 24493460 DOI: 10.1002/hipo.22256]
Abstract
The hippocampus creates distinct episodes from highly similar events through a process called pattern separation and can retrieve memories from partial or degraded cues through a process called pattern completion. These processes have been studied in humans using tasks where participants must distinguish studied items from perceptually similar lure items. False alarms to lures (incorrectly reporting a perceptually similar item as previously studied) are thought to reflect pattern completion, a retrieval-based process. However, false alarms to lures could also result from insufficient encoding of studied items, leading to impoverished memory of item details and a failure to correctly reject lures. The current study investigated the source of lure false alarms by comparing eye movements during the initial presentation of items to eye movements made during the later presentation of item repetitions and similar lures in order to assess mnemonic processing at encoding and retrieval, respectively. Relative to other response types, lure false alarms were associated with fewer fixations to the initially studied items, suggesting that false alarms result from impoverished encoding. Additionally, lure correct rejections and lure false alarms garnered more fixations than hits, denoting additional retrieval-related processing. The results suggest that measures of pattern separation and completion in behavioral paradigms are not process-pure.
Affiliation(s)
- Robert J Molitor
- Department of Neurology, Vanderbilt University Medical Center, Nashville, Tennessee
|
723
|
Alley S, Jennings C, Persaud N, Plotnikoff RC, Horsley M, Vandelanotte C. Do personally tailored videos in a web-based physical activity intervention lead to higher attention and recall? - an eye-tracking study. Front Public Health 2014; 2:13. [PMID: 24575398 PMCID: PMC3921670 DOI: 10.3389/fpubh.2014.00013]
Abstract
Over half of the Australian population does not meet physical activity guidelines and has an increased risk of chronic disease. Web-based physical activity interventions have the potential to reach large numbers of the population at low cost; however, issues have been identified with usage and participant retention. Personalized (computer-tailored) physical activity advice delivered through video has the potential to address low engagement, but it is unclear whether it is more effective in engaging participants than text-delivered personalized advice. This study compared the attention and recall outcomes of tailored physical activity advice in video- vs. text-format. Participants (n = 41) were randomly assigned to receive either video- or text-tailored feedback with identical content. Outcome measures included attention to the feedback, measured through advanced eye-tracking technology (Tobii X120), and recall of the advice, measured through a post-intervention interview. Between-group ANOVAs, Mann-Whitney U tests, and chi-square analyses were applied. Participants in the video group displayed greater attention to the physical activity feedback in terms of gaze duration on the feedback (7.7 vs. 3.6 min, p < 0.001), total fixation duration on the feedback (6.0 vs. 3.3 min, p < 0.001), and focusing on feedback (6.8 vs. 3.5 min, p < 0.001). Despite both groups having the same ability to navigate through the feedback, the video group completed a significantly (p < 0.001) higher percentage of feedback sections (95%) compared to the text group (66%). The main messages were recalled in both groups, but many details were forgotten. No significant between-group differences were found for message recall. These results suggest that video-tailored feedback leads to greater attention compared to text-tailored feedback. More research is needed to determine how message recall can be improved, and whether video-tailored advice can lead to greater health behavior change.
Affiliation(s)
- Stephanie Alley
- Centre for Physical Activity Studies, Institute for Health and Social Science Research, Central Queensland University, Rockhampton, QLD, Australia
- Cally Jennings
- Faculty of Physical Education and Recreation, University of Alberta, Edmonton, AB, Canada
- Nayadin Persaud
- Learning and Teaching Education Research Centre, Central Queensland University, Noosa, QLD, Australia
- Ronald C Plotnikoff
- Priority Research Centre for Physical Activity and Nutrition, University of Newcastle, Newcastle, NSW, Australia
- Mike Horsley
- Learning and Teaching Education Research Centre, Central Queensland University, Noosa, QLD, Australia
- Corneel Vandelanotte
- Centre for Physical Activity Studies, Institute for Health and Social Science Research, Central Queensland University, Rockhampton, QLD, Australia
|
724
|
Shic F, Macari S, Chawarska K. Speech disturbs face scanning in 6-month-old infants who develop autism spectrum disorder. Biol Psychiatry 2014; 75:231-7. [PMID: 23954107 PMCID: PMC3864607 DOI: 10.1016/j.biopsych.2013.07.009]
Abstract
BACKGROUND From birth, infants show a preference for the faces, gaze, and voices of others. In individuals with autism spectrum disorders (ASDs) these biases seem to be disturbed. The source of these disturbances is not well-understood, but recent efforts have shown that the spontaneous deployment of attention to social targets might be atypical as early as 6 months of age. The nature of this atypical behavior and the conditions under which it arises are currently unknown. METHODS We used eye-tracking to examine the gaze patterns of 6-month-old infants (n = 99) at high risk (n = 57) and low risk (n = 42) for developing ASD as they viewed faces that were: 1) still; 2) moving and expressing positive affect; or 3) speaking. Clinical outcomes were determined through a comprehensive assessment at the age of 3 years. The scanning patterns of infants later diagnosed with ASD were compared with infants without an ASD outcome. RESULTS Infants who later developed ASD spent less time looking at the presented scenes in general than other infants. When these infants looked at faces, their looking toward the inner features of faces decreased compared with the other groups only when the presented face was speaking. CONCLUSIONS Our study suggests that infants later diagnosed with ASD have difficulties regulating attention to complex social scenes. It also suggests that the presence of speech might uniquely disturb the attention of infants who later develop ASD at a critical developmental point when other infants are acquiring language and learning about their social world.
Affiliation(s)
- Frederick Shic
- Yale Child Study Center, Yale University School of Medicine, New Haven, Connecticut
- Suzanne Macari
- Yale Child Study Center, Yale University School of Medicine, 230 S Frontage Rd, New Haven, CT 06520
- Katarzyna Chawarska
- Yale Child Study Center, Yale University School of Medicine, 230 S Frontage Rd, New Haven, CT 06520
|
725
|
Campbell DJ, Shic F, Macari S, Chawarska K. Gaze response to dyadic bids at 2 years related to outcomes at 3 years in autism spectrum disorders: a subtyping analysis. J Autism Dev Disord 2014; 44:431-42. [PMID: 23877749 PMCID: PMC3900601 DOI: 10.1007/s10803-013-1885-9]
Abstract
Variability in attention towards direct gaze and child-directed speech may contribute to heterogeneity of clinical presentation in toddlers with autism spectrum disorders (ASD). To evaluate this hypothesis, we clustered sixty-five 20-month-old toddlers with ASD based on their visual responses to dyadic cues for engagement, identifying three subgroups. Subsequently, we compared social, language, and adaptive functioning of these subgroups at 3 years of age. The cluster displaying limited attention to social scenes in general exhibited poor outcome at 3 years; the cluster displaying good attention to the scene and to the speaker's mouth was verbal and high functioning at 3 years. Analysis of visual responses to dyadic cues may provide a clinically meaningful approach to identifying early predictors of outcome.
Collapse
Affiliation(s)
- Daniel J Campbell
- Yale Child Study Center, Yale University School of Medicine, 40 Temple St, Suite 7D, New Haven, CT, 06510, USA
Collapse
|
726
|
Xiao WS, Quinn PC, Pascalis O, Lee K. Own- and other-race face scanning in infants: implications for perceptual narrowing. Dev Psychobiol 2014; 56:262-73. [PMID: 24415549 DOI: 10.1002/dev.21196] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2013] [Accepted: 12/12/2013] [Indexed: 11/10/2022]
Abstract
The present study investigated how 6- and 9-month-old Caucasian infants scan Caucasian and Chinese dynamic faces using eye-tracking methodology. Analyses of looking times revealed that with increased age, infants decreased their looking time to other-race noses, while maintaining their looking time for own-race noses. From 6 to 9 months, infants increased their looking time for the eyes of both races of faces. Analyses of scan paths showed that infants were no more likely to shift their fixation between the eyes of own-race faces than other-race faces. Similarity between participants' scan paths suggested that facial information was collected more efficiently for own- versus other-race faces at 9 months of age. Combined with previous eye-tracking studies of infants' face scanning (Liu et al. [2011] Journal of Experimental Child Psychology, 108, 180-189; Wheeler et al. [2011] PLoS ONE, 6, e18621. doi: 10.1371/journal.pone.0018621; Xiao et al. [2013] International Journal of Behavioral Development, 37, 100-105), the findings are interpreted in the context of perceptual narrowing and suggest differential contributions of visual experience, facial physiognomy, and culture in accounting for similarity and difference in infants' scanning of own- and other-race faces.
Collapse
Affiliation(s)
- Wen S Xiao
- Institute of Child Study, University of Toronto, 45 Walmer Road, Toronto, Ontario, Canada, M5R 2X2
Collapse
|
727
|
Burriss RP, Marcinkowska UM, Lyons MT. Gaze properties of women judging the attractiveness of masculine and feminine male faces. Evol Psychol 2014; 12:19-35. [PMID: 24401278 PMCID: PMC10426981 DOI: 10.1556/jep.12.2014.1.2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2013] [Accepted: 12/03/2013] [Indexed: 11/19/2022] Open
Abstract
Most studies of female facial masculinity preference have relied upon self-reported preference, with participants selecting or rating the attractiveness of faces that differ in masculinity. However, researchers have not established a consensus as to whether women's general preference is for male faces that are masculine or feminine, and several studies have indicated that women prefer neither. We investigated women's preferences for male facial masculinity using standard two-alternative forced choice (2AFC) preference trials, paired with eye-tracking measures, to determine whether conscious and non-conscious measures of preference yield similar results. We found that women expressed a preference for, gazed longer at, and fixated more frequently on feminized male faces. We also found effects of relationship status, relationship context (whether faces are judged for attractiveness as a long- or short-term partner), and hormonal contraceptive use. These results support previous findings that women express a preference for feminized over masculinized male faces, demonstrate that non-conscious measures of preference for this trait echo consciously expressed preferences, and suggest that certain aspects of the preference decision-making process may be better captured by eye tracking than by 2AFC preference trials.
Collapse
Affiliation(s)
- Robert P. Burriss
- Department of Psychology, Northumbria University, Newcastle upon Tyne, UK
| | | | - Minna T. Lyons
- Department of Psychology, Liverpool Hope University, Liverpool, UK
Collapse
|
728
|
Abstract
Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to AOI first hit are often used as evaluation parameters. Here we present two additional measures: (1) "scan path entropy" to quantify gaze guidance, and (2) the "arrow plot" to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and the time required for 50% of the observers to first hit each AOI (T50). The new measures were tested in an eye-tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure of gaze guidance, and the new visualization method reveals aspects of the average scan path and gives a better indication of the order in which global scanning takes place.
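As a rough illustration of the first measure, scan path entropy can be computed from a string-encoded scan path as the Shannon entropy of its AOI label distribution. This is only a minimal sketch under that assumption; the exact string encoding and any normalization used in the study may differ.

```python
from collections import Counter
import math

def scan_path_entropy(scan_path: str) -> float:
    """Shannon entropy (in bits) of the AOI label distribution of a
    string-encoded scan path, e.g. "AABCA" for fixations on AOIs A, B, C.
    Lower values indicate stronger gaze guidance (few AOIs dominate)."""
    counts = Counter(scan_path)
    total = len(scan_path)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Every fixation on a single AOI gives 0 bits; an even spread over
# four AOIs gives 2 bits.
entropy_guided = scan_path_entropy("AAAA")
entropy_diffuse = scan_path_entropy("ABCD")
```

A well-guided advertisement would thus yield a lower entropy than one that scatters gaze evenly across its regions.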
Collapse
Affiliation(s)
- Ignace Hooge
- Department of Experimental Psychology, Faculty of Social Sciences, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
| | - Guido Camps
- Department of Experimental Psychology, Faculty of Social Sciences, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
Collapse
|
729
|
Abstract
Facial appearance in humans is associated with attraction and mate choice. Numerous studies have identified that adults display directional preferences for certain facial traits including symmetry, averageness, and sexually dimorphic traits. Typically, studies measuring human preference for these traits examine declared (e.g., choice or ratings of attractiveness) or visual preferences (e.g., looking time) of participants. However, the extent to which visual and declared preferences correspond remains relatively untested. In order to evaluate the relationship between these measures we examined visual and declared preferences displayed by men and women for opposite-sex faces manipulated across three dimensions (symmetry, averageness, and masculinity) and compared preferences from each method. Results indicated that participants displayed significant visual and declared preferences for symmetrical, average, and appropriately sexually dimorphic faces. We also found that declared and visual preferences correlated weakly but significantly. These data indicate that visual and declared preferences for manipulated facial stimuli produce similar directional preferences across participants and are also correlated with one another within participants. Both methods therefore may be considered appropriate to measure human preferences. However, while both methods appear likely to generate similar patterns of preference at the sample level, the weak nature of the correlation between visual and declared preferences in our data suggests some caution in assuming visual preferences are the same as declared preferences at the individual level. Because there are positive and negative factors in both methods for measuring preference, we suggest that a combined approach is most useful in outlining population level preferences for traits.
Collapse
|
730
|
Abstract
Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.
Collapse
Affiliation(s)
- Lars Schulze
- Department of Educational Sciences and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Babette Renneberg
- Department of Educational Sciences and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Janek S Lobmaier
- Institute of Psychology, University of Bern, Bern, Switzerland; Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
Collapse
|
731
|
Tager-Flusberg H, Kasari C. Minimally verbal school-aged children with autism spectrum disorder: the neglected end of the spectrum. Autism Res 2013; 6:468-78. [PMID: 24124067 PMCID: PMC3869868 DOI: 10.1002/aur.1329] [Citation(s) in RCA: 377] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2012] [Accepted: 08/07/2013] [Indexed: 01/19/2023]
Abstract
It is currently estimated that about 30% of children with autism spectrum disorder remain minimally verbal, even after receiving years of interventions and a range of educational opportunities. Very little is known about the individuals at this end of the autism spectrum, in part because this is a highly variable population with no single set of defining characteristics or patterns of skills or deficits, and in part because it is extremely challenging to provide reliable or valid assessments of their developmental functioning. In this paper, we summarize current knowledge based on research involving minimally verbal children. We review promising novel methods for assessing the verbal and nonverbal abilities of minimally verbal school-aged children, including eye-tracking and brain-imaging methods that do not require overt responses. We then review what is known about interventions that may be effective in improving language and communication skills, including discussion of both nonaugmentative and augmentative methods. In the final section of the paper, we discuss the gaps in the literature and needs for future research.
Collapse
|
732
|
Magrelli S, Jermann P, Noris B, Ansermet F, Hentsch F, Nadel J, Billard A. Social orienting of children with autism to facial expressions and speech: a study with a wearable eye-tracker in naturalistic settings. Front Psychol 2013; 4:840. [PMID: 24312064 PMCID: PMC3834245 DOI: 10.3389/fpsyg.2013.00840] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2013] [Accepted: 10/22/2013] [Indexed: 12/27/2022] Open
Abstract
This study investigates attention orienting to social stimuli in children with Autism Spectrum Conditions (ASC) during dyadic social interactions taking place in real-life settings. We study the effect of social cues that differ in complexity and distinguish between social cues produced by facial expressions of emotion and those produced during speech. We record the children's gazes using a head-mounted eye-tracking device and report on a detailed and quantitative analysis of the motion of the gaze in response to the social cues. The study encompasses a group of children with ASC from 2 to 11 years old (n = 14) and a group of typically developing (TD) children (n = 17) between 3 and 6 years old. While the two groups orient overtly to facial expressions, children with ASC do so to a lesser extent. Children with ASC differ importantly from TD children in the way they respond to speech cues, displaying little overt shifting of attention to speaking faces. When children with ASC orient to facial expressions, they show reaction times and first fixation lengths similar to those presented by TD children. However, children with ASC orient to speaking faces more slowly than TD children. These results support the hypothesis that individuals affected by ASC have difficulties processing complex social sounds and detecting intermodal correspondence between facial and vocal information. They also corroborate evidence that people with ASC show reduced overt attention toward social stimuli.
Collapse
Affiliation(s)
- Silvia Magrelli
- Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Patrick Jermann
- Center for Digital Education, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Basilio Noris
- Service of Child and Adolescent Psychiatry, Department of Child and Adolescent Medicine, Hôpitaux Universitaires de Genève, Genève, Switzerland
| | - François Ansermet
- Service of Child and Adolescent Psychiatry, Department of Child and Adolescent Medicine, Hôpitaux Universitaires de Genève, Genève, Switzerland
| | - François Hentsch
- Emotion Center, CNRS and the Université Pierre et Marie Curie, Pitié-Salpêtrière, Paris, France.
| | - Jacqueline Nadel
- Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Aude Billard
- Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
Collapse
|
733
|
Abstract
Human factors play a significant part in clinical error. Situational awareness (SA) means being aware of one’s surroundings, comprehending the present situation, and being able to predict outcomes. It is a key human skill that, when properly applied, is associated with reduced medical error. Eye-tracking technology can be used to provide an objective and qualitative measure of the initial perception component of SA. Feedback from eye-tracking technology can be used to improve the understanding and teaching of SA in clinical contexts and, consequently, has potential for reducing clinician error and the concomitant adverse events.
Collapse
Affiliation(s)
- Brett Williams
- Department of Community Emergency Health and Paramedic Practice, Frankston, VIC, Australia
| | - Andrew Quested
- Department of Community Emergency Health and Paramedic Practice, Frankston, VIC, Australia
| | - Simon Cooper
- School of Nursing and Midwifery, Berwick, Monash University, Frankston, VIC, Australia
Collapse
|
734
|
Oakes LM, Baumgartner HA, Barrett FS, Messenger IM, Luck SJ. Developmental changes in visual short-term memory in infancy: evidence from eye-tracking. Front Psychol 2013; 4:697. [PMID: 24106485 PMCID: PMC3788337 DOI: 10.3389/fpsyg.2013.00697] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 09/13/2013] [Indexed: 12/18/2022] Open
Abstract
We assessed visual short-term memory (VSTM) for color in 6- and 8-month-old infants (n = 76) using a one-shot change detection task. In this task, a sample array of two colored squares was visible for 517 ms, followed by a 317-ms retention period and then a 3000-ms test array consisting of one unchanged item and one item in a new color. We tracked gaze at 60 Hz while infants looked at the changed and unchanged items during test. When the two sample items were different colors (Experiment 1), 8-month-old infants exhibited a preference for the changed item, indicating memory for the colors, but 6-month-olds exhibited no evidence of memory. When the two sample items were the same color and did not need to be encoded as separate objects (Experiment 2), 6-month-old infants demonstrated memory. These results show that infants can encode information in VSTM in a single, brief exposure that simulates the timing of a single fixation period in natural scene viewing, and they reveal rapid developmental changes between 6 and 8 months in the ability to store individuated items in VSTM.
Collapse
Affiliation(s)
- Lisa M Oakes
- Department of Psychology, Center for Mind and Brain, University of California, Davis, Davis, CA, USA
Collapse
|
735
|
Woods HC, Scheepers C, Ross KA, Espie CA, Biello SM. What are you looking at? Moving toward an attentional timeline in insomnia: a novel semantic eye tracking study. Sleep 2013; 36:1491-9. [PMID: 24082308 DOI: 10.5665/sleep.3042] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
STUDY OBJECTIVES To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology which is novel to sleep research to obtain a continual record of where the eyes, and therefore attention, are being allocated with regard to sleep and neutral stimuli. DESIGN A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep-positive, sleep-negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS Sleep group classification was determined by screening interview and PSQI score (> 8 = insomnia, < 3 = good sleeper). MEASUREMENTS AND RESULTS Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good sleeper controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups and the largest effect sizes seen in the insomnia group. CONCLUSIONS This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep-related, particularly negative, stimuli, and is suggestive of a neurocognitive deficit in line with recent research.
Collapse
Affiliation(s)
- Heather Cleland Woods
- School of Psychology, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
Collapse
|
736
|
van Viersen S, Slot EM, Kroesbergen EH, Van't Noordende JE, Leseman PPM. The added value of eye-tracking in diagnosing dyscalculia: a case study. Front Psychol 2013; 4:679. [PMID: 24098294 PMCID: PMC3787405 DOI: 10.3389/fpsyg.2013.00679] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Accepted: 09/09/2013] [Indexed: 11/13/2022] Open
Abstract
The present study compared eye movements and performance of a 9-year-old girl with Developmental Dyscalculia (DD) on a series of number line tasks to those of a group of typically developing (TD) children (n = 10), in order to answer the question whether eye-tracking data from number line estimation tasks can be a useful tool to discriminate between TD children and children with a number processing deficit. Quantitative results indicated that the child with dyscalculia performed worse on all symbolic number line tasks compared to the control group, indicated by a low linear fit (R²) and a low accuracy measured by mean percent absolute error. In contrast to the control group, her magnitude representations seemed to be better represented by a logarithmic than a linear fit. Furthermore, qualitative analyses on the data of the child with dyscalculia revealed more unidentifiable fixation patterns in the processing of multi-digit numbers and more dysfunctional estimation strategy use in one third of the estimation trials as opposed to ~10% in the control group. In line with her dyscalculia diagnosis, these results confirm the difficulties with spatially representing and manipulating numerosities on a number line, resulting in inflexible and inadequate estimation or processing strategies. It can be concluded from this case study that eye-tracking data can be used to discern different number processing and estimation strategies in TD children and children with a number processing deficit. Hence, eye-tracking data in combination with number line estimation tasks might be a valuable and promising addition to current diagnostic measures.
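The linear-versus-logarithmic comparison described above can be sketched with ordinary least squares: fit the estimates once against the target numbers and once against their logarithms, then compare R². The targets and estimates below are invented for illustration, and the study's actual fitting procedure may differ in detail.

```python
import math

def ols_r2(xs, ys):
    """Least-squares fit of ys ~ a + b*xs; returns (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

def compare_number_line_fits(targets, estimates):
    """R^2 of linear vs. logarithmic models of 0-100 number line estimates,
    plus mean absolute error (= percent absolute error on a 0-100 line)."""
    _, _, r2_lin = ols_r2(targets, estimates)
    _, _, r2_log = ols_r2([math.log(t) for t in targets], estimates)
    pae = sum(abs(e - t) for t, e in zip(targets, estimates)) / len(targets)
    return r2_lin, r2_log, pae

# Invented estimates that overshoot small numbers, i.e. the log-like
# pattern reported for the child with dyscalculia:
targets = [2, 5, 10, 25, 50, 75, 90]
estimates = [15, 30, 44, 60, 73, 82, 87]
r2_lin, r2_log, pae = compare_number_line_fits(targets, estimates)
```

For this hypothetical child, the logarithmic model fits better than the linear one (r2_log > r2_lin), mirroring the diagnostic pattern the study describes.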
Collapse
Affiliation(s)
- Sietske van Viersen
- Department of Cognitive and Motor Disabilities, Utrecht University, Utrecht, Netherlands
Collapse
|
737
|
Abstract
Although foreign accents can be highly dissimilar to native speech, existing research suggests that listeners readily adapt to foreign accents after minimal exposure. However, listeners often report difficulty understanding non-native accents, and the time-course and specificity of adaptation remain unclear. Across five experiments, we examined whether listeners could use a newly learned feature of a foreign accent to eliminate lexical competitors during online speech perception. Participants heard the speech of a native English speaker and a native speaker of Québec French who, in English, pronounces /i/ as [ɪ] (e.g., weak as wick) before all consonants except voiced fricatives. We examined whether listeners could learn to eliminate a shifted /i/-competitor (e.g., weak) when hearing the accented talker produce an unshifted word (e.g., wheeze). In four experiments, adaptation was strikingly limited, though improvement across the course of the experiment and with stimulus variations indicates that learning was possible. In a fifth experiment, adaptation was not improved when a native English talker produced the critical vowel shift, demonstrating that the limitation is not simply due to the fact that the accented talker was non-native. These findings suggest that although listeners can arrive at the correct interpretation of a foreign accent, this process can pose significant difficulty.
Collapse
Affiliation(s)
- Alison M. Trude
- Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign
| | | | - Sarah Brown-Schmidt
- Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign
Collapse
|
738
|
Abstract
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
Collapse
Affiliation(s)
- Joseph C. Toscano
- Beckman Institute, University of Illinois at Urbana-Champaign, 405 N Mathews Ave, Urbana, IL 61801, USA
| | - Nathaniel D. Anderson
- Beckman Institute, University of Illinois at Urbana-Champaign, 405 N Mathews Ave, Urbana, IL 61801, USA
- Dept. of Psychology, University of Illinois at Urbana-Champaign, 603 E Daniel St, Champaign, IL 61820, USA
| | - Bob McMurray
- Dept. of Psychology, University of Iowa, E11 Seashore Hall, Iowa City, IA 52242, USA
- Dept. of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, IA 52242, USA
Collapse
|
739
|
Smith NA, Gibilisco CR, Meisinger RE, Hankey M. Asymmetry in infants' selective attention to facial features during visual processing of infant-directed speech. Front Psychol 2013; 4:601. [PMID: 24062705 PMCID: PMC3769626 DOI: 10.3389/fpsyg.2013.00601] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2013] [Accepted: 08/19/2013] [Indexed: 11/25/2022] Open
Abstract
Two experiments used eye tracking to examine how infant and adult observers distribute their eye gaze on videos of a mother producing infant- and adult-directed speech. Both groups showed greater attention to the eyes than to the nose and mouth, as well as an asymmetrical focus on the talker's right eye for infant-directed speech stimuli. Observers continued to look more at the talker's apparent right eye when the video stimuli were mirror flipped, suggesting that the asymmetry reflects a perceptual processing bias rather than a stimulus artifact, which may be related to cerebral lateralization of emotion processing.
Collapse
Affiliation(s)
- Nicholas A. Smith
- Perceptual Development Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
Collapse
|
740
|
Chawarska K, Macari S, Shic F. Decreased spontaneous attention to social scenes in 6-month-old infants later diagnosed with autism spectrum disorders. Biol Psychiatry 2013; 74:195-203. [PMID: 23313640 PMCID: PMC3646074 DOI: 10.1016/j.biopsych.2012.11.022] [Citation(s) in RCA: 344] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/09/2012] [Revised: 11/15/2012] [Accepted: 11/18/2012] [Indexed: 12/20/2022]
Abstract
BACKGROUND The ability to spontaneously attend to the social overtures and activities of others is essential for the development of social cognition and communication. This ability is critically impaired in toddlers with autism spectrum disorders (ASD); however, it is not clear if prodromal symptoms in this area are already present in the first year of life of those affected by the disorder. METHODS To examine whether 6-month-old infants later diagnosed with ASD exhibit atypical spontaneous social monitoring skills, visual responses of 67 infants at high-risk and 50 at low-risk for ASD were studied using an eye-tracking task. Based on their clinical presentation in the third year, infants were divided into those with ASD, those exhibiting atypical development, and those developing typically. RESULTS Compared with the control groups, 6-month-old infants later diagnosed with ASD attended less to the social scene, and when they did look at the scene, they spent less time monitoring the actress in general and her face in particular. Limited attention to the actress and her activities was not accompanied by enhanced attention to objects. CONCLUSIONS Prodromal symptoms of ASD at 6 months include a diminished ability to attend spontaneously to people and their activities. A limited attentional bias toward people early in development is likely to have a detrimental impact on the specialization of social brain networks and the emergence of social interaction patterns. Further investigation into its underlying mechanisms and role in psychopathology of ASD in the first year is warranted.
Collapse
Affiliation(s)
- Katarzyna Chawarska
- Child Study Center, Yale University School of Medicine, New Haven, Connecticut 06510, USA.
Collapse
|
741
|
Kushnerenko E, Tomalski P, Ballieux H, Ribeiro H, Potton A, Axelsson EL, Murphy E, Moore DG. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour. Eur J Neurosci 2013; 38:3363-9. [PMID: 23889202 DOI: 10.1111/ejn.12317] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2013] [Revised: 05/22/2013] [Accepted: 06/19/2013] [Indexed: 11/30/2022]
Abstract
Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these maturational changes in looking behaviour are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead are associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.
Collapse
Affiliation(s)
- Elena Kushnerenko
- Institute for Research in Child Development, School of Psychology, University of East London, Water Lane, London, E15 4LZ, UK
Collapse
|
742
|
Kushnerenko E, Tomalski P, Ballieux H, Potton A, Birtles D, Frostick C, Moore DG. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life. Front Psychol 2013; 4:432. [PMID: 23882240 PMCID: PMC3712256 DOI: 10.3389/fpsyg.2013.00432] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2013] [Accepted: 06/23/2013] [Indexed: 11/17/2022] Open
Abstract
The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV speech processing in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures of language development, the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants.
Affiliation(s)
- Elena Kushnerenko
- Institute for Research in Child Development, School of Psychology, University of East London, London, UK
|
743
|
Tourassi G, Voisin S, Paquit V, Krupinski E. Investigating the link between radiologists' gaze, diagnostic decision, and image content. J Am Med Inform Assoc 2013; 20:1067-75. [PMID: 23788627 DOI: 10.1136/amiajnl-2012-001503] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
OBJECTIVE: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms.
METHODS: Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored.
RESULTS: By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Potential linking of those with diagnostic error was also supported to some extent. Merging readers' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers' diagnostic errors while confirming 97.3% of their correct diagnoses. The readers' individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others, although personalized tuning was in many cases beneficial for capturing individual behavior more accurately.
CONCLUSIONS: There is clearly an interaction between radiologists' gaze, diagnostic decisions, and image content, which can be modeled with machine learning algorithms.
Affiliation(s)
- Georgia Tourassi
- Oak Ridge National Laboratory, Biomedical Science and Engineering Center, Oak Ridge, Tennessee, USA
|
744
|
Abstract
In normal human visual behavior, our visual system is continuously exposed to abrupt changes in the local contrast and mean luminance in various parts of the visual field, as caused by actual changes in the environment, as well as by movements of our body, head, and eyes. Previous research has shown that both threshold and suprathreshold contrast percepts are attenuated by a co-occurring change in the mean luminance at the location of the target stimulus. In the current study, we tested the hypothesis that contrast targets presented with a co-occurring change in local mean luminance receive fewer fixations than targets presented in a region with a steady mean luminance. To that end we performed an eye-tracking experiment involving eight observers. On each trial, after a 4 s adaptation period, an observer's task was to make a saccade to one of two target gratings, presented simultaneously at 7° eccentricity, separated by 30° in polar angle. When both targets were presented with a steady mean luminance, saccades landed mostly in the area between the two targets, signifying the classic global effect. However, when one of the targets was presented with a change in luminance, the saccade distribution was biased towards the target with the steady luminance. The results show that the attenuation of contrast signals by co-occurring, ecologically typical changes in mean luminance affects fixation selection and is therefore likely to affect eye movements in natural visual behavior.
Affiliation(s)
- Markku Kilpeläinen
- Department of Cognitive Psychology, Vrije Universiteit Amsterdam, The Netherlands.
|
745
|
Tottenham N, Hertzig ME, Gillespie-Lynch K, Gilhooly T, Millner AJ, Casey BJ. Elevated amygdala response to faces and gaze aversion in autism spectrum disorder. Soc Cogn Affect Neurosci 2013; 9:106-17. [PMID: 23596190 DOI: 10.1093/scan/nst050] [Citation(s) in RCA: 87] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Autism spectrum disorders (ASD) are often associated with impairments in judgment of facial expressions. This impairment is often accompanied by diminished eye contact and atypical amygdala responses to face stimuli. The current study used a within-subjects design to examine the effects of natural viewing and an experimental eye-gaze manipulation on amygdala responses to faces. Individuals with ASD showed less gaze toward the eye region of faces relative to a control group. Among individuals with ASD, reduced eye gaze was associated with higher threat ratings of neutral faces. Amygdala signal was elevated in the ASD group relative to controls. This elevated response was further potentiated by experimentally manipulating gaze to the eye region. Potentiation by the gaze manipulation was largest for those individuals who exhibited the least amount of naturally occurring gaze toward the eye region and was associated with their subjective threat ratings. Effects were largest for neutral faces, highlighting the importance of examining neutral faces in the pathophysiology of autism and questioning their use as control stimuli with this population. Overall, our findings provide support for the notion that gaze direction modulates affective response to faces in ASD.
Affiliation(s)
- Nim Tottenham
- UCLA Psychology-Developmental, 1285 Franz Hall, BOX 951563, Los Angeles, CA 90095-1563, USA.
|
746
|
Abstract
The present study investigated whether infants visually scan own- and other-race faces differently as well as how these differences in face scanning develop with age. A multi-method approach was used to analyze the eye-tracking data of 6- and 9-month-old Caucasian infants scanning dynamically displayed own- and other-race faces. We found that 6-month-olds showed differential fixation, fixating significantly more on the left eye and mouth of own-race faces, but more on the nose of other-race faces. Infants at 9 months of age fixated more on the eyes of own-race faces, but more on the mouth of other-race faces. A scan path analysis revealed that infants shifted their attention between the eyes of the own-race faces significantly more frequently than for other-race faces. Overall, younger and older infants responded differentially to own- versus other-race faces not only in the absolute amount of time spent fixating specific features, but also on their fixation shifts between features.
|
747
|
Paulus M, Proust J, Sodian B. Examining implicit metacognition in 3.5-year-old children: an eye-tracking and pupillometric study. Front Psychol 2013; 4:145. [PMID: 23526709 PMCID: PMC3605506 DOI: 10.3389/fpsyg.2013.00145] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2013] [Accepted: 03/06/2013] [Indexed: 11/13/2022] Open
Abstract
The current study examined early signs of implicit metacognitive monitoring in 3.5-year-old children. During a learning phase children had to learn paired associates. In the test phase, children performed a recognition task and chose the correct associate for a given target among four possible answers. Subsequently, children's explicit confidence judgments (CJs) and their allocation of fixation time on the confidence scale were assessed. Analyses showed that explicit CJs did not differ for remembered compared to non-remembered items. In contrast, children's fixation patterns on the confidence scale were affected by the correctness of their memory, as children looked longer at high confidence ratings when they correctly remembered the associated item. Moreover, analyses of pupil size revealed pupil dilations for correctly remembered, but not incorrectly remembered, items. The results converge with recent behavioral findings that reported evidence for implicit metacognitive memory monitoring processes in 3.5-year-old children. The study suggests that implicit metacognitive abilities might precede the development of explicit metacognitive knowledge.
Affiliation(s)
- Markus Paulus
- Department of Psychology, Ludwig Maximilian University, Munich, Germany
- Joelle Proust
- Ecole Normale Supérieure, Institut Jean-Nicod, Paris, France
- Beate Sodian
- Department of Psychology, Ludwig Maximilian University, Munich, Germany
|
748
|
Abstract
Faces convey important information about the social environment, and even very young infants are preferentially attentive to face-like over non-face stimuli. Eye-tracking studies have allowed researchers to examine which features of faces infants find most salient across development, and the present study examined scanning of familiar (i.e., mother) and unfamiliar (i.e., stranger) static faces at 6-, 9-, and 12-months-of-age. Infants showed a preference for scanning their mother's face as compared to a stranger's face, and displayed increased attention to the eye region as compared to the mouth region. Infants also showed patterns of decreased attention to eyes and increased attention to mouths between 6 and 12 months. Associations between visual attention at 6, 9, and 12 months and the Communication and Symbolic Behavior Scales DP (CSBS-DP) at 18 months were also examined, and a significant positive relation between attention to eyes at 6 months and the social subscale of the CSBS-DP at 18 months was found. This effect was driven by infants' attention to their mother's eyes. No relations between face scanning in 9- and 12-month-olds and social outcome at 18 months were found. The potential for using individual differences in early infant face processing to predict later social outcome is discussed.
Affiliation(s)
- Jennifer B. Wagner
- Department of Pediatrics, Harvard Medical School; Division of Developmental Medicine, Boston Children’s Hospital
- Rhiannon J. Luyster
- Department of Pediatrics, Harvard Medical School; Division of Developmental Medicine, Boston Children’s Hospital
- Department of Communication Sciences and Disorders, Emerson College
- Charles A. Nelson
- Department of Pediatrics, Harvard Medical School; Division of Developmental Medicine, Boston Children’s Hospital
|
749
|
Abstract
The current study investigated the role of cultural norms on the development of face-scanning. British and Japanese adults' eye movements were recorded while they observed avatar faces moving their mouth, and then their eyes toward or away from the participants. British participants fixated more on the mouth, which contrasts with Japanese participants fixating mainly on the eyes. Moreover, eye fixations of British participants were less affected by the gaze shift of the avatar than Japanese participants, who shifted their fixation to the corresponding direction of the avatar's gaze. Results are consistent with the Western cultural norms that value the maintenance of eye contact, and the Eastern cultural norms that require flexible use of eye contact and gaze aversion.
|
750
|
Abstract
Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants' ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants' expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants' anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time.
Affiliation(s)
- Alexa R. Romberg
- Department of Psychology and Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Jenny R. Saffran
- Department of Psychology and Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
|