1
Burrows AM, Smith LW, Downing SE, Omstead KM, Smith TD. Evolutionary divergence of facial muscle physiology between domestic dogs and wolves. Anat Rec (Hoboken) 2024. [PMID: 39360643] [DOI: 10.1002/ar.25580]
Abstract
Domestic dogs (Canis familiaris) are descended from gray wolf (Canis lupus) populations that inhabited Western Europe and Siberia. The specific timing of dog domestication is debated, but archeological and genetic evidence suggests that it was a multi-phase process that began at least 15,000 years ago. There are many morphological differences between dogs and wolves, including marked divergence in facial muscle morphology, but we know little about the comparative physiology of these muscles. A better understanding of comparative facial muscle physiology between domestic dogs and gray wolves would improve our conceptual framework for the processual mechanisms in dog domestication. To address these issues, we assessed the myosin profiles (type I and type II) of the zygomaticus and orbicularis oris muscles from 6 domestic dogs and 4 gray wolves. Because of the small sample sizes, statistical analyses were not performed. Results reveal that the sampled domestic dogs have almost 100% fast-twitch (type II) muscle fibers while gray wolves have less than 50%, meaning that dog faces can contract quickly while wolf faces are able to sustain facial muscle contraction. Although sample sizes are limited, the present study indicates that dog domestication is associated with not only a change in facial muscle morphology but also a concomitant change in how these muscles function physiologically. Selective pressures in the development of communication between dogs and humans using facial expression may have influenced this evolutionary divergence, but the paedomorphic retention of barking in adult dogs may also have played a role.
Affiliation(s)
- Anne M Burrows
- Department of Physical Therapy, Duquesne University, Pittsburgh, Pennsylvania, USA
- Leo W Smith
- Department of Chemistry & Biochemistry, Duquesne University, Pittsburgh, Pennsylvania, USA
- Sarah E Downing
- Department of Physical Therapy, Duquesne University, Pittsburgh, Pennsylvania, USA
- K Madisen Omstead
- Department of Physical Therapy, Duquesne University, Pittsburgh, Pennsylvania, USA
- Timothy D Smith
- Department of Health and Rehabilitation Sciences, Slippery Rock University, Slippery Rock, Pennsylvania, USA
2
Hosseini K, Pettit JW, Soto FA, Mattfeld AT, Buzzell GA. Toward a mechanistic understanding of the role of error monitoring and memory in social anxiety. Cogn Affect Behav Neurosci 2024; 24:948-963. [PMID: 38839717] [DOI: 10.3758/s13415-024-01198-5]
Abstract
Cognitive models state that social anxiety (SA) involves biased cognitive processing that impacts what is learned and remembered within social situations, leading to the maintenance of SA. Neuroscience work links SA to enhanced error monitoring, reflected in error-related neural responses arising from mediofrontal cortex (MFC). Yet, the role of error monitoring in SA remains unclear, as it is unknown whether error monitoring can drive changes in memory, biasing what is learned or remembered about social situations. Motivated by the longer-term goal of identifying mechanisms implicated in SA, in the current study we developed and validated a novel paradigm for probing the role of error-related MFC theta oscillations (associated with error monitoring) and incidental memory biases in SA. Electroencephalography (EEG) data were collected while participants completed a novel Face-Flanker task, involving presentation of task-unrelated, trial-unique faces behind target/flanker arrows on each trial. A subsequent incidental memory assessment evaluated memory biases for error events. Severity of SA symptoms was associated with greater error-related theta synchrony over MFC, as well as between MFC and sensory cortex. Social anxiety was also positively associated with incidental memory biases for error events. Moreover, greater error-related MFC-sensory theta synchrony during the Face-Flanker predicted subsequent incidental memory biases for error events. Collectively, the results demonstrate the potential of a novel paradigm to elucidate mechanisms underlying relations between error monitoring and SA.
Affiliation(s)
- Kianoosh Hosseini
- Department of Psychology, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Center for Children and Families, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Jeremy W Pettit
- Department of Psychology, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Center for Children and Families, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Fabian A Soto
- Department of Psychology, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Center for Children and Families, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Aaron T Mattfeld
- Department of Psychology, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Center for Children and Families, Florida International University, 11200 SW 8th St, Miami, FL, USA
- George A Buzzell
- Department of Psychology, Florida International University, 11200 SW 8th St, Miami, FL, USA
- Center for Children and Families, Florida International University, 11200 SW 8th St, Miami, FL, USA
3
Abdul Razzak R, Bagust J. Perceptual lateralization on the Rod-and-Frame Test in young and older adults. Appl Neuropsychol Adult 2024; 31:405-411. [PMID: 35138959] [DOI: 10.1080/23279095.2022.2030741]
Abstract
INTRODUCTION There is an overall left visual field/right hemisphere advantage in young adults for masked, tachistoscopically presented images on the Rod-and-Frame Test (RFT). This study explored potential age-related lateralization differences in processing of visual context on the RFT. METHODS Thirty-five young and 33 older adults aligned a rod surrounded either by no frame, a vertical frame, or a leftward/rightward tilted frame to their perceived vertical. Algebraic errors of rod alignment were used to derive the rod-and-frame effect (RFE) and asymmetry index. RESULTS Young adults had frequent indirect effects, mostly with the right-tilted frame, while older adults hardly produced any. Compared with nontilted frames, young adults displayed larger alignment errors with left-tilted frames; however, older adults exhibited this same effect for both frame tilt conditions. Young adults had smaller RFE values than older adults for the right-tilted frame, with no age-related difference in RFE for the left-tilted frame or asymmetry index. The negative asymmetry index was statistically different from the true vertical only in young adults. CONCLUSION There is an age-related reduction in the right hemisphere processing of left-sided visual contexts on the RFT. Such findings can help clinicians improve the interpretation of RFT results in clinical patients.
Affiliation(s)
- Rima Abdul Razzak
- Department of Physiology, College of Medicine & Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Jeff Bagust
- Faculty of Health and Social Sciences, Bournemouth University, Poole, UK
4
Toutain M, Dollion N, Henry L, Grandgeorge M. How Do Children and Adolescents with ASD Look at Animals? A Scoping Review. Children (Basel) 2024; 11:211. [PMID: 38397322] [PMCID: PMC10887101] [DOI: 10.3390/children11020211]
Abstract
Autism spectrum disorder (ASD) is characterized by interaction and communication differences, entailing visual attention skill specificities. Interactions with animals, such as in animal-assisted interventions or with service dogs, have been shown to be beneficial for individuals with ASD. While interacting with humans poses challenges for them, engaging with animals appears to be different. One hypothesis suggests that differences in how individuals with ASD visually attend to humans versus animals may contribute to these interaction differences. We present a scoping review of the research on the visual attention of youths with ASD to animals. The objective is to review the methodologies and tools used to explore such questions, to summarize the main results, to explore which factors may contribute to the differences reported in the studies, and to deduce how youths with ASD observe animals. Utilizing strict inclusion criteria, we examined databases between 1942 and 2023, identifying 21 studies in international peer-reviewed journals. Three main themes were identified: attentional engagement and detection, visual exploration, and behavior. Collectively, our findings suggest that the visual attention of youths with ASD towards animals appears comparable to that of neurotypical peers, at least in 2D pictures (i.e., eye gaze patterns). Future studies should explore whether these results extend to real-life interactions.
Affiliation(s)
- Manon Toutain
- CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, University Rennes, Normandie University, F-35000 Rennes, France
- Nicolas Dollion
- Laboratoire C2S (Cognition Santé Société)—EA6291, Université Reims Champagne-Ardenne, F-51100 Reims, France
- Laurence Henry
- CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, University Rennes, Normandie University, F-35000 Rennes, France
- Marine Grandgeorge
- CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, University Rennes, Normandie University, F-35000 Rennes, France
5
Fujihara Y, Guo K, Liu CH. Relationship between types of anxiety and the ability to recognize facial expressions. Acta Psychol (Amst) 2023; 241:104100. [PMID: 38041913] [DOI: 10.1016/j.actpsy.2023.104100]
Abstract
This study examined whether three subtypes of anxiety (trait anxiety, state anxiety, and social anxiety) have different effects on recognition of facial expressions. One hundred and thirty-eight participants matched facial expressions of three intensity levels (20%, 40%, 100%) with one of six emotion labels ("happy", "sad", "fear", "angry", "disgust", and "surprise"). Using a conventional method of analysis, we replicated some significant correlations between each anxiety type and recognition performance found in the literature. However, when we used partial correlation to isolate the effect of each anxiety type, most of these correlations were no longer significant, apart from the negative correlation between the Beck Anxiety Inventory and reaction time to fearful faces displayed at the 40% intensity level, and the correlations between anxiety and categorisation errors. Specifically, social anxiety was positively correlated with misidentifying a happy face as a disgust face at the 40% intensity level, and state anxiety was negatively correlated with misidentifying a happy face as a sad face at the 20% intensity level. However, these partial correlations became non-significant after p-value adjustment for multiple comparisons. Our eye-tracking data also showed that state anxiety may be associated with reduced fixations on the eye regions of low-intensity sad or fearful faces. These analyses cast doubt on some effects reported in previous studies because they are likely to reflect a mixture of influences from highly correlated anxiety subtypes.
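The analysis strategy described in this abstract (isolating one anxiety subtype by regressing out the others before correlating it with performance, then adjusting p-values for multiple comparisons) can be sketched in code. This is an illustrative sketch only: the variable names and the simulated data below are hypothetical, not the study's.

```python
# Sketch of partial correlation with Holm p-value adjustment.
# All data below are simulated; nothing here is from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 138  # sample size reported in the abstract

# Three correlated anxiety subtype scores and one performance measure.
trait = rng.normal(size=n)
state = 0.7 * trait + 0.3 * rng.normal(size=n)
social = 0.6 * trait + 0.4 * rng.normal(size=n)
rt_fear = 0.5 * trait + rng.normal(size=n)  # hypothetical reaction time

def residuals(y, covariates):
    """Residualize y on the covariates (with an intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_r(x, y, covariates):
    """Correlation of x and y after regressing out the covariates."""
    return np.corrcoef(residuals(x, covariates),
                       residuals(y, covariates))[0, 1]

r_simple = np.corrcoef(state, rt_fear)[0, 1]            # inflated by trait
r_partial = partial_r(state, rt_fear, [trait, social])  # trait removed

def holm(pvals):
    """Holm step-down adjustment of a collection of p-values."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    m = len(pvals)
    adjusted = np.empty(m)
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(running, 1.0)
    return adjusted
```

Because `state` here tracks `rt_fear` only through its overlap with `trait`, the partial correlation shrinks once `trait` is controlled, mirroring the pattern of vanishing effects the abstract reports.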
Affiliation(s)
- Yuya Fujihara
- Department of Psychology, Yasuda Women's University, Japan
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, Lincolnshire LN6 7TS, United Kingdom
- Chang Hong Liu
- Department of Psychology, Bournemouth University, United Kingdom
6
Todd E, Subendran S, Wright G, Guo K. Emotion category-modulated interpretation bias in perceiving ambiguous facial expressions. Perception 2023; 52:695-711. [PMID: 37427421] [PMCID: PMC10510303] [DOI: 10.1177/03010066231186936]
Abstract
In contrast to prototypical facial expressions, we show less perceptual tolerance in perceiving vague expressions by demonstrating an interpretation bias, such as more frequent perception of anger or happiness when categorizing ambiguous expressions of angry and happy faces that are morphed in different proportions and displayed under high- or low-quality conditions. However, it remains unclear whether this interpretation bias is specific to emotion categories or reflects a general negativity versus positivity bias, and whether the degree of this bias is affected by the valence or category of the two morphed expressions. These questions were examined in two eye-tracking experiments by systematically manipulating expression ambiguity and image quality in fear- and sad-happiness faces (Experiment 1) and by directly comparing anger-, fear-, sadness-, and disgust-happiness expressions (Experiment 2). We found that increasing expression ambiguity and degrading image quality induced a general negativity versus positivity bias in expression categorization. The degree of negativity bias, the associated reaction time, and face-viewing gaze allocation were further modulated by the different expression combinations. Thus, although we show a viewing condition-dependent bias in interpreting vague facial expressions that display valence-contradicting expressive cues, the perception of these ambiguous expressions appears to be guided by a categorical process similar to that involved in perceiving prototypical expressions.
7
Owners' Beliefs regarding the Emotional Capabilities of Their Dogs and Cats. Animals (Basel) 2023; 13:820. [PMID: 36899676] [PMCID: PMC10000035] [DOI: 10.3390/ani13050820]
Abstract
The correct interpretation of an animal's emotional state is crucial for successful human-animal interaction. When studying dog and cat emotional expressions, a key source of information is the pet owner, given the extensive interactions they have had with their pets. In this online survey we asked 438 owners whether their dogs and/or cats could express 22 different primary and secondary emotions, and to indicate the behavioral cues they relied upon to identify those expressed emotions. Overall, more emotions were reported in dogs compared to cats, both from owners that owned just one species and those that owned both. Although owners reported a comparable set of sources of behavioral cues (e.g., body posture, facial expression, and head posture) for dogs and cats in expressing the same emotion, distinct combinations tended to be associated with specific emotions in both cats and dogs. Furthermore, the number of emotions reported by dog owners was positively correlated with their personal experience with dogs but negatively correlated with their professional experience. The number of emotions reported in cats was higher in cat-only households compared to those that also owned dogs. These results provide a fertile ground for further empirical investigation of the emotional expressions of dogs and cats, aimed at validating specific emotions in these species.
8
Comparing emotion inferences from dogs (Canis familiaris), panins (Pan troglodytes/Pan paniscus), and humans (Homo sapiens) facial displays. Sci Rep 2022; 12:13171. [PMID: 35915205] [PMCID: PMC9343398] [DOI: 10.1038/s41598-022-16098-2]
Abstract
Human beings are highly familiar over-learnt social targets, with similar physical facial morphology between perceiver and target. But does experience with or similarity to a social target determine whether we can accurately infer emotions from their facial displays? Here, we test this question across two studies by having human participants infer emotions from facial displays of: dogs, a highly experienced social target but with relatively dissimilar facial morphology; panins (chimpanzees/bonobos), inexperienced social targets, but close genetic relatives with a more similar facial morphology; and humans. We find that people are more accurate inferring emotions from facial displays of dogs compared to panins, though they are most accurate for human faces. However, we also find an effect of emotion, such that people vary in their ability to infer different emotional states from different species’ facial displays, with anger more accurately inferred than happiness across species, perhaps hinting at an evolutionary bias towards detecting threat. These results not only compare emotion inferences from human and animal faces but provide initial evidence that experience with a non-human animal affects inferring emotion from facial displays.
9
Castelain T, Van der Henst JB. The Influence of Language on Spatial Reasoning: Reading Habits Modulate the Formulation of Conclusions and the Integration of Premises. Front Psychol 2021; 12:654266. [PMID: 34079496] [PMCID: PMC8165199] [DOI: 10.3389/fpsyg.2021.654266]
Abstract
In the present study, we explore how reading habits (e.g., reading from left to right in French or from right to left in Arabic) influence the scanning and the construction of mental models in spatial reasoning. For instance, when participants are given a problem like "A is to the left of B; B is to the left of C; what is the relation between A and C?", they are assumed to construct the model: A B C. If reading habits influence the scanning process, then readers of French should inspect models from left to right, whereas readers of Arabic should inspect them from right to left. The prediction following this analysis is that readers of French should be more inclined to produce "left" conclusions (i.e., A is to the left of C), whereas readers of Arabic should be more inclined to produce "right" conclusions (i.e., C is to the right of A). Furthermore, one may expect that readers of French show a greater ease in constructing models following a left-to-right direction than models following a right-to-left direction, whereas the opposite pattern might be expected for readers of Arabic. We tested these predictions in two experiments involving French and Yemeni participants. Experiment 1 investigated the formulation of conclusions from spatial premises, and Experiment 2, which was based on non-linguistic stimuli, examined the time required to construct mental models from left to right and from right to left. Our results show clear differences between the two groups. As expected, the French sample showed a strong left-to-right bias, but the Yemeni sample did not show the reverse bias. Results are discussed in terms of cultural influences and universal mechanisms.
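The mental-model account sketched in this abstract (insert tokens into a linear array, then scan the array in a habitual reading direction to formulate a conclusion) can be made concrete with a toy implementation. This is purely illustrative code for the theory's mechanics, with hypothetical function names; it is not code from the study.

```python
# Toy mental-model builder for "left-of" premises, plus a conclusion
# formulator whose scan direction mimics reading habit. Illustrative only.
def build_model(premises):
    """Each premise is (a, b), read as 'a is to the left of b'."""
    model = []
    for a, b in premises:
        if not model:
            model = [a, b]
        elif a in model and b not in model:
            model.insert(model.index(a) + 1, b)  # place b just right of a
        elif b in model and a not in model:
            model.insert(model.index(b), a)      # place a just left of b
    return model

def conclude(model, x, y, scan="left-to-right"):
    """Read the x/y relation off the model in the given scan direction."""
    x_first = model.index(x) < model.index(y)
    if scan == "left-to-right":
        lhs, rhs = (x, y) if x_first else (y, x)
        return f"{lhs} is to the left of {rhs}"
    lhs, rhs = (y, x) if x_first else (x, y)
    return f"{lhs} is to the right of {rhs}"

model = build_model([("A", "B"), ("B", "C")])  # -> ["A", "B", "C"]
```

Scanning the same model `["A", "B", "C"]` left-to-right yields "A is to the left of C", while scanning right-to-left yields "C is to the right of A": the two conclusion types the abstract contrasts between French and Arabic readers.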
Affiliation(s)
- Thomas Castelain
- Center for Cognitive Sciences, University of Neuchâtel, Neuchâtel, Switzerland
- Jean-Baptiste Van der Henst
- Trajectoires Team, Centre de Recherche en Neurosciences de Lyon, CNRS UMR 5292, Inserm UMR-S 1028, Université Lyon 1, Lyon, France
10
Smith K, Kempe V, Wood L. Eye Placement Bias Is Remarkably Robust. Iperception 2021; 12:20416695211017564. [PMID: 34104381] [PMCID: PMC8161889] [DOI: 10.1177/20416695211017564]
Abstract
When drawing faces, people show a systematic bias of placing the eyes higher up the head than they are placed in reality. This study investigated the development of this phenomenon while removing the potential confound of drawing ability. Participants (N = 124) in three age groups (3-5 years, 10-11 years, and adults) reconstructed two foam faces: one from observation and one from memory. The high eye placement bias was remarkably robust, with mean eye placement in every condition significantly higher than in the original faces. The same bias was not shown for mouth placement. Eye placement was highest for the youngest participants and for the memory conditions. The results suggest that the eye placement bias is not caused by the motor skill demands required for drawing and lend evidence to the suggestion that it is caused by perceptual and decision-making processes.
Affiliation(s)
- Kirsten Smith
- Division of Psychology, University of Abertay, Dundee, Scotland
- Vera Kempe
- Division of Psychology, University of Abertay, Dundee, Scotland
- Lara Wood
- Division of Psychology, University of Abertay, Dundee, Scotland
11
Kinchella J, Guo K. Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias. Perception 2021; 50:328-342. [PMID: 33709837] [DOI: 10.1177/03010066211000270]
Abstract
We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants' expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.
12
Törnqvist H, Somppi S, Kujala MV, Vainio O. Observing animals and humans: dogs target their gaze to the biological information in natural scenes. PeerJ 2020; 8:e10341. [PMID: 33362955] [PMCID: PMC7749655] [DOI: 10.7717/peerj.10341]
Abstract
BACKGROUND This study examines, using eye gaze tracking, how dogs observe images of natural scenes containing living creatures (wild animals, dogs, and humans). Because dogs have had limited exposure to wild animals in their lives, we also consider the natural novelty of the wild animal images for the dogs. METHODS The eye gaze of dogs was recorded while they viewed natural images containing dogs, humans, and wild animals. Three categories of images were used: naturalistic landscape images containing single humans or animals, full body images containing a single human or an animal, and full body images containing a pair of humans or animals. The gazing behavior of two dog populations, family and kennel dogs, was compared. RESULTS As a main effect, dogs gazed at living creatures (object areas) longer than the background areas of the images; heads longer than bodies; heads longer than background areas; and bodies longer than background areas. Dogs gazed less at the object areas vs. the background in landscape images than in the other image categories. Both dog groups also gazed at wild animal heads longer than at human or dog heads in the images. When viewing single animal and human images, family dogs focused their gaze very prominently on the head areas, but in images containing a pair of animals or humans, they gazed more at the body than the head areas. In kennel dogs, the difference in gazing times of the head and body areas within single or paired images failed to reach significance. DISCUSSION Dogs focused their gaze on living creatures in all image categories, also detecting them in the natural landscape images. Generally, they also gazed at the biologically informative areas of the images, such as the head, which supports the importance of the head/face area for dogs in obtaining social information. The natural novelty of the species represented in the images as well as the image category affected the gazing behavior of dogs. Furthermore, differences in the gazing strategy between family and kennel dogs were observed, suggesting an influence of different social living environments and life experiences.
Affiliation(s)
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Miiamaaria V Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Psychology, Faculty of Education and Psychology, University of Jyväskylä, Jyväskylä, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
14
Ding X, Raziei Z, Larson EC, Olinick EV, Krueger P, Hahsler M. Swapped face detection using deep learning and subjective assessment. EURASIP J Inf Secur 2020. [DOI: 10.1186/s13635-020-00109-8]
Abstract
The tremendous success of deep learning for imaging applications has resulted in numerous beneficial advances. Unfortunately, this success has also been a catalyst for malicious uses such as photo-realistic face swapping of parties without consent. In this study, we use deep transfer learning for face swapping detection, showing true positive rates greater than 96% with very few false alarms. Distinguished from existing methods that only provide detection accuracy, we also provide uncertainty for each prediction, which is critical for trust in the deployment of such detection systems. Moreover, we provide a comparison to human subjects. To capture human recognition performance, we build a website to collect pairwise comparisons of images from human subjects. Based on these comparisons, we infer a consensus ranking from the image perceived as most real to the image perceived as most fake. Overall, the results show the effectiveness of our method. As part of this study, we create a novel dataset that is, to the best of our knowledge, the largest swapped face dataset created using still images. This dataset will be available for academic research use upon request. Our goal in this study is to inspire more research in the field of image forensics through the creation of a dataset and initial analysis.
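One standard way to turn pairwise "which image looks more real?" judgments into the consensus ranking this abstract describes is a Bradley-Terry strength model fitted with the classic minorization-maximization update. The sketch below is illustrative only: the tallies are invented, and the paper's actual ranking procedure may differ.

```python
# Bradley-Terry consensus ranking from pairwise comparisons (illustrative).
def bradley_terry(wins, items, iters=200):
    """wins[(i, j)]: times image i was judged more real than image j."""
    strength = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)
            denom = 0.0
            for j in items:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            new[i] = w_i / denom if denom else strength[i]
        total = sum(new.values())  # rescale to keep strengths bounded
        strength = {i: s * len(items) / total for i, s in new.items()}
    # Most "real-looking" first.
    return sorted(items, key=lambda i: strength[i], reverse=True)

# Invented tallies over one real photo and two swapped ones.
wins = {("real", "swap_a"): 9, ("swap_a", "real"): 1,
        ("swap_a", "swap_b"): 8, ("swap_b", "swap_a"): 2,
        ("real", "swap_b"): 10}
ranking = bradley_terry(wins, ["real", "swap_a", "swap_b"])
```

The fitted strengths order the images from most to least real-looking even when not every pair was compared equally often, which is the property that makes this family of models a natural fit for crowd-sourced pairwise judgments.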
15
|
Correia-Caeiro C, Guo K, Mills DS. Perception of dynamic facial expressions of emotion between dogs and humans. Anim Cogn 2020; 23:465-476. [PMID: 32052285 PMCID: PMC7181561 DOI: 10.1007/s10071-020-01348-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 01/07/2020] [Accepted: 01/14/2020] [Indexed: 11/29/2022]
Abstract
Facial expressions are a core component of the emotional response of social mammals. In contrast to Darwin's original proposition, expressive facial cues of emotion appear to have evolved to be species-specific. Faces trigger an automatic perceptual process, and so inter-specific emotion perception is potentially a challenge, since observers should not try to “read” heterospecific facial expressions in the same way that they do conspecific ones. Using dynamic spontaneous facial expression stimuli, we report the first inter-species eye-tracking study on fully unrestrained participants, without pre-experiment training to maintain attention to stimuli, to compare how two different species living in the same ecological niche, humans and dogs, perceive each other’s facial expressions of emotion. Humans and dogs showed different gaze distributions when viewing the same facial expressions of either humans or dogs. Humans modulated their gaze depending on the area of interest (AOI) being examined, emotion, and species observed, but dogs modulated their gaze depending on AOI only. We also analysed whether the gaze distribution was random across AOIs in both species: in humans, eye movements were not correlated with the diagnostic facial movements occurring in the emotional expression, and in dogs, there was only a partial relationship. This suggests that the scanning of facial expressions is a relatively automatic process. Thus, to read other species' facial emotions successfully, individuals must overcome these automatic perceptual processes and employ learning strategies to appreciate the inter-species emotional repertoire.
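The AOI-based gaze measures used in this and several of the following eye-tracking studies boil down to a simple computation: assign each fixation to a facial region and compare the resulting duration proportions. As a hypothetical sketch (the rectangular AOIs, fixation tuple format, and function name are illustrative assumptions, not taken from any of these papers):

```python
def aoi_proportions(fixations, aois):
    """Proportion of total fixation duration falling in each area of interest.

    fixations: list of (x, y, duration_ms) tuples from an eye tracker.
    aois: dict mapping AOI name -> (x0, y0, x1, y1) bounding box.
    Fixations outside every AOI are counted under 'other'; if AOIs
    overlap, the first matching one (in dict order) wins.
    """
    totals = {name: 0.0 for name in aois}
    totals["other"] = 0.0
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
        else:
            totals["other"] += dur
    grand = sum(totals.values())
    # return proportions; fall back to raw zeros if there were no fixations
    return {name: t / grand for name, t in totals.items()} if grand else totals
```

Real studies add details this sketch omits (non-rectangular AOIs, per-trial normalisation, first-fixation latency), but the proportion-of-viewing-time measure reported in these abstracts is computed essentially this way.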
Affiliation(s)
- Catia Correia-Caeiro
- School of Psychology, University of Lincoln, Lincoln, UK
- School of Life Sciences, University of Lincoln, Lincoln, UK
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, UK
- Daniel S Mills
- School of Life Sciences, University of Lincoln, Lincoln, UK
16
|
Abstract
Dogs were shaped during the course of domestication both in their behavior and in their anatomical features. Here we show that domestication transformed the facial muscle anatomy of dogs specifically for facial communication with humans. A muscle responsible for raising the inner eyebrow intensely is uniformly present in dogs but not in wolves. Behavioral data show that dogs also produce the eyebrow movement significantly more often and with higher intensity than wolves do, with highest-intensity movements produced exclusively by dogs. Interestingly, this movement increases paedomorphism and resembles an expression humans produce when sad, so its production in dogs may trigger a nurturing response. We hypothesize that dogs’ expressive eyebrows are the result of selection based on humans’ preferences. Domestication shaped wolves into dogs and transformed both their behavior and their anatomy. Here we show that, in only 33,000 y, domestication transformed the facial muscle anatomy of dogs specifically for facial communication with humans. Based on dissections of dog and wolf heads, we show that the levator anguli oculi medialis, a muscle responsible for raising the inner eyebrow intensely, is uniformly present in dogs but not in wolves. Behavioral data, collected from dogs and wolves, show that dogs produce the eyebrow movement significantly more often and with higher intensity than wolves do, with highest-intensity movements produced exclusively by dogs. Interestingly, this movement increases paedomorphism and resembles an expression that humans produce when sad, so its production in dogs may trigger a nurturing response in humans. We hypothesize that dogs with expressive eyebrows had a selection advantage and that “puppy dog eyes” are the result of selection based on humans’ preferences.
17
|
Guo K, Li Z, Yan Y, Li W. Viewing heterospecific facial expressions: an eye-tracking study of human and monkey viewers. Exp Brain Res 2019; 237:2045-2059. [PMID: 31165915 PMCID: PMC6647127 DOI: 10.1007/s00221-019-05574-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Accepted: 05/31/2019] [Indexed: 11/03/2022]
Abstract
Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic for each expression, such as the eyes in angry faces. It is, however, unclear to what extent this 'universality' view can be extended to the processing of heterospecific facial expressions, and how the 'social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined the face-viewing gaze allocation of human (including dog owners and non-dog owners) and monkey observers while exploring expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions in human and dog faces; neutral and negative expressions in chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions which were also species-dependent. Specifically, humans attended predominantly to the eyes of human faces but to the mouth of animal faces when judging facial expressions. Monkeys' gaze distributions when exploring human and monkey faces were qualitatively different from those when exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species-dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK
- Zhihan Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Yin Yan
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Wu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
18
|
Alper S, Us EO, Tasman DR. The evil eye effect: vertical pupils are perceived as more threatening. Cogn Emot 2018; 33:1249-1260. [PMID: 30486750 DOI: 10.1080/02699931.2018.1550741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Popular culture has many examples of evil characters having vertically pupilled eyes. Humans have a long evolutionary history of rivalry with snakes, and their visual systems evolved to rapidly detect snakes and snake-related cues. Given this evolutionary background, we hypothesised that humans would perceive vertical pupils, which are characteristic of ambush predators including some snakes, as threatening. In seven studies (aggregate N = 1458) conducted on American and Turkish samples, we found that vertical pupils are perceived as more threatening at both the explicit (Study 1) and implicit (Studies 2-7) levels, and that they are associated with physical, rather than social, threat (Study 4). Findings provided partial support for our hypothesis about the relevance of snake detection processes: snake phobia, and not spider phobia, was found to be related to perceiving vertical pupils as threatening (Study 5); however, an experimental manipulation of the salience of snakes yielded no significant effect (Study 6), and a comparison of fears of snakes, alligators, and cats did not support our prediction (Study 7). We discuss the potential implications and limitations of these novel findings.
Affiliation(s)
- Sinan Alper
- Department of Psychology, Yasar University, Izmir, Turkey
- Elif Oyku Us
- Department of Psychology, Baskent University, Ankara, Turkey
19
|
Guo K, Soornack Y, Settle R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vision Res 2018; 157:112-122. [PMID: 29496513 DOI: 10.1016/j.visres.2018.02.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Revised: 02/02/2018] [Accepted: 02/04/2018] [Indexed: 11/29/2022]
Abstract
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, UK.
20
|
Abstract
Humans readily attribute personality and behavioral traits to dogs, and these attributions influence decisions about adoption. This study focused on how these attributions could be influenced by breed and pose by using pictures of four breeds (Doberman Pinscher, Golden Retriever, pit bull, and Rottweiler) in 4 poses (dog sitting alone, sitting with a human, standing alone, and walking on a leash with a human). Participants rated each picture on friendliness, aggressiveness, and adoptability. Eye-tracking technology identified which specific features were represented in each picture to determine whether they had any effect on the judgments. Although the Golden Retriever was seen as most adoptable, pose differences had many significant effects that could be useful for increasing the adoptability of all breeds. Data also revealed facial areas that attracted more attention (e.g., faster time to first fixation and longer fixation duration), particularly when the dog was alone. Focus on these areas could help to optimize photographs to present dogs in the friendliest, least aggressive, and most adoptable way.
21
|
Harrison NR, Jones J, Davies SJ. Systematic Distortions in Vertical Placement of Features in Drawings of Faces and Houses. Iperception 2017; 8:2041669517691055. [PMID: 28210488 PMCID: PMC5298532 DOI: 10.1177/2041669517691055] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
A crucial part of accurately drawing portraits is the correct vertical positioning of the eyes. Non-experts typically place the eyes higher on the head than they are actually located; however, the explanation for this remains unclear. In Experiment 1, participants drew faces from memory and directly copied from a photograph, to confirm whether biases in observational drawings were related to biases in memory-based drawings. In Experiment 2, participants drew a cat's face, to test the explanations proposed by Carbon and Wirth for the positional bias: the 'view-from-below', the 'head-as-box', and the 'hair-as-hat' explanations. Results indicated that none of these three explanations could fully account for the vertical positioning biases observed in drawings of the cat's face. The findings are discussed in relation to the idea that distortions of vertical alignment in drawings may be related to the position of the most salient features within a face or object.
22
|
Abstract
Individuals vary in perceptual accuracy when categorising facial expressions, yet it is unclear how these individual differences in a non-clinical population are related to cognitive processing stages at facial information acquisition and interpretation. We tested 104 healthy adults in a facial expression categorisation task, and correlated their categorisation accuracy with face-viewing gaze allocation and personal traits assessed with the Autism Quotient, anxiety inventory and Self-Monitoring Scale. Gaze allocation had a limited but emotion-specific impact on categorising expressions. Specifically, longer gaze at the eyes and nose regions was coupled with more accurate categorisation of disgust and sad expressions, respectively. Regarding trait measurements, a higher autistic score was coupled with better recognition of sad but worse recognition of anger expressions, and contributed to a categorisation bias towards sad expressions; whereas a higher anxiety level was associated with greater categorisation accuracy across all expressions and with an increased tendency of gazing at the nose region. It seems that both anxiety and autistic-like traits were associated with individual variation in expression categorisation, but this association is not necessarily mediated by variation in gaze allocation at expression-specific local facial regions. The results suggest that both facial information acquisition and interpretation capabilities contribute to individual differences in expression categorisation within non-clinical populations.
Affiliation(s)
- Corinne Green
- School of Psychology, University of Lincoln, Lincoln, UK
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, UK
23
|
Somppi S, Törnqvist H, Kujala MV, Hänninen L, Krause CM, Vainio O. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns. PLoS One 2016; 11:e0143047. [PMID: 26761433 PMCID: PMC4711950 DOI: 10.1371/journal.pone.0143047] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Accepted: 10/30/2015] [Indexed: 01/21/2023] Open
Abstract
Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates.
Affiliation(s)
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Miiamaaria V. Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Laura Hänninen
- Department of Production Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Christina M. Krause
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
24
|
Gavin CJ, Houghton S, Guo K. Dog owners show experience-based viewing behaviour in judging dog face approachability. PSYCHOLOGICAL RESEARCH 2015; 81:75-82. [PMID: 26486649 DOI: 10.1007/s00426-015-0718-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Accepted: 10/09/2015] [Indexed: 11/29/2022]
Abstract
Our prior visual experience plays a critical role in face perception. We show superior perceptual performance for differentiating conspecific (vs non-conspecific), own-race (vs other-race) and familiar (vs unfamiliar) faces. However, it remains unclear whether our experience with faces of other species would influence our gaze allocation for extracting salient facial information. In this eye-tracking study, we asked both dog owners and non-owners to judge the approachability of human, monkey and dog faces, and systematically compared their behavioural performance and the gaze pattern associated with the task. Compared to non-owners, dog owners assessed dog faces in less time and with fewer fixations, but gave higher approachability ratings. The gaze allocation within local facial features was also modulated by ownership. The averaged proportion of fixations and viewing time directed at the dog mouth region was significantly less for the dog owners, and more experienced dog owners tended to look more at the dog eyes, suggesting the adoption of a prior-experience-based viewing behaviour for assessing dog approachability. No differences in behavioural performance and gaze pattern were observed between dog owners and non-owners when judging human and monkey faces, implying that the dog owners' experience-based gaze strategy for viewing dog faces was not transferable across faces of other species.
Affiliation(s)
- Carla Jade Gavin
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Sarah Houghton
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
25
|
Marsh AA. Understanding amygdala responsiveness to fearful expressions through the lens of psychopathy and altruism. J Neurosci Res 2015; 94:513-25. [PMID: 26366635 DOI: 10.1002/jnr.23668] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2015] [Revised: 08/24/2015] [Accepted: 08/27/2015] [Indexed: 01/12/2023]
Abstract
Because the face is the central focus of human social interactions, emotional facial expressions provide a unique window into the emotional lives of others. They play a particularly important role in fostering empathy, which entails understanding and responding to others' emotions, especially distress-related emotions such as fear. This Review considers how fearful facial as well as vocal and postural expressions are interpreted, with an emphasis on the role of the amygdala. The amygdala may be best known for its role in the acquisition and expression of conditioned fear, but it also supports the perception and recognition of others' fear. Various explanations have been supplied for the amygdala's role in interpreting and responding to fearful expressions. They include theories that amygdala responses to fearful expressions 1) reflect heightened vigilance in response to uncertain danger, 2) promote heightened attention to the eye region of faces, 3) represent a response to an unconditioned aversive stimulus, or 4) reflect the generation of an empathic fear response. Among these, only empathic fear explains why amygdala lesions would impair fear recognition across modalities. Supporting the possibility of a link between fundamental empathic processes and amygdala responses to fear is evidence that impaired fear recognition in psychopathic individuals results from amygdala dysfunction, whereas enhanced fear recognition in altruistic individuals results from enhanced amygdala function. Empathic concern and caring behaviors may be fostered by sensitivity to signs of acute distress in others, which relies on intact functioning of the amygdala.
Affiliation(s)
- Abigail A Marsh
- Department of Psychology, Georgetown University, Washington, DC
26
|
Törnqvist H, Somppi S, Koskela A, Krause CM, Vainio O, Kujala MV. Comparison of dogs and humans in visual scanning of social interaction. ROYAL SOCIETY OPEN SCIENCE 2015; 2:150341. [PMID: 26473057 PMCID: PMC4593691 DOI: 10.1098/rsos.150341] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Accepted: 09/02/2015] [Indexed: 05/31/2023]
Abstract
Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experiences.
Affiliation(s)
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Aija Koskela
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Christina M. Krause
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Miiamaaria V. Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
27
|
Arizpe JM, Walsh V, Baker CI. Characteristic visuomotor influences on eye-movement patterns to faces and other high level stimuli. Front Psychol 2015; 6:1027. [PMID: 26283982 PMCID: PMC4518262 DOI: 10.3389/fpsyg.2015.01027] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2015] [Accepted: 07/06/2015] [Indexed: 11/13/2022] Open
Abstract
Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies.
Affiliation(s)
- Joseph M. Arizpe
- Applied Cognitive Neuroscience Group, Institute of Cognitive Neuroscience, University College London, London, UK
- Section on Learning and Plasticity, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Department of Neurology, University of Tennessee Health Science Center, Memphis, TN, USA
- Pediatrics Department, Le Bonheur Children’s Hospital, Memphis, TN, USA
- Vincent Walsh
- Applied Cognitive Neuroscience Group, Institute of Cognitive Neuroscience, University College London, London, UK
- Chris I. Baker
- Section on Learning and Plasticity, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
28
|
Guo K, Shaw H. Face in profile view reduces perceived facial expression intensity: an eye-tracking study. Acta Psychol (Amst) 2015; 155:19-28. [PMID: 25531122 DOI: 10.1016/j.actpsy.2014.12.001] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Revised: 11/28/2014] [Accepted: 12/03/2014] [Indexed: 10/24/2022] Open
Abstract
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
29
|
Pitteri E, Mongillo P, Carnier P, Marinelli L, Huber L. Part-based and configural processing of owner's face in dogs. PLoS One 2014; 9:e108176. [PMID: 25251285 PMCID: PMC4177116 DOI: 10.1371/journal.pone.0108176] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Accepted: 08/26/2014] [Indexed: 11/18/2022] Open
Abstract
Dogs exhibit characteristic gaze patterns when looking at human faces, but little is known about the underlying cognitive mechanisms and how much they are influenced by individual experience. In Experiment 1, seven dogs were trained in a simultaneous discrimination procedure to assess whether they could discriminate a) the owner's face parts (eyes, nose or mouth) presented in isolation and b) whole faces in which the same parts were covered. Dogs discriminated all three parts of the owner's face presented in isolation, but needed fewer sessions to reach the learning criterion for the eyes than for either the nose or the mouth. Moreover, covering the eyes region significantly disrupted face discriminability compared to the whole-face condition, whereas no such difference was found when the nose or mouth was hidden. In Experiment 2, dogs were presented with manipulated images of the owner's face (inverted, blurred, scrambled, grey-scale) to test the relative contribution of part-based and configural processing in the discrimination of human faces. Furthermore, by comparing the dogs enrolled in the previous experiment with seven 'naïve' dogs, we examined whether the relative contribution of part-based and configural processing was affected by dogs' experience with the face stimuli. Naïve dogs discriminated the owner only when configural information was provided, whereas expert dogs could discriminate the owner also when part-based processing was necessary. The present study provides the first evidence that dogs can discriminate isolated internal features of a human face and corroborates previous reports of the salience of the eyes region for human face processing. Although reliance on part-perception may be increased by specific experience, our findings suggest that human face discrimination by dogs relies mainly on configural rather than part-based elaboration.
Affiliation(s)
- Elisa Pitteri
- Department of Comparative Biomedicine and Food Science, University of Padova, Legnaro, PD, Italy
- Paolo Mongillo
- Department of Comparative Biomedicine and Food Science, University of Padova, Legnaro, PD, Italy
- Paolo Carnier
- Department of Comparative Biomedicine and Food Science, University of Padova, Legnaro, PD, Italy
- Lieta Marinelli
- Department of Comparative Biomedicine and Food Science, University of Padova, Legnaro, PD, Italy
- * E-mail:
- Ludwig Huber
- Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, and University of Vienna, Vienna, Austria
30
Borgi M, Cogliati-Dezza I, Brelsford V, Meints K, Cirulli F. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children. Front Psychol 2014; 5:411. [PMID: 24847305 PMCID: PMC4019884 DOI: 10.3389/fpsyg.2014.00411]
Abstract
The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema extends to the context of the human-animal bond. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs, and cats. We analyzed responses of 3- to 6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only in the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).
Affiliation(s)
- Marta Borgi
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
- Irene Cogliati-Dezza
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
- Francesca Cirulli
- Section of Behavioral Neuroscience, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
31
Guo K. Size-invariant facial expression categorization and associated gaze allocation within social interaction space. Perception 2014; 42:1027-42. [PMID: 24494434 DOI: 10.1068/p7552]
Abstract
As faces often appear under very different viewing conditions (e.g., brightness, viewing angle, or viewing distance), invariant recognition of facial information is key to our social interactions. Although we would clearly benefit from differentiating facial expressions (e.g., angry vs happy) at a distance, there is surprisingly little research examining how expression categorization and associated gaze allocation are affected by viewing distance within the range of typical social space. In this study I systematically varied the size of faces displaying six basic facial expressions of emotion with varying intensities to mimic viewing distances ranging from arm's length to 5 m, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. Irrespective of the displayed expression and its intensity, the participants showed indistinguishable categorization accuracy and reaction time across the tested face sizes. Reducing face size decreased the number of fixations directed at the faces but increased individual fixation durations, and shifted gaze distribution from scanning all key internal facial features to fixating mainly at the central face region. The results suggest size-invariant facial expression categorization behaviour within social interaction distance, which could be linked to a holistic gaze strategy for extracting expressive facial cues.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln LN6 7TS, UK.
32
Perceptual and gaze biases during face processing: related or not? PLoS One 2014; 9:e85746. [PMID: 24454927 PMCID: PMC3893266 DOI: 10.1371/journal.pone.0085746]
Abstract
Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right-hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e., more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two different experiments during a gender judgment task, using normal and chimeric faces presented above, below, to the right or to the left of the central fixation point, or on it (central position). Participants performed the judgment task while remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found, as it depended on the number of allowed saccades and on face position. Moreover, the gaze bias clearly depended on face position, as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use to make a gender judgment. Although these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasize the inter-individual differences observed in perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias.
33
Kujala MV, Törnqvist H, Somppi S, Hänninen L, Krause CM, Vainio O, Kujala J. Reactivity of dogs' brain oscillations to visual stimuli measured with non-invasive electroencephalography. PLoS One 2013; 8:e61818. [PMID: 23650504 PMCID: PMC3641087 DOI: 10.1371/journal.pone.0061818]
Abstract
The study of domestic dog cognition has gone through a renaissance within the last decades. However, although behavioral studies of dogs are becoming common in the field of animal cognition, the neural events underlying cognition remain unknown. Here, we employed non-invasive electroencephalography, with adhesive electrodes attached to the top of the skin, to measure the brain activity of eight domestic dogs (Canis familiaris) while they stayed still to observe photos of dog and human faces. Spontaneous oscillatory activity of the dogs, peaking in the sensors over the parieto-occipital cortex, was statistically significantly suppressed during the visual task compared with resting activity at the frequency of 15-30 Hz. Moreover, a stimulus-induced low-frequency (~2-6 Hz) suppression locked to the stimulus onset was evident at the frontal sensors, possibly reflecting a motor rhythm guiding the exploratory eye movements. The results suggest task-related reactivity of the macroscopic oscillatory activity in the dog brain. To our knowledge, this study is the first to reveal non-invasively measured reactivity of brain electrophysiological oscillations in healthy dogs, based purely on positive operant conditioning, without the need for movement restriction or medication.
Affiliation(s)
- Miiamaaria V Kujala
- Lyon Neuroscience Research Center, INSERM U1028 - CNRS UMR5292, Bron, France.
34
Social interactions through the eyes of macaques and humans. PLoS One 2013; 8:e56437. [PMID: 23457569 PMCID: PMC3574082 DOI: 10.1371/journal.pone.0056437]
Abstract
Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g., on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention with humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer, i.e., affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e., giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically with the viewed species and the nature of the interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behavior was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans.
35
Human perception of fear in dogs varies according to experience with dogs. PLoS One 2012; 7:e51775. [PMID: 23284765 PMCID: PMC3526646 DOI: 10.1371/journal.pone.0051775]
Abstract
To investigate the role of experience in humans' perception of emotion using canine visual signals, we asked adults with various levels of dog experience to interpret the emotions of dogs displayed in videos. The video stimuli had been pre-categorized by an expert panel of dog behavior professionals as showing examples of happy or fearful dog behavior. In a sample of 2,163 participants, the level of dog experience strongly predicted identification of fearful, but not of happy, emotional examples. The probability of selecting the "fearful" category to describe fearful examples increased with experience and ranged from .30 among those who had never lived with a dog to greater than .70 among dog professionals. In contrast, the probability of selecting the "happy" category to describe happy emotional examples varied little by experience, ranging from .90 to .93. In addition, the number of physical features of the dog that participants reported using for emotional interpretations increased with experience, and in particular, more-experienced respondents were more likely to attend to the ears. Lastly, more-experienced respondents provided lower difficulty and higher accuracy self-ratings than less-experienced respondents when interpreting both happy and fearful emotional examples. The human perception of emotion in other humans has previously been shown to be sensitive to individual differences in social experience, and the results of the current study extend the notion of experience-dependent processes from the intraspecific to the interspecific domain.
36
Hall CL, Hogue T, Guo K. Sexual cognition guides viewing strategies to human figures. J Sex Res 2012; 51:184-196. [PMID: 23148708 DOI: 10.1080/00224499.2012.716872]
Abstract
Gaze patterns to figure images have been proposed to reflect the observer's sexual interest, particularly for men. This eye-tracking study investigated how individual differences in sexual motivation tendencies are manifested in naturalistic gaze patterns. Heterosexual men and women (M = 21.0 years, SD = 2.1) free-viewed plain-clothed male and female figures, aged 10, 20, and 40 years old, while their eye movements were recorded. Questionnaires were used to measure sexual cognitions, including sensation seeking and sexual compulsivity, sexual inhibition and excitation, and approach and avoidance responses to sexual stimuli. Our analysis showed a clear role of sexual cognitions in influencing gaze strategies for men. Specifically, men who scored higher on sexual compulsivity dedicated more gaze to the waist-hip region when viewing figures of their preferred sexual partners than men who scored lower on sexual compulsivity. Women's sexual cognitions showed no clear effect on the gaze pattern in viewing figures of their preferred age and gender of sexual partners, suggesting women's gaze is unlikely to be a straightforward reflection of their sexual preferences. The findings further suggest that men's gaze allocation is driven by sexual preferences and supports the utility of eye tracking in the assessment of male sexual interest.
37
Guo K. Holistic gaze strategy to categorize facial expression of varying intensities. PLoS One 2012; 7:e42585. [PMID: 22880043 PMCID: PMC3411802 DOI: 10.1371/journal.pone.0042585]
Abstract
Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expression, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, United Kingdom.
38
Kujala MV, Kujala J, Carlson S, Hari R. Dog experts' brains distinguish socially relevant body postures similarly in dogs and humans. PLoS One 2012; 7:e39145. [PMID: 22720054 PMCID: PMC3374771 DOI: 10.1371/journal.pone.0039145]
Abstract
We read conspecifics' social cues effortlessly, but little is known about our ability to understand the social gestures of other species. To investigate the neural underpinnings of such skills, we used functional magnetic resonance imaging to study the brain activity of experts and non-experts of dog behavior while they observed humans or dogs either interacting with, or facing away from, a conspecific. The posterior superior temporal sulcus (pSTS) of both subject groups dissociated humans facing toward each other from humans facing away, and in dog experts a distinction also occurred for dogs facing toward vs. away in a bilateral area extending from the pSTS to the inferior temporo-occipital cortex: the dissociation of dog behavior was significantly stronger in the expert group than in the control group. Furthermore, the control group had stronger pSTS responses to humans than to dogs facing toward a conspecific, whereas in dog experts the responses were of similar magnitude. These findings suggest that dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.
Affiliation(s)
- Miiamaaria V Kujala
- Brain Research Unit, OV Lounasmaa Laboratory, Aalto University, Espoo, Finland.
39
Hall C, Hogue T, Guo K. Differential gaze behavior towards sexually preferred and non-preferred human figures. J Sex Res 2011; 48:461-9. [PMID: 20967669 DOI: 10.1080/00224499.2010.521899]
Abstract
The gaze pattern associated with image exploration is a sensitive index of our attention, motivation, and preference. To examine whether an individual's gaze behavior can reflect his or her sexual interest, this study compared the gaze patterns of young heterosexual men and women (M = 19.94 years, SD = 1.05) while they viewed photographs of plain-clothed male and female figures aged from birth to 60 years. The analysis revealed a clear gender difference in viewing sexually preferred figure images. Men displayed a distinctive gaze pattern only when viewing 20-year-old female images, with more fixations and longer viewing times dedicated to the upper-body and waist-hip regions. Women also directed more attention at the upper body in female images than in male images, but this difference was not age-specific. Analysis of local image salience revealed that observers' eye-scanning strategies could not be accounted for by low-level processes, such as analyzing local image contrast and structure, but were associated with attractiveness judgments. The results suggest that the difference in cognitive processing of sexually preferred and non-preferred figures can be manifested in gaze patterns associated with figure viewing. Thus, eye-tracking holds promise as a potentially sensitive measure of sexual preference, particularly in men.
40
Guo K, Smith C, Powell K, Nicholls K. Consistent left gaze bias in processing different facial cues. Psychol Res 2011; 76:263-9. [PMID: 21559946 DOI: 10.1007/s00426-011-0340-9]
Abstract
While viewing faces, humans often demonstrate a natural gaze bias towards the left visual field, that is, the right side of the viewee's face is often inspected first and for longer periods. Previous studies have suggested that this gaze asymmetry is a part of the gaze pattern associated with face exploration, but its relation with perceptual processing of facial cues is unclear. In this study we recorded participants' saccadic eye movements while exploring face images under different task instructions (free viewing, judging familiarity and judging facial expression). We observed a consistent left gaze bias in face viewing irrespective of task demands. The probability of the first fixation and the proportion of overall fixations directed at the left hemiface were indistinguishable across different task instructions or across different facial expressions. It seems that the left gaze bias is an automatic reflection of hemispheric lateralisation in face processing, and is not necessarily correlated with the perceptual processing of a specific type of facial information.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK.
41
Tatler BW, Wade NJ, Kwan H, Findlay JM, Velichkovsky BM. Yarbus, eye movements, and vision. Iperception 2010; 1:7-27. [PMID: 23396904 PMCID: PMC3563050 DOI: 10.1068/i0382]
Abstract
The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus.
Affiliation(s)
- Benjamin W Tatler
- School of Psychology, University of Dundee, Dundee, DD1 4HN, UK; e-mail:
- Nicholas J Wade
- School of Psychology, University of Dundee, Dundee, DD1 4HN, UK; e-mail:
- Hoi Kwan
- School of Psychology, University of Dundee, Dundee, DD1 4HN, UK; e-mail:
- John M Findlay
- Department of Psychology, University of Durham, South Road, Durham, DH1 3LE, UK; e-mail:
- Boris M Velichkovsky
- Institute of Cognitive Studies, Kurchatov Research Centre, 123182 Moscow, Russia; e-mail: