1
Kasahara S, Kumasaki N, Shimizu K. Investigating the impact of motion visual synchrony on self-face recognition using real-time morphing. Sci Rep 2024; 14:13090. PMID: 38849381; PMCID: PMC11161490; DOI: 10.1038/s41598-024-63233-2. Received 2023-10-27; accepted 2024-05-27. Open access.
Abstract
Face recognition is a crucial aspect of self-image and social interactions. Previous studies have focused on static images to explore the boundary of self-face recognition. Our research, however, investigates the dynamics of face recognition in contexts involving motor-visual synchrony. We first validated our morphing face metrics for self-face recognition. We then conducted an experiment using state-of-the-art video processing techniques for real-time face identity morphing during facial movement. We examined self-face recognition boundaries under three conditions: synchronous, asynchronous, and static facial movements. Our findings revealed that participants recognized a narrower self-face boundary with moving facial images compared to static ones, with no significant differences between synchronous and asynchronous movements. The direction of morphing consistently biased the recognized self-face boundary. These results suggest that while motor information of the face is vital for self-face recognition, it does not rely on movement synchronization, and the sense of agency over facial movements does not affect facial identity judgment. Our methodology offers a new approach to exploring the 'self-face boundary in action', allowing for an independent examination of motion and identity.
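The study's real-time identity morphing relied on dedicated video processing; as a rough, dependency-free illustration of the core idea, a per-frame morph between a self-face frame and another identity can be sketched as a weighted pixel blend. This is our own simplification, not the authors' pipeline (which would involve landmark-based warping), and all names below are ours:

```python
import numpy as np

def linear_morph(self_face: np.ndarray, other_face: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two pre-aligned face frames: alpha=0 gives pure self, alpha=1 pure other.

    A crude cross-dissolve stand-in for landmark-based identity morphing.
    """
    if self_face.shape != other_face.shape:
        raise ValueError("frames must be pre-aligned to the same shape")
    return (1.0 - alpha) * self_face + alpha * other_face

# Example: sweep the morph level, as one would when probing a recognition boundary.
frame_a = np.full((4, 4), 0.0)   # stand-in "self" frame
frame_b = np.full((4, 4), 1.0)   # stand-in "other" frame
levels = [float(linear_morph(frame_a, frame_b, a).mean()) for a in (0.0, 0.5, 1.0)]
print(levels)  # [0.0, 0.5, 1.0]
```

In the actual experiment, such a morph level would be updated frame by frame during facial movement; the boundary is the alpha at which participants stop judging the face as their own.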
Affiliation(s)
- Shunichi Kasahara: Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan; Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0412, Japan
- Nanako Kumasaki: Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
- Kye Shimizu: Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
2
Kopnarski L, Lippert L, Rudisch J, Voelcker-Rehage C. Predicting object properties based on movement kinematics. Brain Inform 2023; 10:29. PMID: 37925367; PMCID: PMC10625504; DOI: 10.1186/s40708-023-00209-4. Received 2023-03-23; accepted 2023-10-01. Open access.
Abstract
In order to grasp and transport an object, grip and load forces must be scaled according to the object's properties (such as weight). To select the appropriate grip and load forces, the object's weight is estimated based on experience or, in the case of robots, usually by use of image recognition. We propose a new approach that makes a robot's weight estimation less dependent on prior learning and thereby allows it to successfully grasp a wider variety of objects. This study evaluates whether it is feasible to predict an object's weight class in a replacement task based on the time series of upper-body angles of the active arm or on object velocity profiles. Furthermore, we wanted to investigate how prediction accuracy is affected by (i) the length of the time series and (ii) different cross-validation (CV) procedures. To this end, we recorded and analyzed the movement kinematics of 12 participants during a replacement task. The participants' kinematics were recorded by an optical motion-tracking system while they transported an object, 80 times in total, from varying starting positions to a predefined end position on a table. The object's weight was modified (made lighter and heavier) without changing its visual appearance. Throughout the experiment, the object's weight (light/heavy) was changed randomly without the participant's knowledge. To predict the object's weight class, we used a discrete cosine transform to smooth and compress the time series and a support vector machine for supervised learning on the resulting discrete cosine transform parameters. Results showed good prediction accuracy (up to [Formula: see text], depending on the CV procedure and the length of the time series). Even at the beginning of a movement (after only 300 ms), we were able to predict the object's weight class reliably (with a classification rate of [Formula: see text]).
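The pipeline described here, discrete cosine transform coefficients as smoothed, compressed features of a kinematic time series, followed by a supervised classifier, can be sketched minimally as follows. This is our own reconstruction on synthetic data, not the authors' code; the number of retained coefficients and the toy velocity profiles are illustrative assumptions, and a nearest-centroid classifier stands in for the SVM to keep the sketch dependency-free:

```python
import numpy as np

def dct_features(signal: np.ndarray, n_coeffs: int) -> np.ndarray:
    """Compress a 1-D time series to its first n_coeffs DCT-II coefficients.

    Low-order DCT coefficients capture the smooth trend of the trajectory,
    acting as both a smoother and a fixed-length feature vector.
    """
    n = len(signal)
    idx = np.arange(n)
    basis = np.cos(np.pi * (idx + 0.5) * np.arange(n_coeffs)[:, None] / n)
    return basis @ signal

# Toy stand-in data: assume heavier objects are moved with a lower peak velocity.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)

def make_trial(heavy: bool) -> np.ndarray:
    peak = 0.8 if heavy else 1.2
    return peak * np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size)

# "Train": average DCT feature vector per weight class (nearest-centroid stand-in for the SVM).
train = {label: np.mean([dct_features(make_trial(label), 8) for _ in range(20)], axis=0)
         for label in (False, True)}

def predict(trial: np.ndarray) -> bool:
    feats = dct_features(trial, 8)
    return min(train, key=lambda lab: float(np.linalg.norm(feats - train[lab])))

correct = sum(predict(make_trial(lab)) == lab for lab in [False, True] * 25)
print(f"accuracy: {correct / 50:.2f}")
```

On real data, one would replace the centroid comparison with an SVM fit on the DCT parameters and evaluate under the paper's CV procedures, including truncated time series (e.g. the first 300 ms) to test early prediction.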
Affiliation(s)
- Lena Kopnarski: Department of Neuromotor Behavior and Exercise, Institute of Sport and Exercise Sciences, University of Münster, Wilhelm-Schickard-Str. 8, 48149 Münster, Germany
- Laura Lippert: Applied Functional Analysis, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Julian Rudisch: Department of Neuromotor Behavior and Exercise, Institute of Sport and Exercise Sciences, University of Münster, Wilhelm-Schickard-Str. 8, 48149 Münster, Germany
- Claudia Voelcker-Rehage: Department of Neuromotor Behavior and Exercise, Institute of Sport and Exercise Sciences, University of Münster, Wilhelm-Schickard-Str. 8, 48149 Münster, Germany
3
Yeung SC, Sidhu J, Youn S, Schaefer HRH, Barton JJS, Corrow SL. The role of the upper and lower face in the recognition of facial identity in dynamic stimuli. Vision Res 2023; 206:108194. PMID: 36801665; PMCID: PMC10085847; DOI: 10.1016/j.visres.2023.108194. Received 2021-05-01; revised 2023-02-03; accepted 2023-02-04.
Abstract
Studies with static faces find that upper face halves are more easily recognized than lower face halves: an upper-face advantage. However, faces are usually encountered as dynamic stimuli, and there is evidence that dynamic information influences face identity recognition. This raises the question of whether dynamic faces also show an upper-face advantage. The objective of this study was to examine whether familiarity for recently learned faces was more accurate for upper or lower face halves, and whether this depended on whether the face was presented as static or dynamic. In Experiment 1, subjects learned a total of 12 faces: 6 static images and 6 dynamic video clips of actors in silent conversation. In Experiment 2, subjects learned 12 faces, all dynamic video clips. During the testing phase of Experiments 1 (between subjects) and 2 (within subjects), subjects were asked to recognize upper and lower face halves from either static images or dynamic clips. The data did not provide evidence for a difference in the upper-face advantage between static and dynamic faces. However, in both experiments we found an upper-face advantage, consistent with prior literature, for female faces but not for male faces. In conclusion, the use of dynamic stimuli may have little effect on the presence of an upper-face advantage, especially when the static comparison contains a series of static images, rather than a single static image, and is of sufficient image quality. Future studies could investigate the influence of face gender on the presence of an upper-face advantage.
Affiliation(s)
- Shanna C Yeung: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jhunam Sidhu: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sena Youn: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Heidi R H Schaefer: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jason J S Barton: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sherryse L Corrow: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
4
Furl N, Begum F, Ferrarese FP, Jans S, Woolley C, Sulik J. Caricatured facial movements enhance perception of emotional facial expressions. Perception 2022; 51:313-343. PMID: 35341407; PMCID: PMC9017061; DOI: 10.1177/03010066221086452. Open access.
Abstract
Although faces “in the wild” constantly undergo complicated movements, humans adeptly perceive facial identity and expression. Previous studies, focusing mainly on identity, used photographic caricature to show that distinctive form increases perceived dissimilarity. We tested whether distinctive facial movements showed similar effects, and we focussed on both perception of expression and identity. We caricatured the movements of an animated computer head, using physical motion metrics extracted from videos. We verified that these “ground truth” metrics showed the expected effects: caricature increased physical dissimilarity between faces differing in expression and those differing in identity. Like the ground truth dissimilarity, participants’ dissimilarity perception was increased by caricature when faces differed in expression. We found these perceived dissimilarities to reflect the “representational geometry” of the ground truth. However, neither of these findings held for faces differing in identity. These findings replicated across two paradigms: pairwise ratings and multiarrangement. In a final study, motion caricature did not improve recognition memory for identity, whether manipulated at study or test. We report several forms of converging evidence for spatiotemporal caricature effects on dissimilarity perception of different expressions. However, more work needs to be done to discover what identity-specific movements can enhance face identification.
Affiliation(s)
- Sarah Jans: Royal Holloway, University of London, UK
- Justin Sulik: Royal Holloway, University of London, UK; Cognition, Values & Behavior, Ludwig Maximilian University of Munich, Germany
5
Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. PMID: 34043249; PMCID: PMC8288083; DOI: 10.1002/hbm.25532. Received 2021-02-11; revised 2021-04-26; accepted 2021-05-02. Open access.
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6
Bylemans T, Vrancken L, Verfaillie K. Developmental Prosopagnosia and Elastic Versus Static Face Recognition in an Incidental Learning Task. Front Psychol 2020; 11:2098. PMID: 32982859; PMCID: PMC7488957; DOI: 10.3389/fpsyg.2020.02098. Received 2020-02-28; accepted 2020-07-28. Open access.
Abstract
Previous research on the beneficial effect of motion has postulated that learning a face in motion provides additional cues to recognition. Surprisingly, however, few studies have examined this beneficial effect in an incidental learning task or in developmental prosopagnosia (DP), even though such studies could tell us more about everyday face recognition than the perception of static faces alone. In the current study, 18 young adults (Experiment 1) and five DPs with 10 age-matched controls (Experiment 2) completed an incidental learning task in which both static and elastically moving unfamiliar faces were presented sequentially. The faces were then to be recognized in a delayed visual search task, in which each face either kept its original presentation mode or switched (from static to elastically moving, or vice versa). In Experiment 1, performance in the elastic-elastic condition was significantly better than in the elastic-static and static-elastic conditions; however, it did not differ significantly from the static-static condition. In Experiment 2, apart from higher scores in the elastic-elastic than the static-elastic condition in the age-matched group, no significant differences between conditions were detected for either the DPs or the age-matched controls. The current study thus could not provide compelling evidence for a general beneficial effect of motion. Age-matched controls performed generally worse than DPs, which may potentially be explained by their higher rates of false alarms. Factors that could have influenced the results are discussed.
Affiliation(s)
- Tom Bylemans: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Leia Vrancken: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Karl Verfaillie: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
7
Bekemeier HHH, Maycock JW, Ritter HJJ. What Does a Hand-Over Tell? Individuality of Short Motion Sequences. Biomimetics (Basel) 2019; 4(3):55. PMID: 31394826; PMCID: PMC6784304; DOI: 10.3390/biomimetics4030055. Received 2019-06-28; revised 2019-08-02; accepted 2019-08-05. Open access.
Abstract
How much information about identity and other individual participant characteristics is revealed by relatively short spatio-temporal motion trajectories of a person? We study this question by selecting a set of individual participant characteristics and analysing motion-captured trajectories of an exemplary class of familiar movements, namely the handover of an object to another person. The experiment is performed with different participants under different, predefined conditions. A selection of participant characteristics, such as the Big Five personality traits, gender, weight, or sportiness, is assessed, and we analyse the impact of the three factor groups “participant identity”, “participant characteristics”, and “experimental conditions” on the observed hand trajectories. The participants’ movements are recorded via optical marker-based hand motion capture. One participant, the giver, hands over an object to the receiver. The resulting time courses of three-dimensional marker positions are analysed. Multidimensional scaling is used to project trajectories to points in a dimension-reduced feature space. Supervised learning is also applied. We find that “participant identity” has the strongest correlation with the trajectories, with the factor group “experimental conditions” ranking second. On the other hand, no correlation between the “participant characteristics” and the hand trajectory features could be found.
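The analysis step described here, projecting trajectories into a dimension-reduced feature space via multidimensional scaling, can be illustrated with classical (Torgerson) MDS on a toy distance matrix. This is a generic sketch of the technique, not the authors' pipeline; the distance matrix below is a fabricated example in which two trajectories per "participant" are mutually close:

```python
import numpy as np

def classical_mds(dist: np.ndarray, n_dims: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed points so that their Euclidean
    distances approximate the entries of the given distance matrix."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_dims]      # keep the largest n_dims
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy distances between four trajectories: rows 0-1 belong to one
# "participant", rows 2-3 to another; within-participant distances are small.
dist = np.array([
    [0.0, 1.0, 4.0, 4.0],
    [1.0, 0.0, 4.0, 4.0],
    [4.0, 4.0, 0.0, 1.0],
    [4.0, 4.0, 1.0, 0.0],
])
points = classical_mds(dist, 2)
within = float(np.linalg.norm(points[0] - points[1]))
between = float(np.linalg.norm(points[0] - points[2]))
print(f"within: {within:.2f}, between: {between:.2f}")
```

In the embedded space, trajectories of the same participant form a cluster, which is the structure a supervised classifier can then exploit, matching the abstract's finding that identity correlates most strongly with the trajectories.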
Affiliation(s)
- Helge J J Ritter: Neuroinformatics Group, Bielefeld University, 33615 Bielefeld, Germany
8
Dobs K, Bülthoff I, Schultz J. Use and Usefulness of Dynamic Face Stimuli for Face Perception Studies: A Review of Behavioral Findings and Methodology. Front Psychol 2018; 9:1355. PMID: 30123162; PMCID: PMC6085596; DOI: 10.3389/fpsyg.2018.01355. Received 2018-03-06; accepted 2018-07-13. Open access.
Abstract
Faces that move contain rich information about facial form, such as facial features and their configuration, alongside the motion of those features. During social interactions, humans constantly decode and integrate these cues. To fully understand human face perception, it is important to investigate what information dynamic faces convey and how the human visual system extracts and processes information from this visual input. However, partly due to the difficulty of designing well-controlled dynamic face stimuli, many face perception studies still rely on static faces as stimuli. Here, we focus on evidence demonstrating the usefulness of dynamic faces as stimuli, and evaluate different types of dynamic face stimuli to study face perception. Studies based on dynamic face stimuli revealed a high sensitivity of the human visual system to natural facial motion and consistently reported dynamic advantages when static face information is insufficient for the task. These findings support the hypothesis that the human perceptual system integrates sensory cues for robust perception. In the present paper, we review the different types of dynamic face stimuli used in these studies, and assess their usefulness for several research questions. Natural videos of faces are ecological stimuli but provide limited control of facial form and motion. Point-light faces allow for good control of facial motion but are highly unnatural. Image-based morphing is a way to achieve control over facial motion while preserving the natural facial form. Synthetic facial animations allow separation of facial form and motion to study aspects such as identity-from-motion. While synthetic faces are less natural than videos of faces, recent advances in photo-realistic rendering may close this gap and provide naturalistic stimuli with full control over facial motion. We believe that many open questions, such as what dynamic advantages exist beyond emotion and identity recognition and which dynamic aspects drive these advantages, can be addressed adequately with different types of stimuli and will improve our understanding of face perception in more ecological settings.
Affiliation(s)
- Katharina Dobs: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States; Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Isabelle Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Johannes Schultz: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
9
Jesse A, Bartoli M. Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition 2018; 176:195-208. DOI: 10.1016/j.cognition.2018.03.018. Received 2017-03-15; revised 2018-03-13; accepted 2018-03-21.
10
Dobs K, Bülthoff I, Schultz J. Identity information content depends on the type of facial movement. Sci Rep 2016; 6:34301. PMID: 27683087; PMCID: PMC5041143; DOI: 10.1038/srep34301. Received 2016-05-24; accepted 2016-09-09. Open access.
Abstract
Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Affiliation(s)
- Katharina Dobs: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Centre de Recherche Cerveau et Cognition, Université de Toulouse, Université Paul Sabatier, Toulouse, France; CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France
- Isabelle Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Johannes Schultz: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
11
Liu CH, Chen W, Ward J, Takahashi N. Dynamic Emotional Faces Generalise Better to a New Expression but Not to a New View. Sci Rep 2016; 6:31001. PMID: 27499252; PMCID: PMC4976339; DOI: 10.1038/srep31001. Received 2016-04-21; accepted 2016-07-11. Open access.
Abstract
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
Affiliation(s)
- Chang Hong Liu: Department of Psychology, Faculty of Science and Technology, Bournemouth University, Talbot Campus, Fern Barrow, Poole, Dorset BH12 5BB, United Kingdom
- Wenfeng Chen: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China
- James Ward: Department of Computer Science, University of Hull, Cottingham Road, Hull HU6 7RX, United Kingdom
- Nozomi Takahashi: Department of Psychology, Graduate School of Literature and Social Science, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156-8550, Japan
12
Abstract
Several neuroimaging studies have revealed that the superior temporal sulcus (STS) is highly implicated in the processing of facial motion. A limitation of these investigations, however, is that many of them utilize unnatural stimuli (e.g., morphed videos) or those which contain many confounding spatial cues. As a result, the underlying mechanisms may not be fully engaged during such perception. The aim of the current study was to build upon the existing literature by implementing highly detailed and accurate models of facial movement. Accordingly, neurologically healthy participants viewed simultaneous sequences of rigid and nonrigid motion that was retargeted onto a standard computer generated imagery face model. Their task was to discriminate between different facial motion videos in a two-alternative forced choice paradigm. Presentations varied between upright and inverted orientations. In corroboration with previous data, the perception of natural facial motion strongly activated a portion of the posterior STS. The analysis also revealed engagement of the lingual gyrus, fusiform gyrus, precentral gyrus, and cerebellum. These findings therefore suggest that the processing of dynamic facial information is supported by a network of visuomotor substrates.
Affiliation(s)
- Christine Girges: College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
- Justin O'Brien: College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
- Janine Spencer: College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
| |
Collapse
|