1
Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. [PMID: 38644390] [DOI: 10.1177/17470218241252145]
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to the regions of the face that facilitate identification (i.e., the internal features) relative to static faces. We tested this hypothesis by recording participants' eye movements while they completed famous face recognition (Experiment 1, N = 32) and face-learning (Experiment 2, N = 60; Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving faces than of static faces. Conversely, the proportion of fixations to the internal non-feature areas (i.e., cheeks, forehead, chin) and the external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). The results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to the internal facial features, but only during familiar face recognition is the magnitude of the motion advantage functionally related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher
- Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton
- Department of Psychology, Teesside University, Middlesbrough, UK
- School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander
- Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
2
Yeung SC, Sidhu J, Youn S, Schaefer HRH, Barton JJS, Corrow SL. The role of the upper and lower face in the recognition of facial identity in dynamic stimuli. Vision Res 2023; 206:108194. [PMID: 36801665] [PMCID: PMC10085847] [DOI: 10.1016/j.visres.2023.108194]
Abstract
Studies with static faces find that upper face halves are more easily recognized than lower face halves: an upper-face advantage. However, faces are usually encountered as dynamic stimuli, and there is evidence that dynamic information influences face identity recognition. This raises the question of whether dynamic faces also show an upper-face advantage. The objective of this study was to examine whether familiarity for recently learned faces was more accurate for upper or lower face halves, and whether this depended on whether the face was presented as static or dynamic. In Experiment 1, subjects learned a total of 12 faces: 6 static images and 6 dynamic video clips of actors in silent conversation. In Experiment 2, subjects learned 12 faces, all dynamic video clips. During the testing phase of Experiments 1 (between subjects) and 2 (within subjects), subjects were asked to recognize upper and lower face halves from either static images and/or dynamic clips. The data did not provide evidence for a difference in the upper-face advantage between static and dynamic faces. However, in both experiments we found an upper-face advantage, consistent with the prior literature, for female faces but not for male faces. In conclusion, the use of dynamic stimuli may have little effect on the presence of an upper-face advantage, especially when the static comparison contains a series of static images, rather than a single static image, and is of sufficient image quality. Future studies could investigate the influence of face gender on the presence of an upper-face advantage.
Affiliation(s)
- Shanna C Yeung
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jhunam Sidhu
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sena Youn
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Heidi R H Schaefer
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jason J S Barton
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sherryse L Corrow
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
3
Huang Y. Dynamic Face Perception: The Role of Expertise in Dual Processing of Features and Configuration. Journal of Undergraduate Life Sciences 2023. [DOI: 10.33137/juls.v16i1.40382]
Abstract
Face perception is the basis of many types of social information exchange, but there is controversy over its underlying mechanisms. Researchers have theorized two processing pathways underlying face perception: configural processing and featural processing. Featural processing focuses on the individual features of a face, whereas configural processing focuses on the spatial relations of features. To resolve the debate on the relative contribution of the two pathways to face perception, researchers have proposed a dual processing model in which the two pathways serve two different perceptual functions: detecting face-like patterns and identifying individual faces. The dual processing model is based on face perception experiments that primarily use static faces. As we mostly interact with dynamic faces in real life, generalizing the model to dynamic faces will advance our understanding of how faces are perceived in real life. This paper proposes a refined dual processing model of dynamic face perception, in which expertise in dynamic face perception supports identifying individual faces and is a learned behaviour that develops with age. Specifically, facial motions account for the advantages of dynamic faces compared with static faces. This paper highlights two intrinsic characteristics of facial motions that enable the advantages of dynamic faces in face perception. Firstly, facial motion provides facial information from various viewpoints, and thus supports the generalization of face perception to unlearned views of faces. Secondly, distinctive motion patterns serve as a cue to the identity of the face.
4
Dunn JD, Varela VPL, Nicholls VI, Papinutto M, White D, Miellet S. Face-Information Sampling in Super-Recognizers. Psychol Sci 2022; 33:1615-1630. [PMID: 36044042] [DOI: 10.1177/09567976221096320]
Abstract
Perceptual processes underlying individual differences in face-recognition ability remain poorly understood. We compared the visual sampling of 37 adult super-recognizers (individuals with superior face-recognition ability) with that of 68 typical adult viewers by measuring gaze position as they learned and recognized unfamiliar faces. In both phases, participants viewed faces through "spotlight" apertures that varied in size, with face information restricted in real time around their point of fixation. We found higher accuracy in super-recognizers at all aperture sizes, showing that their superiority does not rely on global sampling of face information but is also evident when they are forced to adopt piecemeal sampling. Additionally, super-recognizers made more fixations, focused less on the eye region, and distributed their gaze more than typical viewers. These differences were most apparent when learning faces and were consistent with trends we observed across the broader ability spectrum, suggesting that they reflect factors that vary dimensionally in the broader population.
Affiliation(s)
- James D Dunn
- School of Psychology, University of New South Wales
- Victoria I Nicholls
- Faculty of Science & Technology, Bournemouth University
- Department of Psychology, University of Cambridge
- David White
- School of Psychology, University of New South Wales
5
Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. [PMID: 34043249] [PMCID: PMC8288083] [DOI: 10.1002/hbm.25532]
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6
Rubínová E, Fitzgerald RJ, Juncu S, Ribbers E, Hope L, Sauer JD. Live presentation for eyewitness identification is not superior to photo or video presentation. Journal of Applied Research in Memory and Cognition 2021. [DOI: 10.1016/j.jarmac.2020.08.009]
7
Independent contributions of the face, body, and gait to the representation of the whole person. Atten Percept Psychophys 2020; 83:199-214. [PMID: 33083987] [DOI: 10.3758/s13414-020-02110-2]
Abstract
Studies on person perception have primarily investigated static images of faces. However, real-life person perception also involves the body and often the gait of the whole person. Whereas some studies indicated that the face dominates the representation of the whole person, others have emphasized the additional contribution of the body and gait. Here, we compared models of whole-person perception by asking whether a model that includes the body for static whole-person stimuli, and also the gait for dynamic whole-person stimuli, accounts better for the representation of the whole person than a model that takes into account the face alone. Participants rated the distinctiveness of static or dynamic displays of different people based on either the whole person, face, body, or gait. By fitting a linear regression model to the representation of the whole person based on the face, body, and gait, we revealed that the face and body contribute uniquely and independently to the representation of the static whole person, and that gait further contributes to the representation of the dynamic person. A complementary analysis examined whether these components are also valid dimensions of a whole-person representational space. This analysis further confirmed that the body in addition to the face, as well as the gait, are valid dimensions of the static and dynamic whole-person representations, respectively. These data clearly show that whole-person perception goes beyond the face and is significantly influenced by the body and gait.
8
Bylemans T, Vrancken L, Verfaillie K. Developmental Prosopagnosia and Elastic Versus Static Face Recognition in an Incidental Learning Task. Front Psychol 2020; 11:2098. [PMID: 32982859] [PMCID: PMC7488957] [DOI: 10.3389/fpsyg.2020.02098]
Abstract
Previous research on the beneficial effect of motion has postulated that learning a face in motion provides additional cues to recognition. Surprisingly, however, few studies have examined the beneficial effect of motion in an incidental learning task or in developmental prosopagnosia (DP), even though such studies could provide more valuable information about everyday face recognition than the perception of static faces. In the current study, 18 young adults (Experiment 1) and five DPs and 10 age-matched controls (Experiment 2) completed an incidental learning task in which both static and elastically moving unfamiliar faces were presented sequentially. These faces were then to be recognized in a delayed visual search task, during which each face could either keep its original presentation mode or switch (from static to elastically moving, or vice versa). In Experiment 1, performance in the elastic-elastic condition was significantly better than in the elastic-static and static-elastic conditions; however, it did not differ significantly from the static-static condition. Except for higher scores in the elastic-elastic compared with the static-elastic condition in the age-matched group, no other significant differences between conditions were detected for either the DPs or the age-matched controls. The current study could not provide compelling evidence for a general beneficial effect of motion. Age-matched controls performed generally worse than DPs, which may potentially be explained by their higher rates of false alarms. Factors that could have influenced the results are discussed.
Affiliation(s)
- Tom Bylemans
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Leia Vrancken
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Karl Verfaillie
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
9
Lander K, Butcher NL. Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity. Front Psychol 2020; 11:1378. [PMID: 32719634] [PMCID: PMC7347903] [DOI: 10.3389/fpsyg.2020.01378]
Abstract
The accurate recognition of emotion is important for interpersonal interaction and when navigating our social world. However, not all facial displays reflect the emotional experience currently being felt by the expresser. Indeed, faces express both genuine and posed displays of emotion. In this article, we summarize the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in determining facial expressions and distinguishing between genuine and posed expressions of emotion. We propose that both dynamic information and face familiarity may modulate our ability to determine whether an expression is genuine or not. Finally, we consider the shared role for dynamic information across different face recognition tasks and the wider impact of face familiarity on determining genuine from posed expressions during real-world interactions.
Affiliation(s)
- Karen Lander
- Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, United Kingdom
- Natalie L Butcher
- School of Social Sciences, Humanities and Law, Teesside University, Middlesbrough, United Kingdom
10
Face search in CCTV surveillance. Cognitive Research: Principles and Implications 2019; 4:37. [PMID: 31549263] [PMCID: PMC6757089] [DOI: 10.1186/s41235-019-0193-0]
Abstract
Background: We present a series of experiments on visual search in a highly complex environment, security closed-circuit television (CCTV). Using real surveillance footage from a large city transport hub, we ask viewers to search for target individuals. Search targets are presented in a number of ways, using naturally occurring images including their passports, photo ID, social media images, and custody images/videos. Our aim is to establish general principles for search efficiency within this realistic context. Results: Across four studies we find that providing multiple photos of the search target consistently improves performance. Three different photos of the target, taken at different times, give substantial performance improvements by comparison to a single target photo. By contrast, providing targets in moving videos or with biographical context does not lead to improvements in search accuracy. Conclusions: We discuss the multiple-image advantage in relation to a growing understanding of the importance of within-person variability in face recognition.
11
The Frozen Effect: Objects in motion are more aesthetically appealing than objects frozen in time. PLoS One 2019; 14:e0215813. [PMID: 31095600] [PMCID: PMC6522023] [DOI: 10.1371/journal.pone.0215813]
Abstract
Videos of moving faces are more flattering than static images of the same face, a phenomenon dubbed the Frozen Face Effect. This may reflect an aesthetic preference for faces viewed in a more ecological context than still photographs. In the current set of experiments, we sought to determine whether this effect is unique to facial processing, or if motion confers an aesthetic benefit to other stimulus categories as well, such as bodies and objects—that is, a more generalized ‘Frozen Effect’ (FE). If motion were the critical factor in the FE, we would expect the video of a body or object in motion to be significantly more appealing than when seen in individual, static frames. To examine this, we asked participants to rate sets of videos of bodies and objects in motion along with the still frames constituting each video. Extending the original FFE, we found that participants rated videos as significantly more flattering than each video’s corresponding still images, regardless of stimulus domain, suggesting that the FFE generalizes well beyond face perception. Interestingly, the magnitude of the FE increased with the predictability of stimulus movement. Our results suggest that observers prefer bodies and objects in motion over the same information presented in static form, and the more predictable the motion, the stronger the preference. Motion imbues objects and bodies with greater aesthetic appeal, which has implications for how one might choose to portray oneself in various social media platforms.
12
Fitzgerald RJ, Price HL, Valentine T. Eyewitness Identification: Live, Photo, and Video Lineups. Psychol Public Policy Law 2018; 24:307-325. [PMID: 30100702] [PMCID: PMC6078069] [DOI: 10.1037/law0000164]
Abstract
The medium used to present lineup members for eyewitness identification varies according to the location of the criminal investigation. Although in some jurisdictions live lineups remain the default procedure, elsewhere this practice has been replaced with photo or video lineups. This divergence leads to two possibilities: Either some jurisdictions are not using the lineup medium that best facilitates accurate eyewitness identification or the lineup medium has no bearing on the accuracy of eyewitness identification. Photo and video lineups are the more practical options, but proponents of live lineups believe witnesses make better identification decisions when the lineup members are physically present. Here, the authors argue against this live superiority hypothesis. To be superior in practice, the benefits of live presentation would have to be substantial enough to overcome the inherent difficulties of organizing and administering a live lineup. The review of the literature suggests that even in experimental settings, where these difficulties can be minimized, it is not clear that live lineups are superior. The authors conclude that live lineups are rarely the best option in practice and encourage further research to establish which nonlive medium provides the best balance between probative value and practical utility.
Affiliation(s)
- Tim Valentine
- Department of Psychology, Goldsmiths, University of London
13
Bindemann M, Johnston RA. Understanding how unfamiliar faces become familiar: Introduction to a special issue on face learning. Q J Exp Psychol (Hove) 2017; 70:859-862. [PMID: 27918245] [DOI: 10.1080/17470218.2016.1267235]
Affiliation(s)
- Markus Bindemann
- School of Psychology, University of Kent, Canterbury, Kent, UK
- Robert A Johnston
- School of Psychology, University of Kent, Canterbury, Kent, UK
14
Butcher N, Lander K, Jagger R. A search advantage for dynamic same-race and other-race faces. Visual Cognition 2016. [DOI: 10.1080/13506285.2016.1262487]
Affiliation(s)
- Natalie Butcher
- Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Rachel Jagger
- School of Psychological Sciences, University of Manchester, Manchester, UK
15
Affiliation(s)
- Karin S. Pilz
- School of Psychology, University of Aberdeen, Aberdeen, Scotland, UK
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media & Knowledge Science, University of Malta, Msida, Malta
16
Liu CH, Chen W, Ward J, Takahashi N. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. Sci Rep 2016; 6:31001. [PMID: 27499252] [PMCID: PMC4976339] [DOI: 10.1038/srep31001]
Abstract
Prior research based on static images has found limited improvement in recognising previously learnt faces in a new expression, even when several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were presented either in short video clips or as still images. To assess the effect of exposure to expression variation, each face was learnt through either a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either one or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both the benefits and limitations of exposure to moving expressions for expression-invariant face recognition.
Affiliation(s)
- Chang Hong Liu
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, United Kingdom
- Wenfeng Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China
- James Ward
- Department of Computer Science, University of Hull, Cottingham Road, Hull, HU6 7RX, United Kingdom
- Nozomi Takahashi
- Department of Psychology, Graduate School of Literature and Social Science, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156-8550, Japan