1
Packard PA, Soto-Faraco S. Crossmodal semantic congruence and rarity improve episodic memory. Mem Cognit 2025. doi:10.3758/s13421-024-01659-9. PMID: 39971892.
Abstract
Semantic congruence across sensory modalities at encoding has been shown to improve memory performance over a short time span. However, the beneficial effect of crossmodal congruence is less well established for episodic memories over longer retention periods. This gap in knowledge is particularly wide for crossmodal semantic congruence under incidental encoding conditions, a process that is especially relevant in everyday life. Here, we present the results of a series of four experiments (total N = 232) using the dual-process signal detection model to examine crossmodal semantic effects on recollection and familiarity. In Experiment 1, we established the beneficial effects of crossmodal semantics in younger adults: hearing congruent compared with incongruent object sounds during the incidental encoding of object images increased recollection and familiarity after 48 h. In Experiment 2, we reproduced and extended the finding in a sample of older participants (50-65 years old): older people displayed a comparable crossmodal congruence effect, despite a selective decline in recollection compared with younger adults. In Experiment 3, we showed that crossmodal facilitation is resilient to large imbalances in the frequency of congruent versus incongruent events (from 10 to 90%): although rare events are more memorable than frequent ones overall, the impact of this rarity effect on the crossmodal benefit was small and affected only familiarity. Collectively, these findings reveal a robust crossmodal semantic congruence effect for incidentally encoded visual stimuli over a long retention span, bearing the hallmarks of episodic memory enhancement.
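The dual-process signal detection (DPSD) model named above treats recognition as a mixture of an all-or-none recollection process and a Gaussian familiarity process. As a rough illustration only (not the authors' code; parameter names are ours), the model's predicted hit and false-alarm rates can be sketched as:

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dpsd_hit_rate(recollection: float, d_prime: float, criterion: float) -> float:
    """Old items are recollected with probability R; otherwise familiarity decides."""
    return recollection + (1.0 - recollection) * phi(d_prime / 2.0 - criterion)

def dpsd_fa_rate(d_prime: float, criterion: float) -> float:
    """New items can only be endorsed via familiarity."""
    return phi(-d_prime / 2.0 - criterion)
```

In practice the recollection parameter and the familiarity d' are typically estimated by fitting such equations to confidence-rating ROC data.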
Affiliation(s)
- Pau Alexander Packard
- Center for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005, Barcelona, Spain
- Salvador Soto-Faraco
- Center for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005, Barcelona, Spain
- Institució Catalana de Recerca I Estudis Avançats, ICREA, Barcelona, Spain
2
Le Floch A, Ropars G. Invited reply: Comment on 'Left-right asymmetry of the Maxwell spot centroids in adults without and with dyslexia'. Proc Biol Sci 2025;292:20243036. doi:10.1098/rspb.2024.3036. PMID: 39936338.
Affiliation(s)
- Albert Le Floch
- Laboratoire de Physique des Lasers, UFR SPM, Université de Rennes, Rennes 35042, France
- Laboratoire d'Electronique Quantique et Chiralités, 20 Square Marcel Bouget, Rennes 35700, France
- Guy Ropars
- Laboratoire de Physique des Lasers, UFR SPM, Université de Rennes, Rennes 35042, France
3
Le Floch A, Ropars G. Hebbian Optocontrol of Cross-Modal Disruptive Reading in Increasing Acoustic Noise in an Adult with Developmental Coordination Disorder: A Case Report. Brain Sci 2024;14:1208. doi:10.3390/brainsci14121208. PMID: 39766407; PMCID: PMC11674537.
Abstract
Acoustic noise is known to perturb reading even for good readers, both children and adults. External acoustic noise interferes with multimodal areas in the brain and reduces reading and writing performance. Moreover, people with developmental coordination disorder (DCD) and dyslexia have reading deficits even in the absence of acoustic noise. The goal of this study is to investigate the effects of additional acoustic noise on an adult with DCD and dyslexia. Indeed, as vision is the main source of information for the brain during reading, noisy internal visual crowding has been observed in many readers with dyslexia, who perceive additional mirror or duplicated images of words simultaneously with the primary images. Here, we show that when this internal visual crowding and an increasing external acoustic noise are superimposed, a disruptive reading threshold at about 50 to 60 dBA of noise is reached, depending on the type of acoustic noise, for a young adult with DCD and dyslexia but not for a control. More interestingly, we report that this disruptive noise threshold can be controlled by Hebbian mechanisms linked to pulse-modulated lighting that erases the confusing internal crowding images. An improvement of 12 dBA in the disruptive threshold is then observed with two types of acoustic noise, showing the potential utility of Hebbian optocontrol in managing reading difficulties in adults with DCD and dyslexia.
Affiliation(s)
- Albert Le Floch
- Laser Physics Laboratory, University of Rennes, 35042 Rennes Cedex, France
- Quantum Electronics and Chiralities Laboratory, 20 Square Marcel Bouget, 35700 Rennes Cedex, France
- Guy Ropars
- Laser Physics Laboratory, University of Rennes, 35042 Rennes Cedex, France
- UFR SPM, University of Rennes, 35042 Rennes Cedex, France
4
Cai B, Tang X, Wang A, Zhang M. Semantically congruent bimodal presentation modulates cognitive control over attentional guidance by working memory. Mem Cognit 2024;52:1065-1078. doi:10.3758/s13421-024-01521-y. PMID: 38308161.
Abstract
Although previous studies have well established that audiovisual enhancement promotes working memory and selective attention, it remains an open question how audiovisual enhancement influences attentional guidance by working memory. To address this issue, the present study adopted a dual-task paradigm combining a working memory task and a visual search task, in which the content of working memory was presented in audiovisual or visual-only modalities. Given the importance of search speed in memory-driven attentional suppression, we divided participants into two groups based on their reaction time (RT) in neutral trials and examined whether audiovisual enhancement of attentional suppression was modulated by search speed. The results showed that the slow search group exhibited a robust memory-driven attentional suppression effect, and the suppression effect started earlier and its magnitude was greater in the audiovisual condition than in the visual-only condition. In the fast search group, however, the suppression effect occurred only in trials with longer RTs in the visual-only condition, and its temporal dynamics were selectively improved in the audiovisual condition. Furthermore, audiovisual enhancement of memory-driven attention evolved over time. These findings suggest that semantically congruent bimodal presentation can progressively facilitate the strength and temporal dynamics of memory-driven attentional suppression, and that search speed plays an important role in this process. This may be due to a synergistic effect between multisensory working memory representations and top-down suppression mechanisms. The present study demonstrates the flexible role of audiovisual enhancement in cognitive control over memory-driven attention.
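The grouping step described above splits participants by search speed in neutral trials; the abstract does not state the exact cut, but a common choice is a median split on mean RT. A minimal sketch (the function name and tie rule are our assumptions, not the study's procedure):

```python
def median_split(rts):
    """Split mean reaction times into fast and slow groups around the median.

    Values equal to the median go to the slow group here; the original
    study may have used a different tie rule.
    """
    ordered = sorted(rts)
    n = len(ordered)
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2.0
    fast = [rt for rt in rts if rt < median]
    slow = [rt for rt in rts if rt >= median]
    return fast, slow, median
```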
Affiliation(s)
- Biye Cai
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
5
Yasoda-Mohan A, Chen F, Ó Sé C, Allard R, Ost J, Vanneste S. Phantom perception as a Bayesian inference problem: a pilot study. J Neurophysiol 2024;131:1311-1327. doi:10.1152/jn.00349.2023. PMID: 38718414.
Abstract
Tinnitus is the perception of a continuous sound in the absence of an external source. Although the role of the auditory system is well investigated, there is a gap in how multisensory signals are integrated to produce a single percept in tinnitus. Here, we train participants to learn a new sensory environment by associating a cue with a target signal that varies in perceptual threshold. In the test phase, we present only the cue to see whether the person perceives an illusion of the target signal. We performed two separate experiments to observe the behavioral and electrophysiological responses to the learning and test phases in 1) healthy young adults and 2) people with continuous subjective tinnitus and matched control subjects. In both parts of the study, the percentage of false alarms was negatively correlated with the 75% detection threshold. Additionally, the perception of an illusion was accompanied by an increased event-related potential in frontal regions of the brain. Furthermore, in patients with tinnitus, we observed no significant difference in behavioral or evoked responses in the auditory paradigm, whereas patients with tinnitus were more likely to report false alarms, along with increased evoked activity during the learning and test phases, in the visual paradigm. This emphasizes the importance of the integrity of sensory pathways in multisensory integration and how this process may be disrupted in people with tinnitus. The present study also provides preliminary evidence that tinnitus patients may build stronger perceptual models, which future studies with larger populations will need to confirm.
NEW & NOTEWORTHY Tinnitus is the continuous phantom perception of a ringing in the ears. Recently, it has been suggested that tinnitus may be a maladaptive inference of the brain to auditory anomalies, whether or not they are detected by an audiogram. The present study provides empirical evidence for this hypothesis by inducing an illusion in a sensory domain that is damaged (auditory) and one that is intact (visual). It also presents novel information about how people with tinnitus process multisensory stimuli in the audiovisual domain.
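The reported negative relationship between false-alarm percentage and the 75% detection threshold is an ordinary correlation; a minimal Pearson-r sketch with made-up illustrative numbers (the data below are not from the study):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data shaped like the result: higher thresholds, fewer false alarms.
thresholds = [0.2, 0.4, 0.6, 0.8]
false_alarm_pct = [30.0, 22.0, 15.0, 5.0]
```

With data shaped like the study's result, `pearson_r(thresholds, false_alarm_pct)` comes out negative.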
Affiliation(s)
- Anusha Yasoda-Mohan
- Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Feifan Chen
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Colum Ó Sé
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Remy Allard
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Jan Ost
- Brain Research Center for Advanced, International, Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
- Sven Vanneste
- Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Brain Research Center for Advanced, International, Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
6
Yasoda-Mohan A, Faubert J, Ost J, Kropotov JD, Vanneste S. Investigating sensitivity to multi-domain prediction errors in chronic auditory phantom perception. Sci Rep 2024;14:11036. doi:10.1038/s41598-024-61045-y. PMID: 38744906; PMCID: PMC11094085.
Abstract
The perception of a continuous phantom in a sensory domain in the absence of an external stimulus is explained as a maladaptive compensation of aberrant predictive coding, a proposed unified theory of brain functioning. If this were true, these changes would occur not only in the domain of the phantom percept but in other sensory domains as well. We confirm this hypothesis by using tinnitus (continuous phantom sound) as a model and probe the predictive coding mechanism using the established local-global oddball paradigm in both the auditory and visual domains. We observe that tinnitus patients are sensitive to changes in predictive coding not only in the auditory but also in the visual domain. We report changes in well-established components of event-related EEG such as the mismatch negativity. Furthermore, deviations in stimulus characteristics were correlated with the subjective tinnitus distress. These results provide an empirical confirmation that aberrant perceptions are a symptom of a higher-order systemic disorder transcending the domain of the percept.
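The mismatch negativity reported above is conventionally quantified as a deviant-minus-standard difference wave computed pointwise over the averaged ERPs; a minimal sketch (the sample values in the test are illustrative only, not the study's data):

```python
def difference_wave(deviant_erp, standard_erp):
    """Pointwise deviant-minus-standard difference across ERP samples (same time base)."""
    return [d - s for d, s in zip(deviant_erp, standard_erp)]
```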
Affiliation(s)
- Anusha Yasoda-Mohan
- Lab for Clinical and Integrative Neuroscience, School of Psychology, Trinity College Institute for Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
- Global Brain Health Institute, Trinity College Dublin, Dublin 2, Ireland
- Jocelyn Faubert
- Faubert Lab, School of Optometry, University of Montreal, Montreal, Canada
- Jan Ost
- Brain Research Center for Advanced International Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
- Juri D Kropotov
- N.P. Bechtereva Institute of the Human Brain of Russian Academy of Sciences, St. Petersburg, Russia
- Sven Vanneste
- Lab for Clinical and Integrative Neuroscience, School of Psychology, Trinity College Institute for Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
- Global Brain Health Institute, Trinity College Dublin, Dublin 2, Ireland
- Brain Research Center for Advanced International Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
7
Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023;17:1295010. doi:10.3389/fnins.2023.1295010. PMID: 38161792; PMCID: PMC10755906.
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, was larger when that sound was semantically congruent relative to incongruent with the second visual target (T2). Although such an audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it is still unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and incongruence-induced reduction in the alleviation of visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a higher degree while the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was uniquely larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of visual attentional blink contains not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
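Component amplitudes such as the N195 (192-228 ms) and N440 (400-500 ms) above are typically measured as the mean voltage within the latency window; a generic sketch (the times and voltages in the test are hypothetical, not the study's data):

```python
def window_mean(times_ms, volts, t_start, t_end):
    """Mean voltage over samples whose timestamps fall within [t_start, t_end] ms."""
    in_window = [v for t, v in zip(times_ms, volts) if t_start <= t <= t_end]
    return sum(in_window) / len(in_window)
```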
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
8
Sangari A, Bingham MA, Cummins M, Sood A, Tong A, Purcell P, Schlesinger JJ. A Spatiotemporal and Multisensory Approach to Designing Wearable Clinical ICU Alarms. J Med Syst 2023;47:105. doi:10.1007/s10916-023-01997-2. PMID: 37847469.
Abstract
In health care, auditory alarms are an important aspect of an informatics system that monitors patients and alerts clinicians attending to multiple concurrent tasks. However, the volume, design, and pervasiveness of existing Intensive Care Unit (ICU) alarms can make it difficult to quickly distinguish their meaning and importance. In this study, we evaluated the effectiveness of two design approaches not yet explored in a smartwatch-based alarm system designed for ICU use: (1) using audiovisual spatial colocalization and (2) adding haptic (i.e., touch) information. We compared the performance of 30 study participants using ICU smartwatch alarms containing auditory icons in two implementations of the audio modality: colocalized with the visual cue on the smartwatch's low-quality speaker versus delivered from a higher quality speaker located two feet away from participants (like a stationary alarm bay situated near patients in the ICU). Additionally, we compared participant performance using alarms with two sensory modalities (visual and audio) against alarms with three sensory modalities (adding haptic cues). Participants were 10.1% (0.24s) faster at responding to alarms when auditory information was delivered from the smartwatch instead of the higher quality external speaker. Meanwhile, adding haptic information to alarms improved response times to alarms by 12.2% (0.23s) and response times on their primary task by 10.3% (0.08s). Participants rated learnability and ease of use higher for alarms with haptic information. These small but statistically significant improvements demonstrate that audiovisual colocalization and multisensory alarm design can improve user response times.
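As a quick consistency check on the effect sizes above (assuming each percentage is computed relative to the slower condition's mean), an absolute saving and its percent improvement jointly imply the baseline response time:

```python
def implied_baseline(delta_s: float, pct_faster: float) -> float:
    """Baseline duration (s) implied by an absolute saving and its percent improvement."""
    return delta_s / (pct_faster / 100.0)
```

For example, a 0.24 s saving at 10.1% implies a baseline of roughly 2.4 s per alarm response, and 0.23 s at 12.2% implies roughly 1.9 s.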
Affiliation(s)
- Ayush Sangari
- Renaissance School of Medicine, Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11790, USA
- Molly A Bingham
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Mabel Cummins
- Department of Neuroscience, Vanderbilt University, Nashville, TN, USA
- Aditya Sood
- Long Island Jewish Medical Center, New Hyde Park, New York, USA
- Anqy Tong
- Department of Neuroscience, Vanderbilt University, Nashville, TN, USA
- Joseph J Schlesinger
- Division of Critical Care Medicine, Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN, USA
9
Cheng J, Li J, Wang A, Zhang M. Semantic Bimodal Presentation Differentially Slows Working Memory Retrieval. Brain Sci 2023;13:811. doi:10.3390/brainsci13050811. PMID: 37239283.
Abstract
Although evidence has shown that working memory (WM) can be differentially affected by the multisensory congruency of visual and auditory stimuli, it remains unclear whether multisensory congruency for concrete versus abstract words affects subsequent WM retrieval. By manipulating the attention focus toward different matching conditions of visual and auditory word characteristics in a 2-back paradigm, the present study revealed that in the characteristically incongruent condition under auditory retrieval, responses to abstract words were faster than those to concrete words, indicating that auditory abstract words are not affected by visual representation, while auditory concrete words are. Conversely, for concrete words under visual retrieval, WM retrieval was faster in the characteristically incongruent condition than in the characteristically congruent condition, indicating that the visual representation formed by auditory concrete words may interfere with WM retrieval of visual concrete words. These findings demonstrate that concrete words in multisensory conditions may be encoded together with additional visual representations, which can inadvertently slow WM retrieval. Abstract words, by contrast, seem to suppress such interference better, showing better WM performance than concrete words in the multisensory condition.
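The 2-back paradigm used above requires deciding, for each item, whether it matches the item presented two trials earlier; the matching rule itself can be sketched as:

```python
def two_back_targets(stream):
    """Indices of items that match the item two positions earlier in the stream."""
    return [i for i in range(2, len(stream)) if stream[i] == stream[i - 2]]
```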
Affiliation(s)
- Jia Cheng
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Jingjing Li
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Ming Zhang
- Department of Psychology, Suzhou University of Science and Technology, Suzhou 215009, China
- Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-0082, Japan
10
Kandemir G, Akyürek EG. Impulse perturbation reveals cross-modal access to sensory working memory through learned associations. Neuroimage 2023;274:120156. doi:10.1016/j.neuroimage.2023.120156. PMID: 37146781.
Abstract
We investigated whether learned associations between visual and auditory stimuli can afford full cross-modal access to working memory. Previous research using the impulse perturbation technique has shown that cross-modal access to working memory is one-sided: visual impulses reveal both auditory and visual memoranda, but auditory impulses do not seem to reveal visual memoranda (Wolff et al., 2020b). Our participants first learned to associate six auditory pure tones with six visual orientation gratings. Next, a delayed match-to-sample task for the orientations was completed while EEG was recorded. Orientation memories were recalled either via their learned auditory counterpart or were presented visually. We then decoded the orientation memories from the EEG responses to both auditory and visual impulses presented during the memory delay. Working memory content could always be decoded from visual impulses. Importantly, through recall of the learned associations, the auditory impulse also evoked a decodable response from the visual WM network, providing evidence for full cross-modal access. We also observed that after a brief initial dynamic period, the representational codes of the memory items generalized across time, as well as between perceptual maintenance and long-term recall conditions. Our results thus demonstrate that accessing learned associations in long-term memory provides a cross-modal pathway to working memory that seems to be based on a common coding scheme.
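The decoding analyses described above train a classifier on impulse-evoked EEG patterns and test whether memory content can be read out. As a drastically simplified stand-in (not the authors' pipeline, which uses cross-validated multivariate methods over many channels and time points), a nearest-class-mean decoder over feature vectors looks like:

```python
def nearest_mean_decode(train, test_vectors):
    """Classify each test vector by its closest class mean.

    train: dict mapping class label -> list of equal-length feature vectors.
    """
    class_means = {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in train.items()
    }

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return [
        min(class_means, key=lambda label: sq_dist(vec, class_means[label]))
        for vec in test_vectors
    ]
```

Above-chance prediction on held-out impulse responses is the evidence that the memory representation is present in the probed signal.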
Affiliation(s)
- Güven Kandemir
- Department of Experimental Psychology, University of Groningen, The Netherlands
- Institute for Brain and Behavior, Vrije Universiteit Amsterdam, The Netherlands
- Elkan G Akyürek
- Department of Experimental Psychology, University of Groningen, The Netherlands
11
Abstract
Humans, like other species, have a preference for symmetrical visual stimuli, a preference that is influenced by factors such as age, sex, and artistic training. In particular, artistic training seems to decrease the rejection of asymmetry in abstract stimuli. However, it is not known whether the same trend holds for concrete stimuli such as human faces. In this article, we investigated the role of expertise in visual arts, music, and dance in the perceived beauty and attractiveness of human faces with different asymmetries. To this end, 116 participants with different levels of art expertise rated the beauty and attractiveness of 100 photographs of faces with different degrees of asymmetry. Expertise in visual arts and dance was associated with the extent to which facial asymmetry influenced the beauty ratings assigned to the faces: the greater the expertise in visual arts and dance, the more indifferent participants were to facial asymmetry when evaluating beauty. The same effect was found neither for music expertise nor for attractiveness ratings. These findings help us understand how aesthetic evaluation of faces is modified by artistic training, and how beauty and attractiveness evaluations differ.