1
Sun X, Fu Q. The Visual Advantage Effect in Comparing Uni-Modal and Cross-Modal Probabilistic Category Learning. J Intell 2023; 11:218. PMID: 38132836; PMCID: PMC10744040; DOI: 10.3390/jintelligence11120218.
Abstract
People rely on multiple learning systems to complete weather prediction (WP) tasks with visual cues. However, how people perform in audio and audiovisual modalities remains elusive. The present research investigated how the cue modality influences performance in probabilistic category learning and conscious awareness about the category knowledge acquired. A modified weather prediction task was adopted, in which the cues included two dimensions from visual, auditory, or audiovisual modalities. The results of all three experiments revealed better performances in the visual modality relative to the audio and audiovisual modalities. Moreover, participants primarily acquired unconscious knowledge in the audio and audiovisual modalities, while conscious knowledge was acquired in the visual modality. Interestingly, factors such as the amount of training, the complexity of visual stimuli, and the number of objects to which the two cues belonged influenced the amount of conscious knowledge acquired but did not change the visual advantage effect. These findings suggest that individuals can learn probabilistic cues and category associations across different modalities, but a robust visual advantage persists. Specifically, visual associations can be learned more effectively, and are more likely to become conscious. The possible causes and implications of these effects are discussed.
Affiliation(s)
- Xunwei Sun
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China;
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100083, China
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing 100080, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China;
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100083, China
2
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
3
Roark CL, Lescht E, Wray AH, Chandrasekaran B. Auditory and visual category learning in children and adults. Dev Psychol 2023; 59:963-975. PMID: 36862449; PMCID: PMC10164074; DOI: 10.1037/dev0001525.
Abstract
Categories are fundamental to everyday life and the ability to learn new categories is relevant across the lifespan. Categories are ubiquitous across modalities, supporting complex processes such as object recognition and speech perception. Prior work has proposed that different categories may engage learning systems with unique developmental trajectories. There is a limited understanding of how perceptual and cognitive development influences learning as prior studies have examined separate participants in a single modality. The current study presents a comprehensive assessment of category learning in 8-12-year-old children (12 female; 34 white, 1 Asian, 1 more than one race; M household income $85-$100 K) and 18-61-year-old adults (13 female; 32 white, 10 Black or African American, 4 Asian, 2 more than one race, 1 other; M household income $40-55 K) in a broad sample collected online from the United States. Across multiple sessions, participants learned categories across modalities (auditory, visual) that engage different learning systems (explicit, procedural). Unsurprisingly, adults outperformed children across all tasks. However, this enhanced performance was asymmetrical across categories and modalities. Adults far outperformed children in learning visual explicit categories and auditory procedural categories, with fewer differences across development for other types of categories. Adults' general benefit over children was due to enhanced information processing, while their superior performance for visual explicit and auditory procedural categories was associated with less cautious correct responses. These results demonstrate an interaction between perceptual and cognitive development that influences learning of categories that may correspond to the development of real-world skills such as speech perception and reading. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
Affiliation(s)
- Casey L. Roark
- University of Pittsburgh, Department of Communication Science and Disorders
- Center for the Neural Basis of Cognition
- Erica Lescht
- University of Pittsburgh, Department of Communication Science and Disorders
- Amanda Hampton Wray
- University of Pittsburgh, Department of Communication Science and Disorders
- Center for the Neural Basis of Cognition
- Bharath Chandrasekaran
- University of Pittsburgh, Department of Communication Science and Disorders
- Center for the Neural Basis of Cognition
4
Ren J, Wang M. Development of statistical learning ability across modalities, domains, and languages. J Exp Child Psychol 2023; 226:105570. PMID: 36332433; DOI: 10.1016/j.jecp.2022.105570.
Abstract
Statistical learning (SL) is defined as our ability to use statistics (e.g., frequencies or transitional probabilities) to detect implicit regularities in the environment. Limited research has examined the developmental trajectory of SL across domains and modalities, and no previous research has made systematic comparisons across domains, modalities, and languages using comparable tasks. The current study investigated the development of SL ability across 9-, 11-, and 13-year-old native Chinese-speaking children in non-linguistic visual and auditory SL, first-language Chinese visual and auditory SL, and second-language English visual and auditory SL. Results showed that children across the three age groups achieved all types of SL, and they performed better in visual modality than in auditory modality. Furthermore, while visual SL constantly improved from 9- to 11- to 13-year-olds, auditory SL improved only from 11- to 13-year-olds but not from 9- to 11-year-olds, which could be explained by the discrepancy in developmental trajectory between auditory language and working memory. This pattern of age and modality interaction was similar across non-linguistic Chinese and English SL. A significant interaction between modality and language type also showed that better learning was achieved in visual SL as compared with auditory SL in both non-linguistic and English stimuli. However, children performed similarly across the two modalities in Chinese, possibly due to the contribution of tonal information. Together, our findings point to the joint function of age, modality, and language type in SL development.
Affiliation(s)
- Jinglei Ren
- Department of Human Development and Quantitative Methodology, University of Maryland, College Park, College Park, MD 20742, USA
- Min Wang
- Department of Human Development and Quantitative Methodology, University of Maryland, College Park, College Park, MD 20742, USA
5
Sun Y, Fu Q. How do irrelevant stimuli from another modality influence responses to the targets in a same-different task. Conscious Cogn 2023; 107:103455. PMID: 36586291; DOI: 10.1016/j.concog.2022.103455.
Abstract
It remains unclear whether multisensory interaction can occur implicitly at the abstract level. To address this issue, a same-different task was used to select comparable images and sounds in Experiment 1. Stimuli with various levels of discrimination difficulty were then adopted in a modified same-different task in Experiments 2, 3, and 4. The results showed that a consistency effect could be observed in the testing phase only when the irrelevant stimuli were easily distinguishable. Moreover, when easily distinguishable irrelevant stimuli were presented simultaneously with difficult target stimuli, irrelevant auditory stimuli facilitated responses to visual targets whereas irrelevant visual stimuli interfered with responses to auditory targets in the training phase, indicating an asymmetry between the visual and auditory modalities in abstract multisensory integration. The results suggest that abstract multisensory information can be implicitly integrated and that the inverse effectiveness principle might not apply to high-level processing of abstract multisensory integration.
Affiliation(s)
- Ying Sun
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
6
Wang J, Lu J, Xu Z, Wang X. When Lights Can Breathe: Investigating the Influences of Breathing Lights on Users' Emotion. Int J Environ Res Public Health 2022; 19:13205. PMID: 36293785; PMCID: PMC9603525; DOI: 10.3390/ijerph192013205.
Abstract
Light can significantly influence users' physiological and behavioural performance. However, how the breathing of a light influences users' mood regulation remains unknown. To fill this gap, this study conducted a 2-by-2 experiment (N = 20) with light breathing as the between-subject factor and light condition as the within-subject factor. Both physiological and subjective indicators were measured to reflect mood regulation, and the data were analysed using a generalised linear mixed model. The results showed that breathing lights are effective in regulating users' moods; more specifically, they helped users lower their electrodermal values and heart rates. Users did not report any significant difference in the subjective measures, which suggests that the influence of a breathing light occurs unconsciously. Furthermore, the effect was significant for both cold and warm colour temperatures. Designers and engineers can use these findings to manage user emotion when necessary.
7
Li J, Deng SW. Facilitation and interference effects of the multisensory context on learning: a systematic review and meta-analysis. Psychol Res 2022; 87:1334-1352. DOI: 10.1007/s00426-022-01733-4.
8
Are children with unilateral hearing loss more tired? Int J Pediatr Otorhinolaryngol 2022; 155:111075. PMID: 35189448; DOI: 10.1016/j.ijporl.2022.111075.
Abstract
Objective: To determine whether children with unilateral sensorineural hearing loss (USNHL) and unilateral conductive hearing loss (UCHL) have higher levels of fatigue than literature-reported normal-hearing (LRNH) children. Methods: This was a cross-sectional survey utilizing the PedsQL™ Multidimensional Fatigue Scale, administered to children with unilateral hearing loss (UHL) and their parents at two tertiary care academic medical centers and a nationwide microtia/atresia conference. The scale (on which lower scores indicate greater fatigue) was used to compare child and parental proxy reports of fatigue among USNHL, UCHL, and LRNH children. ANOVA and post-hoc Tukey Honest Significant Difference testing were used for statistical analysis. Results: Of 69 children included in the study, 42 (61%) had UCHL and 27 (39%) had USNHL. Children with USNHL reported more total fatigue (mean 69.1, SD 19.3) than LRNH children (mean 80.5, SD 13.3; difference -11.4; 95% CI: -19.98 to -2.84) and children with UCHL (mean 78.0, SD 14.5; difference -8.95; 95% CI: -17.86 to 0.04). Children with UCHL reported similar levels of fatigue to LRNH children (difference -2.5; 95% CI: -9.95 to 5.03). Parents of children with USNHL reported greater fatigue (mean 67.6, SD 22.6) in their children than parents of LRNH children (mean 89.6, SD 11.4; difference -22.0; 95% CI: -29.8 to -14.3) and parents of children with UCHL (mean 76.2, SD 17.3; difference -8.6; 95% CI: -17.5 to 0.21). Parents of children with UCHL also reported higher levels of fatigue than parents of LRNH children (difference -13.4; 95% CI: -19.98 to -6.84). Conclusions: Children with USNHL reported greater levels of fatigue than LRNH children and children with UCHL. The results implicate cognitive load as an important consideration in children with hearing loss. The measurement of fatigue may be a useful indicator of the benefit of intervention (e.g., amplification) for these children.
9
Boustani N, Pishghadam R, Shayesteh S. Multisensory Input Modulates P200 and L2 Sentence Comprehension: A One-Week Consolidation Phase. Front Psychol 2021; 12:746813. PMID: 34616346; PMCID: PMC8488095; DOI: 10.3389/fpsyg.2021.746813.
Abstract
Multisensory input is an aid to language comprehension; however, it remains to be seen to what extent various combinations of senses may affect the P200 component and attention-related cognitive processing associated with L2 sentence comprehension along with the N400 as a later component. To this aim, we provided some multisensory input (enriched with data from three (i.e., exvolvement) and five senses (i.e., involvement)) for a list of unfamiliar words to 18 subjects. Subsequently, the words were embedded in an acceptability judgment task with 360 pragmatically correct and incorrect sentences. The task, along with the ERP recording, was conducted after a 1-week consolidation period to track any possible behavioral and electrophysiological distinctions in the retrieval of information with various sense combinations. According to the behavioral results, we found that the combination of five senses leads to more accurate and quicker responses. Based on the electrophysiological results, the combination of five senses induced a larger P200 amplitude compared to the three-sense combination. The implication is that as the sensory weight of the input increases, vocabulary retrieval is facilitated and more attention is directed to the overall comprehension of L2 sentences which leads to more accurate and quicker responses. This finding was not, however, reflected in the neural activity of the N400 component.
Affiliation(s)
- Nasim Boustani
- Department of English, Ferdowsi University of Mashhad, Mashhad, Iran
- Reza Pishghadam
- Department of English, Ferdowsi University of Mashhad, Mashhad, Iran
10
Karagiorgis AT, Chalas N, Karagianni M, Papadelis G, Vivas AB, Bamidis P, Paraskevopoulos E. Computerized Music-Reading Intervention Improves Resistance to Unisensory Distraction Within a Multisensory Task, in Young and Older Adults. Front Hum Neurosci 2021; 15:742607. PMID: 34566611; PMCID: PMC8461100; DOI: 10.3389/fnhum.2021.742607.
Abstract
Incoming information from multiple sensory channels compete for attention. Processing the relevant ones and ignoring distractors, while at the same time monitoring the environment for potential threats, is crucial for survival, throughout the lifespan. However, sensory and cognitive mechanisms often decline in aging populations, making them more susceptible to distraction. Previous interventions in older adults have successfully improved resistance to distraction, but the inclusion of multisensory integration, with its unique properties in attentional capture, in the training protocol is underexplored. Here, we studied whether, and how, a 4-week intervention, which targets audiovisual integration, affects the ability to deal with task-irrelevant unisensory deviants within a multisensory task. Musically naïve participants engaged in a computerized music reading game and were asked to detect audiovisual incongruences between the pitch of a song's melody and the position of a disk on the screen, similar to a simplistic music staff. The effects of the intervention were evaluated via behavioral and EEG measurements in young and older adults. Behavioral findings include the absence of age-related differences in distraction and the indirect improvement of performance due to the intervention, seen as an amelioration of response bias. An asymmetry between the effects of auditory and visual deviants was identified and attributed to modality dominance. The electroencephalographic results showed that both groups shared an increase in activation strength after training, when processing auditory deviants, located in the left dorsolateral prefrontal cortex. A functional connectivity analysis revealed that only young adults improved flow of information, in a network comprised of a fronto-parietal subnetwork and a multisensory temporal area. Overall, both behavioral measures and neurophysiological findings suggest that the intervention was indirectly successful, driving a shift in response strategy in the cognitive domain and higher-level or multisensory brain areas, and leaving lower-level unisensory processing unaffected.
Affiliation(s)
- Alexandros T Karagiorgis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece; School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Ana B Vivas
- Department of Psychology, CITY College, University of York Europe Campus, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece; Department of Psychology, University of Cyprus, Nicosia, Cyprus
11
Wu J, Li Q, Fu Q, Rose M, Jing L. Multisensory Information Facilitates the Categorization of Untrained Stimuli. Multisens Res 2021; 35:79-107. PMID: 34388699; DOI: 10.1163/22134808-bja10061.
Abstract
Although it has been demonstrated that multisensory information can facilitate object recognition and object memory, it remains unclear whether such a facilitation effect exists in category learning. To address this issue, comparable car images and sounds were first selected via a discrimination task in Experiment 1. Those selected images and sounds were then used in a prototype category learning task in Experiments 2 and 3, in which participants were trained with auditory, visual, and audiovisual stimuli, and were tested with trained or untrained stimuli within the same categories, presented alone or accompanied by a congruent or incongruent stimulus in the other modality. In Experiment 2, when low-distortion stimuli (more similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual trials, but there was no significant difference between audiovisual and auditory trials. During testing, accuracy was significantly higher for congruent trials than for unisensory or incongruent trials, and the congruency effect was larger for untrained high-distortion stimuli than for trained low-distortion stimuli. In Experiment 3, when high-distortion stimuli (less similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual or auditory trials, and the congruency effect was larger for trained high-distortion stimuli than for untrained low-distortion stimuli during testing. These findings demonstrate that a higher degree of stimulus distortion results in a more robust multisensory effect, and that the categorization of not only trained but also untrained stimuli in one modality can be influenced by an accompanying stimulus in the other modality.
Affiliation(s)
- Jie Wu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Qitian Li
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Michael Rose
- NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Liping Jing
- Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China
12
Barutchu A, Spence C. Top-down task-specific determinants of multisensory motor reaction time enhancements and sensory switch costs. Exp Brain Res 2021; 239:1021-1034. PMID: 33515085; PMCID: PMC7943519; DOI: 10.1007/s00221-020-06014-3.
Abstract
This study was designed to investigate the complex interplay between multisensory processing, top–down processes related to the task relevance of sensory signals, and sensory switching. Thirty-five adults completed either a speeded detection or a discrimination task using the same auditory and visual stimuli and experimental setup. The stimuli consisted of unisensory and multisensory presentations of the letters ‘b’ and ‘d’. The multisensory stimuli were either congruent (e.g., the grapheme ‘b’ with the phoneme /b/) or incongruent (e.g., the grapheme ‘b’ with the phoneme /d/). In the detection task, the participants had to respond to all of the stimuli as rapidly as possible while, in the discrimination task, they only responded on those trials where one prespecified letter (either ‘b’ or ‘d’) was present. Incongruent multisensory stimuli resulted in faster responses as compared to unisensory stimuli in the detection task. In the discrimination task, only the dual-target congruent stimuli resulted in faster RTs, while the incongruent multisensory stimuli led to slower RTs than to unisensory stimuli; RTs were the slowest when the visual (rather than the auditory) signal was irrelevant, thus suggesting visual dominance. Switch costs were also observed when switching between unisensory target stimuli, while dual-target multisensory stimuli were less likely to be affected by sensory switching. Taken together, these findings suggest that multisensory motor enhancements and sensory switch costs are influenced by top–down modulations determined by task instructions, which can override the influence of prior learnt associations.
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK.
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
13
Ross P, Atkins B, Allison L, Simpson H, Duffell C, Williams M, Ermolina O. Children cannot ignore what they hear: Incongruent emotional information leads to an auditory dominance in children. J Exp Child Psychol 2021; 204:105068. PMID: 33434707; DOI: 10.1016/j.jecp.2020.105068.
Abstract
Effective emotion recognition is imperative to successfully navigating social situations. Research suggests differing developmental trajectories for the recognition of bodily and vocal emotion, but emotions are usually studied in isolation and rarely considered as multimodal stimuli in the literature. When adults are presented with basic multimodal sensory stimuli, the Colavita effect suggests that they have a visual dominance, whereas more recent research finds that an auditory sensory dominance may be present in children under 8 years of age. However, it is not currently known whether this phenomenon holds for more complex multimodal social stimuli. Here we presented children and adults with multimodal social stimuli consisting of emotional bodies and voices, asking them to recognize the emotion in one modality while ignoring the other. We found that adults can perform this task with no detrimental effects on performance regardless of whether the ignored emotion was congruent or not. However, children find it extremely challenging to recognize bodily emotion while trying to ignore incongruent vocal emotional information. In several instances, they performed below chance level, indicating that the auditory modality actively informs their choice of bodily emotion. Therefore, this is the first evidence, to our knowledge, of an auditory dominance in children when presented with emotionally meaningful stimuli.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Beth Atkins
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Allison
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Holly Simpson
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Matthew Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK; Department of Psychology, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Olga Ermolina
- Department of Psychology, Durham University, Durham DH1 3LE, UK
14
Broadbent H, Osborne T, Kirkham N, Mareschal D. Touch and look: The role of visual-haptic cues for categorical learning in primary school children. Infant Child Dev 2020. DOI: 10.1002/icd.2168.
Affiliation(s)
- Hannah Broadbent
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Department of Psychology, Royal Holloway, University of London, London, UK
- Tamsin Osborne
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Natasha Kirkham
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Denis Mareschal
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London, UK
15
Broadbent H, Osborne T, Mareschal D, Kirkham N. Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-cross-sensory cues in children's incidental category learning. Cognition 2020; 199:104202. [PMID: 32087397] [DOI: 10.1016/j.cognition.2020.104202] [Received: 04/12/2019] [Revised: 01/09/2020] [Accepted: 01/22/2020] [Indexed: 10/25/2022]
Abstract
Simultaneous presentation of multisensory cues has been found to facilitate children's learning to a greater extent than unisensory cues (e.g., Broadbent, White, Mareschal, & Kirkham, 2017). Current research into children's multisensory learning, however, does not address whether these findings arise from having multiple cross-sensory cues that enhance stimulus perception, or simply from having multiple cues, regardless of modality, that are informative about category membership. The current study examined the role of multiple cross-sensory cues (e.g., audio-visual) compared to multiple intra-sensory cues (e.g., two visual cues) in children's incidental category learning. On a computerized incidental category learning task, children aged six to ten years (N = 454) were allocated to a visual-only (V: unisensory), auditory-only (A: unisensory), audio-visual (AV: multisensory), visual-visual (VV: multi-cue), or auditory-auditory (AA: multi-cue) condition. In children over eight years of age, the availability of two informative cues, regardless of whether they were presented across two different modalities or within the same modality, was more beneficial to incidental learning than a single unisensory cue. In six-year-olds, however, multiple auditory cues (AA) did not facilitate learning to the same extent as multiple visual cues (VV) or cues presented across two different modalities (AV). The findings suggest that multiple sensory cues presented across or within modalities may have differential effects on children's incidental learning across middle childhood, depending on the sensory domain in which they are presented. Implications for the use of multi-cross-sensory and multiple-intra-sensory cues for children's learning across this age range are discussed.
Affiliation(s)
- H Broadbent
- Royal Holloway, University of London, UK; Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- T Osborne
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- D Mareschal
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- N Kirkham
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
16
Robinson CW, Hawthorn AM, Rahman AN. Developmental Differences in Filtering Auditory and Visual Distractors During Visual Selective Attention. Front Psychol 2019; 9:2564. [PMID: 30618983] [PMCID: PMC6304370] [DOI: 10.3389/fpsyg.2018.02564] [Received: 04/19/2018] [Accepted: 11/29/2018] [Indexed: 11/13/2022] Open
Abstract
The current experiment examined changes in visual selective attention in young children, older children, young adults, and older adults while participants were instructed to ignore auditory and visual distractors. The aims of the study were to: (a) determine whether the Perceptual Load Hypothesis (PLH), which predicts greater distraction under low perceptual load, could predict which irrelevant stimuli would disrupt visual selective attention, and (b) examine whether the auditory-to-visual shifts found in modality dominance research extend to selective attention tasks. Overall, distractibility decreased with age, with incompatible distractors having larger costs in young and older children than in adults. In regard to accuracy, visual distractibility did not differ across age or load, whereas auditory interference was more pronounced early in development and correlated with age. Auditory and visual distractors also slowed down responses in young and older children more than in adults. Finally, the PLH did not predict performance. Rather, children often showed the opposite pattern, with visual distractors having a greater cost in the high load condition (older children) and auditory distractors having a greater cost in the high load condition (young children). These findings are consistent with research examining the development of modality dominance and shed light on changes in multisensory processing and selective attention across the lifespan.
Affiliation(s)
- Andrew M Hawthorn
- Department of Psychology, The Ohio State University Newark, Newark, OH, United States
- Arisha N Rahman
- Department of Psychology, The Ohio State University Newark, Newark, OH, United States
17
Robinson CW, Moore RL, Crook TA. Bimodal Presentation Speeds up Auditory Processing and Slows Down Visual Processing. Front Psychol 2018; 9:2454. [PMID: 30568624] [PMCID: PMC6290778] [DOI: 10.3389/fpsyg.2018.02454] [Received: 05/09/2018] [Accepted: 11/20/2018] [Indexed: 11/25/2022] Open
Abstract
Many situations require the simultaneous processing of auditory and visual information; however, stimuli presented to one sensory modality can sometimes interfere with processing in a second sensory modality (i.e., modality dominance). The current study further investigated modality dominance by examining how task demands and bimodal presentation affect speeded auditory and visual discriminations. Participants had to quickly determine whether two words, two pictures, or two word-picture pairings were the same or different, and task demands were manipulated across three conditions. In an immediate recognition task, there was only one second between the two stimuli/stimulus pairs, and auditory dominance was found: compared to the respective unimodal baselines, pairing pictures and words together slowed down visual responses and sped up auditory responses. Increasing the interstimulus interval to four seconds and blocking verbal rehearsal weakened auditory dominance effects; however, conflicting and redundant visual cues sped up auditory discriminations. Thus, simultaneously presenting pictures and words had different effects on auditory and visual processing, with bimodal presentation slowing down visual processing and speeding up auditory processing. These findings are consistent with a proposed mechanism underlying auditory dominance, which posits that auditory stimuli automatically grab attention and attenuate/delay visual processing.
Affiliation(s)
- Robert L Moore
- Department of Psychology, The Ohio State University, Newark, Newark, OH, United States
- Thomas A Crook
- Department of Psychology, The Ohio State University, Newark, Newark, OH, United States
18
Broadbent HJ, Osborne T, Mareschal D, Kirkham NZ. Withstanding the test of time: Multisensory cues improve the delayed retention of incidental learning. Dev Sci 2018; 22:e12726. [PMID: 30184309] [DOI: 10.1111/desc.12726] [Received: 11/10/2017] [Revised: 06/13/2018] [Accepted: 07/19/2018] [Indexed: 11/28/2022]
Abstract
Multisensory tools are commonly employed within educational settings (e.g. Carter & Stephenson), and there is a growing body of literature advocating the benefits of presenting children with multisensory information over unisensory cues for learning (Baker & Jordan; Jordan & Baker). This is the case even when the informative cues are only arbitrarily related (Broadbent, White, Mareschal, & Kirkham). However, the delayed retention of learning following exposure to multisensory compared to unisensory cues has not been evaluated, and it has important implications for the utility of multisensory educational tools. This study examined the retention of incidental categorical learning in 5-, 7- and 9-year-olds (N = 181) using either unisensory or multisensory cues. Retention of learning was significantly greater following multisensory cue exposure than unisensory information when category knowledge was tested after a 24-hour delay. No age-related changes were found, suggesting that multisensory information can facilitate the retention of learning across this age range.
Affiliation(s)
- Hannah J Broadbent
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Tamsin Osborne
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Denis Mareschal
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Natasha Z Kirkham
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK