1
Li J, Deng SW. Attentional focusing and filtering in multisensory categorization. Psychon Bull Rev 2024; 31:708-720. PMID: 37673842. DOI: 10.3758/s13423-023-02370-7.
Abstract
Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speeded categorization task, participants were asked to attend to visual or auditory targets and categorize them while ignoring other irrelevant stimuli. A response-time extended multinomial processing tree (RT-MPT) model was implemented to estimate the contributions of attentional focusing on task-relevant information and attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences found in attentional focusing and filtering between visual and auditory modalities. Visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out, causing greater interference with task-relevant performance. The findings suggest that selective attention plays a critical and differential role across modalities, providing a novel and promising approach to understanding multisensory processing and the attentional focusing and filtering mechanisms of categorical decision-making.
Affiliation(s)
- Jianhua Li
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
- Sophia W Deng
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
2
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. PMID: 36905419. PMCID: PMC10313844. DOI: 10.1007/s00405-023-07886-1.
Abstract
BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills, and evidence that these abilities can be trained in UCI users remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered on clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results show that sound localization in UCI users improves during a Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
3
Li H, Xu J, Fang M, Tang L, Pan Y. A Study and Analysis of the Relationship between Visual-Auditory Logos and Consumer Behavior. Behav Sci (Basel) 2023; 13:613. PMID: 37504059. PMCID: PMC10376566. DOI: 10.3390/bs13070613.
Abstract
With the intensification of market competition and the development of sensory marketing, some enterprises have gradually begun to pay attention to audition in addition to the traditional visual identity, introducing sound design into their logos. Audio-visual stimulation and media innovation aim to create positive attitudes among consumers. Using the stimulus-organism-response (SOR) framework, this study constructs a model of the interactive relationship between visual and auditory elements and consumer behavior, tests the conceptual model, and checks the proposed hypotheses. The study characterizes the visual-auditory interactive relationship in terms of information integration, information synergy, mutual competition, and matching degree, and further examines how brand perception, credibility, and quality perception shape consumer behavior in the form of purchase intention and consumer support. The results indicate that audiovisual brand identities have a significant positive effect on consumer behavior. We collected questionnaires from a random sample of 1407 respondents and used regression analysis to test the association between the visual-auditory interactive relationship and consumer behavior; we further verified the mediating role of consumer perception variables. Adding audiovisual logo design to the marketing process can be an effective way for companies and brands to attract customers and increase their support and purchase intentions.
Affiliation(s)
- Hui Li
- College of Fine Arts, Guangxi Normal University, Guilin 541006, China
- Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
- Junping Xu
- Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
- Meichen Fang
- Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
- Lingzi Tang
- School of Humanities, Arts and Design, Guangxi University of Science and Technology, Liuzhou 545006, China
- Younghwan Pan
- Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
4
Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022; 17:e0263509. PMID: 35421095. PMCID: PMC9009652. DOI: 10.1371/journal.pone.0263509.
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we studied the training potential of sound-oriented motor behaviour to test whether a training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in ear-plugged participants improved more after the reaching-to-sounds training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
5
Trepkowski C, Marquardt A, Eibich TD, Shikanai Y, Maiero J, Kiyokawa K, Kruijff E, Schöning J, König P. Multisensory Proximity and Transition Cues for Improving Target Awareness in Narrow Field of View Augmented Reality Displays. IEEE Trans Vis Comput Graph 2022; 28:1342-1362. PMID: 34591771. DOI: 10.1109/tvcg.2021.3116673.
Abstract
Augmented reality applications allow users to enrich their real surroundings with additional digital content. However, due to the limited field of view of augmented reality devices, it can sometimes be difficult to become aware of newly emerging information inside or outside the field of view. Typical visual conflicts like clutter and occlusion of augmentations occur and can be further aggravated especially in the context of dense information spaces. In this article, we evaluate how multisensory cue combinations can improve the awareness for moving out-of-view objects in narrow field of view augmented reality displays. We distinguish between proximity and transition cues in either visual, auditory or tactile manner. Proximity cues are intended to enhance spatial awareness of approaching out-of-view objects while transition cues inform the user that the object just entered the field of view. In study 1, user preference was determined for 6 different cue combinations via forced-choice decisions. In study 2, the 3 most preferred modes were then evaluated with respect to performance and awareness measures in a divided attention reaction task. Both studies were conducted under varying noise levels. We show that on average the Visual-Tactile combination leads to 63% and Audio-Tactile to 65% faster reactions to incoming out-of-view augmentations than their Visual-Audio counterpart, indicating a high usefulness of tactile transition cues. We further show a detrimental effect of visual and audio noise on performance when feedback included visual proximity cues. Based on these results, we make recommendations to determine which cue combination is appropriate for which application.
6
Marquardt A, Trepkowski C, Eibich TD, Maiero J, Kruijff E, Schöning J. Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays. IEEE Trans Vis Comput Graph 2020; 26:3389-3401. PMID: 32941150. DOI: 10.1109/tvcg.2020.3023605.
Abstract
Current augmented reality displays still have a very limited field of view compared to human vision. To localize out-of-view objects, researchers have predominantly explored visual guidance approaches that visualize information in the limited (in-view) screen space. Unfortunately, visual conflicts like cluttering or occlusion of information often arise, which can lead to search performance issues and a decreased awareness of the physical environment. In this paper, we compare an innovative non-visual guidance approach based on audio-tactile cues with the state-of-the-art visual guidance technique EyeSee360 for localizing out-of-view objects in augmented reality displays with a limited field of view. In our user study, we evaluate both guidance methods in terms of search performance and situation awareness. We show that although audio-tactile guidance is generally slower than the well-performing EyeSee360 in terms of search times, it is on a par regarding hit rate. Moreover, the audio-tactile method provides a significant improvement in situation awareness compared to the visual approach.
7
Abstract
Cross-modal attention and multisensory integration are essential for perceiving the world: our most immediate impressions of the environment are based on what we see and what we hear. It is therefore important to understand the interactions between visual and auditory inputs. Previous studies have shown that multisensory integration can be modulated by attention; however, how top-down attention is controlled or allocated across the sensory modalities remains unclear. In this study, we measured the cortical areas activated by a cue-target spatial attention paradigm in both the visual and auditory fields using functional MRI. The reaction times in the behavioral results indicated that interactions between the two types of stimuli exist. The imaging results indicated that interactions between multisensory inputs can lead to enhancement or depression of the cortical response under top-down spatial attention. Moreover, the activation of the middle temporal gyrus and insula in tasks with irrelevant stimuli appears to indicate that multisensory integration proceeds automatically.
8
Kristjánsson T, Thorvaldsson TP, Kristjánsson A. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation. Multisens Res 2014; 27:91-110. PMID: 25296473. DOI: 10.1163/22134808-00002448.
Abstract
Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.
9
Yang Z, Mayer AR. An event-related FMRI study of exogenous orienting across vision and audition. Hum Brain Mapp 2013; 35:964-974. PMID: 23288620. DOI: 10.1002/hbm.22227.
Abstract
The orienting of attention to the spatial location of sensory stimuli in one modality based on sensory stimuli presented in another modality (i.e., cross-modal orienting) is a common mechanism for controlling attentional shifts. The neuronal mechanisms of top-down cross-modal orienting have been studied extensively. However, the neuronal substrates of bottom-up audio-visual cross-modal spatial orienting remain to be elucidated. Therefore, behavioral and event-related functional magnetic resonance imaging (FMRI) data were collected while healthy volunteers (N = 26) performed a spatial cross-modal localization task modeled after the Posner cuing paradigm. Behavioral results indicated that although both visual and auditory cues were effective in producing bottom-up shifts of cross-modal spatial attention, reorienting effects were greater for the visual cues condition. Statistically significant evidence of inhibition of return was not observed for either condition. Functional results also indicated that visual cues with auditory targets resulted in greater activation within ventral and dorsal frontoparietal attention networks, visual and auditory "where" streams, primary auditory cortex, and thalamus during reorienting across both short and long stimulus onset asynchronies. In contrast, no areas of unique activation were associated with reorienting following auditory cues with visual targets. In summary, current results question whether audio-visual cross-modal orienting is supramodal in nature, suggesting rather that the initial modality of cue presentation heavily influences both behavioral and functional results. In the context of localization tasks, reorienting effects accompanied by the activation of the frontoparietal reorienting network are more robust for visual cues with auditory targets than for auditory cues with visual targets.
Affiliation(s)
- Zhen Yang
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, New Mexico 87106
10
Attention and the multiple stages of multisensory integration: A review of audiovisual studies. Acta Psychol (Amst) 2010; 134:372-384. PMID: 20427031. DOI: 10.1016/j.actpsy.2010.03.010.
Abstract
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework, suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
11
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, United Kingdom.
12
Three-dimensional motion analysis of the effects of auditory cueing on gait pattern in patients with Parkinson’s disease: a preliminary investigation. Neurol Sci 2010; 31:423-430. DOI: 10.1007/s10072-010-0228-2.