1
Maack MC, Ostrowski J, Rose M. The order of multisensory associative sequences is reinstated as context feature during successful recognition. Sci Rep 2025;15:18120. PMID: 40413194; PMCID: PMC12103560; DOI: 10.1038/s41598-025-02553-3.
Abstract
The ability of the human brain to encode and recognize sequential information from different sensory modalities is key to memory formation. The sequence in which these modalities are presented during encoding critically affects recognition. This study investigates the encoding of sensory modality sequences and its neural impact on recognition using multivariate pattern analysis (MVPA) of oscillatory EEG activity. We examined the reinstatement of multisensory episode-specific sequences in n = 32 participants who encoded sound-image associations (e.g., the image of a ship with the sound of a frog). Images and sounds were natural scenes and 2-second real-life sounds, presented sequentially during encoding. During recognition, stimulus pairs were presented simultaneously, and classification was used to test whether the modality sequence order could be decoded as a contextual feature in memory. Oscillatory results identified a distinct neural signature during successful retrieval, associated with the original modality sequence. Furthermore, MVPA successfully decoded neural patterns of different modality sequences, hinting at specific memory traces. These findings suggest that the sequence in which sensory modalities are encoded forms a neural signature, affecting later recognition. This study provides novel insights into the relationship between modality encoding and recognition, with broad implications for cognitive neuroscience and memory research.
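The decoding approach described in the abstract, training a classifier on oscillatory EEG features and testing whether the encoded modality sequence can be read out, can be illustrated with a minimal sketch. This is synthetic-data illustration only, not the authors' pipeline: the feature layout, classifier choice, and effect size are all assumptions.

```python
# Illustrative MVPA sketch (synthetic data, not the study's pipeline):
# decode which modality came first (image-first vs. sound-first) from
# oscillatory EEG power features with cross-validated classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64            # trials x (channel, frequency) power features
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # 0 = image-first, 1 = sound-first
X[y == 1, :8] += 0.8                      # toy "neural signature" of sequence order

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f}")
```

Above-chance cross-validated accuracy on such features is the kind of evidence the abstract refers to when it says the modality sequence "could be decoded as a contextual feature".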
Affiliation(s)
- Marike Christiane Maack
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, Building W34, 20248 Hamburg, Germany
- Jan Ostrowski
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, Building W34, 20248 Hamburg, Germany
- Michael Rose
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, Building W34, 20248 Hamburg, Germany
2
Sun Q, Wang C, Dai DY, Li X. Developmental effects of digitally contextualized reading on preschooler's creative thinking: A quasi-experimental study. J Exp Child Psychol 2025;259:106307. PMID: 40408937; DOI: 10.1016/j.jecp.2025.106307.
Abstract
It has been argued that the preschool years are an important period for fostering creative thinking skills. Digital technologies are increasingly used in class activities to enhance creativity. However, few studies have experimentally explored how digital technologies can create multisensory reading activities to promote creative thinking in preschoolers. This study introduced digitally contextualized reading, which transforms picture-based reading into an emotionally immersive video reading experience. Unlike traditional methods, this approach emphasizes multisensory immersion and interactive engagement. To examine its impact on creative thinking, a quasi-experimental study was conducted with 251 children aged three to six in China (intervention group N = 137) over six months, using pre- and post-tests. Results of independent t-tests, ANCOVA, and a two-way ANOVA showed that digitally contextualized reading significantly improved the fluency, elaboration, originality, and abstractness of creative thinking as measured by the Torrance Tests of Creative Thinking. These findings highlight the viable role of digital technologies in fostering creative thinking and their potential to enrich educational practices and advance young learners' creative abilities.
Affiliation(s)
- Qi Sun
- Department of Educational and Counseling Psychology, University at Albany, State University of New York, United States; School of Education Science, Nantong University, China
- Canming Wang
- School of Education Science, Nantong University, China; Institute of Contextual Education, Nantong University, China
- David Yun Dai
- Department of Educational and Counseling Psychology, University at Albany, State University of New York, United States
- Xinyu Li
- Department of Educational and Counseling Psychology, University at Albany, State University of New York, United States
3
Vassall SG, Wallace MT. Sensory and Multisensory Processing Changes and Their Contributions to Autism and Schizophrenia. Curr Top Behav Neurosci 2025. PMID: 40346436; DOI: 10.1007/7854_2025_589.
Abstract
Natural environments are typically multisensory, comprising information from multiple sensory modalities. It is in the integration of these incoming sensory signals that we form the perceptual gestalt that allows us to navigate through the world with relative ease. However, differences in multisensory integration (MSI) ability are found in a number of clinical conditions. Throughout this chapter, we discuss how MSI differences contribute to phenotypic characterization of autism and schizophrenia. Although these clinical populations are often described as lying at opposite ends of a number of spectra, we describe similarities in behavioral performance and neural function between the two conditions. Understanding the shared features of autism and schizophrenia through the lens of MSI research allows us to better understand the neural and behavioral underpinnings of both disorders. We provide potential avenues for remediation of MSI function in these populations.
Affiliation(s)
- Sarah G Vassall
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
4
Lebrun F, Simon C, Boukezzi A, Otmane S, Chellali A. Mentor-Guided Learning in Immersive Virtual Environments: The Impact of Visual and Haptic Feedback on Skill Acquisition. IEEE Trans Vis Comput Graph 2025;31:3547-3557. PMID: 40063467; DOI: 10.1109/tvcg.2025.3549547.
Abstract
In the early stages of learning a technical skill, trainees require guidance from a mentor through augmented feedback to develop higher expertise. However, the impact of such feedback and the different modalities used to communicate it remain underexplored in immersive virtual environments (IVE). This paper presents a study in which 27 participants were divided into three groups to learn a tool manipulation trajectory in an IVE. Two experimental groups received guidance from an expert using visual and/or haptic augmented feedback, while the control group received no feedback. The results indicate that both experimental groups showed significantly greater improvement in tool trajectory performance than the control group from pre- to post-test, with no significant differences between them. Analysis of their learning curves revealed similar performance improvements in tool trajectory across trials, outperforming the control group. Additionally, the visual-haptic feedback condition was linked to lower task load in three out of six dimensions of the NASA-TLX and a higher perceived interdependence with the expert's actions. These findings suggest that augmented feedback from an expert enhances the learning of tool manipulation skills. Although adding haptic feedback did not lead to better learning outcomes compared to visual feedback alone, it did enhance the overall user experience. These results offer valuable insights for designing IVEs that support mentor-trainee interactions through augmented feedback.
5
Leow LA, Nguyen A, Corti E, Marinovic W. Informative Auditory Cues Enhance Motor Sequence Learning. Eur J Neurosci 2025;61:e70140. PMID: 40399234; DOI: 10.1111/ejn.70140.
Abstract
Motor sequence learning, the ability to learn and remember sequences of actions, such as the sequence of actions required to tie one's shoelaces, is ubiquitous in everyday life. Contemporary research on motor sequence learning has been largely unimodal, ignoring the possibility that our nervous system might benefit from sensory inputs from multiple modalities. In this study, we investigated the properties of motor sequence learning in response to audiovisual stimuli. We found that sequence learning with auditory-visual stimuli showed a hallmark feature of traditional unimodal sequence learning tasks: sensitivity to stimulus timing, where longer interstimulus intervals of 500 ms improved sequence learning compared to shorter interstimulus intervals of 200 ms. Consistent with previous findings, we also found that auditory-visual stimuli improved learning compared to a unimodal visual-only condition. Furthermore, the informativeness of the auditory stimuli was important, as auditory stimuli which predicted the location of visual cues improved sequence learning compared to uninformative auditory stimuli which did not. Our findings suggest a potential utility of leveraging audiovisual stimuli in sequence learning interventions to enhance skill acquisition in education and rehabilitation contexts.
Affiliation(s)
- Li-Ann Leow
- School of Arts and Humanities, Edith Cowan University, Joondalup, Western Australia, Australia
- School of Population Health, Curtin University, Bentley, Western Australia, Australia
- An Nguyen
- School of Population Health, Curtin University, Bentley, Western Australia, Australia
- Emily Corti
- School of Population Health, Curtin University, Bentley, Western Australia, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Bentley, Western Australia, Australia
6
Chang S, Zheng B, Keniston L, Xu J, Yu L. Auditory cortex learns to discriminate audiovisual cues through selective multisensory enhancement. eLife 2025;13:RP102926. PMID: 40261274; PMCID: PMC12014134; DOI: 10.7554/elife.102926.
Abstract
Multisensory object discrimination is essential in everyday life, yet the neural mechanisms underlying this process remain unclear. In this study, we trained rats to perform a two-alternative forced-choice task using both auditory and visual cues. Our findings reveal that multisensory perceptual learning actively engages auditory cortex (AC) neurons in both visual and audiovisual processing. Importantly, many audiovisual neurons in the AC exhibited experience-dependent associations between their visual and auditory preferences, displaying a unique integration model. This model employed selective multisensory enhancement for the auditory-visual pairing guiding the contralateral choice, which correlated with improved multisensory discrimination. Furthermore, AC neurons effectively distinguished whether a preferred auditory stimulus was paired with its associated visual stimulus using this distinct integrative mechanism. Our results highlight the capability of sensory cortices to develop sophisticated integrative strategies, adapting to task demands to enhance multisensory discrimination abilities.
Affiliation(s)
- Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Beilin Zheng
- College of Information Engineering, Hangzhou Vocational and Technical College, Hangzhou, China
- Les Keniston
- Department of Biomedical Sciences, Kentucky College of Osteopathic Medicine, University of Pikeville, Pikeville, United States
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
7
Kunnath AJ, Bertisch HS, Kim AS, Gifford RH, Wallace MT. Effects of multisensory simultaneity judgment training on the comprehension and cortical processing of speech in noise: a randomized controlled trial. Sci Rep 2025;15:12956. PMID: 40234646; PMCID: PMC12000426; DOI: 10.1038/s41598-025-96121-4.
Abstract
Understanding speech in noise can be facilitated by integrating auditory and visual speech cues. Audiovisual temporal acuity, which can be indexed by the temporal binding window (TBW), is critical for this process and can be enhanced through simultaneity judgment training. We hypothesized that multisensory training would narrow the TBW and improve speech understanding in noise. Participants were randomized to receive either training and testing (n = 15) or testing-only (n = 15) over three days. Trained participants demonstrated significant narrowing in their mean TBW size (403 ms to 345 ms; p = 0.030), whereas control participants did not (409 ms to 474 ms; p = 0.061). Although there were no group-level changes in word recognition scores, trained participants with larger TBW decreases exhibited larger improvements in auditory word recognition in noise (R2 = 0.291; p = 0.038). Individual differences in responses to training were related to differences in cortical speech processing, measured using functional near-infrared spectroscopy. Low audiovisual-evoked activity in the left middle temporal gyrus (R2 = 0.87; p = 0.006), left angular and superior temporal gyrus (R2 = 0.85; p = 0.006), and visual cortices (R2 = 0.74; p = 0.041) was associated with larger improvements in auditory word recognition after training. Multisensory training transfers benefits to speech comprehension in noise, and this effect may be mediated by upregulating activity in multisensory cortical networks for individuals with low baseline activity.
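A TBW of the kind reported above is commonly estimated by fitting a Gaussian to the proportion of "simultaneous" responses across audiovisual stimulus-onset asynchronies (SOAs) and taking the fitted width. The sketch below shows that idea on synthetic data; the data values, starting parameters, and the full-width-at-half-maximum criterion are assumptions, not the study's actual procedure.

```python
# Illustrative temporal-binding-window (TBW) estimation: fit a Gaussian
# to synthetic simultaneity-judgment data across SOAs and report the
# window width (full width at half maximum).
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)  # ms
p_simult = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.80, 0.50, 0.20, 0.08])

(amp, mu, sigma), _ = curve_fit(gauss, soas, p_simult, p0=[1.0, 0.0, 150.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)   # window width in ms
print(f"TBW (FWHM): {fwhm:.0f} ms")
```

Simultaneity-judgment training aims to narrow exactly this fitted width, which is how pre- vs. post-training TBW sizes like 403 ms vs. 345 ms are compared.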
Affiliation(s)
- Ansley J Kunnath
- Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Nashville, TN, USA
- René H Gifford
- Vanderbilt Brain Institute, Nashville, TN, USA
- Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Nashville, TN, USA
- Vanderbilt University, Nashville, TN, USA
8
Matinfar S, Dehghani S, Salehi M, Sommersperger M, Navab N, Faridpooya K, Fairhurst M, Navab N. From tissue to sound: A new paradigm for medical sonic interaction design. Med Image Anal 2025;103:103571. PMID: 40222195; DOI: 10.1016/j.media.2025.103571.
Abstract
Medical imaging maps tissue characteristics into image intensity values, enhancing human perception. However, comprehending this data, especially in high-stakes scenarios such as surgery, is prone to errors. Additionally, current multimodal methods do not fully leverage this valuable data in their design. We introduce "From Tissue to Sound," a new paradigm for medical sonic interaction design. This paradigm establishes a comprehensive framework for mapping tissue characteristics to auditory displays, providing dynamic and intuitive access to medical images that complement visual data, thereby enhancing multimodal perception. "From Tissue to Sound" provides an advanced and adaptable framework for the interactive sonification of multimodal medical imaging data. This framework employs a physics-based sound model composed of a network of multiple oscillators, whose mechanical properties-such as friction and stiffness-are defined by tissue characteristics extracted from imaging data. This approach enables the representation of anatomical structures and the creation of unique acoustic profiles in response to excitations of the sound model. This method allows users to explore data at a fundamental level, identifying tissue characteristics ranging from rigid to soft, dense to sparse, and structured to scattered. It facilitates intuitive discovery of both general and detailed patterns with minimal preprocessing. Unlike conventional methods that transform low-dimensional data into global sound features through a parametric approach, this method utilizes model-based unsupervised mapping between data and an anatomical sound model, enabling high-dimensional data processing. The versatility of this method is demonstrated through feasibility experiments confirming the generation of perceptually discernible acoustic signals. Furthermore, we present a novel application developed based on this framework for retinal surgery. 
This new paradigm opens up possibilities for designing multisensory applications for multimodal imaging data. It also facilitates the creation of interactive sonification models with various auditory causality approaches, enhancing both directness and richness.
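The core mapping the abstract describes, tissue characteristics defining the mechanical properties (stiffness, friction) of a network of oscillators whose excitation yields an acoustic profile, can be sketched minimally. The numeric mappings below (stiffness to pitch, friction to decay) are invented for illustration and are not the authors' model.

```python
# Minimal sketch of tissue-to-sound mapping (assumed parameters, not the
# paper's implementation): each tissue sample drives one damped
# oscillator; stiffness sets pitch, friction sets decay, and the summed
# impulse responses form the "acoustic profile".
import numpy as np

SR = 22_050  # sample rate, Hz

def tissue_to_sound(stiffness, friction, dur=0.5):
    """Damped oscillator response for one tissue sample (values in [0, 1])."""
    t = np.arange(int(SR * dur)) / SR
    freq = 100 + 900 * stiffness    # stiffer tissue -> higher pitch (assumed mapping)
    decay = 2 + 30 * friction       # more friction -> faster damping (assumed mapping)
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

# Excite a "rigid" and a "soft" tissue sample and mix their responses.
rigid = tissue_to_sound(stiffness=0.9, friction=0.1)
soft = tissue_to_sound(stiffness=0.2, friction=0.8)
mix = rigid + soft
print(f"rendered {mix.size} samples, peak amplitude {np.abs(mix).max():.2f}")
```

In the paper's framework this idea is scaled up to a coupled network of many oscillators driven by imaging data, so that rigid vs. soft or dense vs. sparse tissue produces perceptually discernible timbres.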
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Michael Sommersperger
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, Canada
- Merle Fairhurst
- Centre for Tactile Internet with Human-in-the-Loop, Technical University of Dresden, Dresden, Germany
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
9
Sunami R, Nakamoto T, Cohen N, Kobayashi T, Yamamoto K. Exploring the effects of olfactory VR on visuospatial memory and cognitive processing in older adults. Sci Rep 2025;15:10805. PMID: 40155673; PMCID: PMC11953428; DOI: 10.1038/s41598-025-94693-9.
Abstract
This study examined the effects of olfactory virtual reality (VR) gaming on cognitive performance in older adults. A VR game environment ("Interactive Smellscape") was created to enable this, and 30 participants aged 63-90 years completed both VR gaming sessions and cognitive assessments, conducted with a 6-day interval between the two sessions. Significant improvements were observed in spatial memory tasks involving Japanese characters and words, with notable enhancements specifically in visuospatial rotation performance and word-location recall accuracy. However, no significant changes were detected in olfactory identification or other general cognitive tasks. These findings suggest potential cognitive benefits of incorporating VR and olfactory stimuli into interventions for older populations, particularly for tasks requiring attention and spatial processing. The results further underscore the importance of task-specific designs to maximize the utility of multisensory VR systems for cognitive rehabilitation.
Affiliation(s)
- Ryota Sunami
- School of Engineering, Institute of Science Tokyo, Yokohama, Japan
- Takamichi Nakamoto
- School of Engineering, Institute of Science Tokyo, Yokohama, Japan
- Institute of Integrated Research, Institute of Science Tokyo, Yokohama, Japan
- Nathan Cohen
- Central Saint Martins, University of the Arts London, London, UK
- Kohsuke Yamamoto
- Faculty of Science and Engineering, Hosei University, Koganei, Japan
10
Van Hoof TJ, Sumeracki MA, Madan CR. Science of Learning Strategy Series: Article 7, The Role of Context in Learning. J Contin Educ Health Prof 2025. PMID: 40126196; DOI: 10.1097/ceh.0000000000000601.
Abstract
The science of learning (learning science) is an interprofessional field that concerns itself with how the brain learns and remembers important information. Learning science has compiled a set of evidence-based strategies, such as distributed practice, retrieval practice, interleaving, and elaboration, which are quite relevant to continuing professional development (CPD). Spreading out study and practice separated by cognitive breaks (distributed practice), testing oneself to check mastery and memory of previously learned information (retrieval practice), mixing the learning of separate but associated information (interleaving), and making connections between concepts one is trying to learn and other known concepts (elaboration) represent strategies that are underused in CPD. Participants and planners alike can benefit from learning science recommendations to inform their decisions. Contextual learning, the subject of this article, is another evidence-based strategy that supports the study and practice of important information. By better understanding how the context in which one learns later affects retention and performance, CPD participants and planners can make more informed educational decisions.
Affiliation(s)
- Thomas J Van Hoof
- Dr. Van Hoof: Associate Professor, University of Connecticut School of Nursing, Storrs, CT, and Associate Professor, Department of Community Medicine and Health Care, University of Connecticut School of Medicine, Farmington, CT; Dr. Sumeracki: Associate Professor, Department of Psychology, Rhode Island College, Providence, RI; Dr. Madan: Assistant Professor, School of Psychology, University of Nottingham, Nottingham, United Kingdom
11
Chiu CJ, Hua LC, Chiang JH, Chou CY. User-Centered Prototype Design of a Health Care Robot for Treating Type 2 Diabetes in the Community Pharmacy: Development and Usability Study. JMIR Hum Factors 2025;12:e48226. PMID: 40104938; PMCID: PMC11936303; DOI: 10.2196/48226.
Abstract
Background Technology can be an effective tool for providing health services and disease self-management, especially in diabetes care. Technology tools for disease self-management include health-related applications for computers and smartphones as well as the use of robots. To provide more effective continuity of care and to better understand and facilitate disease management in middle-aged and older adult patients with diabetes, robots can be used to improve the quality of care and supplement community health resources, such as community pharmacies. Objective The aim of this study was to develop a health care robot prototype that can be integrated into current community pharmacies. Methods Three user-centered approaches were used: (1) review of the literature on technology use among older adults, (2) reference to the seven key diabetes self-care behaviors of the American Association of Diabetes Educators (AADE), and (3) meetings with health care providers in the community. Field investigations and interviews were conducted at community pharmacies and diabetes health education centers to determine the appearance, interface, content, and function of the robot. Results The results show that a diabetes health care prototype robot can be established through user-centered design. The following important features were revealed: (1) a simple operating interface supports perceived ease of use, so each interface used no more than three buttons; (2) blue and yellow, which older adults find difficult to distinguish, were minimized in the interface; (3) the health education mode, with sound, image, and video presentation, was the most preferred mode; (4) the most preferred functions were health education resources and health records, and patient data can be easily collected through health education games and dialogue with the robot; and (5) touching the screen was the most preferred operation mode.
Conclusions An evidence-based health care robot can be developed through user-centered design, an approach that builds a model connecting medical needs to people with health conditions, thereby facilitating the sustainable development of technology in the diabetes care field.
Affiliation(s)
- Ching-Ju Chiu
- Institute of Gerontology, College of Medicine, National Cheng Kung University, Tainan City, Taiwan
- Lin-Chun Hua
- Institute of Gerontology, College of Medicine, National Cheng Kung University, Tainan City, Taiwan
- Jung-Hsien Chiang
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan City, Taiwan
- Chieh-Ying Chou
- Institute of Gerontology, College of Medicine, National Cheng Kung University, Tainan City, Taiwan
- Department of Family Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, No. 138, Sheng Li Road, Tainan City, 70403, Taiwan
12
Groves H, Fuller K, Mahon V, Butkus S, Varshney A, Brawn B, Heagerty J, Li S, Lee E, Murthi SB, Puche AC. Assessing the efficacy of a virtual reality lower leg fasciotomy surgery training model compared to cadaveric training. BMC Med Educ 2025;25:269. PMID: 39972328; PMCID: PMC11841149; DOI: 10.1186/s12909-025-06835-2.
Abstract
BACKGROUND Virtual reality (VR) holds great potential in education that has not yet been actualized in surgical training programs; much of the research into medical applications of VR has been in management and decision making rather than procedural training. This pilot study assessed the feasibility of virtual reality surgical educational training (VR-SET) in open trauma surgery procedures compared to in-person cadaver-based training (CBT). In traditional surgical educational settings, multiple trainees share a cadaver, often because logistical and fiscal limitations preclude routine one-to-one trainee-to-cadaver ratios. Thus, some procedures are learned by observing a fellow trainee's performance on the cadaver rather than through hands-on performance. Cadaveric training opportunities are also less frequent for those practicing in low-resource environments such as rural communities, smaller medical facilities, and military combat zones. METHODS Medical students (4th year, n = 10) who completed VR-SET training were compared to a control group (residents, n = 22) who completed an in-person Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Participants were evaluated on performance of a lower extremity fasciotomy on a cadaver. RESULTS VR-SET participants decompressed an average of 2.45 ± 1.09 compartments (range 1 to 4), compared to an average of 2.06 ± 0.93 (range 0.5 to 4) in the control group; the groups were statistically indistinguishable (p = 0.35). Numerical scores for anatomic knowledge, surgical management, and procedure performance were also not significantly different between groups. Control subjects had significantly higher pathophysiology knowledge and surgical technique scores. CONCLUSIONS Overall, VR-SET participants were indistinguishable from the in-person CBT cohort in the number of compartments successfully decompressed. This pilot study suggests that VR technologies in trauma educational settings may be effective and a cost-effective supplement to cadaver-based courses.
Affiliation(s)
- Heather Groves
- Department of Neurobiology, University of Maryland School of Medicine, 20 Penn St., Rm. 216 (mailing), 685 West Baltimore St., Rm. 280M (office), Baltimore, MD, 21201, USA
- Kristina Fuller
- Department of Neurobiology, University of Maryland School of Medicine, 20 Penn St., Rm. 216 (mailing), 685 West Baltimore St., Rm. 280M (office), Baltimore, MD, 21201, USA
- Vondel Mahon
- Department of Neurobiology, University of Maryland School of Medicine, 20 Penn St., Rm. 216 (mailing), 685 West Baltimore St., Rm. 280M (office), Baltimore, MD, 21201, USA
- Amitabh Varshney
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD, USA
- Barbara Brawn
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD, USA
- Jonathan Heagerty
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD, USA
- Sida Li
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD, USA
- Eric Lee
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD, USA
- Adam C Puche
- Department of Neurobiology, University of Maryland School of Medicine, 20 Penn St., Rm. 216 (mailing), 685 West Baltimore St., Rm. 280M (office), Baltimore, MD, 21201, USA
13
Packard PA, Soto-Faraco S. Crossmodal semantic congruence and rarity improve episodic memory. Mem Cognit 2025. PMID: 39971892; DOI: 10.3758/s13421-024-01659-9.
Abstract
Semantic congruence across sensory modalities at encoding has been shown to improve memory performance over a short time span. However, the beneficial effect of crossmodal congruence is less well established for episodic memories over longer retention periods. This gap in knowledge is particularly wide for crossmodal semantic congruence under incidental encoding conditions, a process that is especially relevant in everyday life. Here, we present the results of a series of four experiments (total N = 232) using the dual-process signal detection model to examine crossmodal semantic effects on recollection and familiarity. In Experiment 1, we established the beneficial effects of crossmodal semantics in younger adults: hearing congruent compared with incongruent object sounds during the incidental encoding of object images increased recollection and familiarity after 48 h. In Experiment 2, we reproduced and extended this finding in a sample of older participants (50-65 years old): older people displayed a comparable crossmodal congruence effect, despite a selective decline in recollection compared with younger adults. In Experiment 3, we showed that crossmodal facilitation is resilient to large imbalances in the frequency of congruent versus incongruent events (from 10% to 90%): although rare events are more memorable than frequent ones overall, the impact of this rarity effect on the crossmodal benefit was small and affected only familiarity. Collectively, these findings reveal a robust crossmodal semantic congruence effect for incidentally encoded visual stimuli over a long retention span, bearing the hallmarks of episodic memory enhancement.
Affiliation(s)
- Pau Alexander Packard
- Center for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005, Barcelona, Spain
- Salvador Soto-Faraco
- Center for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005, Barcelona, Spain
- Institució Catalana de Recerca I Estudis Avançats, ICREA, Barcelona, Spain

14
López Assef B, Zamuner T. Task effects in children's word recall: Expanding the reverse production effect. J Child Lang 2025:1-13. [PMID: 39901579] [DOI: 10.1017/s0305000925000030] [Indexed: 02/05/2025]
Abstract
Words said aloud are typically recalled better than words studied using other techniques. In certain circumstances, however, production does not confer this memory advantage. We investigated the nature of this effect by varying the task performed during learning. Children aged five to six years were trained on novel words in conditions requiring no action (Heard), Verbal-Speech (production), Non-Verbal-Speech (sticking out the tongue), or Non-Verbal-Non-Speech (touching the nose). Eye-tracking showed successful learning of the novel words in all training conditions, but no differences between conditions. Both non-verbal tasks disrupted recall, demonstrating that encoding can be disrupted when children perform different types of concurrent actions.
Affiliation(s)
- Tania Zamuner
- Department of Linguistics, University of Ottawa, Ottawa, Canada

15
Duarte SE, Yonelinas AP, Ghetti S, Geng JJ. Multisensory processing impacts memory for objects and their sources. Mem Cognit 2025; 53:646-665. [PMID: 38831161] [PMCID: PMC11868352] [DOI: 10.3758/s13421-024-01592-x] [Accepted: 05/10/2024] [Indexed: 06/05/2024]
Abstract
Multisensory object processing improves recognition memory for individual objects, but its impact on memory for neighboring visual objects and scene context remains largely unknown. It is therefore unclear how multisensory processing impacts episodic memory for information outside of the object itself. We conducted three experiments to test the prediction that the presence of audiovisual objects at encoding would improve memory for nearby visual objects, and improve memory for the environmental context in which they occurred. In Experiments 1a and 1b, participants viewed audiovisual-visual object pairs or visual-visual object pairs with a control sound during encoding and were subsequently tested on their memory for each object individually. In Experiment 2, objects were paired with semantically congruent or meaningless control sounds and appeared within four different scene environments. Memory for the environment was tested. Results from Experiments 1a and 1b showed that encoding a congruent audiovisual object did not significantly benefit memory for neighboring visual objects, but Experiment 2 showed that encoding a congruent audiovisual object did improve memory for the environments in which those objects were encoded. These findings suggest that multisensory processing can influence memory beyond the objects themselves and that it has a unique role in episodic memory formation. This is particularly important for understanding how memories and associations are formed in real-world situations, in which objects and their surroundings are often multimodal.
Affiliation(s)
- Shea E Duarte
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Center for Mind and Brain, University of California, Davis, CA, 95618, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Center for Neuroscience, University of California, Davis, CA, 95618, USA
- Simona Ghetti
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Center for Mind and Brain, University of California, Davis, CA, 95618, USA
- Joy J Geng
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Center for Mind and Brain, University of California, Davis, CA, 95618, USA

16
O'Dowd A, Hirst RJ, Seveso MA, McKenna EM, Newell FN. Generalisation to novel exemplars of learned shape categories based on visual and auditory spatial cues does not benefit from multisensory information. Psychon Bull Rev 2025; 32:417-429. [PMID: 39103708] [PMCID: PMC11836203] [DOI: 10.3758/s13423-024-02548-7] [Accepted: 06/18/2024] [Indexed: 08/07/2024]
Abstract
Although the integration of information across multiple senses can enhance object representations in memory, how multisensory information affects the formation of categories is uncertain. In particular, it is unclear to what extent categories formed from multisensory information benefit object recognition over unisensory inputs. Two experiments investigated the categorisation of novel auditory and visual objects, with categories defined by spatial similarity, and tested generalisation to novel exemplars. Participants learned to categorise exemplars based on visual-only (geometric shape), auditory-only (spatially defined soundscape) or audio-visual spatial cues. Categorisation of learned as well as novel exemplars was then tested under the same sensory learning conditions. For all learning modalities, categorisation generalised to novel exemplars. However, there was no evidence of enhanced categorisation performance for learned multisensory exemplars. At best, bimodal performance approximated that of the most accurate unimodal condition, although this was observed only for a subset of exemplars within a category. These findings provide insight into the perceptual processes involved in the formation of categories and have relevance for understanding the sensory nature of object representations underpinning these categories.
Affiliation(s)
- A O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- R J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- M A Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- E M McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- F N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates

17
Parise C, Gori M, Finocchietti S, Ernst M, Esposito D, Tonelli A. Happy new ears: Rapid adaptation to novel spectral cues in vertical sound localization. iScience 2024; 27:111308. [PMID: 39640573] [PMCID: PMC11617380] [DOI: 10.1016/j.isci.2024.111308] [Received: 11/27/2023] [Revised: 04/15/2024] [Accepted: 10/30/2024] [Indexed: 12/07/2024]
Abstract
Humans can adapt to changes in the acoustic properties of the head and exploit the resulting novel spectral cues for sound source localization. However, the adaptation rate varies across studies and is not associated with the aftereffects commonly found after adaptation in other sensory domains. To investigate the adaptation rate and measure potential aftereffects, our participants wore "new ears" that altered the spectral cues for sound localization and underwent sensorimotor training to induce rapid adaptation. Within 20 min, our sensorimotor training induced full adaptation to the new ears, as demonstrated by changes in various performance indices, including localization gain, bias, and precision. Once the new ears were removed, participants displayed systematic aftereffects, evident as a drop in localization precision lasting only a few trials. These results highlight the short-term plasticity of human spatial hearing, which can quickly adapt to spectral perturbations while inducing large, yet short-lived, aftereffects.
Affiliation(s)
- Cesare Parise
- Department of Psychology, University of Liverpool, Liverpool, UK
- Monica Gori
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Sara Finocchietti
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Marc Ernst
- Department of Psychology, University of Ulm, Ulm, Germany
- Davide Esposito
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Alessia Tonelli
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- School of Psychology, University of Sydney, Sydney, Australia

18
An W, Zhang N, Li S, Yu Y, Wu J, Yang J. The Impact of Selective Spatial Attention on Auditory-Tactile Integration: An Event-Related Potential Study. Brain Sci 2024; 14:1258. [PMID: 39766457] [PMCID: PMC11674746] [DOI: 10.3390/brainsci14121258] [Received: 11/13/2024] [Revised: 12/12/2024] [Accepted: 12/13/2024] [Indexed: 01/11/2025]
Abstract
BACKGROUND: Auditory-tactile integration is an important research area in multisensory integration. Especially in special environments (e.g., traffic noise and complex work environments), auditory-tactile integration is crucial for human response and decision making. We investigated the influence of attention on the temporal course and spatial distribution of auditory-tactile integration.
METHODS: Participants received auditory stimuli alone, tactile stimuli alone, and simultaneous auditory and tactile stimuli, which were randomly presented on the left or right side. For each block, participants attended to all stimuli on the designated side and detected uncommon target stimuli while ignoring all stimuli on the other side. Event-related potentials (ERPs) were recorded via 64 scalp electrodes. Integration was quantified by comparing the response to the combined stimulus to the sum of the responses to the auditory and tactile stimuli presented separately.
RESULTS: Compared to the unattended condition, integration occurred earlier and involved more brain regions in the attended condition when the stimulus was presented in the left hemispace. When the stimulus was presented in the right hemispace, integration in the unattended condition occurred earlier and involved a more extensive range of brain regions than in the attended condition.
CONCLUSIONS: Attention can modulate auditory-tactile integration, with systematic differences between the left and right hemispaces. These findings contribute to the understanding of the mechanisms of auditory-tactile information processing in the human brain.
Affiliation(s)
- Jiajia Yang
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, 3-1-1 Tsushima-Naka, Okayama 700-8530, Japan; (W.A.)

19
Brunetti R, Ferrante S, Avella AM, Indraccolo A, Del Gatto C. Turning stories into learning journeys: the principles and methods of Immersive Education. Front Psychol 2024; 15:1471459. [PMID: 39712545] [PMCID: PMC11659684] [DOI: 10.3389/fpsyg.2024.1471459] [Received: 07/27/2024] [Accepted: 11/25/2024] [Indexed: 12/24/2024]
Abstract
This paper describes the theoretical and practical aspects of Immersive Education, an educational methodology based on interactive narratives, articulated as emotional journeys, to develop competencies. It was developed over three school years (2021-2024) with more than 400 students (8-12 years old) in public schools in Italy and Spain. Immersive Education can be integrated with curricular school activities and can be used to target both curricular and transversal learning objectives, specifically those connected with the Personal, Social and Learning to Learn Key Competence (LifeComp European framework). The paper describes the inspirations that led to the creation of the methodology, including similar experiential learning approaches. It then analyses the theoretical principles of the methodology, dividing them into four key concepts, along with the psychological evidence supporting them. The four key concepts describe how Immersive Education aims to act as a motivation trigger, how it features a dramatic structure, how it is based on the involvement of the self, and how it focuses on fostering continuous engagement. The paper continues with a detailed analysis of implementation strategies, specifically the management of emotional triggers and reactions, enriched by numerous examples taken from the projects implemented with the students. The conclusions open the way to future research directions to measure the impact of this approach on the development of transversal and specific competences.
Affiliation(s)
- Riccardo Brunetti
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Project xx1, Rome, Italy
- Silvia Ferrante
- Project xx1, Rome, Italy
- Department of Developmental Psychology and Educational Research, ‘Sapienza’ University of Rome, Rome, Italy
- Allegra Indraccolo
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Claudia Del Gatto
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy

20
Farnlacher E, Friend MM, Holtcamp K, Nicodemus MC, Swanson R, Lemley C, Cavinder C, Prince P. Cortisol concentrations in substance use disorder patients undergoing short-term psychotherapy incorporating equine interaction compared to cognitive behavioral therapy: A preliminary study. J Equine Vet Sci 2024; 143:105208. [PMID: 39384121] [DOI: 10.1016/j.jevs.2024.105208] [Received: 01/15/2024] [Revised: 06/27/2024] [Accepted: 10/06/2024] [Indexed: 10/11/2024]
Abstract
Psychotherapy incorporating equine interaction (PIE) is emerging as an effective supplemental substance use disorder (SUD) treatment. Its benefits are attributed to decreased stress levels associated with the presence of the horse; however, research concerning stress parameters related to short-term equine interaction during SUD treatment is limited. Therefore, the purpose of this preliminary study was to investigate cortisol concentrations in SUD patients participating in PIE for two weeks compared with those in traditional cognitive behavioral therapy (CBT). Salivary cortisol samples were collected from two populations of SUD patients: 1) PIE participants (n = 18) and 2) CBT participants (n = 5). The impacts of therapy type and week of sampling were analyzed using a mixed linear model in SAS, with the significance level set at P ≤ 0.05. No effect of therapy type was found when comparing PIE to CBT (P = 0.74), and cortisol concentrations did not change significantly over the two-week period for either therapeutic intervention. Although the short-term interventions did not improve cortisol levels for either therapy type, further research is warranted to determine the most effective approach and duration of therapy.
Affiliation(s)
- E Farnlacher
- Department of Animal and Dairy Sciences, Mississippi State University, Box 9815, Mississippi State, MS 39762, United States
- M M Friend
- Huck Institutes of Life Sciences, 101 Huck Life Sciences Building, The Pennsylvania State University, University Park, PA 16802, United States
- K Holtcamp
- Office of Psychological Services, College of Veterinary Medicine, Mississippi State University, PO Box 6100, Mississippi State, MS 39762, United States
- M C Nicodemus
- Department of Animal and Dairy Sciences, Mississippi State University, Box 9815, Mississippi State, MS 39762, United States
- R Swanson
- Department of Animal and Dairy Sciences, Mississippi State University, Box 9815, Mississippi State, MS 39762, United States
- C Lemley
- Department of Animal and Dairy Sciences, Mississippi State University, Box 9815, Mississippi State, MS 39762, United States
- C Cavinder
- Department of Animal and Dairy Sciences, Mississippi State University, Box 9815, Mississippi State, MS 39762, United States
- P Prince
- Office of Psychological Services, College of Veterinary Medicine, Mississippi State University, PO Box 6100, Mississippi State, MS 39762, United States

21
Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024; 248:106046. [PMID: 39241321] [DOI: 10.1016/j.jecp.2024.106046] [Received: 11/29/2023] [Revised: 07/23/2024] [Accepted: 07/29/2024] [Indexed: 09/09/2024]
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (and downward) moving objects paired with a congruent tone of ascending or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition and demonstrated weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social non-communicative information in serving cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan

22
Ávila-Cascajares F, Waleczek C, Kerres S, Suchan B, Völter C. Cross-Modal Plasticity in Postlingual Hearing Loss Predicts Speech Perception Outcomes After Cochlear Implantation. J Clin Med 2024; 13:7016. [PMID: 39685477] [DOI: 10.3390/jcm13237016] [Received: 10/04/2024] [Revised: 11/13/2024] [Accepted: 11/19/2024] [Indexed: 12/18/2024]
Abstract
Background: Sensory loss may lead to intra- and cross-modal cortical reorganization. Previous research showed a significant correlation between the cross-modal contribution of the right auditory cortex to visual evoked potentials (VEP) and speech perception in cochlear implant (CI) users with prelingual hearing loss (HL), but not in those with postlingual HL. The present study aimed to explore the cortical reorganization induced by postlingual HL, particularly in the right temporal region, and how it correlates with speech perception outcome with a CI.
Material and Methods: A total of 53 adult participants were divided into two groups according to hearing ability: 35 had normal hearing (NH) (mean age = 62.10 years (±7.48)) and 18 had profound postlingual HL (mean age = 63.78 years (±8.44)). VEPs, using a 29-channel electroencephalogram (EEG) system, were recorded preoperatively in the 18 patients scheduled for cochlear implantation and in the 35 NH adults who served as the control group. Amplitudes and latencies of the P100, N100, and P200 components were analyzed across frontal, temporal, and occipital areas and compared between NH and HL subjects using repeated measures ANOVA. For the HL group, speech perception in quiet was assessed at 6 and 12 months of CI use.
Results: No difference was found in amplitudes or latencies of the P100, N100, and P200 VEP components between the NH and HL groups. Further analysis using Spearman correlations between preoperative amplitudes and latencies of the P100, N100, and P200 VEP components at the right temporal electrode position T8 and postoperative speech perception showed that the HL group had either significantly higher or significantly lower amplitudes of the P200 component at T8 compared to the NH controls. The HL subgroup with higher amplitudes had better speech perception than the subgroup with lower amplitudes at 6 months and 12 months of CI use.
Conclusions: Preoperative evaluation of cortical plasticity can reveal plasticity profiles, which might help to better predict postoperative speech outcomes and adapt the rehabilitation regimen after CI activation. Further research is needed to understand the susceptibility of each component to cross-modal reorganization and their specific contribution to outcome prediction.
Affiliation(s)
- Fátima Ávila-Cascajares
- Cochlear Implant Center, Department of Otorhinolaryngology, Head and Neck Surgery, Catholic Hospital Bochum, Ruhr University Bochum, Bleichstr. 15, 44787 Bochum, Germany
- Clinical Neuropsychology, Faculty of Psychology, Ruhr University Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Clara Waleczek
- Cochlear Implant Center, Department of Otorhinolaryngology, Head and Neck Surgery, Catholic Hospital Bochum, Ruhr University Bochum, Bleichstr. 15, 44787 Bochum, Germany
- Sophie Kerres
- Cochlear Implant Center, Department of Otorhinolaryngology, Head and Neck Surgery, Catholic Hospital Bochum, Ruhr University Bochum, Bleichstr. 15, 44787 Bochum, Germany
- Boris Suchan
- Clinical Neuropsychology, Faculty of Psychology, Ruhr University Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Christiane Völter
- Cochlear Implant Center, Department of Otorhinolaryngology, Head and Neck Surgery, Catholic Hospital Bochum, Ruhr University Bochum, Bleichstr. 15, 44787 Bochum, Germany

23
Kwok TCK, Kiefer P, Schinazi VR, Hoelscher C, Raubal M. Gaze-based detection of mind wandering during audio-guided panorama viewing. Sci Rep 2024; 14:27955. [PMID: 39543376] [PMCID: PMC11564806] [DOI: 10.1038/s41598-024-79172-x] [Received: 08/13/2023] [Accepted: 11/06/2024] [Indexed: 11/17/2024]
Abstract
Unlike classic audio guides, intelligent audio guides can detect users' level of attention and help them regain focus. In this paper, we investigate the detection of mind wandering (MW) from eye movements in a use case with a long focus distance. We present a novel MW annotation method for combined audio-visual stimuli and collect annotated MW data for the use case of audio-guided city panorama viewing. In two studies, MW classifiers are trained and validated, which are able to successfully detect MW in a 1-s time window. In study 1 (n = 27), MW classifiers from gaze features with and without eye vergence are trained (area under the curve of at least 0.80). We then re-validate the classifier with unseen data (study 2, n = 31) that are annotated using a memory task and find a positive correlation (repeated measure correlation = 0.49, p < 0.001) between incorrect quiz answering and the percentage of time users spent mind wandering. Overall, this paper contributes significant new knowledge on the detection of MW from gaze for use cases with audio-visual stimuli.
Affiliation(s)
- Tiffany C K Kwok
- Institute of Cartography and Geoinformation, ETH Zürich, Zurich, Switzerland
- Lufthansa Systems FlightNav, Opfikon, Switzerland
- Peter Kiefer
- Institute of Cartography and Geoinformation, ETH Zürich, Zurich, Switzerland
- Martin Raubal
- Institute of Cartography and Geoinformation, ETH Zürich, Zurich, Switzerland

24
O'Dowd A, O'Connor DMA, Hirst RJ, Setti A, Kenny RA, Newell FN. Nutrition is associated with differences in multisensory integration in healthy older adults. Nutr Neurosci 2024; 27:1226-1236. [PMID: 38386286] [DOI: 10.1080/1028415x.2024.2316446] [Indexed: 02/23/2024]
Abstract
Diet can influence cognitive functioning in older adults and is a modifiable risk factor for cognitive decline. However, it is unknown if an association exists between diet and lower-level processes in the brain underpinning cognition, such as multisensory integration. We investigated whether temporal multisensory integration is associated with daily intake of fruit and vegetables (FV) or products high in fat/sugar/salt (FSS) in a large sample (N = 2,693) of older adults (mean age = 64.06 years, SD = 7.60; 56% female) from The Irish Longitudinal Study on Ageing (TILDA). Older adults completed a Food Frequency Questionnaire from which the total number of daily servings of FV and FSS items respectively was calculated. Older adults' susceptibility to the Sound Induced Flash Illusion (SIFI) measured the temporal precision of audio-visual integration, which included three audio-visual Stimulus Onset Asynchronies (SOAs): 70, 150 and 230 ms. Older adults who self-reported a higher daily consumption of FV were less susceptible to the SIFI at the longest versus shortest SOAs (i.e. increased temporal precision) compared to those reporting the lowest daily consumption (p = .013). In contrast, older adults reporting a higher daily consumption of FSS items were more susceptible to the SIFI at the longer versus shortest SOAs (i.e. reduced temporal precision) compared to those reporting the lowest daily consumption (p < .001). The temporal precision of multisensory integration is differentially associated with levels of daily consumption of FV versus products high in FSS, consistent with broader evidence that habitual diet is associated with brain health.
Affiliation(s)
- Alan O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Deirdre M A O'Connor
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Department of Medical Gerontology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- School of Applied Psychology, University College Cork, Cork, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Department of Medical Gerontology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland

25
Ma Q, Tan Y, He Y, Cheng L, Wang M. Why does mobile payment promote purchases? Revisiting the pain of paying, and understanding the implicit pleasure via selective attention. Psych J 2024; 13:760-779. [PMID: 38752779] [PMCID: PMC11444724] [DOI: 10.1002/pchj.765] [Received: 06/05/2023] [Accepted: 04/01/2024] [Indexed: 10/03/2024]
Abstract
Recent years have witnessed phenomenal growth in the mobile payment market, but how mobile payment affects purchase behavior has received less attention from academics. Recent studies suggested that a lower pain of paying may not fully explain the relationship between mobile payment and increased purchases (i.e., the mobile payment effect). The current research first introduced price level in Study 1 and demonstrated that the pain of paying served as an underlying mechanism only in the high-price condition, not in the low-price condition. Study 2 was therefore conducted in a low-price context to uncover the remaining mechanisms. We propose a new concept, the "pleasure of payment", defined as an implicit, consumption-related hedonic response based on the cue theory of consumption. By tracking spontaneous attention to positive attributes (i.e., benefits) of products, Study 2 demonstrated this implicit pleasure as a psychological mechanism for the mobile payment effect when the pain of paying was not at play. These findings have important implications for mobile payment research and practice by identifying price level as a boundary condition for the role of pain of paying and by revealing positive downstream consequences of mobile payment usage for consumer psychology.
Affiliation(s)
- Qingguo Ma
- School of Management, Zhejiang University, Hangzhou, China
- Institute of Neural Management Sciences, Zhejiang University of Technology, Hangzhou, China
- Yulin Tan
- School of Management, Zhejiang University, Hangzhou, China
- Yijin He
- School of Management, Zhejiang University, Hangzhou, China
- Lu Cheng
- Chinese Academy of Science and Education Evaluation, Hangzhou Dianzi University, Hangzhou, China
- Manlin Wang
- Business & Tourism Institute, Hangzhou Vocational & Technical College, Hangzhou, China
26
Vivas AB, Estévez AF, Khan I, Roldán-Tapia L, Markelius A, Nielsen S, Lowe R. DigiDOP: A framework for applying digital technology to the Differential Outcomes Procedure (DOP) for cognitive interventions in persons with neurocognitive disorders. Neurosci Biobehav Rev 2024; 165:105838. [PMID: 39122198 DOI: 10.1016/j.neubiorev.2024.105838] [Received: 04/07/2024] [Revised: 07/02/2024] [Accepted: 07/31/2024] [Indexed: 08/12/2024]
Abstract
We present a framework, DigiDOP, comprising a series of evidence-based recommendations for designing and applying cognitive interventions for people with neurocognitive disorders (NCDs) using a relatively new approach, the Differential Outcomes Procedure (DOP). To do so, we critically review the substantial experimental research conducted with relevant clinical and non-clinical populations, as well as the theoretical underpinnings of the procedure. We further discuss how existing digital technologies that have been used for cognitive interventions could be applied to overcome some of the limitations of DOP-based interventions and to further enhance their benefits. Specifically, we present three digital DOP developments that are currently being designed, investigated, and/or tested. Finally, we discuss the constraints and the ethical and legal considerations that must be taken into account to ensure that the use of technology in the DOP-based interventions proposed here does not widen disparities and inequalities. We hope that this framework will inform and guide digital health leaders and developers, researchers, and healthcare professionals in designing and applying DOP-based interventions for people with NCDs.
Affiliation(s)
- A B Vivas
- Neuroscience Research Center (NEUREC), CITY College, University of York Europe Campus, Thessaloniki, Greece
- A F Estévez
- CIBIS Research Center, University of Almería, Almería, Spain
- I Khan
- DICE Lab, Department of Applied IT, University of Gothenburg, Gothenburg, Sweden
- L Roldán-Tapia
- CEINSAUAL Research Center, University of Almería, Almería, Spain
- A Markelius
- DICE Lab, Department of Applied IT, University of Gothenburg, Gothenburg, Sweden; University of Cambridge, England, UK
- R Lowe
- DICE Lab, Department of Applied IT, University of Gothenburg, Gothenburg, Sweden; RISE AB, Gothenburg, Sweden.
27
Castro F, Schenke KC. Augmented action observation: Theory and practical applications in sensorimotor rehabilitation. Neuropsychol Rehabil 2024; 34:1327-1346. [PMID: 38117228 DOI: 10.1080/09602011.2023.2286012] [Received: 05/11/2023] [Accepted: 11/10/2023] [Indexed: 12/21/2023]
Abstract
Sensory feedback is a fundamental aspect of effective motor learning in sport and clinical contexts. One way to provide it is through sensory augmentation, in which extrinsic sensory information is associated with, and modulated by, movement. Traditionally, sensory augmentation has been used as an online strategy, with feedback provided during physical execution of an action. In this article, we argue that action observation can be an additional effective channel for providing augmented feedback, complementary to other, more traditional motor learning and sensory augmentation strategies. Given the similarities between observing and executing an action, action observation could be used when physical training is difficult or not feasible, for example during immobilization or during the initial stages of a rehabilitation protocol, when peripheral fatigue is a common issue. We review the benefits of observational learning and preliminary evidence for the effectiveness of augmented action observation in improving learning. We also highlight current knowledge gaps that make the transition from laboratory to practical contexts difficult. Finally, we highlight key areas of focus for future research.
Affiliation(s)
- Fabio Castro
- Institute of Sport, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK
- Kimberley C Schenke
- School of Natural, Social and Sports Sciences, University of Gloucestershire, Cheltenham, UK
28
Maimon A, Wald IY, Snir A, Ben Oz M, Amedi A. Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted. PLoS One 2024; 19:e0310033. [PMID: 39321152 PMCID: PMC11423994 DOI: 10.1371/journal.pone.0310033] [Received: 03/11/2024] [Accepted: 08/23/2024] [Indexed: 09/27/2024] Open
Abstract
This study explores spatial perception of depth using a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While the representation of verticality used a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic this ecologically inspired mapping of auditory cues for depth is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm after a very brief period of training, with the blind and visually impaired participants showing levels of success similar to those of their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants achieved similar success rates following training in both conditions. The findings indicate that both intrinsic and learned cues come into play in depth perception. Moreover, they suggest that, through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that with training their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
Affiliation(s)
- Amber Maimon
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Digital Media Lab, University of Bremen, Bremen, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
29
Maguinness C, Schall S, Mathias B, Schoemann M, von Kriegstein K. Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise. Q J Exp Psychol (Hove) 2024:17470218241278649. [PMID: 39164830 DOI: 10.1177/17470218241278649] [Indexed: 08/22/2024]
Abstract
Seeing the visual articulatory movements of a speaker while hearing their voice helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning; an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or a control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) under increasing levels of auditory noise. For speech recognition, 14 of 30 participants (47%) showed a face-benefit; for voice-identity recognition, 19 of 25 participants (76%) did. For those participants who demonstrated a face-benefit, the benefit increased with auditory noise level. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
Affiliation(s)
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja Schall
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Brian Mathias
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Martin Schoemann
- Chair of Psychological Methods and Cognitive Modelling, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
30
Leukel C, Loibl K, Leuders T. Integrating vision and somatosensation does not improve the accuracy and response time when estimating area and perimeter of rectangles in primary school. Trends Neurosci Educ 2024; 36:100238. [PMID: 39266122 DOI: 10.1016/j.tine.2024.100238] [Received: 05/21/2024] [Revised: 07/30/2024] [Accepted: 08/01/2024] [Indexed: 09/14/2024]
Abstract
BACKGROUND: Problem-solving and learning in mathematics involve sensory perception and processing. Multisensory integration may contribute by enhancing sensory estimates. This study assessed whether combining visual and somatosensory information improves elementary students' perimeter and area estimates. METHODS: Eighty-seven 4th graders compared rectangles with respect to area or perimeter, either using visual observation alone or with additional somatosensory information. Three experiments targeted different task aspects. Statistical analyses tested success rates and response times. RESULTS: Contrary to expectations, adding somatosensory information did not improve success rates for area and perimeter comparisons; response times even increased. Children's difficulty in accurately tracing figures negatively impacted the success rate of area comparisons. DISCUSSION: The results suggest that visual observation alone suffices for accurately estimating and comparing the area and perimeter of rectangles in 4th graders. IMPLICATIONS: Careful deliberation on the inclusion of somatosensory information in mathematical tasks involving perimeter and area estimation of rectangles is recommended.
Affiliation(s)
- Christian Leukel
- University of Education Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Germany.
31
Peng B, Huang JJ, Li Z, Zhang LI, Tao HW. Cross-modal enhancement of defensive behavior via parabigemino-collicular projections. Curr Biol 2024; 34:3616-3631.e5. [PMID: 39019036 PMCID: PMC11373540 DOI: 10.1016/j.cub.2024.06.052] [Received: 02/06/2024] [Revised: 05/19/2024] [Accepted: 06/20/2024] [Indexed: 07/19/2024]
Abstract
Effective detection and avoidance of environmental threats are crucial for animals' survival. Integrating threat-associated sensory cues across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear responses under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli, including intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These various responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. The PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.
Affiliation(s)
- Bo Peng
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
- Junxiang J Huang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Graduate Program in Biomedical and Biological Sciences, University of Southern California, Los Angeles, CA 90033, USA
- Zhong Li
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Li I Zhang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
- Huizhong Whit Tao
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
32
Ghambari S, Arsham S, Ramezanzade H. The Effects of Motionless Interventions Based on Visual-Auditory Instructions With Sonification on Learning a Rhythmic Motor Skill. Percept Mot Skills 2024; 131:1321-1340. [PMID: 38758033 DOI: 10.1177/00315125241252855] [Indexed: 05/18/2024]
Abstract
Our aim in this study was to investigate the effects of motionless interventions, based on visual-auditory integration with a sonification technique, on learning a complex rhythmic motor skill. We recruited 22 male participants with high physical fitness and provided them with four acquisition sessions in which to practice hurdle running, based on a visual-auditory instructional pattern. Next, we divided participants into three groups: visual-auditory, auditory, and control. In six sessions of motionless interventions, with no physical practice, participants in the visual-auditory group received a visual-auditory pattern similar to what they had experienced during the acquisition period, the auditory group only listened to the sound of the sonified movements of an expert hurdler, and the control group received no instructional interventions. Finally, participants in all three groups underwent post-intervention and transfer tests to determine their errors in the spatial and relative timing of their leading leg's knee angular displacement. Both the visual-auditory and auditory groups had significantly less spatial error than the control group. However, there were no significant group differences in relative timing in any test phase. These results indicate that using the sonification technique in the form of visual-auditory instruction adapted to athletes' needs benefited sensory-perceptual capacities to improve motor skill learning.
Affiliation(s)
- Shiva Ghambari
- Department of Motor Behavior, Kharazmi University, Tehran, Iran
- Saeed Arsham
- Department of Motor Behavior, Kharazmi University, Tehran, Iran
- Hesam Ramezanzade
- Department of Sport Science, School of Humanities, Damghan University, Damghan, Iran
33
Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. [PMID: 38729280 DOI: 10.1016/j.neubiorev.2024.105711] [Received: 12/22/2023] [Revised: 04/14/2024] [Accepted: 05/03/2024] [Indexed: 05/12/2024]
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review delves into the topic by examining the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy.
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
34
Li J, Liu Y, Nehl E, Tucker JD. A behavioral economics approach to enhancing HIV preexposure and postexposure prophylaxis implementation. Curr Opin HIV AIDS 2024; 19:212-220. [PMID: 38686773 DOI: 10.1097/coh.0000000000000860] [Indexed: 05/02/2024]
Abstract
PURPOSE OF REVIEW: The 'PrEP cliff' phenomenon poses a critical challenge in global HIV PrEP implementation, marked by significant dropout across the entire PrEP care continuum. This article reviews new strategies to address the 'PrEP cliff'. RECENT FINDINGS: Canadian clinicians have developed a service delivery model that offers presumptive PEP to patients in need and transitions eligible PEP users to PrEP. Early findings are promising. This service model not only establishes a safety net for those who were not protected by PrEP, but also leverages the immediate salience and perceived benefits of PEP as a natural nudge towards PrEP use. Aligning with behavioral economics, specifically salience theory, this strategy holds potential for tackling PrEP implementation challenges. SUMMARY: A natural pathway between PEP and PrEP has been widely observed. The Canadian service model exemplifies an innovative strategy that leverages this organic pathway and enhances the utility of both PEP and PrEP services. We offer theoretical insights into the reasons behind these PEP-to-PrEP transitions and evolve the Canadian model into a cohesive framework for implementation.
Affiliation(s)
- Jingjing Li
- Department of Behavioral, Social and Health Education Sciences, Rollins School of Public Health
- Yaxin Liu
- Department of Psychology, Emory University, Atlanta, Georgia
- Eric Nehl
- Department of Behavioral, Social and Health Education Sciences, Rollins School of Public Health
- Joseph D Tucker
- Division of Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, USA
35
Yeatman JD, McCloy DR, Caffarra S, Clarke MD, Ender S, Gijbels L, Joo SJ, Kubota EC, Kuhl PK, Larson E, O'Brien G, Peterson ER, Takada ME, Taulu S. Reading instruction causes changes in category-selective visual cortex. Brain Res Bull 2024; 212:110958. [PMID: 38677559 PMCID: PMC11194742 DOI: 10.1016/j.brainresbull.2024.110958] [Received: 04/05/2023] [Revised: 03/15/2024] [Accepted: 04/17/2024] [Indexed: 04/29/2024]
Abstract
Education sculpts specialized neural circuits for skills like reading that are critical to success in modern society but were not anticipated by the selective pressures of evolution. Does the emergence of brain regions that selectively process novel visual stimuli like words occur at the expense of cortical representations of other stimuli like faces and objects? "Neuronal recycling" predicts that learning to read should enhance the response to words in ventral occipitotemporal cortex (VOTC) and decrease the response to other visual categories such as faces and objects. To test this hypothesis, and more broadly to understand the changes induced by the early stages of literacy instruction, we conducted a randomized controlled trial with pre-school children (five years of age). Children were randomly assigned to intervention programs focused on either reading skills or oral language skills, and magnetoencephalography (MEG) data collected before and after the intervention were used to measure visual responses to images of text, faces, and objects. We found that being taught reading versus oral language skills induced different patterns of change in category-selective regions of visual cortex, but there was not a clear tradeoff between the response to words and the response to other categories. Within a predefined region of VOTC corresponding to the visual word form area (VWFA), the relative amplitudes of responses to text, faces, and objects changed, but increases in the response to words were not linked to decreases in the response to faces or objects. How these changes play out over a longer timescale is still unknown but, based on these data, we can surmise that high-level visual cortex undergoes rapid changes as children enter school and begin establishing new skills like literacy.
Affiliation(s)
- Jason D Yeatman
- Graduate School of Education, Stanford University, Stanford, CA, USA; Division of Developmental Behavioral Pediatrics, Stanford University School of Medicine, Stanford, CA, USA; Department of Psychology, Stanford University, Stanford, CA, USA.
- Daniel R McCloy
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Sendy Caffarra
- Graduate School of Education, Stanford University, Stanford, CA, USA; Division of Developmental Behavioral Pediatrics, Stanford University School of Medicine, Stanford, CA, USA; Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Maggie D Clarke
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Suzanne Ender
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Liesbeth Gijbels
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Sung Jun Joo
- Department of Psychology, Pusan National University, Busan, Republic of Korea
- Emily C Kubota
- Department of Psychology, Stanford University, Stanford, CA, USA
- Patricia K Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Eric Larson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Gabrielle O'Brien
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Erica R Peterson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA, USA
- Megumi E Takada
- Graduate School of Education, Stanford University, Stanford, CA, USA
- Samu Taulu
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Physics, University of Washington, Seattle, WA, USA
36
Schlund M, Al-Badri N, Nicot R. Visuospatial abilities and 3D-printed based learning. Surg Radiol Anat 2024; 46:927-931. [PMID: 38652251 DOI: 10.1007/s00276-024-03370-5] [Received: 02/09/2024] [Accepted: 04/12/2024] [Indexed: 04/25/2024]
Abstract
PURPOSE: The use of 3D printing in every field of medicine is expanding, notably as an educational tool. The aim of this study was to assess how students' visuospatial abilities (VSA) may impact learning aided by 3D-printed models. METHODS: Participants were undergraduate medical students during their clinical rotation in oral and maxillofacial surgery at two French universities, included prospectively and consecutively from September 2021 to June 2023. First, a lecture about craniosynostosis was given with the help of 3D-printed models of craniosynostotic skulls. Then, a mental rotation test (MRT) followed by a multiple-choice questionnaire (MCQ) about craniosynostosis presentations was administered to the students. RESULTS: Forty undergraduate students were included. The median MRT score was 15 (10.75; 21) and the median MCQ score was 13 (11.75; 14). There was a significant but weak correlation between the MRT-A score and the MCQ score (rs = 0.364; p = 0.022). A simple linear regression was calculated to predict the MCQ score from the MRT-A score (F(1,39) = 281.248; p < 0.0001; R² = 0.878). CONCLUSION: This study showed that VSA has an impact on the recognition of complex clinical presentations, i.e., skulls with craniosynostosis. The correlation found between VSA and complex 3D-shape recognition after learning aided by a 3D-printed model emphasizes the importance of VSA when using innovative technologies. Thus, VSA training should be envisioned during the curriculum.
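As an aside on the statistics reported in this abstract, the two analyses used (Spearman correlation between test scores, then a simple linear regression predicting one score from the other) can be sketched in a few lines of numpy. The scores below are made-up illustrative data, not the study's, and `spearman_r` uses a tie-free ranking, a simplification of the full Spearman procedure.

```python
import numpy as np

def spearman_r(x, y):
    # Spearman's rho: Pearson correlation of rank-transformed data.
    # Note: this simple double-argsort ranking assumes no tied values.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def simple_regression(x, y):
    # Ordinary least squares y = a*x + b, with R^2 and the F statistic
    # for a single predictor: F(1, n-2) = (R^2 / (1 - R^2)) * (n - 2).
    a, b = np.polyfit(x, y, 1)
    pred = a * np.array(x) + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    f = (r2 / (1.0 - r2)) * (len(x) - 2)
    return a, b, r2, f

# Hypothetical MRT-like and MCQ-like scores, for illustration only
mrt = [8, 12, 15, 18, 22]
mcq = [11, 12, 13, 13, 14]
print(spearman_r(mrt, mcq), simple_regression(mrt, mcq))
```

A perfectly monotone but non-linear relationship yields rho = 1 with R² below 1, which is one way a rank correlation and a linear fit can tell different stories about the same data.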
Affiliation(s)
- Matthias Schlund
- Service de Chirurgie Maxillo-Faciale et Stomatologie, Univ. Bordeaux, CHU Bordeaux, INSERM, BioTis, U1026, Bordeaux, 33000, France.
- Nour Al-Badri
- Service de Chirurgie Maxillo-Faciale et Stomatologie, Univ. Lille, CHU Lille, Lille, 59000, France
- Romain Nicot
- Service de Chirurgie Maxillo-Faciale et Stomatologie, Univ. Lille, CHU Lille, INSERM, U1008 - Advanced Drug Delivery Systems, Lille, 59000, France
37
Paraskevopoulos E, Anagnostopoulou A, Chalas N, Karagianni M, Bamidis P. Unravelling the multisensory learning advantage: Different patterns of within and across frequency-specific interactions drive uni- and multisensory neuroplasticity. Neuroimage 2024; 291:120582. [PMID: 38521212 DOI: 10.1016/j.neuroimage.2024.120582] [Received: 11/29/2023] [Revised: 03/12/2024] [Accepted: 03/20/2024] [Indexed: 03/25/2024] Open
Abstract
In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with the behavioral and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
Affiliation(s)
- Alexandra Anagnostopoulou
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece

38
Bernal-Berdun E, Vallejo M, Sun Q, Serrano A, Gutierrez D. Modeling the Impact of Head-Body Rotations on Audio-Visual Spatial Perception for Virtual Reality Applications. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2024; 30:2624-2632. [PMID: 38446650 DOI: 10.1109/tvcg.2024.3372112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Abstract
Humans perceive the world by integrating multimodal sensory feedback, including visual and auditory stimuli, which holds true in virtual reality (VR) environments. Proper synchronization of these stimuli is crucial for perceiving a coherent and immersive VR experience. In this work, we focus on the interplay between audio and vision during localization tasks involving natural head-body rotations. We explore the impact of audio-visual offsets and rotation velocities on users' directional localization acuity for various viewing modes. Using psychometric functions, we model perceptual disparities between visual and auditory cues and determine offset detection thresholds. Our findings reveal that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations, but remains consistent in the absence of stimuli-head relative motion. We then showcase the effectiveness of our approach in predicting and enhancing users' localization accuracy within realistic VR gaming applications. To provide additional support for our findings, we implement a natural VR game wherein we apply a compensatory audio-visual offset derived from our measured psychometric functions. As a result, we demonstrate a substantial improvement of up to 40% in participants' target localization accuracy. We additionally provide guidelines for content creation to ensure coherent and seamless VR experiences.
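Offset detection thresholds of the kind described in this abstract are conventionally obtained by fitting a psychometric function to binary detection responses. The sketch below is a minimal illustration of that general procedure, not the authors' implementation: it fits a cumulative-Gaussian psychometric function to simulated audio-visual offset data by grid-search maximum likelihood and reads off a criterion-level threshold by bisection. The function form, lapse rate, grid ranges, and 75% criterion are all illustrative assumptions.

```python
import math

def psychometric(x, mu, sigma, lapse=0.02):
    """Detection probability for an audio-visual offset of x ms:
    a cumulative Gaussian with mean mu and slope sigma, plus a
    symmetric lapse rate (values illustrative)."""
    p = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return lapse + (1.0 - 2.0 * lapse) * p

def fit_offset_threshold(offsets_ms, detected, criterion=0.75):
    """Grid-search maximum-likelihood fit of (mu, sigma); returns the
    fitted parameters and the offset at which the fitted curve reaches
    `criterion`, located by bisection."""
    best, best_ll = (0, 5), -math.inf
    for mu in range(0, 201, 5):
        for sigma in range(5, 201, 5):
            ll = 0.0
            for x, d in zip(offsets_ms, detected):
                p = psychometric(x, mu, sigma)
                ll += math.log(p if d else 1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    mu, sigma = best
    # invert the fitted monotone curve at the criterion level
    lo, hi = -1000.0, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psychometric(mid, mu, sigma) < criterion:
            lo = mid
        else:
            hi = mid
    return (mu, sigma), 0.5 * (lo + hi)
```

In practice such fits are typically done per viewing mode and rotation velocity, with the fitted thresholds then compared across conditions.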
39
Zhao Y, Liu J, Dosher BA, Lu ZL. Enabling identification of component processes in perceptual learning with nonparametric hierarchical Bayesian modeling. J Vis 2024; 24:8. [PMID: 38780934 PMCID: PMC11131338 DOI: 10.1167/jov.24.5.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Accepted: 04/13/2024] [Indexed: 05/25/2024] Open
Abstract
Perceptual learning is a multifaceted process, encompassing general learning, between-session forgetting or consolidation, and within-session fast relearning and deterioration. The learning curve constructed from threshold estimates in blocks or sessions, based on tens or hundreds of trials, may obscure component processes; high temporal resolution is necessary. We developed two nonparametric inference procedures: a Bayesian inference procedure (BIP) to estimate the posterior distribution of contrast threshold in each learning block for each learner independently, and a hierarchical Bayesian model (HBM) that computes the joint posterior distribution of contrast threshold across all learning blocks at the population, subject, and test levels via the covariance of contrast thresholds across blocks. We applied the procedures to data from two studies that investigated the interaction between feedback and training accuracy in Gabor orientation identification over 1920 trials across six sessions and estimated learning curves with block sizes of L = 10, 20, 40, 80, 160, and 320 trials. The HBM generated significantly better fits to the data, smaller standard deviations, and more precise estimates than the BIP across all block sizes. In addition, the HBM generated unbiased estimates, whereas the BIP generated unbiased estimates only with large block sizes and exhibited increasing bias with small block sizes. With L = 10, 20, and 40, we were able to consistently identify general learning, between-session forgetting, and rapid relearning and adaptation within sessions. The nonparametric HBM provides a general framework for fine-grained assessment of the learning curve and enables identification of component processes in perceptual learning.
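The BIP described above estimates a posterior over contrast threshold for each block independently. A minimal grid-based sketch of that idea follows, assuming a logistic 2AFC psychometric function with a 0.5 guess rate; the threshold grid, slope parameter, and trial format are hypothetical choices, and the HBM's hierarchical pooling across blocks, subjects, and the population is deliberately not modeled here.

```python
import math

def bip_posterior(trials, thetas, beta=0.05):
    """BIP-style update for one learning block: a uniform prior over
    candidate contrast thresholds `thetas` is updated with each
    (contrast, correct) trial under a logistic 2AFC psychometric
    function with guess rate 0.5 and slope `beta` (illustrative).
    Returns the normalized posterior and its mean."""
    log_post = [0.0] * len(thetas)
    for c, correct in trials:
        for i, t in enumerate(thetas):
            # probability of a correct response at contrast c given threshold t
            p = 0.5 + 0.5 / (1.0 + math.exp(-(c - t) / beta))
            log_post[i] += math.log(p if correct else 1.0 - p)
    # normalize in log space for numerical stability
    m = max(log_post)
    weights = [math.exp(v - m) for v in log_post]
    z = sum(weights)
    post = [w / z for w in weights]
    mean = sum(t * q for t, q in zip(thetas, post))
    return post, mean
```

Running this block-by-block gives the independent threshold estimates whose precision the HBM improves by sharing information across blocks.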
Affiliation(s)
- Yukai Zhao
- Center for Neural Science, New York University, New York, NY, USA
- Jiajuan Liu
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Barbara Anne Dosher
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China

40
Overskott HL, Markholm CE, Sehic A, Khan Q. Different Methods of Teaching and Learning Dental Morphology. Dent J (Basel) 2024; 12:114. [PMID: 38668026 PMCID: PMC11049323 DOI: 10.3390/dj12040114] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 04/08/2024] [Accepted: 04/12/2024] [Indexed: 04/28/2024] Open
Abstract
Dental anatomy education is traditionally structured into theoretical and practical modules to foster both cognitive and psychomotor development. The theoretical module typically involves didactic lectures where educators elucidate dental structures using visual aids. In contrast, practical modules utilize three-dimensional illustrations, extracted and plastic teeth, and tooth carving exercises on wax or soap blocks, chosen for their low cost, ease of handling, and fidelity of replication. However, the efficacy of these traditional methods is increasingly questioned. One criticism is that oversized carving materials may distort students' understanding of anatomical proportions, potentially affecting the development of skills necessary for clinical practice. Lecture-driven instruction, on the other hand, is criticized for its limitations in fostering interactive learning, resulting in a gap between pre-clinical instruction and practical patient care. In this study, we review the various educational strategies that have emerged to enhance traditional dental anatomy pedagogy by describing the effectiveness of conventional didactic lectures, wax carving exercises, the use of real and artificial teeth, the flipped classroom model, and e-learning tools. Our review aims to assess each method's contribution to improving clinical applicability and educational outcomes in dental anatomy, with a focus on developing pedagogical frameworks that align with contemporary educational needs and the evolving landscape of dental practice. We suggest that the optimal approach for teaching tooth morphology would be to integrate the digital benefits of the flipped classroom model with the practical, hands-on experience of using extracted human teeth. To address the challenges presented by this integration, the creation and standardization of three-dimensional tooth morphology educational tools, complemented by concise instructional videos for a flipped classroom setting, appears to be a highly effective strategy.
Affiliation(s)
- Amer Sehic
- Institute of Oral Biology, Faculty of Dentistry, University of Oslo, Blindern, P.O. Box 1052, 0316 Oslo, Norway; (H.L.O.); (C.E.M.); (Q.K.)

41
Li J, Deng SW. Attentional focusing and filtering in multisensory categorization. Psychon Bull Rev 2024; 31:708-720. [PMID: 37673842 DOI: 10.3758/s13423-023-02370-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/11/2023] [Indexed: 09/08/2023]
Abstract
Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speeded categorization task, participants were asked to attend to visual or auditory targets and categorize them while ignoring other, irrelevant stimuli. A response-time extended multinomial processing tree (RT-MPT) model was implemented to estimate the contributions of attentional focusing on task-relevant information and attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences in attentional focusing and filtering between the visual and auditory modalities. Visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out, causing greater interference with task-relevant performance. These findings suggest that selective attention plays a critical and differential role across modalities, providing a novel and promising approach to understanding multisensory processing and the attentional focusing and filtering mechanisms underlying categorical decision-making.
Affiliation(s)
- Jianhua Li
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
- Sophia W Deng
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau

42
Huntley MK, Nguyen A, Albrecht MA, Marinovic W. Tactile cues are more intrinsically linked to motor timing than visual cues in visual-tactile sensorimotor synchronization. Atten Percept Psychophys 2024; 86:1022-1037. [PMID: 38263510 PMCID: PMC11062975 DOI: 10.3758/s13414-023-02828-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/07/2023] [Indexed: 01/25/2024]
Abstract
Many tasks, such as driving a car, require precise synchronization with external sensory stimuli. This study investigates whether combined visual-tactile information provides additional benefits to movement synchrony over separate visual and tactile stimuli and explores the relationship with the temporal binding window for multisensory integration. In Experiment 1, participants completed a sensorimotor synchronization task to examine movement variability and a simultaneity judgment task to measure the temporal binding window. Results showed that synchronization variability was similar between visual-tactile and tactile-only stimuli, but significantly lower than with visual-only stimuli. In Experiment 2, participants completed a visual-tactile sensorimotor synchronization task with cross-modal stimuli presented inside (stimulus-onset asynchrony 80 ms) and outside (stimulus-onset asynchrony 400 ms) the temporal binding window to examine the temporal accuracy of movement execution. Participants synchronized their movement with the first stimulus in the cross-modal pair, either the visual or the tactile stimulus. Results showed significantly greater temporal accuracy when only one stimulus was presented inside the window and the second stimulus was outside it than when both stimuli were presented inside the window, with movement execution being more accurate when attending to the tactile stimulus. Overall, these findings indicate that there may be a modality-specific benefit to sensorimotor synchronization performance, such that tactile cues are weighted more strongly than visual cues because tactile information is more intrinsically linked to motor timing. Further, our findings indicate that the visual-tactile temporal binding window is related to the temporal accuracy of movement execution.
Affiliation(s)
- Michelle K Huntley
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- School of Psychology and Public Health, La Trobe University, Wodonga, Victoria, Australia
- An Nguyen
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- Matthew A Albrecht
- Western Australia Centre for Road Safety Research, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Perth, Western Australia, Australia

43
Wu S, Gao L, Fu J, Zhao C, Wang P. The Application of Virtual Simulation Technology in Scaling and Root Planing Teaching. Int Dent J 2024; 74:303-309. [PMID: 37973524 PMCID: PMC10988261 DOI: 10.1016/j.identj.2023.09.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 09/22/2023] [Accepted: 09/28/2023] [Indexed: 11/19/2023] Open
Abstract
BACKGROUND Virtual simulation (VS) technology has been widely utilised in various aspects of oral education. This study aimed to evaluate the impact of VS technology in a scaling and root planing (SRP) teaching programme and to explore an effective teaching approach. METHOD A total of 98 fourth-year undergraduates from the Guanghua School of Stomatology at Sun Yat-sen University were enrolled in this study and randomly assigned to either the VS teaching group or the traditional teaching (TT) group. All participants received SRP training before undergoing an operational examination. Subsequently, questionnaires were administered to both students and teachers involved in the programme to assess the teaching effect and the fidelity of the VS training system. An unpaired Student t test was used to analyse final test scores and calculus residual rates amongst students. RESULTS The overall residual rate of calculus in the VS group was significantly lower than that in the TT group (48.81% ± 13.50% vs 56.89% ± 13.68%, P < .01). The difference was particularly notable in posterior teeth, proximal surfaces, and deep pockets. Additionally, the VS group achieved higher final grades than the TT group (86.92 ± 6.10 vs 83.02 ± 6.05, P < .01). In terms of teaching effectiveness, the VS group gave higher ratings than the TT group, except in mastery of position, finger rests, and efficiency. CONCLUSIONS The implementation of VS technology improved students' performance in SRP teaching. Therefore, a novel integrated pedagogical approach that combines VS technology with traditional teaching could be further explored in future training programmes.
Affiliation(s)
- Shiwen Wu
- Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China
- Li Gao
- Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China; Department of Periodontology, Guanghua School and Hospital of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jiarun Fu
- Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China
- Chuanjiang Zhao
- Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China; Department of Periodontology, Guanghua School and Hospital of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China
- Panpan Wang
- Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China; Department of Periodontology, Guanghua School and Hospital of Stomatology, Sun Yat-sen University, Guangzhou, Guangdong, China

44
Jiang M, Zeng Z. Memristive Bionic Memory Circuit Implementation and Its Application in Multisensory Mutual Associative Learning Networks. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2024; 18:308-321. [PMID: 37831580 DOI: 10.1109/tbcas.2023.3324574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/15/2023]
Abstract
Memory is vital and indispensable for organisms and brain-inspired intelligence to gain complete sensation and cognition of the environment. In this work, a memristive bionic memory circuit inspired by the human memory model is proposed, which includes 1) a receptor and sensory neuron (SN), 2) a short-term memory (STM) module, and 3) a long-term memory (LTM) module. By leveraging the in-memory computing characteristic of memristors, various functions such as sensation, learning, forgetting, recall, consolidation, reconsolidation, retrieval, and reset are realized. In addition, a multisensory mutual associative learning network is constructed from several bionic memory units to memorize and associate sensory information of different modalities bidirectionally. Beyond association establishment, enhancement, and extinction, we also mimicked multisensory integration to capture the synthesis of information from different sensory channels. According to simulation results in PSPICE, the proposed circuit exhibits high robustness, low area overhead, and low power consumption. By combining associative memory with a human memory model, this work offers a possible direction for further research on associative learning networks.
45
de Kluis T, Romp S, Land-Zandstra AM. Science museum educators' views on object-based learning: The perceived importance of authenticity and touch. PUBLIC UNDERSTANDING OF SCIENCE (BRISTOL, ENGLAND) 2024; 33:325-342. [PMID: 37916587 PMCID: PMC10958754 DOI: 10.1177/09636625231202617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/03/2023]
Abstract
Museum educators play an important role in mediating visitors' museum experiences. We investigated the perspectives of science museum educators on the role of touching authentic objects and replicas in visitors' learning experiences during educational activities. We used a mixed-methods approach including surveys with 49 museum educators and interviews with 12 museum educators from several countries in Europe. Our findings indicate the importance of context when presenting museum visitors with objects. Participating museum educators based their choices for including authentic objects or replicas in educational activities more often on narrative and context than on the authenticity status of an object. In addition, educators used various definitions of authenticity, which may hinder the discussion about the topic within the field.
46
Senna I, Piller S, Martolini C, Cocchi E, Gori M, Ernst MO. Multisensory training improves the development of spatial cognition after sight restoration from congenital cataracts. iScience 2024; 27:109167. [PMID: 38414862 PMCID: PMC10897914 DOI: 10.1016/j.isci.2024.109167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 11/04/2023] [Accepted: 02/05/2024] [Indexed: 02/29/2024] Open
Abstract
Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).
Affiliation(s)
- Irene Senna
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
- Department of Psychology, Liverpool Hope University, Liverpool L16 9JD, UK
- Sophia Piller
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
- Chiara Martolini
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Elena Cocchi
- Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, 16145 Genova, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Marc O. Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany

47
Mayes WP, Gentle J, Ivanova M, Violante IR. Audio-visual multisensory integration and haptic perception are altered in adults with developmental coordination disorder. Hum Mov Sci 2024; 93:103180. [PMID: 38266441 DOI: 10.1016/j.humov.2024.103180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 12/06/2023] [Accepted: 01/13/2024] [Indexed: 01/26/2024]
Abstract
Developmental Coordination Disorder (DCD) is a movement disorder in which atypical sensory processing may underlie movement atypicality. However, whether altered sensory processing is domain-specific or global in nature remains an open question. Here, we measured, for the first time, different aspects of sensory processing and spatiotemporal integration in the same cohort of adult participants with DCD (N = 16), possible DCD (pDCD, N = 12), and neurotypical adults (NT, N = 28). Haptic perception was reduced in both the DCD and the extended DCD + pDCD groups when compared to NT adults. Audio-visual integration, measured using the sound-induced double-flash illusion, was reduced only in DCD participants, not in the extended DCD + pDCD group. While low-level sensory processing was altered in DCD, the more cognitive, higher-level ability to infer temporal dimensions from spatial information, and vice versa, as assessed with Tau-Kappa effects, was intact in DCD (and extended DCD + pDCD) participants. Both audio-visual integration and haptic perception difficulties correlated with the degree of self-reported DCD symptoms and were most apparent when comparing the DCD and NT groups directly, rather than the expanded DCD + pDCD group. The association of sensory difficulties with DCD symptoms suggests that perceptual differences play a role in motor difficulties in DCD via an underlying internal modelling mechanism.
Affiliation(s)
- William P Mayes
- School of Psychology, University of Surrey, Stag Hill, Surrey GU2 7XH, UK
- Judith Gentle
- School of Psychology, University of Surrey, Stag Hill, Surrey GU2 7XH, UK
- Mirela Ivanova
- School of Psychology, University of Surrey, Stag Hill, Surrey GU2 7XH, UK
- Ines R Violante
- School of Psychology, University of Surrey, Stag Hill, Surrey GU2 7XH, UK

48
Studer S, Kleinstäuber M, von Lersner U, Weise C. Increasing transcultural competence in clinical psychologists through a web-based training: study protocol for a randomized controlled trial. Trials 2024; 25:71. [PMID: 38243285 PMCID: PMC10799352 DOI: 10.1186/s13063-023-07878-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 12/15/2023] [Indexed: 01/21/2024] Open
Abstract
BACKGROUND In mental health care, the number of patients with diverse cultural backgrounds is growing. Nevertheless, evaluated training programs for transcultural competence are lacking. Barriers to engaging in transcultural therapy can be identified in patients as well as in therapists. Besides language barriers, clinical psychologists report insecurities, for example, fear of additional expenses when involving a language mediator, ethical concerns such as power imbalances, or fear of lacking knowledge or handling situations incorrectly when working with patients from other cultures. Divergent values and concepts of disease, prejudices, and stereotyping are also among the issues discussed as barriers to optimal psychotherapeutic care. The planned study aims to empower clinical psychologists to handle both their own barriers and those of their patients through a web-based training on transcultural competence. METHODS The training includes 6 modules, which are unlocked weekly. A total of N = 174 clinical psychologists are randomly assigned to two groups: the training group (TG) works through the complete training over 6 weeks, which includes a variety of practical exercises and self-reflections. In addition, participants receive weekly written feedback from a trained psychologist. The waitlist control group (WL) completes the training after the end of the waiting period (2 months after the end of the TG's training). The primary outcome is transcultural competence. Secondary outcomes consist of experiences in treating people from other cultures (number of patients, satisfaction and experience of competence in treatment, etc.). Data will be collected before and after the training as well as 2 and 6 months after the end of the training. DISCUSSION This randomized controlled trial tests the efficacy of, and satisfaction with, a web-based training on transcultural competence for German-speaking clinical psychologists. If validated successfully, the training could represent a time- and place-flexible training opportunity that could be integrated into the continuing education of clinical psychologists in the long term. TRIAL REGISTRATION DRKS00031105. Registered on 21 February 2023.
Affiliation(s)
- Selina Studer
- Department of Psychology, Division of Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Maria Kleinstäuber
- Department of Psychology, Emma Eccles Jones College of Education and Human Services, Utah State University, 6405 Old Main Hill, Logan, UT, 84321, USA
- Cornelia Weise
- Department of Psychology, Division of Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany

49
Nava E, Giraud M, Bolognini N. The emergence of the multisensory brain: From the womb to the first steps. iScience 2024; 27:108758. [PMID: 38230260 PMCID: PMC10790096 DOI: 10.1016/j.isci.2023.108758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2024] Open
Abstract
Human development is a multisensory process that starts in the womb. By integrating spontaneous neuronal activity with inputs from the external world, the developing brain learns to make sense of itself through multiple sensory experiences. Over the past ten years, advances in neuroimaging and electrophysiological techniques have allowed the exploration of the neural correlates of multisensory processing in the newborn and infant brain, thus adding an important piece of information to behavioral evidence of early sensitivity to multisensory events. Here, we review recent behavioral and neuroimaging findings to document the origins and early development of multisensory processing, showing in particular that the human brain appears naturally tuned to multisensory events at birth but requires multisensory experience to fully mature. We conclude by discussing emerging studies in preterm infants and highlighting the potential uses and benefits of multisensory interventions in promoting healthy development.
Affiliation(s)
- Elena Nava
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Michelle Giraud
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Nadia Bolognini
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy

50
Marticorena DCP, Wong QW, Browning J, Wilbur K, Jayakumar S, Davey PG, Seitz AR, Gardner JR, Barbour DL. Contrast response function estimation with nonparametric Bayesian active learning. J Vis 2024; 24:6. [PMID: 38197739 PMCID: PMC10790677 DOI: 10.1167/jov.24.1.6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 11/10/2023] [Indexed: 01/11/2024] Open
Abstract
Multidimensional psychometric functions can typically be estimated nonparametrically for greater accuracy or parametrically for greater efficiency. By recasting the estimation problem from regression to classification, however, powerful machine learning tools can be leveraged to provide an adjustable balance between accuracy and efficiency. Contrast sensitivity functions (CSFs) are behaviorally estimated curves that provide insight into both peripheral and central visual function. Because estimation can be impractically long, current clinical workflows must make compromises such as limited sampling across spatial frequency or strong assumptions on CSF shape. This article describes the development of the machine learning contrast response function (MLCRF) estimator, which quantifies the expected probability of success in performing a contrast detection or discrimination task. A machine learning CSF can then be derived from the MLCRF. Using simulated eyes created from canonical CSF curves and actual human contrast response data, the accuracy and efficiency of the machine learning contrast sensitivity function (MLCSF) was evaluated to determine its potential utility for research and clinical applications. With stimuli selected randomly, the MLCSF estimator converged slowly toward ground truth. With optimal stimulus selection via Bayesian active learning, convergence was nearly an order of magnitude faster, requiring only tens of stimuli to achieve reasonable estimates. Inclusion of an informative prior provided no consistent advantage to the estimator as configured. MLCSF achieved efficiencies on par with quickCSF, a conventional parametric estimator, but with systematically higher accuracy. Because MLCSF design allows accuracy to be traded off against efficiency, it should be explored further to uncover its full potential.
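The Bayesian active learning credited above with nearly an order-of-magnitude speed-up can be illustrated with a greedy expected-information-gain rule: present the stimulus that minimizes the expected entropy of the current posterior. The sketch below applies this rule to a one-dimensional threshold grid with a logistic psychometric model rather than the article's Gaussian-process MLCRF; the grid, slope, and candidate set are illustrative assumptions.

```python
import math

def entropy(dist):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in dist if q > 0.0)

def expected_posterior_entropy(post, thetas, c, beta=0.05):
    """Expected entropy of the threshold posterior after presenting
    contrast c, averaged over the two possible responses under a
    logistic 2AFC psychometric function (slope `beta` illustrative)."""
    def lik(t, correct):
        p = 0.5 + 0.5 / (1.0 + math.exp(-(c - t) / beta))
        return p if correct else 1.0 - p
    total = 0.0
    for correct in (True, False):
        joint = [q * lik(t, correct) for q, t in zip(post, thetas)]
        z = sum(joint)  # marginal probability of this response
        if z > 0.0:
            total += z * entropy([v / z for v in joint])
    return total

def next_stimulus(post, thetas, candidates):
    """Greedy Bayesian active learning: pick the candidate contrast
    that minimizes expected posterior entropy, i.e. maximizes the
    expected information gain about the threshold."""
    return min(candidates,
               key=lambda c: expected_posterior_entropy(post, thetas, c))
```

In a full session this selection step would alternate with a posterior update after each observed response, so that each new stimulus is chosen against the latest posterior.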
Affiliation(s)
- Dom C P Marticorena
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA
- Quinn Wai Wong
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA
- Jake Browning
- Department of Computer Science and Engineering, Washington University, St. Louis, MO, USA
- Ken Wilbur
- Department of Computer Science and Engineering, Washington University, St. Louis, MO, USA
- Samyukta Jayakumar
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Aaron R Seitz
- Department of Psychology, Northeastern University, Boston, MA, USA
- Jacob R Gardner
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Dennis L Barbour
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA