1
Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419] [PMCID: PMC10957398] [DOI: 10.1016/j.neuron.2023.05.008]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Timothy P H Sit
- Sainsbury Wellcome Centre, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
2
Williams AM, Angeloni CF, Geffen MN. Sound Improves Neuronal Encoding of Visual Stimuli in Mouse Primary Visual Cortex. J Neurosci 2023; 43:2885-2906. [PMID: 36944489] [PMCID: PMC10124961] [DOI: 10.1523/jneurosci.2444-21.2023]
Abstract
In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While concurrent sound can improve visual perception, the neuronal correlates of audiovisual integration are not fully understood. Specifically, it remains unclear whether neuronal firing patterns in the primary visual cortex (V1) of awake animals demonstrate similar sound-induced improvement in visual discriminability. Furthermore, presentation of sound is associated with movement in the subjects, but little is understood about whether and how sound-associated movement affects audiovisual integration in V1. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated the activity of 80% of light-responsive neurons, with 95% of neurons increasing activity when the auditory stimulus was present. A generalized linear model (GLM) revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, improving the neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain.
SIGNIFICANCE STATEMENT Sound and movement are both known to modulate visual responses in the primary visual cortex; however, sound-induced movement has largely remained unaccounted for as a potential confound in audiovisual studies in awake animals. Here, the authors found that sound and movement both modulate visual responses in an important visual brain area, the primary visual cortex, in distinct yet complementary ways. Furthermore, sound improved encoding of the visual stimulus even when accounting for movement. This study reconciles contrasting theories on the mechanism underlying audiovisual integration and establishes the primary visual cortex as a key brain region participating in tripartite sensory interactions.
Affiliation(s)
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Christopher F Angeloni
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
3
Rice NC, Frechette BP, Myers TM. Implementation of Manual and Automated Water Regulation for Rats (Rattus norvegicus) and Ferrets (Mustela putorius). J Am Assoc Lab Anim Sci 2021; 60:519-528. [PMID: 34452658] [DOI: 10.30802/aalas-jaalas-20-000158]
Abstract
Water regulation is a procedure that allows animals to consume water volumes equivalent to ad libitum access, but access is limited to specific time intervals (that is, water is not available outside of the designated access periods). Despite the relatively common use of water regulation in research, the implementation method is rarely detailed, stating only that water was available in the animal's home cage at specific times. For planned toxicologic assessments, we placed rats (n = 510) and ferrets (n = 16) on water regulation using both automated and manual methods. In testing our systems, we defined "successful implementation" as maintenance of appropriate weight gain and health status. An automated system that controlled water access to an entire rat rack was successful for most rats, but several rats failed to consume enough water even after 2 wk of experience. Manual methods of water regulation were successful in rats by either moving the cage to prevent access to the drinking valve or by placing/removing water bottles. An automated system that controlled water access from water bottles was implemented for ferrets and was maintained for up to 30 wk. Retrospective comparison of body weights to standard growth curves for both species showed that all animals grew normally despite water regulation. Differences in the systems and some species considerations provide insights into the key elements necessary for successful water regulation in rats and ferrets.
Affiliation(s)
- Nathaniel C Rice
- US Army Medical Research Institute of Chemical Defense, Gunpowder, Maryland
- Todd M Myers
- US Army Medical Research Institute of Chemical Defense, Gunpowder, Maryland
4
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446] [PMCID: PMC8078811] [DOI: 10.1371/journal.pone.0250281]
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous, audio-visual perceptual interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, which is easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings could boost the use of SSDs for rehabilitation.
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
5
Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain 2021; 14:13. [PMID: 33446258] [PMCID: PMC7809823] [DOI: 10.1186/s13041-021-00732-7]
Abstract
Cross-modal interaction (CMI) can significantly influence perceptual and decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies the brain employs to deal with different task contexts. To explore this, we examined neural activity in the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability for auditory discrimination, mainly by enhancing the response to the preferred cue. It also increased the number of neurons revealing a cue preference. If the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with a cross-modal enhancement effect best evoked in information-congruent multisensory trials. In a free-choice task, however, the majority of neurons failed to show a cross-modal enhancement effect or cue preference. These results indicate that CMI at the neuronal level is context-dependent, in a way that differs from what has been shown in previous studies.
Affiliation(s)
- Mengyao Zheng
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
- Jing Wu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
6
Xu X, Hanganu-Opatz IL, Bieler M. Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain. Front Neurorobot 2020; 14:7. [PMID: 32116637] [PMCID: PMC7034303] [DOI: 10.3389/fnbot.2020.00007]
Abstract
The emergence of cross-modal learning capabilities requires the interaction of neural areas accounting for sensory and cognitive processing. Convergence of multiple sensory inputs is observed in low-level sensory cortices including primary somatosensory (S1), visual (V1), and auditory cortex (A1), as well as in high-level areas such as prefrontal cortex (PFC). Evidence shows that local neural activity and functional connectivity between sensory cortices participate in cross-modal processing. However, little is known about the functional interplay between neural areas underlying sensory and cognitive processing required for cross-modal learning capabilities across life. Here we review our current knowledge on the interdependence of low- and high-level cortices for the emergence of cross-modal processing in rodents. First, we summarize the mechanisms underlying the integration of multiple senses and how cross-modal processing in primary sensory cortices might be modified by top-down modulation of the PFC. Second, we examine the critical factors and developmental mechanisms that account for the interaction between neuronal networks involved in sensory and cognitive processing. Finally, we discuss the applicability and relevance of cross-modal processing for brain-inspired intelligent robotics. An in-depth understanding of the factors and mechanisms controlling cross-modal processing might inspire the refinement of robotic systems by better mimicking neural computations.
Affiliation(s)
- Xiaxia Xu
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Malte Bieler
- Laboratory for Neural Computation, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
7
Kumpik DP, Campbell C, Schnupp JWH, King AJ. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. [PMID: 31802997] [PMCID: PMC6873890] [DOI: 10.3389/fnins.2019.01164]
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially-congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. 
Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Connor Campbell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
8
Yu M, Sun X, Tyler SR, Liang B, Swatek AM, Lynch TJ, He N, Yuan F, Feng Z, Rotti PG, Choi SH, Shahin W, Liu X, Yan Z, Engelhardt JF. Highly Efficient Transgenesis in Ferrets Using CRISPR/Cas9-Mediated Homology-Independent Insertion at the ROSA26 Locus. Sci Rep 2019; 9:1971. [PMID: 30760763] [PMCID: PMC6374392] [DOI: 10.1038/s41598-018-37192-4]
Abstract
The domestic ferret (Mustela putorius furo) has proven to be a useful species for modeling human genetic and infectious diseases of the lung and brain. However, biomedical research in ferrets has been hindered by the lack of rapid and cost-effective methods for genome engineering. Here, we utilized CRISPR/Cas9-mediated, homology-independent insertion at the ROSA26 "safe harbor" locus in ferret zygotes and created transgenic animals expressing a dual-fluorescent Cre-reporter system flanked by PhiC31 and Bxb1 integrase attP sites. Out of 151 zygotes injected with circular transgene-containing plasmid and Cas9 protein loaded with the ROSA26 intron-1 sgRNA, there were 23 births, of which 5 had targeted integration events (22% efficiency). The encoded tdTomato transgene was highly expressed in all tissues evaluated. Targeted integration was verified by PCR analyses, Southern blot, and germ-line transmission. Function of the ROSA26-CAG-LoxPtdTomatoStopLoxPEGFP (ROSA-TG) Cre-reporter was confirmed in primary cells following Cre expression. The PhiC31 and Bxb1 integrase attP sites flanking the transgene will also enable rapid directional insertion of any transgene without a size limitation at the ROSA26 locus. These methods and the model generated will greatly enhance biomedical research involving lineage tracing, the evaluation of stem cell therapy, and transgenesis in ferret models of human disease.
Affiliation(s)
- Miao Yu
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- College of Life Science, Ningxia University, Yinchuan, Ningxia 750021, China
- Xingshen Sun
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Scott R Tyler
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Bo Liang
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Anthony M Swatek
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Thomas J Lynch
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Nan He
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Feng Yuan
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Zehua Feng
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Pavana G Rotti
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Soon H Choi
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Weam Shahin
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- Xiaoming Liu
- College of Life Science, Ningxia University, Yinchuan, Ningxia 750021, China
- Ziying Yan
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
- John F Engelhardt
- Department of Anatomy and Cell Biology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
9
Improving Human–Computer Interface Design through Application of Basic Research on Audiovisual Integration and Amplitude Envelope. Multimodal Technologies and Interaction 2019. [DOI: 10.3390/mti3010004]
Abstract
Quality care for patients requires effective communication amongst medical teams. Increasingly, communication is required not only between team members themselves, but between members and the medical devices monitoring and managing patient well-being. Most human–computer interfaces use either auditory or visual displays, and despite significant experimentation, they still elicit well-documented concerns. Curiously, few interfaces explore the benefits of multimodal communication, despite extensive documentation of the brain's sensitivity to multimodal signals. New approaches built on insights from basic audiovisual integration research hold the potential to improve future human–computer interfaces. In particular, recent discoveries regarding the acoustic property of amplitude envelope illustrate that it can enhance audiovisual integration while also lowering annoyance. Here, we share key insights from recent research with the potential to inform applications related to human–computer interface design. Ultimately, this could lead to a cost-effective way to improve communication in medical contexts—with significant implications for both human health and the burgeoning medical device industry.
10
Meijer GT, Pie JL, Dolman TL, Pennartz CMA, Lansink CS. Audiovisual Integration Enhances Stimulus Detection Performance in Mice. Front Behav Neurosci 2018; 12:231. [PMID: 30337861] [PMCID: PMC6180166] [DOI: 10.3389/fnbeh.2018.00231]
Abstract
The detection of objects in the external world improves when humans and animals integrate object features of multiple sensory modalities. Behavioral and neuronal mechanisms underlying multisensory stimulus detection are poorly understood, mainly because they have not been investigated with suitable behavioral paradigms. Such behavioral paradigms should (i) elicit a robust multisensory gain, (ii) incorporate systematic calibration of stimulus amplitude to the sensory capacities of the individual subject, (iii) yield a high trial count, and (iv) be easily compatible with a large variety of neurophysiological recording techniques. We developed an audiovisual stimulus detection task for head-fixed mice which meets all of these critical behavioral constraints. Behavioral data obtained with this task indicated a robust increase in detection performance of multisensory stimuli compared with unisensory cues, which was maximal when both stimulus constituents were presented at threshold intensity. The multisensory behavioral effect was associated with a change in the perceptual performance which consisted of two components. First, the visual and auditory perceptual systems increased their sensitivity meaning that low intensity stimuli were more often detected. Second, enhanced acuity enabled the systems to better classify whether there was a stimulus or not. Fitting our data to signal detection models revealed that the multisensory gain was more likely to be achieved by integration of sensory signals rather than by stimulus redundancy or competition. This validated behavioral paradigm can be exploited to reliably investigate the neuronal correlates of multisensory stimulus detection at the level of single neurons, microcircuits, and larger perceptual systems.
Affiliation(s)
- Guido T. Meijer
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Jean L. Pie
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Thomas L. Dolman
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Cyriel M. A. Pennartz
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Carien S. Lansink
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
11
Lohse M, Bajo VM, King AJ. Development, organization and plasticity of auditory circuits: Lessons from a cherished colleague. Eur J Neurosci 2018; 49:990-1004. [PMID: 29804304] [PMCID: PMC6519211] [DOI: 10.1111/ejn.13979]
Abstract
Ray Guillery was a neuroscientist known primarily for his ground-breaking studies on the development of the visual pathways and subsequently on the nature of thalamocortical processing loops. The legacy of his work, however, extends well beyond the visual system. Thanks to Ray Guillery's pioneering anatomical studies, the ferret has become a widely used animal model for investigating the development and plasticity of sensory processing. This includes our own work on the auditory system, where experiments in ferrets have revealed the role of sensory experience during development in shaping the neural circuits responsible for sound localization, as well as the capacity of the mature brain to adapt to changes in inputs resulting from hearing loss. Our research has also built on Ray Guillery's ideas about the possible functions of the massive descending projections that link sensory areas of the cerebral cortex to the thalamus and other subcortical targets, by demonstrating a role for corticothalamic feedback in the perception of complex sounds and for corticollicular projection neurons in learning to accommodate altered auditory spatial cues. Finally, his insights into the organization and functions of transthalamic corticocortical connections have inspired a raft of research, including by our own laboratory, which has attempted to identify how information flows through the thalamus.
Affiliation(s)
- Michael Lohse
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
12
Henschke JU, Oelschlegel AM, Angenstein F, Ohl FW, Goldschmidt J, Kanold PO, Budinger E. Early sensory experience influences the development of multisensory thalamocortical and intracortical connections of primary sensory cortices. Brain Struct Funct 2018; 223:1165-1190. [PMID: 29094306 PMCID: PMC5871574 DOI: 10.1007/s00429-017-1549-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Received: 05/11/2017] [Accepted: 09/29/2017] [Indexed: 12/21/2022]
Abstract
The nervous system integrates information from multiple senses. This multisensory integration already occurs in primary sensory cortices via direct thalamocortical and corticocortical connections across modalities. In humans, sensory loss from birth results in functional recruitment of the deprived cortical territory by the spared senses but the underlying circuit changes are not well known. Using tracer injections into primary auditory, somatosensory, and visual cortex within the first postnatal month of life in a rodent model (Mongolian gerbil) we show that multisensory thalamocortical connections emerge before corticocortical connections but mostly disappear during development. Early auditory, somatosensory, or visual deprivation increases multisensory connections via axonal reorganization processes mediated by non-lemniscal thalamic nuclei and the primary areas themselves. Functional single-photon emission computed tomography of regional cerebral blood flow reveals altered stimulus-induced activity and higher functional connectivity specifically between primary areas in deprived animals. Together, we show that intracortical multisensory connections are formed as a consequence of sensory-driven multisensory thalamocortical activity and that spared senses functionally recruit deprived cortical areas by an altered development of sensory thalamocortical and corticocortical connections. The functional-anatomical changes after early sensory deprivation have translational implications for the therapy of developmental hearing loss, blindness, and sensory paralysis and might also underlie developmental synesthesia.
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Anja M Oelschlegel
- Research Group Neuropharmacology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Anatomy, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Frank Angenstein
- Functional Neuroimaging Group, German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Jürgen Goldschmidt
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, MD, 20742, USA
- Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
13
Beker S, Foxe JJ, Molholm S. Ripe for solution: Delayed development of multisensory processing in autism and its remediation. Neurosci Biobehav Rev 2018; 84:182-192. [PMID: 29162518 PMCID: PMC6389331 DOI: 10.1016/j.neubiorev.2017.11.008] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Received: 07/25/2017] [Revised: 11/09/2017] [Accepted: 11/13/2017] [Indexed: 12/24/2022]
Abstract
Difficulty integrating inputs from different sensory sources is commonly reported in individuals with Autism Spectrum Disorder (ASD). Accumulating evidence consistently points to altered patterns of behavioral reactions and neural activity when individuals with ASD observe or act upon information arriving through multiple sensory systems. For example, impairments in the integration of seen and heard speech appear to be particularly acute, with obvious implications for interpersonal communication. Here, we explore the literature on multisensory processing in autism with a focus on developmental trajectories. While much remains to be understood, some consistent observations emerge. Broadly, sensory integration deficits are found in children with an ASD whereas these appear to be much ameliorated, or even fully recovered, in older teenagers and adults on the spectrum. This protracted delay in the development of multisensory processing raises the possibility of applying early intervention strategies focused on multisensory integration, to accelerate resolution of these functions. We also consider how dysfunctional cross-sensory oscillatory neural communication may be one key pathway to impaired multisensory processing in ASD.
Affiliation(s)
- Shlomit Beker
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- John J Foxe
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States; The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Sophie Molholm
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States; The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States