1
Billock VA, Dougherty K, Kinney MJ, Preston AM, Winterbottom MD. Multisensory-inspired modeling and neural correlates for two key binocular interactions. Sci Rep 2024; 14:11269. [PMID: 38760410] [PMCID: PMC11101479] [DOI: 10.1038/s41598-024-60926-6]
Abstract
Most binocular vision models assume that the two eyes sum incompletely. However, some facilitatory cortical neurons fire for only one eye, but amplify their firing rates if both eyes are stimulated. These 'binocular gate' neurons closely resemble subthreshold multisensory neurons. Binocular amplification for binocular gate neurons follows a power law, with a compressive exponent. Unexpectedly, this rule also applies to facilitatory true binocular neurons; although driven by either eye, binocular neurons are well modeled as gated amplifiers of their strongest monocular response, if both eyes are stimulated. Psychophysical data follows the same power law as the neural data, with a similar exponent; binocular contrast sensitivity can be modeled as a gated amplification of the more sensitive eye. These results resemble gated amplification phenomena in multisensory integration, and other non-driving modulatory interactions that affect sensory processing. Models of incomplete summation seem unnecessary for V1 facilitatory neurons or contrast sensitivity. However, binocular combination of clearly visible monocular stimuli follows Schrödinger's nonlinear magnitude-weighted average. We find that putatively suppressive binocular neurons closely follow Schrödinger's equation. Similar suppressive multisensory neurons are well documented but seldom studied. Facilitatory binocular neurons and mildly suppressive binocular neurons are likely neural correlates of binocular sensitivity and binocular appearance respectively.
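The gated-amplification rule summarized in this abstract can be sketched numerically. The sketch below is only illustrative: the `gain` and `exponent` values are placeholders, not the fitted values reported in the paper.

```python
def gated_binocular_response(left, right, gain=1.4, exponent=0.9):
    """Hedged sketch of a 'binocular gate' cell: it is driven by its
    stronger monocular input alone, but if both eyes are stimulated the
    response is a power-law amplification of that dominant input.
    gain and exponent are illustrative placeholders, not fitted values."""
    dominant = max(left, right)
    if min(left, right) <= 0:          # gate closed: purely monocular drive
        return dominant
    return gain * dominant ** exponent  # gate open: compressive amplification

# Monocular stimulation leaves the response unchanged;
# stimulating the second eye amplifies the dominant response.
mono = gated_binocular_response(20.0, 0.0)
bino = gated_binocular_response(20.0, 5.0)
```

With a compressive exponent (below 1), the amplification shrinks as the dominant response grows, which is why the gain term is needed for net facilitation in this toy version.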
Grants
- 1R01EY027402-02 U.S. Department of Health & Human Services | NIH | National Eye Institute (NEI)
- T32EY007135 U.S. Department of Health & Human Services | NIH | National Eye Institute (NEI)
- P30EY008126 U.S. Department of Health & Human Services | NIH | National Eye Institute (NEI)
- US Navy Aerospace Medical Research Laboratory, Leidos, Dayton, OH, United States
- Princeton University, Princeton Neuroscience Institute, Princeton, NJ, United States
- Naval Air Warfare Center, Human Systems Engineering Department, Patuxent River, MD, United States
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Vision and Acceleration, Wright-Patterson AFB
- US Air Force Research Laboratory, Wright-Patterson AFB, OH, United States
- Office of the Assistant Secretary of Defense, Dp_67.2_17_J9_1757 work unit H1814.
Affiliation(s)
- Vincent A Billock
- Leidos, Inc. at the Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson AFB, OH, USA
- Kacie Dougherty
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Micah J Kinney
- Naval Air Warfare Center, NAWCAD, Patuxent River, MD, USA
- Adam M Preston
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson AFB, OH, USA
- Marc D Winterbottom
- Air Force Research Laboratory, 711th Human Performance Wing, Wright-Patterson AFB, OH, USA
2
Billock VA, Kinney MJ, Schnupp JW, Meredith MA. A simple vector-like law for perceptual information combination is also followed by a class of cortical multisensory bimodal neurons. iScience 2021; 24:102527. [PMID: 34142039] [PMCID: PMC8188495] [DOI: 10.1016/j.isci.2021.102527]
Abstract
An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal response. In contrast to cortex, audiovisual data from superior colliculus resembles the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found a simple power law amplification rule is followed for perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
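The Minkowski combination rule discussed in this abstract has a compact form; the sketch below shows the two exponents the abstract contrasts. Parameter names are illustrative.

```python
def minkowski_combination(s_audio, s_visual, m=2.0):
    """Minkowski combination of unisensory sensitivities:
    (s_a^m + s_v^m)^(1/m). m = 2 gives the vector-like ('Pythagorean')
    rule the abstract links to cortical bimodal neurons; m = 1 gives the
    'City-Block' rule associated with superior colliculus."""
    return (s_audio ** m + s_visual ** m) ** (1.0 / m)

vector_rule = minkowski_combination(3.0, 4.0, m=2.0)  # 5.0
city_block = minkowski_combination(3.0, 4.0, m=1.0)   # 7.0
```

Note that the City-Block rule always predicts a larger combined value than the vector rule for the same inputs, so the choice of exponent matters when judging whether observed enhancement exceeds a benchmark.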
Affiliation(s)
- Vincent A. Billock
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Micah J. Kinney
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Naval Air Warfare Center, NAWCAD, Patuxent River, MD 20670, USA
- Jan W.H. Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong, China
- M. Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA 23298, USA
3
Rosskothen-Kuhl N, Buck AN, Li K, Schnupp JW. Microsecond interaural time difference discrimination restored by cochlear implants after neonatal deafness. eLife 2021; 10:e59300. [PMID: 33427644] [PMCID: PMC7815311] [DOI: 10.7554/eLife.59300]
Abstract
Spatial hearing in cochlear implant (CI) patients remains a major challenge, with many early-deaf users reported to have no measurable sensitivity to interaural time differences (ITDs). Deprivation of binaural experience during an early critical period is often hypothesized to be the cause of this shortcoming. However, we show that neonatally deafened (ND) rats provided with precisely synchronized CI stimulation in adulthood can be trained to lateralize ITDs with essentially normal behavioral thresholds near 50 μs. Furthermore, comparable ND rats show high physiological sensitivity to ITDs immediately after binaural implantation in adulthood. Our finding that ND-CI rats achieved very good behavioral ITD thresholds, while prelingually deaf human CI patients often fail to develop useful ITD sensitivity, raises urgent questions about whether shortcomings in technology or treatment, rather than missing input during early development, lie behind the usually poor binaural outcomes for current CI patients.
Affiliation(s)
- Nicole Rosskothen-Kuhl
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China; Neurobiological Research Laboratory, Section for Clinical and Experimental Otology, University Medical Center Freiburg, Freiburg, Germany
- Alexa N Buck
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Kongyan Li
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Jan W H Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China; CityU Shenzhen Research Institute, Shenzhen, China
4
Billock VA, Kinney M, Meredith MA. Audiovisual Enhanced Sensitivity: Both Psychophysical and Neural Data Follow the Same Combination Rule. J Vis 2019. [DOI: 10.1167/19.15.34]
Affiliation(s)
- Micah Kinney
- Naval Aerospace Medical Research Laboratory, NAMRU-D
- M. Alex Meredith
- Department of Anatomy & Neurobiology, Virginia Commonwealth University
5
Li K, Chan CHK, Rajendran VG, Meng Q, Rosskothen-Kuhl N, Schnupp JWH. Microsecond sensitivity to envelope interaural time differences in rats. J Acoust Soc Am 2019; 145:EL341. [PMID: 31153346] [DOI: 10.1121/1.5099164]
Abstract
Currently, there is controversy around whether rats can use interaural time differences (ITDs) to localize sound. Here, naturalistic pulse train stimuli were used to evaluate the rat's sensitivity to onset and ongoing ITDs using a two-alternative forced choice sound lateralization task. Pulse rates between 50 Hz and 4.8 kHz with rectangular or Hanning windows were delivered with ITDs between ±175 μs over a near-field acoustic setup. Similar to other mammals, rats performed with 75% accuracy at ∼50 μs ITD, demonstrating that rats are highly sensitive to envelope ITDs.
Affiliation(s)
- Kongyan Li
- Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Chloe H K Chan
- Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Vani G Rajendran
- Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Qinglin Meng
- Acoustics Lab, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China
- Nicole Rosskothen-Kuhl
- Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Jan W H Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
6
Pannunzi M, Pérez-Bellido A, Pereda-Baños A, López-Moliner J, Deco G, Soto-Faraco S. Deconstructing multisensory enhancement in detection. J Neurophysiol 2014; 113:1800-18. [PMID: 25520431] [DOI: 10.1152/jn.00341.2014]
Abstract
The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performance surpasses the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for performance in multimodal detection tasks but for which surpassing of the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
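The probabilistic summation benchmark the abstract refers to can be written down in a few lines; the example probabilities below are illustrative, not data from the study.

```python
def probability_summation(p_audio, p_tactile):
    """No-interaction benchmark: the stimulus counts as detected if at
    least one of two independent unisensory channels detects it.
    P(detect) = 1 - P(both channels miss)."""
    return 1.0 - (1.0 - p_audio) * (1.0 - p_tactile)

# Two channels at 60% correct each already predict 84% bimodal
# detection with no cross-modal interaction at all.
benchmark = probability_summation(0.6, 0.6)
```

This is the abstract's point in miniature: sizeable bimodal improvement is predicted even under strict channel independence, so merely exceeding unisensory performance does not by itself demonstrate early cross-modal convergence.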
Affiliation(s)
- Joan López-Moliner
- Universitat de Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behaviour (IR3C), Barcelona, Spain
- Gustavo Deco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Salvador Soto-Faraco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
7
Jaekl P, Pérez-Bellido A, Soto-Faraco S. On the 'visual' in 'audio-visual integration': a hypothesis concerning visual pathways. Exp Brain Res 2014; 232:1631-8. [PMID: 24699769] [DOI: 10.1007/s00221-014-3927-8]
Abstract
It is now widely accepted that crossmodal interactions can enhance sensory processing. Such benefits are often exemplified by the neural response amplification reported in physiological studies in animals, which parallels behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in light of the functional and anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance is well established in visual research but often overlooked in crossmodal research. We contend that more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.
Affiliation(s)
- Philip Jaekl
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
8
Bizley JK, Walker KMM, King AJ, Schnupp JWH. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions. J Acoust Soc Am 2013; 133:365-76. [PMID: 23297909] [PMCID: PMC3783993] [DOI: 10.1121/1.4768798]
Abstract
Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford OX1 3PT, United Kingdom
9
Wilson EC, Reed CM, Braida LD. Integration of auditory and vibrotactile stimuli: effects of frequency. J Acoust Soc Am 2010; 127:3044-59. [PMID: 21117754] [PMCID: PMC2882664] [DOI: 10.1121/1.3365318]
Abstract
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality.
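The two combination models contrasted in this abstract (and in entry 11 below) differ only in how the unisensory d' values are summed. A minimal sketch, with illustrative d' inputs:

```python
def algebraic_sum(d_audio, d_tactile):
    """Algebraic-sum prediction: unisensory d' values add linearly,
    as expected when both signals feed a single integrating channel."""
    return d_audio + d_tactile

def pythagorean_sum(d_audio, d_tactile):
    """Pythagorean-sum prediction: d' values combine orthogonally
    (vector sum), as expected for independent channels with
    independent noise."""
    return (d_audio ** 2 + d_tactile ** 2) ** 0.5
```

Because the algebraic sum always exceeds the Pythagorean sum for two positive inputs, the finding that closely spaced frequencies fit the algebraic model implies stronger integration within the Pacinian range than across widely spaced frequencies.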
Affiliation(s)
- E Courtenay Wilson
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
10
Wilson EC, Braida LD, Reed CM. Perceptual interactions in the loudness of combined auditory and vibrotactile stimuli. J Acoust Soc Am 2010; 127:3038-43. [PMID: 21117753] [PMCID: PMC2882663] [DOI: 10.1121/1.3377116]
Abstract
The loudness of auditory (A), tactile (T), and auditory-tactile (A+T) stimuli was measured at supra-threshold levels. Auditory stimuli were pure tones presented binaurally through headphones; tactile stimuli were sinusoids delivered through a single-channel vibrator to the left middle fingertip. All stimuli were presented together with a broadband auditory noise. The A and T stimuli were presented at levels that were matched in loudness to that of the 200-Hz auditory tone at 25 dB sensation level. The 200-Hz auditory tone was then matched in loudness to various combinations of auditory and tactile stimuli (A+T), and purely auditory stimuli (A+A). The results indicate that the matched intensity of the 200-Hz auditory tone is less when the A+T and A+A stimuli are close together in frequency than when they are separated by an octave or more. This suggests that A+T integration may operate in a manner similar to that found in auditory critical band studies, further supporting a strong frequency relationship between the auditory and somatosensory systems.
Affiliation(s)
- E Courtenay Wilson
- Research Laboratory of Electronics, Massachusetts Institute of Technology, and Harvard-MIT Division of Health Sciences and Technology, Speech and Hearing Bioscience and Technology Program, Cambridge, Massachusetts 02139, USA
11
Wilson EC, Reed CM, Braida LD. Integration of auditory and vibrotactile stimuli: effects of phase and stimulus-onset asynchrony. J Acoust Soc Am 2009; 126:1960-74. [PMID: 19813808] [PMCID: PMC2771057] [DOI: 10.1121/1.3204305]
Abstract
The perceptual integration of 250 Hz, 500 ms vibrotactile and auditory tones was studied in detection experiments as a function of (1) relative phase and (2) temporal asynchrony of the tone pulses. Vibrotactile stimuli were delivered through a single-channel vibrator to the left middle fingertip and auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. The vibrotactile and auditory stimulus levels used each yielded 63%-77%-correct unimodal detection performance in a 2-I, 2-AFC task. Results for combined vibrotactile and auditory detection indicated that (1) performance improved for synchronous presentation, (2) performance was not affected by the relative phase of the auditory and tactile sinusoidal stimuli, and (3) performance for non-overlapping stimuli improved only if the tactile stimulus preceded the auditory. The results are generally more consistent with a "Pythagorean Sum" model than with either an "Algebraic Sum" or an "Optimal Single-Channel" Model of perceptual integration. Thus, certain combinations of auditory and tactile signals result in significant integrative effects. The lack of phase effect suggests an envelope rather than fine-structure operation for integration. The effects of asynchronous presentation of the auditory and tactile stimuli are consistent with time constants deduced from single-modality masking experiments.
Affiliation(s)
- E Courtenay Wilson
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
12
Gonzalo-Fonrodona I. Functional gradients through the cortex, multisensory integration and scaling laws in brain dynamics. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2008.04.055]
13
Andersen TS, Mamassian P. Audiovisual integration of stimulus transients. Vision Res 2008; 48:2537-44. [DOI: 10.1016/j.visres.2008.08.018]