1
Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303; PMCID: PMC10404926; DOI: 10.1098/rstb.2022.0334.
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is only now emerging, recent remarkable work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This challenges the brain to integrate cues across time and with other sensory modalities such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in posterior and frontal/prefrontal regions, helps revise the conventional view of how signals from different sensory modalities may be processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
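The optimality benchmark invoked in this literature is the reliability-weighted (maximum-likelihood) cue-combination rule. A minimal sketch in Python; the function name and example values are illustrative, not taken from the paper:

```python
import numpy as np

def integrate_heading(mu_vest, sigma_vest, mu_vis, sigma_vis):
    """Reliability-weighted combination of two independent Gaussian
    heading estimates (vestibular and visual). This is the standard
    maximum-likelihood benchmark, not the paper's own code."""
    w_vest = sigma_vis**2 / (sigma_vest**2 + sigma_vis**2)
    mu = w_vest * mu_vest + (1 - w_vest) * mu_vis
    sigma = np.sqrt((sigma_vest**2 * sigma_vis**2) /
                    (sigma_vest**2 + sigma_vis**2))
    return mu, sigma

# A 2 deg vestibular estimate (sigma = 4 deg) combined with a 0 deg
# visual estimate (sigma = 2 deg): the combined estimate is pulled
# toward the more reliable visual cue and is less variable than either.
print(integrate_heading(2.0, 4.0, 0.0, 2.0))  # -> (0.4, ~1.79)
```

The combined variance is always below the smaller unimodal variance, which is the behavioural signature of optimal integration tested in this model system.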
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
2
Kang SJ, Liu S, Ye M, Kim DI, Pao GM, Copits BA, Roberts BZ, Lee KF, Bruchas MR, Han S. A central alarm system that gates multi-sensory innate threat cues to the amygdala. Cell Rep 2022; 40:111222. PMID: 35977501; PMCID: PMC9420642; DOI: 10.1016/j.celrep.2022.111222.
Abstract
Perception of threats is essential for survival. Previous findings suggest that parallel pathways independently relay innate threat signals from different sensory modalities to multiple brain areas, such as the midbrain and hypothalamus, for immediate avoidance. Yet little is known about whether and how multi-sensory innate threat cues are integrated and conveyed from each sensory modality to the amygdala, a critical brain area for threat perception and learning. Here, we report that neurons expressing calcitonin gene-related peptide (CGRP) in the parvocellular subparafascicular nucleus in the thalamus and external lateral parabrachial nucleus in the brainstem respond to multi-sensory threat cues from various sensory modalities and relay negative valence to the lateral and central amygdala, respectively. Both CGRP populations and their amygdala projections are required for multi-sensory threat perception and aversive memory formation. The identification of unified innate threat pathways may provide insights into developing therapeutic candidates for innate fear-related disorders.
Affiliation(s)
- Sukjae J Kang
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Shijia Liu
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA
- Mao Ye
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Dong-Il Kim
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Gerald M Pao
- Molecular and Cellular Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495, Japan
- Bryan A Copits
- Washington University Pain Center, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Anesthesiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Benjamin Z Roberts
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
- Kuo-Fen Lee
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Michael R Bruchas
- Center of Excellence in the Neurobiology of Addiction, Pain, and Emotion, Departments of Anesthesiology and Pain Medicine, and Pharmacology, University of Washington, Seattle, WA 98195, USA
- Sung Han
- Peptide Biology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
3
Mazuski C, O'Keefe J. Representation of ethological events by basolateral amygdala neurons. Cell Rep 2022; 39:110921. PMID: 35675779; PMCID: PMC9638002; DOI: 10.1016/j.celrep.2022.110921.
Abstract
The accurate interpretation of ethologically relevant stimuli is crucial for survival. While basolateral amygdala (BLA) neuronal responses during fear conditioning are well studied, little is known about how BLA neurons respond during naturalistic events. We recorded from the rat BLA during interaction with ethological stimuli: male or female rats, a moving toy, and rice. Forty-two percent of the cells reliably respond to at least one stimulus, with over half of these exclusively identifying one of the four stimulus classes. In addition to activation during interaction with their preferred stimulus, these cells signal micro-behavioral interactions like social contact. After stimulus removal, firing activity persists in 30% of responsive cells for several minutes. At the micro-circuit level, information flows from highly tuned event-specific neurons to less specific neurons, and connection strength increases after the event. We propose that individual BLA neurons identify specific ethological events, with event-specific neurons driving circuit-wide activity during and after salient events.
Highlights:
- Basolateral amygdala (BLA) neurons respond selectively to salient stimuli
- After activation, BLA neurons can be modulated by the behavioral microstructure
- Firing persists in some BLA neurons long after the removal of the eliciting stimulus
- In the BLA micro-circuit, information flowed from more tuned to less tuned neurons
Affiliation(s)
- Cristina Mazuski
- Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London W1T 4JG, UK
- John O'Keefe
- Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London W1T 4JG, UK; Department of Cell and Developmental Biology, University College London, London WC1E 6BT, UK
4
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
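One family of signal-processing approaches discussed for haptic devices is to extract the sound's low-frequency amplitude envelope and present it on a vibrotactile carrier within the skin's sensitive range (roughly 100-250 Hz). A minimal sketch under that assumption; the carrier and cutoff values are illustrative, not a specific device's algorithm:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def audio_to_haptic(audio, fs, carrier_hz=150.0, cutoff_hz=20.0):
    """Amplitude-modulate a vibrotactile carrier with the audio
    envelope -- one envelope-based mapping, not a published device's
    processing chain."""
    env = np.abs(hilbert(audio))              # instantaneous amplitude
    b, a = butter(2, cutoff_hz / (fs / 2))    # low-pass to smooth it
    env = filtfilt(b, a, env)
    t = np.arange(len(audio)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

# Example: a 440 Hz tone with a 3 Hz tremolo becomes a 150 Hz vibration
# whose strength follows the tremolo.
fs = 16_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
drive = audio_to_haptic(audio, fs)
```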
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
5
Buchwald D, Scherberger H. Visually and Tactually Guided Grasps Lead to Different Neuronal Activity in Non-human Primates. Front Neurosci 2021; 15:679910. PMID: 34349616; PMCID: PMC8326571; DOI: 10.3389/fnins.2021.679910.
Abstract
Movements are defining characteristics of all behaviors. Animals walk around, move their eyes to explore the world, or touch structures to learn more about them. So far we have only a basic understanding of how the brain generates movements, especially of how different areas of the brain interact with each other. In this study we investigated the influence of sensory object information on grasp planning in four different brain areas involved in vision, touch, movement planning, and movement generation in the parietal, somatosensory, premotor and motor cortex. We trained one monkey to grasp objects that he either saw or touched beforehand while continuously recording neural spiking activity with chronically implanted floating multi-electrode arrays. The animal was instructed to sit in the dark and either look at a briefly illuminated object or reach out and explore the object with his hand in the dark before lifting it up. In a first analysis we confirmed that the animal not only memorizes the object in both tasks, but also applies an object-specific grip type, independent of the sensory modality. In the neuronal population, we found a significant difference in the number of tuned units for the two sensory modalities during grasp planning that persisted into grasp execution. These differences were sufficient to enable a classifier to decode the object and sensory modality in single trials exclusively from neural population activity. These results give valuable insights into how different brain areas contribute to the preparation of grasping movements and how different sensory streams can lead to distinct neural activity while still resulting in the same action execution.
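The single-trial decoding result can be illustrated with a cross-validated linear classifier applied to population firing rates. A sketch on synthetic data; the population size, tuning strength and classifier choice are assumptions, not the study's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units = 200, 64                 # hypothetical recording
modality = rng.integers(0, 2, n_trials)     # 0 = visual, 1 = tactile cue
rates = rng.normal(10.0, 2.0, (n_trials, n_units))
rates[modality == 1, :8] += 3.0             # a few units tuned to modality

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, rates, modality, cv=5)
print(f"single-trial modality decoding: {acc.mean():.2f} accuracy")
```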
Affiliation(s)
- Daniela Buchwald
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
- Hansjörg Scherberger
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
6
Buchwald D, Schaffelhofer S, Dörge M, Dann B, Scherberger H. A Turntable Setup for Testing Visual and Tactile Grasping Movements in Non-human Primates. Front Behav Neurosci 2021; 15:648483. PMID: 34113241; PMCID: PMC8185519; DOI: 10.3389/fnbeh.2021.648483.
Abstract
Grasping movements are among the most common movements primates perform every day. They are important for social interactions as well as for picking up objects or food. Usually these grasping movements are guided by vision, but proprioceptive and haptic inputs contribute greatly. Since grasping behaviors are common and easy to motivate, they represent an ideal task for understanding the role of different brain areas during the planning and execution of complex voluntary movements in primates. For experimental purposes, a stable and repeatable presentation of the same object, as well as variation of objects, is important for understanding the neural control of movement generation. This is even more the case when investigating the role of different senses in movement planning, where objects need to be presented in specific sensory modalities. We developed a turntable setup for non-human primates (macaque monkeys) to investigate visually and tactually guided grasping movements, with an option to exchange objects easily. The setup consists of a turntable that holds six different objects and can itself be exchanged easily during the experiment to increase the number of presented objects. The object turntable is connected to a stepper motor through a belt system to automate rotation and hence object presentation. By increasing the distance between the turntable and the stepper motor, metallic components of the stepper motor are kept at a distance from the actual recording setup, which allows using a magnetic-based data glove to track hand kinematics. During task execution, the animal sits in the dark and is instructed to grasp the object in front of it. Options to turn on a light above the object allow for visual presentation, while the object can also remain in the dark for exclusive tactile exploration. A red LED, projected onto the object by a one-way mirror, serves as a grasp cue instructing the animal to start grasping the object. By comparing kinematic data from the magnetic-based data glove with simultaneously recorded neural signals, this setup enables the systematic investigation of neural population activity involved in the control of hand grasping movements.
Affiliation(s)
- Daniela Buchwald
- Neuroscience Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
- Matthias Dörge
- Neuroscience Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Benjamin Dann
- Neuroscience Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Hansjörg Scherberger
- Neuroscience Laboratory, Deutsches Primatenzentrum GmbH, Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
7
Currier TA, Matheson AMM, Nagel KI. Encoding and control of orientation to airflow by a set of Drosophila fan-shaped body neurons. eLife 2020; 9:e61510. PMID: 33377868; PMCID: PMC7793622; DOI: 10.7554/eLife.61510.
Abstract
The insect central complex (CX) is thought to underlie goal-oriented navigation but its functional organization is not fully understood. We recorded from genetically-identified CX cell types in Drosophila and presented directional visual, olfactory, and airflow cues known to elicit orienting behavior. We found that a group of neurons targeting the ventral fan-shaped body (ventral P-FNs) are robustly tuned for airflow direction. Ventral P-FNs did not generate a 'map' of airflow direction. Instead, cells in each hemisphere were tuned to 45° ipsilateral, forming a pair of orthogonal bases. Imaging experiments suggest that ventral P-FNs inherit their airflow tuning from neurons that provide input from the lateral accessory lobe (LAL) to the noduli (NO). Silencing ventral P-FNs prevented flies from selecting appropriate corrective turns following changes in airflow direction. Our results identify a group of CX neurons that robustly encode airflow direction and are required for proper orientation to this stimulus.
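The 'pair of orthogonal bases' has a compact computational reading: two channels cosine-tuned 45° to either side of the midline are sufficient to reconstruct any airflow direction via a population-vector readout. A toy sketch; the cosine tuning and the decoder are illustrative assumptions, not the paper's model:

```python
import numpy as np

prefs = np.deg2rad([-45.0, 45.0])   # preferred directions, one per hemisphere

def pfn_responses(airflow_deg, gain=1.0, baseline=1.0):
    """Hypothetical cosine tuning of the two ventral P-FN groups."""
    return baseline + gain * np.cos(np.deg2rad(airflow_deg) - prefs)

def decode_airflow(responses, baseline=1.0):
    """Population-vector readout: sum each channel's baseline-subtracted
    response along its preferred direction."""
    x = np.sum((responses - baseline) * np.cos(prefs))
    y = np.sum((responses - baseline) * np.sin(prefs))
    return np.rad2deg(np.arctan2(y, x))

for d in (-90, -30, 0, 60):
    print(d, round(decode_airflow(pfn_responses(d)), 1))  # recovers d exactly
```

Because the two preferred directions are orthogonal, the readout recovers the stimulus direction exactly, which is why such a basis suffices without an explicit 'map' of airflow direction.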
Affiliation(s)
- Timothy A Currier
- Neuroscience Institute, New York University Langone Medical Center, New York, United States
- Center for Neural Science, New York University, New York, United States
- Andrew MM Matheson
- Neuroscience Institute, New York University Langone Medical Center, New York, United States
- Katherine I Nagel
- Neuroscience Institute, New York University Langone Medical Center, New York, United States
- Center for Neural Science, New York University, New York, United States
8
Abstract
The mood and atmosphere of a service setting are essential factors in the way customers evaluate their shopping experience in a retail store environment. Scholars have shown that background music has a strong effect on consumer behavior. Retailers design novel environments in which appropriate music can elevate the shopping experience. While previous findings highlight the effects of background music on consumer behavior, the extent to which recognition of store atmosphere varies with genre of background music in sales spaces is unknown. We conducted an eye tracking experiment to evaluate the effect of background music on the perceived atmosphere of a service setting. We used a 2 (music genre: jazz song with slow tempo vs. dance song with fast tempo) × 1 (visual stimuli: image of coffee shop) within-subject design to test the effect of music genre on visual perception of a physical environment. Results show that the fixation values during the slow tempo music were at least two times higher than the fixation values during the fast tempo music and that the blink values during the fast tempo music were at least two times higher than the blink values during the slow tempo music. Notably, initial and maximum concentration differed by music type. Our findings also indicate that differences in scan paths and locations between the slow tempo music and the fast tempo music changed over time. However, average fixation values were not significantly different between the two music types.
Affiliation(s)
- Jihoon Kim
- Department of Advertising and Public Relations, University of Alabama, Tuscaloosa, AL, United States
- Ju Yeon Kim
- Department of Interior Architectural Design, Soongsil University, Seoul, South Korea
9
Abstract
This paper aimed to review the potential for archival items to be used to support therapeutic interventions in dementia care, with a particular focus on olfactory stimuli. Archival research was used to identify objects and to re-create authentic historical product fragrances from Boots UK. Potentially therapeutic material and smells for people living with dementia were identified and olfactory profiles created. These were characterized by strong smells and items featuring well-known brands and distinctive packaging including carbolic soap and Old English Lavender talcum powder. A dataset of items has been created for use in future research studies.
Affiliation(s)
- Victoria Tischler
- Nursing, Midwifery and Healthcare, University of West London, London, UK
10
Abstract
The human visual and auditory systems do not encode an entirely overlapping space when static head and body position are maintained. While visual capture of sound source location in the frontal field is known to be immediate and direct, visual influence in the rear auditory space behind the subject remains under-studied. In this study we investigated the influence of presenting frontal LED flashes on the perceived location of a phantom sound source generated using time-delay-based stereophony. Our results show that frontal visual stimuli affected auditory localization in two different ways: (1) auditory responses were laterally shifted (left or right) toward the location of the light stimulus, and (2) auditory responses were more often in the frontal field. The observed visual effects do not adhere to the spatial rule of multisensory interaction with regard to the physical proximity of cues. Instead, the influence of visual cues interacted closely with front-back confusions in auditory localization. In particular, visually induced shifts along the left-right direction occurred most often when an auditory stimulus was localized in the same (frontal) field as the light stimulus, even when the actual sound sources were presented from behind the subject. Increasing stimulus duration (from 15 ms to 50 ms) significantly mitigated the rates of front-back confusion and the associated effects of visual stimuli. These findings suggest that concurrent visual stimulation elicits a strong frontal bias in auditory localization and confirm that temporal integration plays an important role in decreasing front-back errors under conditions requiring multisensory spatial processing.
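Time-delay-based stereophony creates the phantom source by feeding the same signal to two loudspeakers with a small inter-channel delay, which pulls the perceived image toward the leading channel. A minimal sketch of such a stimulus, not the authors' stimulus code:

```python
import numpy as np

def stereo_pair(signal, fs, delay_ms):
    """Two-channel signal in which the left channel leads the right by
    delay_ms; sub-millisecond leads shift the phantom image toward the
    leading loudspeaker."""
    d = int(round(delay_ms * 1e-3 * fs))
    left = np.pad(signal, (0, d))       # left plays first
    right = np.pad(signal, (d, 0))      # right is a delayed copy
    return np.stack([left, right], axis=1)

fs = 44_100
burst = np.random.default_rng(0).normal(size=int(0.05 * fs))  # 50-ms noise
stim = stereo_pair(burst, fs, delay_ms=0.5)
```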
Affiliation(s)
- Christopher Montagne
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Yi Zhou
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States
11
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. PMID: 29238295; PMCID: PMC5712580; DOI: 10.3389/fnsys.2017.00089.
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
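Finding (2), that facilitation peaks when the unimodal responses would co-occur, can be reproduced with a simple independent-race simulation: the redundant-target response is triggered by whichever unimodal process finishes first. A sketch in which the Gaussian reaction-time distributions and parameter values are assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def facilitation(mu_a, mu_v, soa, sd=40.0, n=100_000):
    """Mean speed-up of the race (min of the two unimodal finishing
    times) relative to the faster unimodal response, in ms. soa > 0
    delays the visual stimulus relative to the auditory one."""
    rt_a = rng.normal(mu_a, sd, n)
    rt_v = rng.normal(mu_v, sd, n) + soa
    fastest_unimodal = min(mu_a, mu_v + soa)
    return fastest_unimodal - np.minimum(rt_a, rt_v).mean()

mu_a, mu_v = 350.0, 250.0   # hypothetical unimodal means
for soa in (0, 50, 100, 150):
    print(soa, round(facilitation(mu_a, mu_v, soa), 1))
# Facilitation peaks near soa = mu_a - mu_v = 100 ms: response synchrony.
```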
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
12
Carbon CC. Creating a Framework for Holistic Assessment of Aesthetics: A Response to Nilsson and Axelsson (2015) on Attributes of Aesthetic Quality of Textile Quality. Percept Mot Skills 2016; 122:96-100. PMID: 27420309; DOI: 10.1177/0031512516628366.
Abstract
Nilsson and Axelsson (2015) made an important contribution by linking recent scientific approaches from the field of empirical aesthetics with the everyday demands on museum conservators of deciding which items to preserve. The authors made an important effort in identifying valuable candidate variables, but focused only on visual properties and on high-expertise aspects of aesthetic quality based on very sophisticated evaluations. The present article responds to the target paper by developing the outline of a more holistic approach for future research, a kind of framework intended to support a multi-modal approach that mainly includes the tactile sense.
Affiliation(s)
- Claus-Christian Carbon
- Department of General Psychology and Methodology, University of Bamberg, Bamberg, Germany; Forschungsgruppe EPÆG (Ergonomie, Psychologische Ästhetik, Gestaltung), Bamberg, Germany
13
LaRue KM, Clemens J, Berman GJ, Murthy M. Acoustic duetting in Drosophila virilis relies on the integration of auditory and tactile signals. eLife 2015; 4:e07277. PMID: 26046297; PMCID: PMC4456510; DOI: 10.7554/eLife.07277.
Abstract
Many animal species, including insects, are capable of acoustic duetting, a complex social behavior in which males and females tightly control the rate and timing of their courtship song syllables relative to each other. The mechanisms underlying duetting remain largely unknown across model systems. Most studies of duetting focus exclusively on acoustic interactions, but the use of multisensory cues should aid in coordinating behavior between individuals. To test this hypothesis, we develop Drosophila virilis as a new model for studies of duetting. By combining sensory manipulations, quantitative behavioral assays, and statistical modeling, we show that virilis females combine precisely timed auditory and tactile cues to drive song production and duetting. Tactile cues delivered to the abdomen and genitalia play the larger role in females, as even headless females continue to coordinate song production with courting males. These data, therefore, reveal a novel, non-acoustic, mechanism for acoustic duetting. Finally, our results indicate that female-duetting circuits are not sexually differentiated, as males can also produce 'female-like' duets in a context-dependent manner.
Affiliation(s)
- Kelly M LaRue
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Department of Molecular Biology, Princeton University, Princeton, United States
- Jan Clemens
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Department of Molecular Biology, Princeton University, Princeton, United States
- Gordon J Berman
- Lewis Sigler Institute for Integrative Genomics, Princeton University, Princeton, United States
- Mala Murthy
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Department of Molecular Biology, Princeton University, Princeton, United States
14
Andrade J, May J, Deeprose C, Baugh SJ, Ganis G. Assessing vividness of mental imagery: The Plymouth Sensory Imagery Questionnaire. Br J Psychol 2013; 105:547-63. PMID: 24117327; DOI: 10.1111/bjop.12050.
Abstract
Mental imagery may occur in any sensory modality, although visual imagery has been most studied. A sensitive measure of the vividness of imagery across a range of modalities is needed: the shorter version of Betts' Questionnaire upon Mental Imagery (Sheehan, J. Clin. Psychology, 23, 386) uses outdated items and has an unreliable factor structure. We report the development and initial validation of the Plymouth Sensory Imagery Questionnaire (Psi-Q) comprising items for each of the following modalities: Vision, Sound, Smell, Taste, Touch, Bodily Sensation, and Emotional Feeling. An exploratory factor analysis on a 35-item form indicated that these modalities formed separate factors, rather than a single imagery factor, and this was replicated by confirmatory factor analysis. The Psi-Q was validated against the Spontaneous Use of Imagery Scale (Reisberg et al., Appl. Cogn. Psychology, 17, 147) and Marks' Vividness of Visual Imagery Questionnaire-2 (VVIQ-2) (J. Mental Imagery, 19, 153). A short 21-item form comprising the best three items from the seven factors correlated with the total score and subscales of the full form, and with the VVIQ-2. Inspection of the data shows that while visual and sound imagery are most often rated as vivid, individuals who rate one modality as strong and the other as weak are not uncommon. Findings are interpreted within a working memory framework and point to the need for further research to identify the specific cognitive processes underlying the vividness of imagery across sensory modalities.
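The factor-analytic claim, separate vividness factors per modality rather than a single imagery factor, can be sketched directly. A synthetic illustration of the 21-item layout; the data, loadings and noise level are made up, not the Psi-Q dataset:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_resp, n_modalities, items_per = 300, 7, 3      # 21-item short form
latent = rng.normal(size=(n_resp, n_modalities)) # one factor per modality
loadings = np.repeat(np.eye(n_modalities), items_per, axis=0)  # (21, 7)
X = latent @ loadings.T + 0.5 * rng.normal(size=(n_resp, n_modalities * items_per))

fa = FactorAnalysis(n_components=n_modalities, rotation="varimax")
fa.fit(X)
# If modality-specific factors exist, each recovered factor loads mainly
# on one triplet of items -- the pattern reported for the Psi-Q.
print(np.round(fa.components_, 1))
```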
Affiliation(s)
- Jackie Andrade
- School of Psychology, Cognition Institute, Plymouth University, UK
15
Abstract
We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two “natural” tasks, we tested whether visual gloss estimates influence haptic estimates of softness and vice versa. In two “novel” tasks, in which participants were asked either to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.
Affiliation(s)
- Dagmar A Wismeijer
- Allgemeine Psychologie, Justus-Liebig Universität Gießen, Gießen, Germany
16
Abstract
During reach planning, we integrate multiple senses to estimate the locations of the hand and the target, and these estimates are used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. A coordinate-matching transformation is believed to be required before vision and proprioception can be combined, because proprioceptive and visual sensory reference frames do not generally align. This transformation uses extraretinal signals about current head roll position, for example to rotate proprioceptive signals into visual coordinates. Since the head roll estimate is a noisy sensory signal, this head roll dependency of the reference frame transformation should introduce additional noise into the transformed signal, reducing its reliability and thus its weight in multi-sensory integration. To investigate the role of noisy reference frame transformations in multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes, 2003) that includes explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head roll dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase in movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.
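The model's core prediction can be written compactly: rotating the proprioceptive hand estimate into visual coordinates through a noisy head-roll estimate inflates its variance, so the weight given to vision should grow at eccentric head rolls. A minimal sketch in which the lever-arm noise term and all values are illustrative assumptions, not the paper's equations:

```python
import numpy as np

def visual_weight(sigma_vis, sigma_prop, sigma_roll_deg, lever_m=0.4):
    """Reliability weight on vision after rotating proprioception into
    visual coordinates: angular noise sigma_roll on the transformation
    adds roughly (lever arm x angular noise)^2 of positional variance."""
    sigma_t2 = sigma_prop**2 + (lever_m * np.deg2rad(sigma_roll_deg))**2
    return (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_t2)

# With equal 1-cm visual and proprioceptive noise, increasing head-roll
# noise (e.g. at eccentric head orientations) shifts weight to vision.
for s in (1.0, 5.0, 10.0):
    print(f"roll noise {s:>4} deg -> visual weight {visual_weight(0.01, 0.01, s):.2f}")
```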
17
Abstract
We measured frequency-dependent functional MRI (fMRI) activations (at 11.7 T) in the somatosensory cortex with whisker and forepaw stimuli in the same alpha-chloralose anesthetized rats. Whisker and forepaw stimuli were delivered by computer-controlled pulses of air puffs and electrical currents, respectively. Air puffs deflected (±2 mm) the chosen whisker(s) in the right snout in the rostral to caudal direction, and electrical currents (2 mA amplitude, 0.3 ms duration) stimulated the left forepaw with subcutaneous copper electrodes placed between the second and fourth digits. In the same subject, unimodal stimulation of whisker and forepaw gave rise to significant blood oxygen level-dependent (BOLD) signal increases in the corresponding contralateral somatosensory areas of the whisker barrel field (S1BF) and forelimb (S1FL), respectively, with no significant spatial overlap between these regions. The BOLD responses in the S1BF and S1FL regions were differentially variable with the frequency of each stimulus type. In the S1BF, a linear increase in the BOLD response was observed with whisker stimulation frequency up to approximately 12 Hz, beyond which the response seemed to saturate (and/or slightly attenuate) up to the maximum frequency studied (i.e. 30 Hz). In the S1FL, the magnitude of the BOLD response was largest at forepaw stimulation frequencies between 1.5 and 3 Hz, beyond which the response diminished, with little or no activity at frequencies higher than 20 Hz. The volume of tissue activated by each stimulus type followed a similar pattern of stimulation frequency dependence. These results of bimodal whisker and forepaw stimuli in the same subject may provide a framework to study interactions of different tactile modules, with both fMRI and neurophysiology (i.e. inside and outside the magnet).
Affiliation(s)
- Basavaraju G. Sanganahalli
- Magnetic Resonance Research Center (MRRC), Yale University, New Haven, CT 06520, USA
- Quantitative Neuroscience with Magnetic Resonance (QNMR), Yale University, New Haven, CT 06520, USA
- Department of Diagnostic Radiology, Yale University, New Haven, CT 06520, USA
- Peter Herman
- Magnetic Resonance Research Center (MRRC), Yale University, New Haven, CT 06520, USA
- Quantitative Neuroscience with Magnetic Resonance (QNMR), Yale University, New Haven, CT 06520, USA
- Department of Diagnostic Radiology, Yale University, New Haven, CT 06520, USA
- Institute of Human Physiology and Clinical Experimental Research, Semmelweis University, Budapest, Hungary
- Fahmeed Hyder
- Magnetic Resonance Research Center (MRRC), Yale University, New Haven, CT 06520, USA
- Quantitative Neuroscience with Magnetic Resonance (QNMR), Yale University, New Haven, CT 06520, USA
- Department of Diagnostic Radiology, Yale University, New Haven, CT 06520, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA
18
Abstract
Optical prisms shift visual space and, through adaptation over time, generate a compensatory realignment of sensory-motor reference frames. In humans, prism-induced lateral shifts of visual space produce a corresponding shift in sound localization. We recently reported that sound localization shifts towards eccentric eye position, approaching approximately 40% of gaze over several minutes. Given that eye position affects sound localization directly, prism adaptation may well reflect contributions of both eye position and sensory adaptation; while the visual world is shifted by the prisms, the eyes must also shift simply to gaze ahead. To test this new concept of prism adaptation, 10 young (18-27 year old) adults localized sound targets before and after 4 h of adaptation to base-right or base-left prisms that induced an 11.4° shift left or right, respectively. In separate sessions subjects were exposed to: (1) natural binaural hearing; (2) diotically presented inputs devoid of meaningful spatial cues; or (3) attenuated hearing to simulate hearing loss. These preliminary results suggest that prism adaptation of auditory space depends on two independent influences: (1) the effect of the displaced mean eye position induced by the prisms, which occurs without cross-sensory experience; and (2) true cross-sensory learning in response to an imposed offset between auditory and visual space.
Affiliation(s)
- Qi N. Cui
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Laura Bachus
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Eva Knoth
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- William E. O’Neill
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Center for Navigation and Communication Sciences, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Gary D. Paige
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Center for Navigation and Communication Sciences, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Center for Visual Science, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA