1. Tolentino-Castro JW, Schroeger A, Cañal-Bruland R, Raab M. Increasing auditory intensity enhances temporal but deteriorates spatial accuracy in a virtual interception task. Exp Brain Res 2024. PMID: 38334793. DOI: 10.1007/s00221-024-06787-x.
Abstract
Humans are quite accurate and precise in interception performance, yet it remains unclear what role auditory information plays in spatiotemporal accuracy and consistency during interception. In the current study, interception performance was measured as the spatiotemporal accuracy and consistency of when and where a virtual ball, specified by auditory information alone, was intercepted on a visible line displayed on a screen. We predicted that participants would indicate more accurately when the ball would cross a target line than where it would cross it, because human hearing is particularly sensitive to temporal parameters. In a within-subject design, we manipulated auditory intensity (52, 61, 70, 79, and 88 dB) using a sound stimulus programmed to be perceived as moving over the screen in an inverted C-shaped trajectory. Results showed that the louder the sound, the better the temporal accuracy but the worse the spatial accuracy. We argue that louder sounds increased attention toward auditory information during interception judgments. How balls are intercepted, and how sound intensity may practically contribute to temporal accuracy and consistency, is discussed from a theoretical perspective of modality-specific interception behavior.
Affiliation(s)
- J Walter Tolentino-Castro
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Anna Schroeger
- Department for General Psychology, Justus Liebig University Giessen, Giessen, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Jena, Germany
- Markus Raab
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London, England
2. Kathios N, Patel AD, Loui P. Musical anhedonia, timbre, and the rewards of music listening. Cognition 2024; 243:105672. PMID: 38086279. DOI: 10.1016/j.cognition.2023.105672.
Abstract
Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research suggests that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with isolated short pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.
Affiliation(s)
- Nicholas Kathios
- Dept. of Psychology, Northeastern University, United States of America
- Aniruddh D Patel
- Dept. of Psychology, Tufts University, United States of America; Program in Brain Mind and Consciousness, Canadian Institute for Advanced Research, Canada
- Psyche Loui
- Dept. of Psychology, Northeastern University, United States of America; Dept. of Music, Northeastern University, United States of America
3. Kang H, Auksztulewicz R, Chan CH, Cappotto D, Rajendran VG, Schnupp JWH. Cross-modal implicit learning of random time patterns. Hear Res 2023; 438:108857. PMID: 37639922. DOI: 10.1016/j.heares.2023.108857.
Abstract
Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, implicit learning of temporal patterns in one modality can also improve their processing in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers using electroencephalography (EEG), while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials consisted of a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment across both modalities (Transfer) or only within a modality (Control), enabling implicit learning in one modality and its transfer. Using a novel method of analysis of single-trial EEG responses, we showed that learning temporal structures within and across modalities is reflected in neural learning curves. These putative neural correlates of learning transfer were similar both when temporal information learned in audition was transferred to visual stimuli and vice versa. The modality-specific mechanisms for learning of temporal information and general mechanisms which mediate learning transfer across modalities had distinct physiological signatures: temporal learning within modalities relied on modality-specific brain regions while learning transfer affected beta-band activity in frontal regions.
Affiliation(s)
- HiJee Kang
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.; Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany
- Chi Hong Chan
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
- Drew Cappotto
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.; UCL Ear Institute, University College London, London, United Kingdom
- Vani G Rajendran
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.; Department of Cognitive Neuroscience, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Mexico
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
4. Lorenzi C, Apoux F, Grinfeder E, Krause B, Miller-Viacava N, Sueur J. Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments. Trends Hear 2023; 27:23312165231212032. PMID: 37981813. PMCID: PMC10658775. DOI: 10.1177/23312165231212032.
Abstract
Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution. 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, whether it operates throughout the life span, and whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.
Affiliation(s)
- Christian Lorenzi
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Frédéric Apoux
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Elie Grinfeder
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Institut de Systématique, Évolution, Biodiversité (ISYEB), Muséum national d’Histoire naturelle, CNRS, Sorbonne Université, EPHE, Université des Antilles, Paris, France
- Bernie Krause
- Nicole Miller-Viacava
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Jérôme Sueur
- Institut de Systématique, Évolution, Biodiversité (ISYEB), Muséum national d’Histoire naturelle, CNRS, Sorbonne Université, EPHE, Université des Antilles, Paris, France
5. Stilp CE, Shorey AE, King CJ. Nonspeech sounds are not all equally good at being nonspeech. J Acoust Soc Am 2022; 152:1842. PMID: 36182316. DOI: 10.1121/10.0014174.
Abstract
Perception of speech sounds has a long history of being compared to perception of nonspeech sounds, with rich and enduring debates regarding how closely they share similar underlying processes. In many instances, perception of nonspeech sounds is directly compared to that of speech sounds without a clear explanation of how related these sounds are to the speech they are selected to mirror (or not mirror). While the extreme acoustic variability of speech sounds is well documented, this variability is bounded by the common source of a human vocal tract. Nonspeech sounds do not share a common source, and as such, exhibit even greater acoustic variability than that observed for speech. This increased variability raises important questions about how well perception of a given nonspeech sound might resemble or model perception of speech sounds. Here, we offer a brief review of extremely diverse nonspeech stimuli that have been used in the efforts to better understand perception of speech sounds. The review is organized according to increasing spectrotemporal complexity: random noise, pure tones, multitone complexes, environmental sounds, music, speech excerpts that are not recognized as speech, and sinewave speech. Considerations are offered for stimulus selection in nonspeech perception experiments moving forward.
Affiliation(s)
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
- Anya E Shorey
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
- Caleb J King
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
6. Foley L, Schlesinger J, Schutz M. More detectable, less annoying: Temporal variation in amplitude envelope and spectral content improves auditory interface efficacy. J Acoust Soc Am 2022; 151:3189. PMID: 35649914. DOI: 10.1121/10.0010447.
Abstract
Auditory interfaces, such as auditory alarms, are useful tools for human-computer interaction. Unfortunately, poor detectability and annoyance inhibit the efficacy of many interface sounds. Here, it is shown in two ways how moving beyond the traditional simplistic temporal structures of normative interface sounds can significantly improve auditory interface efficacy. First, participants rated tones with percussive amplitude envelopes as significantly less annoying than tones with flat amplitude envelopes. Crucially, this annoyance reduction did not come with a detection cost, as percussive tones were detected more often than flat tones, particularly at relatively low listening levels. Second, it was found that reductions in the duration of a tone's harmonics significantly lowered its annoyance without a commensurate reduction in detection. Together, these findings help inform our theoretical understanding of detection and annoyance of sound. In addition, they offer promising original design considerations for auditory interfaces.
Affiliation(s)
- Liam Foley
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
- Joseph Schlesinger
- Anesthesiology Critical Care Medicine, Biomedical Engineering, Vanderbilt University Medical Center, Nashville, Tennessee 37212, USA
- Michael Schutz
- School of the Arts, McMaster University, Hamilton, Canada
7. Russell MK. Age and Auditory Spatial Perception in Humans: Review of Behavioral Findings and Suggestions for Future Research. Front Psychol 2022; 13:831670. PMID: 35250777. PMCID: PMC8888835. DOI: 10.3389/fpsyg.2022.831670.
Abstract
It has been well documented, and fairly well known, that concomitant with an increase in chronological age is a corresponding increase in sensory impairment. As most people realize, our hearing suffers as we get older; hence the increased need for hearing aids. The first portion of the present paper addresses how chronological age apparently affects auditory judgments of sound source position. A summary of the literature evaluating changes in the perception of sound source location and sound source motion as a function of chronological age is presented. The review is limited to empirical studies with behavioral findings involving humans. It is the view of the author that we have an immensely limited understanding of how chronological age affects perception of space when based on sound. The latter part of the paper discusses how auditory spatial perception is traditionally studied in the laboratory. Theoretically, beneficial reasons exist for conducting research in the manner it has been. Nonetheless, from an ecological perspective, the vast majority of previous research can be considered unnatural and greatly lacking in ecological validity. Suggestions for an alternative and more ecologically valid approach to the investigation of auditory spatial perception are proposed. It is believed that an ecological approach to auditory spatial perception will enhance our understanding of the extent to which individuals perceive sound source location and how those perceptual judgments change with an increase in chronological age.
8. Sreetharan S, Schlesinger JJ, Schutz M. Decaying amplitude envelopes reduce alarm annoyance: Exploring new approaches to improving auditory interfaces. Appl Ergon 2021; 96:103432. PMID: 34120000. DOI: 10.1016/j.apergo.2021.103432.
Abstract
Auditory alarms offer great potential for facilitating human-computer interactions in complex, rapidly changing environments. They are particularly useful in medical settings, where in theory they should afford communication in emergency rooms, operating theatres, and hospitals around the world. Unfortunately, the sounds typically used in these devices are problematic, and researchers have documented numerous shortcomings. Their ubiquity means that even incremental improvements can have significant benefits for patient care. However, solutions have proven challenging for multiple reasons, including issues of backward compatibility inherent in changing any standard. Here we present a series of three experiments showing that manipulations to one specific, understudied property can significantly lower alarm annoyance without harming learning or memory, while preserving an alarm's melodic and rhythmic structure. These results suggest promising new directions for improving the hospital's soundscape, where evidence of problems related to sound is increasingly recognized as affecting medical outcomes as well as physician well-being.
Affiliation(s)
- Sharmila Sreetharan
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Joseph J Schlesinger
- Department of Anesthesiology Critical Care Medicine (FCCM), Vanderbilt University Medical Center, Nashville, TN, USA; Adjunct Professor, Electrical and Computer Engineering, McGill University, Montréal, Québec, Canada
- Michael Schutz
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; School of the Arts, McMaster University, Hamilton, ON, Canada
9. Heins N, Pomp J, Kluger DS, Vinbrüx S, Trempler I, Kohler A, Kornysheva K, Zentgraf K, Raab M, Schubotz RI. Surmising synchrony of sound and sight: Factors explaining variance of audiovisual integration in hurdling, tap dancing and drumming. PLoS One 2021; 16:e0253130. PMID: 34293800. PMCID: PMC8298114. DOI: 10.1371/journal.pone.0253130.
Abstract
Auditory and visual percepts are integrated even when they are not perfectly temporally aligned with each other, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected that auditory and visual signals should be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing), compared to actions creating sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, namely high event density and low rhythmicity, to induce higher synchrony ratings as well. Therefore, we systematically varied event density and rhythmicity in Study 2, this time using drumming stimuli to exert full control over these variables, and the same simultaneity judgment tasks. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker effect, when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body action, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected, that is, synchrony of sound and sight, was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchronicity judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
Affiliation(s)
- Nina Heins
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Jennifer Pomp
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Daniel S. Kluger
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Institute for Biomagnetism and Biosignal Analysis, University Hospital Muenster, Muenster, Germany
- Stefan Vinbrüx
- Institute of Sport and Exercise Sciences, Human Performance and Training, University of Muenster, Muenster, Germany
- Ima Trempler
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Axel Kohler
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Katja Kornysheva
- School of Psychology and Bangor Neuroimaging Unit, Bangor University, Wales, United Kingdom
- Karen Zentgraf
- Department of Movement Science and Training in Sports, Institute of Sport Sciences, Goethe University Frankfurt, Frankfurt, Germany
- Markus Raab
- Institute of Psychology, German Sport University Cologne, Cologne, Germany
- School of Applied Sciences, London South Bank University, London, United Kingdom
- Ricarda I. Schubotz
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
10. Sangari A, Emhardt EA, Salas B, Avery A, Freundlich RE, Fabbri D, Shotwell MS, Schlesinger JJ. Delirium Variability is Influenced by the Sound Environment (DEVISE Study): How Changes in the Intensive Care Unit Soundscape Affect Delirium Incidence. J Med Syst 2021; 45:76. PMID: 34173052. PMCID: PMC8300597. DOI: 10.1007/s10916-021-01752-5.
Abstract
Quantitative data on the sensory environment of intensive care unit (ICU) patients and its potential link to increased risk of delirium is limited. We examined whether higher average sound and light levels in ICU environments are associated with delirium incidence. Over 111 million sound and light measurements from 143 patient stays in the surgical and trauma ICUs were collected using Quietyme® (Neshkoro, Wisconsin) sensors from May to July 2018 and analyzed. Sensory data were grouped into time of day, then normalized against their ICU environments, with Confusion Assessment Method (CAM-ICU) scores measured each shift. We then performed logistic regression analysis, adjusting for possible confounding variables. Lower morning sound averages (8 am-12 pm) (OR = 0.835, 95% OR CI = [0.746, 0.934], p = 0.002) and higher daytime sound averages (12 pm-6 pm) (OR = 1.157, 95% OR CI = [1.036, 1.292], p = 0.011) were associated with increased odds of delirium incidence, while nighttime sound averages (10 pm-8 am) (OR = 0.990, 95% OR CI = [0.804, 1.221], p = 0.928) and the ICU light environment did not show statistical significance. Our results suggest an association between the ICU soundscape and the odds of developing delirium. This creates a future paradigm for studies of the ICU soundscape and lightscape.
Affiliation(s)
- Ayush Sangari
- Department of Electrical Engineering and Computer Science, Vanderbilt University, 2301 Vanderbilt Place, PMB 351679, Nashville, TN, 37235, USA
- Elizabeth A Emhardt
- Department of Anesthesiology, Division of Critical Care Medicine, Vanderbilt University Medical Center, 1211 21st Avenue South, MAB 422, Nashville, TN, 37212, USA
- Barbara Salas
- The Newcastle upon Tyne NHS Foundation Trust, Freeman Hospital, Freeman Road, High Heaton, Newcastle-upon-Tyne, Tyne and Wear, NE7 7DN, UK
- Andrew Avery
- Department of General Surgery, Trauma and Burn Surgery, Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, TN, 37212, USA
- Robert E Freundlich
- Department of Anesthesiology, Division of Critical Care Medicine, Vanderbilt University Medical Center, 1211 21st Avenue South, MAB 422, Nashville, TN, 37212, USA
- Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1475, Nashville, TN, 37203, USA
- Daniel Fabbri
- Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1475, Nashville, TN, 37203, USA
- Matthew S Shotwell
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN, 37203, USA
- Joseph J Schlesinger
- Department of Anesthesiology, Division of Critical Care Medicine, Vanderbilt University Medical Center, 1211 21st Avenue South, MAB 422, Nashville, TN, 37212, USA
11. Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. PMID: 33775583. PMCID: PMC10149139. DOI: 10.1016/j.tics.2021.02.008.
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada
12. Effects of Musical Training, Timbre, and Response Orientation on the ROMPR Effect. J Cogn Enhanc 2021. DOI: 10.1007/s41465-021-00213-8.
13. Acute alcohol intoxication and the cocktail party problem: do "mocktails" help or hinder? Psychopharmacology (Berl) 2021; 238:3083-3093. PMID: 34313803. PMCID: PMC8605962. DOI: 10.1007/s00213-021-05924-6.
Abstract
Rationale: To test the notion that alcohol impairs auditory attentional control by reducing the listener's cognitive capacity.
Objectives: We examined the effect of alcohol consumption and working memory span on dichotic speech shadowing and the cocktail party effect, the ability to focus on one of many simultaneous speakers yet still detect mention of one's name amidst the background speech. Alcohol was expected either to increase name detection, by weakening the inhibition of irrelevant speech, or to reduce name detection, by restricting auditory attention to the primary input channel. Low-span participants were expected to show larger drug impairments than high-span counterparts.
Methods: On completion of the working memory span task, participants (n = 81) were randomly assigned to an alcohol or placebo beverage treatment. After alcohol absorption, they shadowed speech presented to one ear while ignoring the synchronised speech of a different speaker presented to the other. Each participant's first name was covertly embedded in the to-be-ignored speech.
Results: The "cocktail party effect" was not affected by alcohol or working memory span, though low-span participants made more shadowing errors and recalled fewer words from the primary channel than high-span counterparts. Bayes factors support a null effect of alcohol on the cocktail party phenomenon, on shadowing errors, and on memory for either shadowed or ignored speech.
Conclusion: Findings suggest that an alcoholic beverage producing a moderate level of intoxication (mean BAC ≈ 0.08%) neither enhances nor impairs the cocktail party effect.
14. Re-Sounding Alarms: Designing Ergonomic Auditory Interfaces by Embracing Musical Insights. Healthcare (Basel) 2020; 8:389. PMID: 33049954. PMCID: PMC7711797. DOI: 10.3390/healthcare8040389.
Abstract
Auditory alarms are an important component of human–computer interfaces, used in mission-critical industries such as aviation, nuclear power plants, and hospital settings. Unfortunately, problems with recognition, detection, and annoyance continue to hamper their effectiveness. Historically, they appear designed more in response to engineering constraints than to principles of hearing science. Here we argue that auditory perception in general, and music perception in particular, holds valuable lessons for alarm designers. We also discuss ongoing research suggesting that the temporal complexity of musical tones offers promising insight into new ways of addressing widely recognized shortcomings of current alarms.