26. Vempala NN, Russo FA. Editorial: Bridging Music Informatics With Music Cognition. Front Psychol 2018; 9:633. PMID: 29867629; PMCID: PMC5952036; DOI: 10.3389/fpsyg.2018.00633.
27. Vempala NN, Russo FA. Modeling Music Emotion Judgments Using Machine Learning Methods. Front Psychol 2018; 8:2239. PMID: 29354080; PMCID: PMC5760560; DOI: 10.3389/fpsyg.2017.02239.
Abstract
Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods, including neural networks, linear regression, and random forests, were used to model the emotion judgments. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
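The committee-machine idea described here (averaging the outputs of an ensemble of trained models) can be sketched in a few lines. The sketch below is illustrative only: the data are synthetic stand-ins for the audio features and emotion judgments, and the committee members are simple ridge regressions rather than the authors' neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: 60 excerpts x 5 audio features,
# and one emotion judgment per excerpt (all values are synthetic).
X = rng.normal(size=(60, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]) + rng.normal(scale=0.1, size=60)

def fit_member(X, y):
    """Fit one committee member: ridge regression on a bootstrap resample."""
    idx = rng.integers(0, len(y), size=len(y))
    Xb, yb = X[idx], y[idx]
    Xb1 = np.column_stack([Xb, np.ones(len(yb))])  # add intercept column
    lam = 0.1 * np.eye(Xb1.shape[1])
    return np.linalg.solve(Xb1.T @ Xb1 + lam, Xb1.T @ yb)

members = [fit_member(X, y) for _ in range(10)]

def committee_predict(X):
    X1 = np.column_stack([X, np.ones(len(X))])
    return np.mean([X1 @ w for w in members], axis=0)  # average the ensemble

pred = committee_predict(X)
print(np.corrcoef(pred, y)[0, 1])  # agreement of committee with judgments
```

Averaging over bootstrap-trained members is what makes the committee both flexible (each member sees different data) and interpretable (member weights can be inspected individually).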
28. Good A, Choma B, Russo FA. Movement Synchrony Influences Intergroup Relations in a Minimal Groups Paradigm. Basic and Applied Social Psychology 2017. DOI: 10.1080/01973533.2017.1337015.
29. Wilbiks JMP, Vuvan DT, Girard PY, Peretz I, Russo FA. Effects of vocal training in a musicophile with congenital amusia. Neurocase 2016; 22:526-537. PMID: 28001646; DOI: 10.1080/13554794.2016.1263339.
Abstract
Congenital amusia is a condition in which an individual suffers from a deficit of musical pitch perception and production. Individuals suffering from congenital amusia generally tend to abstain from musical activities. Here, we present the unique case of Tim Falconer, a self-described musicophile who also suffers from congenital amusia. We describe and assess Tim's attempts to train himself out of amusia through a self-imposed 18-month program of formal vocal training and practice. We tested Tim with respect to music perception and vocal production across seven sessions including pre- and post-training assessments. We also obtained diffusion-weighted images of his brain to assess connectivity between auditory and motor planning areas via the arcuate fasciculus (AF). Tim's behavioral and brain data were compared to those of normal and amusic controls. While Tim showed temporary gains in his singing ability, he did not reach normal levels, and these gains faded when he was not engaged in regular lessons and practice. Tim did show some sustained gains with respect to the perception of musical rhythm and meter. We propose that Tim's lack of improvement in pitch perception and production tasks is due to long-standing and likely irreversible reduction in connectivity along the AF fiber tract.
30.
Abstract
Previous research involving preschool children and adults suggests that moving in synchrony with others can foster cooperation. Song provides a rich oscillatory framework that supports synchronous movement and may thus be considered a powerful agent of positive social relations. In the current study, we assessed this hypothesis in a group of primary-school aged children with diverse ethnic and socioeconomic backgrounds. Children participated in one of three activity conditions: group singing, group art, or competitive games. They were then asked to play a prisoner's dilemma game as a measure of cooperation. Results showed that children who engaged in group singing were more cooperative than children who engaged in group art or competitive games.
31. Abel MK, Li HC, Russo FA, Schlaug G, Loui P. Audiovisual Interval Size Estimation Is Associated with Early Musical Training. PLoS One 2016; 11:e0163589. PMID: 27760134; PMCID: PMC5070837; DOI: 10.1371/journal.pone.0163589.
Abstract
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
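"Partialing out" a covariate, as in the nonverbal-IQ analysis above, amounts to correlating the residuals left after regressing each variable on that covariate. A numpy sketch with invented data (the variable names and effect sizes are hypothetical, not the study's):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    z1 = np.column_stack([z, np.ones(len(z))])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]  # residualize x on z
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]  # residualize y on z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
iq = rng.normal(100, 15, size=57)            # hypothetical nonverbal IQ
thresholds = 0.1 * iq + rng.normal(size=57)  # pitch thresholds tied to IQ
scores = 0.1 * iq + rng.normal(size=57)      # incongruent-audio scores, too

r_raw = np.corrcoef(thresholds, scores)[0, 1]
r_partial = partial_corr(thresholds, scores, iq)
print(r_raw)      # sizeable: both variables track IQ
print(r_partial)  # near zero once IQ is partialed out
```

Because the two toy variables share variance only through IQ, the raw correlation is substantial while the partial correlation collapses toward zero, mirroring how an association can vanish after partialing out a covariate.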
32. Russo FA, Cuddy LL, Galembo A, Thompson WF. Sensitivity to Tonality across the Pitch Range. Perception 2007; 36:781-90. PMID: 17624122; DOI: 10.1068/p5435.
Abstract
Striking changes in sensitivity to tonality across the pitch range are reported. Participants were presented a key-defining context (do-mi-do-sol) followed by one of the 12 chromatic tones of the octave, and rated the goodness of fit of the probe tone to the context. The set of ratings, called the probe-tone profile, was compared to an established standardised profile for the Western tonal hierarchy. The presentation of context and probe tones at low and high pitch registers resulted in significantly reduced sensitivity to tonality. Sensitivity was especially poor for presentations in the lowest octaves where inharmonicity levels were substantially above the threshold for detection. We propose that sensitivity to tonality may be influenced by pitch salience (or a co-varying factor such as exposure to pitch distributional information) as well as suprathreshold inharmonicity.
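The comparison to an "established standardised profile" can be sketched as a correlation between a listener's probe-tone ratings and the major-key tonal hierarchy. The profile values below are the commonly cited Krumhansl & Kessler (1982) major-key ratings; the two listeners are invented for illustration.

```python
import numpy as np

# Krumhansl & Kessler (1982) major-key probe-tone profile (tonic = index 0):
# the standardized tonal hierarchy observed ratings are compared against.
KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def tonality_sensitivity(ratings):
    """Correlate 12 probe-tone ratings with the standardized profile."""
    return np.corrcoef(ratings, KK_MAJOR)[0, 1]

# Hypothetical listeners: one tracks the hierarchy, one rates haphazardly,
# as might happen in the lowest octaves where inharmonicity is high.
rng = np.random.default_rng(2)
good = KK_MAJOR + rng.normal(scale=0.5, size=12)
flat = rng.uniform(1, 7, size=12)

s_good = tonality_sensitivity(good)
s_flat = tonality_sensitivity(flat)
print(s_good)  # high: ratings follow the tonal hierarchy
print(s_flat)  # low: ratings unrelated to the hierarchy
```

The correlation itself serves as the sensitivity index: register-dependent drops in this value are what the abstract reports as reduced sensitivity to tonality.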
33. Livingstone SR, Vezer E, McGarry LM, Lang AE, Russo FA. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease. Front Psychol 2016; 7:780. PMID: 27375505; PMCID: PMC4894910; DOI: 10.3389/fpsyg.2016.00780.
Abstract
Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = -0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.
34.
35. Peck KJ, Girard TA, Russo FA, Fiocco AJ. Music and Memory in Alzheimer's Disease and the Potential Underlying Mechanisms. J Alzheimers Dis 2016; 51:949-59. DOI: 10.3233/jad-150998.
36. Kirchberger M, Russo FA. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners. Trends Hear 2016; 20:2331216516630549. PMID: 26868955; PMCID: PMC4753356; DOI: 10.1177/2331216516630549.
Abstract
Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings.
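A common way to operationalize the dynamic range of a recording (an assumed measure for illustration; the paper may use a different metric) is the spread between high and low percentiles of short-term RMS level in dB:

```python
import numpy as np

def dynamic_range_db(x, sr, frame_ms=50, lo=10, hi=95):
    """Spread between hi-th and lo-th percentiles of short-term RMS level."""
    frame = int(sr * frame_ms / 1000)
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    level_db = 20 * np.log10(rms)
    return np.percentile(level_db, hi) - np.percentile(level_db, lo)

# Synthetic check: a heavily compressed signal (constant amplitude) should
# show a much smaller range than one with quiet and loud passages.
sr = 16000
t = np.arange(sr * 2) / sr
compressed = np.sin(2 * np.pi * 220 * t)
dynamic = np.concatenate([0.05 * compressed[:sr], compressed[sr:]])

dr_compressed = dynamic_range_db(compressed, sr)
dr_dynamic = dynamic_range_db(dynamic, sr)
print(dr_compressed)  # near 0 dB: steady level throughout
print(dr_dynamic)     # roughly 26 dB: 20*log10(1/0.05) between passages
```

On measures like this, heavily mastered pop/rock tracks cluster at small values while classical recordings and quiet speech span far wider level ranges, which is the pattern the acoustic analysis reports.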
37. Kirchberger M, Russo FA. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality. Trends Hear 2016; 20:2331216515626131. PMID: 26834122; PMCID: PMC4737978; DOI: 10.1177/2331216515626131.
Abstract
A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions.
38. Trehub SE, Plantinga J, Russo FA. Maternal Vocal Interactions with Infants: Reciprocal Visual Influences. Social Development 2015. DOI: 10.1111/sode.12164.
39.
40. Livingstone SR, Choi DH, Russo FA. The influence of vocal training and acting experience on measures of voice quality and emotional genuineness. Front Psychol 2014; 5:156. PMID: 24639659; PMCID: PMC3945712; DOI: 10.3389/fpsyg.2014.00156.
Abstract
Vocal training through singing and acting lessons is known to modify acoustic parameters of the voice. While the effects of singing training have been well documented, the role of acting experience on the singing voice remains unclear. In two experiments, we used linear mixed models to examine the relationships between the relative amounts of acting and singing experience on the acoustics and perception of the male singing voice. In Experiment 1, 12 male vocalists were recorded while singing with five different emotions, each with two intensities. Acoustic measures of pitch accuracy, jitter, and harmonics-to-noise ratio (HNR) were examined. Decreased pitch accuracy and increased jitter, indicative of a lower “voice quality,” were associated with more years of acting experience, while increased pitch accuracy was associated with more years of singing lessons. We hypothesized that the acoustic deviations exhibited by more experienced actors was an intentional technique to increase the genuineness or truthfulness of their emotional expressions. In Experiment 2, listeners rated vocalists’ emotional genuineness. Vocalists with more years of acting experience were rated as more genuine than vocalists with less acting experience. No relationship was reported for singing training. Increased genuineness was associated with decreased pitch accuracy, increased jitter, and a higher HNR. These effects may represent a shifting of priorities by male vocalists with acting experience to emphasize emotional genuineness over pitch accuracy or voice quality in their singing performances.
41. Russo FA, Vempala NN, Sandstrom GM. Predicting musically induced emotions from physiological inputs: linear and neural network models. Front Psychol 2013; 4:468. PMID: 23964250; PMCID: PMC3737459; DOI: 10.3389/fpsyg.2013.00468.
Abstract
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of “felt” emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants—heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
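The gap between linear and non-linear models along the valence dimension can be illustrated with synthetic data: when the target is a non-linear function of the inputs, ordinary least squares explains little variance. The feature names and generating equations below are invented for the demonstration, not taken from the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for physiological feature vectors (columns could
# represent HR, respiration, GSR, corrugator EMG, zygomaticus EMG).
feats = rng.normal(size=(200, 5))
arousal = feats @ np.array([0.9, 0.4, 0.7, 0.1, 0.2])  # linear in features
valence = 2 * np.tanh(feats[:, 3] * feats[:, 4])       # non-linear interaction

def linear_r2(X, y):
    """Variance explained by an ordinary least-squares fit."""
    X1 = np.column_stack([X, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ w
    return 1 - resid.var() / y.var()

r2_arousal = linear_r2(feats, arousal)
r2_valence = linear_r2(feats, valence)
print(r2_arousal)  # high: a linear model suffices for this target
print(r2_valence)  # low: this target needs a non-linear model
```

This mirrors the abstract's pattern: a linear model captured arousal but not valence, while a non-linear model (the neural network) recovered both.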
42.
Abstract
Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in Experiment 1 (median percent correct = 83%) and pairs of vowel utterances in Experiment 2 (median percent correct = 75%). Greater difference in spectral tilt between “different” pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.
43. Russo FA, Ammirante P, Fels DI. Vibrotactile discrimination of musical timbre. J Exp Psychol Hum Percept Perform 2012; 38:822-6. PMID: 22708743; DOI: 10.1037/a0029046.
Abstract
Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched for F0, duration, and magnitude was above chance with white noise masking the sound output of the voice coils (Experiment 1), with additional masking to control for bone-conducted sound (Experiment 2), and among a group of deaf individuals (Experiment 4a). Hearing (Experiment 3) and deaf individuals (Experiment 4b) also successfully discriminated between dull and bright timbres varying only with regard to spectral centroid. We propose that, as with auditory discrimination of musical timbre, vibrotactile discrimination may involve the cortical integration of filtered output from frequency-tuned mechanoreceptors functioning as critical bands.
44. McGarry LM, Russo FA, Schalles MD, Pineda JA. Audio-visual facilitation of the mu rhythm. Exp Brain Res 2012; 218:527-38. PMID: 22427133; DOI: 10.1007/s00221-012-3046-3.
Abstract
Previous studies demonstrate that perception of action presented audio-visually facilitates greater mirror neuron system (MNS) activity in humans (Kaplan and Iacoboni in Cogn Process 8(2):103-113, 2007) and non-human primates (Keysers et al. in Exp Brain Res 153(4):628-636, 2003) than perception of action presented unimodally. In the current study, we examined whether audio-visual facilitation of the MNS can be indexed using electroencephalography (EEG) measurement of the mu rhythm. The mu rhythm is an EEG oscillation with peaks at 10 and 20 Hz that is suppressed during the execution and perception of action and is speculated to reflect activity in the premotor and inferior parietal cortices as a result of MNS activation (Pineda in Behav Brain Funct 4(1):47, 2008). Participants observed experimental stimuli unimodally (visual-alone or audio-alone) or bimodally during randomized presentations of two hands ripping a sheet of paper, and a control video depicting a box moving up and down. Audio-visual perception of action stimuli led to greater event-related desynchrony (ERD) of the 8-13 Hz mu rhythm compared to unimodal perception of the same stimuli over the C3 electrode, as well as in a left central cluster when data were examined in source space. These results are consistent with Kaplan and Iacoboni's (in Cogn Process 8(2):103-113, 2007), findings that indicate audio-visual facilitation of the MNS; our left central cluster was localized approximately 13.89 mm away from the ventral premotor cluster identified in their fMRI study, suggesting that these clusters originate from similar sources. Consistency of results in electrode space and component space support the use of ICA as a valid source localization tool.
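Event-related desynchrony (ERD) is commonly expressed as the percentage power change in a frequency band relative to a baseline window. A numpy sketch of that computation on a synthetic 10 Hz mu oscillation (the amplitudes and sampling rate are invented; real EEG pipelines add artifact rejection and trial averaging):

```python
import numpy as np

def band_power(x, sr, f_lo=8.0, f_hi=13.0):
    """Mean spectral power in [f_lo, f_hi] Hz from an FFT of the segment."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].mean()

def erd_percent(baseline, event, sr):
    """Event-related desynchrony: % band-power change vs. baseline."""
    p_base = band_power(baseline, sr)
    p_event = band_power(event, sr)
    return 100 * (p_event - p_base) / p_base

# Synthetic C3-like trace: a 10 Hz mu oscillation that attenuates
# during action observation.
sr = 256
t = np.arange(sr) / sr
noise = np.random.default_rng(4).normal(scale=0.1, size=(2, sr))
baseline = 1.0 * np.sin(2 * np.pi * 10 * t) + noise[0]
event = 0.5 * np.sin(2 * np.pi * 10 * t) + noise[1]

erd = erd_percent(baseline, event, sr)
print(erd)  # negative: mu suppression during the event window
```

Greater (more negative) ERD in the bimodal condition than in either unimodal condition is the audio-visual facilitation effect the study reports.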
45. McGarry LM, Russo FA. Mirroring in Dance/Movement Therapy: Potential mechanisms behind empathy enhancement. Arts in Psychotherapy 2011. DOI: 10.1016/j.aip.2011.04.005.
46. Russo FA, Sandstrom GM, Maksimowski M. Mouth versus eyes: Gaze fixation during perception of sung interval size. Psychomusicology: Music, Mind, and Brain 2011. DOI: 10.1037/h0094007.
47. Ammirante P, Thompson WF, Russo FA. Ideomotor effects of pitch on continuation tapping. Q J Exp Psychol (Hove) 2010; 64:381-93. PMID: 20694921; DOI: 10.1080/17470218.2010.495408.
Abstract
The ideomotor principle predicts that perception will modulate action where overlap exists between perceptual and motor representations of action. This effect is demonstrated with auditory stimuli. Previous perceptual evidence suggests that pitch contour and pitch distance in tone sequences may elicit tonal motion effects consistent with listeners' implicit awareness of the lawful dynamics of locomotive bodies. To examine modulating effects of perception on action, participants in a continuation tapping task produced a steady tempo. Auditory tones were triggered by each tap. Pitch contour randomly and persistently varied within trials. Pitch distance between successive tones varied between trials. Although participants were instructed to ignore them, tones systematically affected finger dynamics and timing. Where pitch contour implied positive acceleration, the following tap and the intertap interval (ITI) that it completed were faster. Where pitch contour implied negative acceleration, the following tap and the ITI that it completed were slower. Tempo was faster with greater pitch distance. Musical training did not predict the magnitude of these effects. There were no generalized effects on timing variability. Pitch contour findings demonstrate how tonal motion may elicit the spontaneous production of accents found in expressive music performance.
48. Karam M, Russo FA, Fels DI. Designing the Model Human Cochlea: An Ambient Crossmodal Audio-Tactile Display. IEEE Transactions on Haptics 2009; 2:160-169. PMID: 27788080; DOI: 10.1109/toh.2009.32.
Abstract
We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
49.
50. Russo FA, Jones JA. Urgency is a non-monotonic function of pulse rate. J Acoust Soc Am 2007; 122:EL185-EL190. PMID: 18189454; DOI: 10.1121/1.2784112.
Abstract
Magnitude estimation was used to assess the experience of urgency in pulse-train stimuli (pulsed white noise) ranging from 3.13 to 200 Hz. At low pulse rates, pulses were easily resolved. At high pulse rates, pulses fused together leading to a tonal sensation with a clear pitch level. Urgency ratings followed a nonmonotonic (polynomial) function with local maxima at 17.68 and 200 Hz. The same stimuli were also used in response time and pitch scaling experiments. Response times were negatively correlated with urgency ratings. Pitch scaling results indicated that urgency of pulse trains is mediated by the perceptual constructs of speed and pitch.
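A non-monotonic rating function of this kind can be captured by fitting a polynomial to mean urgency against log pulse rate and reading off the local maxima. The ratings below are invented to mimic an interior peak near 17.68 Hz; only interior maxima are detected, so an endpoint maximum like the reported one at 200 Hz would not appear in the output.

```python
import numpy as np

# Pulse rates (Hz) spanning the tested range, log-spaced like the stimuli.
rates = np.geomspace(3.13, 200, num=13)
x = np.log2(rates)

# Invented mean urgency ratings peaking near 17.68 Hz in log-rate space.
urgency = 5.0 - 0.3 * (x - np.log2(17.68)) ** 2

coeffs = np.polyfit(x, urgency, deg=3)  # polynomial fit in log pulse rate
fit = np.poly1d(coeffs)
grid = np.linspace(x.min(), x.max(), 500)
vals = fit(grid)

# Interior local maxima: where the first difference changes sign + to -.
d = np.diff(vals)
peaks = grid[1:-1][(d[:-1] > 0) & (d[1:] <= 0)]
print(2 ** peaks)  # pulse rate(s) at fitted interior urgency maxima
```

Fitting in log pulse rate rather than raw Hz reflects the roughly logarithmic spacing of the stimuli and keeps the polynomial well conditioned across the 3.13-200 Hz range.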