1. de Zubicaray GI, Hinojosa JA. Statistical Relationships Between Phonological Form, Emotional Valence and Arousal of Spanish Words. J Cogn 2024; 7:42. PMID: 38737820; PMCID: PMC11086587; DOI: 10.5334/joc.366.
Abstract
A number of studies have provided evidence of limited non-arbitrary associations between the phonological forms and meanings of affective words, a finding referred to as affective sound symbolism. Here, we explored whether the affective connotations of Spanish words might have more extensive statistical relationships with phonological/phonetic features, or affective form typicality. After eliminating words with poor affective rating agreement and morphophonological redundancies (e.g., negating prefixes), we found evidence of significant form typicality for emotional valence, emotionality, and arousal in a large sample of monosyllabic and polysyllabic words. These affective form-meaning mappings remained significant even when controlling for a range of lexico-semantic variables. We show that affective variables and their corresponding form typicality measures are able to significantly predict lexical decision performance using a megastudy dataset. Overall, our findings provide new evidence that affective form typicality is a statistical property of the Spanish lexicon.
Affiliation(s)
- Greig I. de Zubicaray
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology (QUT), Brisbane, Australia
- José A. Hinojosa
- Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Madrid, Spain
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Nebrija, Madrid, Spain
2. Burdick KJ, Yang S, Lopez AE, Wessel C, Schutz M, Schlesinger JJ. Auditory roughness: a delicate balance. Br J Anaesth 2023; 131:649-652. PMID: 37537119; DOI: 10.1016/j.bja.2023.07.003.
Abstract
Auditory roughness in medical alarm sounds is an important design attribute, and has been shown to impact user performance and perception. While roughness can assist in decreased signal-to-noise ratios (perceived loudness) and communicate urgency, it might also impact patient recovery. Therefore, considerations of neuroscience correlates, music theory, and patient impact are critical aspects to investigate in order to optimise alarm design.
Affiliation(s)
- Kendall J Burdick
- Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA
- Sean Yang
- Blair School of Music, Vanderbilt University, Nashville, TN, USA
- Joseph J Schlesinger
- Department of Anesthesiology, Division of Critical Care Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
3. Protpagorn N, Lalitharatne TD, Costi L, Iida F. Vocal pain expression augmentation for a robopatient. Front Robot AI 2023; 10:1122914. PMID: 37771605; PMCID: PMC10524268; DOI: 10.3389/frobt.2023.1122914.
Abstract
Abdominal palpation is one of the basic but important physical examination methods used by physicians. Visual, auditory, and haptic feedback from the patient are known to be the main sources of information used in diagnosis. However, learning to interpret this feedback and make an accurate diagnosis requires several years of training. Many abdominal palpation training simulators have been proposed to date, but few attempts have been reported at integrating vocal pain expressions into physical abdominal palpation simulators. Here, we present a vocal pain expression augmentation for a robopatient. The proposed robopatient is capable of providing real-time facial and vocal pain expressions based on the exerted palpation force and position on the abdominal phantom of the robopatient. A pilot study is conducted to test the proposed system, and we show the potential of integrating vocal pain expressions into the robopatient. The platform has also been tested by two clinical experts with prior experience in abdominal palpation. Their evaluations of functionality and suggestions for improvements are presented. We highlight the advantages of the proposed robopatient with real-time vocal and facial pain expressions as a controllable simulator platform for abdominal palpation training studies. Finally, we discuss the limitations of the proposed approach and suggest several future directions for improvement.
Affiliation(s)
- Namnueng Protpagorn
- Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Thilina Dulantha Lalitharatne
- Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Leone Costi
- Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Fumiya Iida
- Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
4. Thévenet J, Papet L, Coureaud G, Boyer N, Levréro F, Grimault N, Mathevon N. Crocodile perception of distress in hominid baby cries. Proc Biol Sci 2023; 290:20230201. PMID: 37554035; PMCID: PMC10410202; DOI: 10.1098/rspb.2023.0201.
Abstract
It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species.
Affiliation(s)
- Julie Thévenet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Léo Papet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Gérard Coureaud
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Boyer
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Grimault
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Institut universitaire de France, Paris, Île-de-France, France
5. Obasih CO, Luthra S, Dick F, Holt LL. Auditory category learning is robust across training regimes. Cognition 2023; 237:105467. PMID: 37148640; DOI: 10.1016/j.cognition.2023.105467.
Abstract
Multiple lines of research have developed training approaches that foster category learning, with important translational implications for education. Increasing exemplar variability, blocking or interleaving by category-relevant dimension, and providing explicit instructions about diagnostic dimensions each have been shown to facilitate category learning and/or generalization. However, laboratory research often must distill the character of natural input regularities that define real-world categories. As a result, much of what we know about category learning has come from studies with simplifying assumptions. We challenge the implicit expectation that these studies reflect the process of category learning of real-world input by creating an auditory category learning paradigm that intentionally violates some common simplifying assumptions of category learning tasks. Across five experiments and nearly 300 adult participants, we used training regimes previously shown to facilitate category learning, but here drew from a more complex and multidimensional category space with tens of thousands of unique exemplars. Learning was equivalently robust across training regimes that changed exemplar variability, altered the blocking of category exemplars, or provided explicit instructions of the category-diagnostic dimension. Each drove essentially equivalent accuracy measures of learning generalization following 40 min of training. These findings suggest that auditory category learning across complex input is not as susceptible to training regime manipulation as previously thought.
Affiliation(s)
- Chisom O Obasih
- Department of Psychology, Carnegie Mellon University, United States of America; Neuroscience Institute, Carnegie Mellon University, United States of America; Center for the Neural Basis of Cognition, Carnegie Mellon University, United States of America
- Sahil Luthra
- Department of Psychology, Carnegie Mellon University, United States of America; Neuroscience Institute, Carnegie Mellon University, United States of America; Center for the Neural Basis of Cognition, Carnegie Mellon University, United States of America
- Frederic Dick
- Experimental Psychology, University College London, United Kingdom; Birkbeck/UCL Centre for NeuroImaging, United Kingdom
- Lori L Holt
- Department of Psychology, Carnegie Mellon University, United States of America; Neuroscience Institute, Carnegie Mellon University, United States of America; Center for the Neural Basis of Cognition, Carnegie Mellon University, United States of America
6. Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. PMID: 36372030; DOI: 10.1016/j.plrev.2022.10.004.
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical or neurobiological frameworks without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). By illustrating the hypotheses-related findings, we highlight their major conceptual, methodological, and terminological shortcomings. Trying to provide a unitary framework for C/D understanding, we put together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and, therefore, elicits defensive behavioral reactions and neural responses that indicate aversion. We therefore stress the primacy of vocality and roughness as key factors in the explanation of the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms that are triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research in C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
7.
Abstract
Roughness is a perceptual attribute typically associated with certain stimuli that are presented in one of the spatial senses. In auditory research, the term is typically used to describe the harsh effects that are induced by particular sound qualities (i.e., dissonance) and human/animal vocalizations (e.g., screams, distress cries). In the tactile domain, roughness is a crucial factor determining the perceptual features of a surface. The same feature can also be ascertained visually, by means of the extraction of pattern features that determine the haptic quality of surfaces, such as grain size and density. By contrast, the term roughness has rarely been applied to the description of those stimuli perceived via the chemical senses. In this review, we take a critical look at the putative meaning(s) of the term roughness, when used in both unisensory and multisensory contexts, in an attempt to answer two key questions: (1) Is the use of the term 'roughness' the same in each modality when considered individually? and (2) Do crossmodal correspondences involving roughness match distinct perceptual features or (at least on certain occasions) do they merely pick up on an amodal property? We start by examining the use of the term in the auditory domain. Next, we summarize the ways in which the term roughness has been used in the literature on tactile and visual perception, and in the domain of olfaction and gustation. Then, we move on to the crossmodal context, reviewing the literature on the perception of roughness in the audiovisual, audiotactile, and auditory-gustatory/olfactory domains. Finally, we highlight some limitations of the reviewed literature and outline a number of key directions for future empirical research in roughness perception.
8. Adults learn to identify pain in babies' cries. Curr Biol 2022; 32:R824-R825. PMID: 35944479; DOI: 10.1016/j.cub.2022.06.076.
Abstract
Because the expression of pain in babies' cries is based on universal acoustic features, it is assumed that adult listeners should be able to detect when a crying baby is experiencing pain [1-3]. We report that detecting that a baby's cry expresses pain actually requires learning through experience. Our psychoacoustic experiments reveal that adults with no experience of caring for babies are unable to identify whether a baby's cry is a pain cry induced by vaccination or a mild discomfort cry recorded during a bath, even when they are familiar with the discomfort cries from this particular baby. In contrast, people with prior experience of babies - parents or professional caregivers - identify a familiar baby's pain cries without having heard these cries before. Parents of very young children are even able to identify the pain cries of a baby who is completely unfamiliar to them. Exposure through caregiving and/or parenting thus shapes the auditory and cognitive abilities involved in decoding the information conveyed by the baby's communication signals.
9. Massenet M, Anikin A, Pisanski K, Reynaud K, Mathevon N, Reby D. Nonlinear vocal phenomena affect human perceptions of distress, size and dominance in puppy whines. Proc Biol Sci 2022; 289:20220429. PMID: 35473375; PMCID: PMC9043735; DOI: 10.1098/rspb.2022.0429.
Abstract
While nonlinear phenomena (NLP) are widely reported in animal vocalizations, often causing perceptual harshness and roughness, their communicative function remains debated. Several hypotheses have been put forward: attention-grabbing, communication of distress, exaggeration of body size and dominance. Here, we use state-of-the-art sound synthesis to investigate how NLP affect the perception of puppy whines by human listeners. Listeners assessed the distress, size or dominance conveyed by synthetic puppy whines with manipulated NLP, including frequency jumps and varying proportions of subharmonics, sidebands and deterministic chaos. We found that the presence of chaos increased the puppy's perceived level of distress and that this effect held across a range of representative fundamental frequency (fo) levels. Adding sidebands and subharmonics also increased perceived distress among listeners who have extensive caregiving experience with pre-weaned puppies (e.g. breeders, veterinarians). Finally, we found that whines with added chaos, subharmonics or sidebands were associated with larger and more dominant puppies, although these biases were attenuated in experienced caregivers. Together, our results show that nonlinear phenomena in puppy whines can convey rich information to human listeners and therefore may be crucial for offspring survival during breeding of a domesticated species.
Affiliation(s)
- Mathilde Massenet
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Division of Cognitive Science, University of Lund, 22100 Lund, Sweden
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- CNRS, French National Centre for Scientific Research, Laboratoire de Dynamique du Langage, University of Lyon 2, 69007 Lyon, France
- Karine Reynaud
- École Nationale Vétérinaire d'Alfort, EnvA, 94700 Maisons-Alfort, France
- Physiologie de la Reproduction et des Comportements, CNRS, IFCE, INRAE, University of Tours, PRC, Nouzilly, France
- Nicolas Mathevon
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
10. Pisanski K, Bryant GA, Cornec C, Anikin A, Reby D. Form follows function in human nonverbal vocalisations. Ethol Ecol Evol 2022. DOI: 10.1080/03949370.2022.2026482.
Affiliation(s)
- Katarzyna Pisanski
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- CNRS French National Centre for Scientific Research, DDL Dynamics of Language Lab, University of Lyon 2, Lyon 69007, France
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, California, USA
- Clément Cornec
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Andrey Anikin
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Division of Cognitive Science, Lund University, Lund 22100, Sweden
- David Reby
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
11. Lahdelma I, Eerola T, Armitage J. Is Harmonicity a Misnomer for Cultural Familiarity in Consonance Preferences? Front Psychol 2022; 13:802385. PMID: 35153957; PMCID: PMC8833847; DOI: 10.3389/fpsyg.2022.802385.
12. Effect of pitch range on dogs' response to conspecific vs. heterospecific distress cries. Sci Rep 2021; 11:19723. PMID: 34611191; PMCID: PMC8492669; DOI: 10.1038/s41598-021-98967-w.
Abstract
Distress cries are emitted by many mammal species to elicit caregiving attention. Across taxa, these calls tend to share similar acoustic structures, but not necessarily frequency range, raising the question of their interspecific communicative potential. As domestic dogs are highly responsive to human emotional cues and experience stress when hearing human cries, we explore whether their responses to distress cries from human infants and puppies depend upon sharing conspecific frequency range or species-specific call characteristics. We recorded adult dogs' responses to distress cries from puppies and human babies, emitted from a loudspeaker in a basket. The frequency of the cries was presented in both their natural range and also shifted to match the other species. Crucially, regardless of species origin, calls falling into the dog call-frequency range elicited more attention. Thus, domestic dogs' responses depended strongly on the frequency range. Females responded both faster and more strongly than males, potentially reflecting asymmetries in parental care investment. Our results suggest that, despite domestication leading to an increased overall responsiveness to human cues, dogs still respond considerably less to calls in the natural human infant range than puppy range. Dogs appear to use a fast but inaccurate decision-making process to determine their response to distress-like vocalisations.
13. Icht M, Wiznitser Ressis-Tal H, Lotan M. Can the Vocal Expression of Intellectually Disabled Individuals Be Used as a Pain Indicator? Initial Findings Supporting a Possible Novice Assessment Method. Front Psychol 2021; 12:655202. PMID: 34366973; PMCID: PMC8339267; DOI: 10.3389/fpsyg.2021.655202.
Abstract
Pain is difficult to assess in non-verbal populations such as individuals with intellectual and developmental disability (IDD). Due to scarce research in this area, pain assessment for individuals with IDD is still lacking, leading to maltreatment. To improve medical care for individuals with IDD, immediate, reliable, easy-to-use pain detection methods should be developed. The goal of this preliminary study was to examine the sensitivity of acoustic features of vocal expressions in identifying pain in adults with IDD, assessing their feasibility as a pain detection indicator for these individuals. Such unique pain-related vocal characteristics may be used to develop objective pain detection means. Adults with severe-profound IDD (N = 9) were recorded during daily activities associated with pain (diaper changes) or without pain (at rest). Spontaneous vocal expressions were acoustically analyzed to assess several voice characteristics. Analysis of the data revealed that pain-related vocal expressions were characterized by a significantly higher number of pulses and higher shimmer values relative to no-pain vocal expressions. Pain-related productions were also characterized by longer duration, higher jitter and Cepstral Peak Prominence values, lower Harmonic-Noise Ratio, a lower difference between the amplitude of the 1st and 2nd harmonic (corrected for vocal tract influence; H1H2c), and higher mean and standard deviation of voice fundamental frequency relative to no-pain vocal productions, yet these findings were not statistically significant, possibly due to the small and heterogeneous sample. These initial results may prompt further research exploring the possibility of using pain-related vocal output as an objective and easily identifiable indicator of pain in this population.
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Meir Lotan
- Department of Physiotherapy, Ariel University, Ariel, Israel
14. Anikin A, Pisanski K, Massenet M, Reby D. Harsh is large: nonlinear vocal phenomena lower voice pitch and exaggerate body size. Proc Biol Sci 2021; 288:20210872. PMID: 34229494; PMCID: PMC8261225; DOI: 10.1098/rspb.2021.0872.
Abstract
A lion's roar, a dog's bark, an angry yell in a pub brawl: what do these vocalizations have in common? They all sound harsh due to nonlinear vocal phenomena (NLP)—deviations from regular voice production, hypothesized to lower perceived voice pitch and thereby exaggerate the apparent body size of the vocalizer. To test this yet uncorroborated hypothesis, we synthesized human nonverbal vocalizations, such as roars, groans and screams, with and without NLP (amplitude modulation, subharmonics and chaos). We then measured their effects on nearly 700 listeners' perceptions of three psychoacoustic (pitch, timbre, roughness) and three ecological (body size, formidability, aggression) characteristics. In an explicit rating task, all NLP lowered perceived voice pitch, increased voice darkness and roughness, and caused vocalizers to sound larger, more formidable and more aggressive. Key results were replicated in an implicit associations test, suggesting that the ‘harsh is large’ bias will arise in ecologically relevant confrontational contexts that involve a rapid, and largely implicit, evaluation of the opponent's size. In sum, nonlinearities in human vocalizations can flexibly communicate both formidability and intention to attack, suggesting they are not a mere byproduct of loud vocalizing, but rather an informative acoustic signal well suited for intimidating potential opponents.
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Lund University, 22100 Lund, Sweden
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- CNRS, French National Centre for Scientific Research, Laboratoire de Dynamique du Langage, University of Lyon 2, 69007 Lyon, France
- Mathilde Massenet
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
15. Armitage J, Lahdelma I, Eerola T. Automatic responses to musical intervals: contrasts in acoustic roughness predict affective priming in Western listeners. J Acoust Soc Am 2021; 150:551. PMID: 34340511; DOI: 10.1121/10.0005623.
Abstract
The aim of the present study is to determine which acoustic components of harmonic consonance and dissonance influence automatic responses in a simple cognitive task. In a series of affective priming experiments, eight pairs of musical intervals were used to measure the influence of acoustic roughness and harmonicity on response times in a word-classification task conducted online. Interval pairs that contrasted in roughness induced a greater degree of affective priming than pairs that did not contrast in terms of their roughness. Contrasts in harmonicity did not induce affective priming. A follow-up experiment used detuned intervals to create higher levels of roughness contrasts. However, the detuning did not lead to any further increase in the size of the priming effect. More detailed analysis suggests that the presence of priming in intervals is binary: in the negative primes that create congruency effects the intervals' fundamentals and overtones coincide within the same equivalent rectangular bandwidth (i.e., the minor and major seconds). Intervals that fall outside this equivalent rectangular bandwidth do not elicit priming effects, regardless of their dissonance or negative affect. The results are discussed in the context of recent developments in consonance/dissonance research and vocal similarity.
Affiliation(s)
- James Armitage
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Imre Lahdelma
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
16
Taffou M, Suied C, Viaud-Delmon I. Auditory roughness elicits defense reactions. Sci Rep 2021; 11:956. [PMID: 33441758] [PMCID: PMC7806762] [DOI: 10.1038/s41598-020-79767-0]
Abstract
Auditory roughness elicits aversion and higher activation in cerebral areas involved in threat processing, but its link with defensive behavior is unknown. Defensive behaviors are triggered by intrusions into the space immediately surrounding the body, called peripersonal space (PPS). Integrating multisensory information in PPS is crucial to ensure the protection of the body. Here, we assessed the behavioral effects of roughness on auditory-tactile integration, which reflects the monitoring of this multisensory region of space. Healthy human participants had to detect as fast as possible a tactile stimulation delivered on their hand while an irrelevant sound was approaching them from the rear hemifield. The sound was either a simple harmonic sound or a rough sound, processed through binaural rendering so that the virtual sound source was looming towards participants. The rough sound speeded tactile reaction times at a farther distance from the body than the non-rough sound. This indicates that PPS, as estimated here via auditory-tactile integration, is sensitive to auditory roughness. Auditory roughness modifies the behavioral relevance of simple auditory events in relation to the body. Even without emotional or social contextual information, auditory roughness constitutes an innate threat cue that elicits defensive responses.
Affiliation(s)
- Marine Taffou
- Institut de Recherche Biomédicale des Armées, 91220, Brétigny-sur-Orge, France.
- Clara Suied
- Institut de Recherche Biomédicale des Armées, 91220, Brétigny-sur-Orge, France
- Isabelle Viaud-Delmon
- CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Sciences et Technologies de la Musique et du son, STMS, 75004, Paris, France
17
Anikin A, Pisanski K, Reby D. Do nonlinear vocal phenomena signal negative valence or high emotion intensity? R Soc Open Sci 2020; 7:201306. [PMID: 33489278] [PMCID: PMC7813245] [DOI: 10.1098/rsos.201306]
Abstract
Nonlinear vocal phenomena (NLPs) are commonly reported in animal calls and, increasingly, in human vocalizations. These perceptually harsh and chaotic voice features function to attract attention and convey urgency, but they may also signal aversive states. To test whether NLPs enhance the perception of negative affect or only signal high arousal, we added subharmonics, sidebands or deterministic chaos to 48 synthetic human nonverbal vocalizations of ambiguous valence: gasps of fright/surprise, moans of pain/pleasure, roars of frustration/achievement and screams of fear/delight. In playback experiments (N = 900 listeners), we compared their perceived valence and emotion intensity in positive or negative contexts or in the absence of any contextual cues. Primarily, NLPs increased the perceived aversiveness of vocalizations regardless of context. To a smaller extent, they also increased the perceived emotion intensity, particularly when the context was negative or absent. However, NLPs also enhanced the perceived intensity of roars of achievement, indicating that their effects can generalize to positive emotions. In sum, a harsh voice with NLPs strongly tips the balance towards negative emotions when a vocalization is ambiguous, but with sufficiently informative contextual cues, NLPs may be re-evaluated as expressions of intense positive affect, underlining the importance of context in nonverbal communication.
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Lund University, Lund, Sweden
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- Author for correspondence: Andrey Anikin
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
18
Abstract
Numerous species use different forms of communication in order to interact successfully in their respective environments. This article seeks to elucidate limitations of the classical conduit metaphor by investigating communication from the perspectives of biology and artificial neural networks. First, communication is a natural biological phenomenon, fruitfully grounded in an organism's embodied structures and memory system, where specific abilities are tied to procedural, semantic, and episodic long-term memory as well as to working memory. Second, the account explicates differences between non-verbal and verbal communication and shows how artificial neural networks can communicate by means of ontologically non-committal modelling. This approach enables new perspectives of communication to emerge regarding both sender and receiver. It is further shown that communication features gradient properties that are plausibly divided into a reflexive and a reflective form, parallel to knowledge and reflection.
19
Parsons CE, LeBeau RT, Kringelbach ML, Young KS. Pawsitively sad: pet-owners are more sensitive to negative emotion in animal distress vocalizations. R Soc Open Sci 2019; 6:181555. [PMID: 31598218] [PMCID: PMC6731714] [DOI: 10.1098/rsos.181555]
Abstract
Pets have numerous effective methods to communicate with their human hosts. Perhaps most conspicuous of these are distress vocalizations: in cats, the 'miaow' and in dogs, the 'whine' or 'whimper'. We compared a sample of young adults who owned cats and/or dogs ('pet-owners', n = 264) and who did not (n = 297) on their ratings of the valence of animal distress vocalizations, taken from a standardized database of sounds. We also examined these participants' self-reported symptoms of anxiety and depression, and their scores on a measure of interpersonal relationship functioning. Pet-owners rated the animal distress vocalizations as sadder than adults who did not own a pet. Cat-owners specifically gave the most negative ratings of cat miaows compared with other participants, but were no different in their ratings of other sounds. Dog sounds were rated more negatively overall, in fact as negatively as human baby cries. Pet-owning adults (cat only, dog only, both) were not significantly different from adults with no pets on symptoms of depression, anxiety or on self-reported interpersonal relationship functioning. We suggest that pet ownership is associated with greater sensitivity to negative emotion in cat and dog distress vocalizations.
Affiliation(s)
- Christine E. Parsons
- Interacting Minds Center, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Richard T. LeBeau
- Department of Psychology, University of California, Los Angeles, CA, USA
- Katherine S. Young
- Department of Psychology, University of California, Los Angeles, CA, USA
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychology, Psychiatry and Neuroscience, King's College London, London, UK
20
Anikin A. The perceptual effects of manipulating nonlinear phenomena in synthetic nonverbal vocalizations. Bioacoustics 2019. [DOI: 10.1080/09524622.2019.1581839]
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
21
Affiliation(s)
- Jordan Raine
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
- Katarzyna Pisanski
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
- Julia Simner
- MULTISENSE Research Lab, School of Psychology, University of Sussex, Brighton, UK
- David Reby
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK