1. Li H, Huang X. Intelligent Dance Motion Evaluation: An Evaluation Method Based on Keyframe Acquisition According to Musical Beat Features. Sensors (Basel) 2024; 24:6278. doi: 10.3390/s24196278. PMID: 39409318; PMCID: PMC11478525.
Abstract
Motion perception is crucial in competitive sports such as dance, basketball, and diving. However, evaluation in these sports relies heavily on professionals, which poses two main challenges: subjective assessments vary with an evaluator's experience, making timeliness and accuracy hard to guarantee, and multi-expert voting increases labor costs. While video analysis methods have alleviated some of this pressure, challenges remain in extracting key points and keyframes from videos and in constructing a suitable, quantifiable evaluation method that matches the static-dynamic nature of movements. Therefore, this study proposes an intelligent evaluation method aimed at improving the accuracy and processing speed of complex video analysis tasks. First, a keyframe extraction method is constructed on top of musical beat detection; coupled with prior knowledge, beat detection is optimized through a perceptually weighted window to extract keyframes that correlate strongly with changes in the dance movement. Second, OpenPose is employed to detect human joint points in the keyframes, quantifying movements into a series of numerically expressed nodes and their relationships (i.e., pose descriptions). Combined with the positions of the keyframes in the time sequence, these form a standard pose-description sequence that serves as the foundational data for subsequent quantitative evaluation. Lastly, an Action Sequence Evaluation method (ASCS) is established over all action features within a single frame to assess the performance of individual actions. In addition, drawing on the Rouge-L metric from natural language processing, a Similarity Measure Approach based on Contextual Relationships (SMACR) is constructed to evaluate the coherence of actions.
By integrating ASCS and SMACR, dancers are evaluated along both the static and the dynamic dimension. For validation, the research team selected 12 representative samples from the popular dance game Just Dance, classified by the complexity of the dance moves and the level of physical exertion. The experimental results demonstrate the strong performance of the automated evaluation method: it delivers precise assessments of dance movements at the level of individual keyframes and, through SMACR, substantially improves the evaluation of action coherence and completeness. Across all 12 test samples, the method selects 2 to 5 keyframes per second from the videos, reducing the computational load to 4.1-10.3% of that of traditional full-frame matching, while overall evaluation accuracy decreases by only 3%, demonstrating an effective trade-off between efficiency and precision. Through precise musical beat alignment, efficient keyframe extraction, and intelligent dance motion analysis, the study addresses the subjectivity and inefficiency of traditional manual evaluation, making assessments more rigorous and accurate. It provides practical tool support for fields such as dance education and competition judging, with broad application prospects.
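The SMACR described in this abstract borrows its scoring from Rouge-L, i.e., a longest-common-subsequence (LCS) F-measure over two sequences. The sketch below illustrates only that scoring idea; the paper's actual pose-matching predicate and weighting are not published, so the equality-based `match` default and the `beta` parameter here are illustrative assumptions.

```python
from typing import Callable, Sequence

def lcs_length(a: Sequence, b: Sequence,
               match: Callable[[object, object], bool]) -> int:
    """Length of the longest common subsequence of a and b under a custom
    match predicate (for poses, this could be a joint-distance threshold)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if match(a[i - 1], b[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f(candidate: Sequence, reference: Sequence,
              match: Callable = lambda x, y: x == y,
              beta: float = 1.0) -> float:
    """Rouge-L-style F-measure: LCS-based precision/recall, so credit goes
    only to matching elements that also appear in the reference order."""
    lcs = lcs_length(candidate, reference, match)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return (1 + beta**2) * precision * recall / (recall + beta**2 * precision)

# Toy example with characters standing in for pose descriptions:
score = rouge_l_f("abcdef", "abdf")  # ≈ 0.8 ("abdf" is an ordered subsequence)
```

Scoring ordered subsequences rather than frame-by-frame matches is what lets such a measure reward coherent action sequences while tolerating extra or dropped keyframes.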
Affiliation(s)
- Hengzi Li: School of Music, Wenzhou University, Wenzhou 325035, China
- Xingli Huang: College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
2. Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. doi: 10.1016/j.neubiorev.2020.09.006.
3. Zax A, Williams K, Patalano AL, Slusser E, Cordes S, Barth H. What Do Biased Estimates Tell Us about Cognitive Processing? Spatial Judgments as Proportion Estimation. J Cogn Dev 2019. doi: 10.1080/15248372.2019.1653297.
Affiliation(s)
- Emily Slusser: Wesleyan University, USA; San Jose State University, USA
4. Maaß SC, Schlichting N, van Rijn H. Eliciting contextual temporal calibration: The effect of bottom-up and top-down information in reproduction tasks. Acta Psychol (Amst) 2019; 199:102898. doi: 10.1016/j.actpsy.2019.102898. PMID: 31369983.
Abstract
Bayesian integration assumes that a current observation is integrated with previous observations. An example in the temporal domain is the central tendency effect: when a range of durations is presented, a regression towards the mean is observed. Furthermore, a context effect emerges if a partially overlapping lower and higher range of durations are presented in a blocked design, with the overlapping durations pulled towards the mean duration of the block. We determine under which conditions this context effect is observed, and whether explicit cues strengthen the effect. Each block contained either two or three durations, with one duration present in both blocks. At the start of each block, we provided either no information about the nature of that block, written category labels ("short"/"long" or "A"/"B"), or a pitch cue (low vs. high) reflecting the temporal context. We demonstrate that (1) the context effect emerges as long as sufficiently distinct durations are presented; (2) the effect is not modulated by explicit instructions or other cues; and (3) a single additional duration is sufficient to produce a context effect. Taken together, these results identify the most efficient operationalization for evoking the context effect, allowing for highly economical experimental designs, and highlight the automaticity with which priors are constructed.
5. Superior Visual Timing Sensitivity in Auditory But Not Visual World Class Drum Corps Experts. eNeuro 2019; 5:eN-NWR-0241-18. doi: 10.1523/eneuro.0241-18.2018. PMID: 30627642; PMCID: PMC6325546.
Abstract
World class drum corps require cooperation among performance artists to render precisely synchronized and asynchronized events. For example, drum corps visual aesthetics often feature salient radial and rotational motion displays from the color guard. Accordingly, extensive color guard training might predict superior visual timing sensitivity to asynchronies in radial and rotational motion displays. Less intuitively, one might instead predict superior visual timing sensitivity among world class drum corps musicians, who regularly subdivide musical tempos into brief time units. This prediction arises from the possibility that auditory training transfers cross-modally. Here, we investigated whether precise visual temporal order judgments (TOJs) more strongly align with color guard’s visual training or musicians’ auditory training. To mimic color guard visual displays, stimuli comprised bilateral plaid patterns that radiated or rotated before changing direction asynchronously. Human participants indicated whether the direction changed first on the left or right, called a TOJ. Twenty-five percussionists, 67 brass players, and 29 color guard members from a world class drum corps collectively completed 67,760 visual TOJ trials. Percussionists exhibited significantly lower TOJ thresholds than did brass players, who exhibited significantly lower TOJ thresholds than did the color guard. Group median thresholds spanned an order of magnitude, ranging between 29 ms (percussionists judging rotational asynchronies) and 290 ms (color guard judging radial asynchronies). The results suggest that visual timing can improve more by training cross-modally than intramodally, even when intramodal training and testing stimuli closely match. More broadly, pre-existing training histories can provide a unique window into the timing sensitivity of the nervous system.
6. Thurley K, Schild U. Time and distance estimation in children using an egocentric navigation task. Sci Rep 2018; 8:18001. doi: 10.1038/s41598-018-36234-1. PMID: 30573744; PMCID: PMC6302095.
Abstract
Navigation crucially depends on the capability to estimate time elapsed and distance covered during movement. From adults it is known that magnitude estimation is subject to characteristic biases. Most intriguing is the regression effect (central tendency), whose strength depends on the stimulus distribution (i.e., stimulus range), a second characteristic of magnitude estimation known as the range effect. We examined regression and range effects for time and distance estimation in eleven-year-olds and young adults, using an egocentric virtual navigation task. Regression effects were stronger for distance than for time and depended on stimulus range. These effects were more pronounced in children than in adults, owing to more heterogeneous performance among the children. A few children showed veridical estimations similar to adults; most children, however, performed less accurately, displaying stronger regression effects. Our findings suggest that children use magnitude-processing strategies similar to adults, but these strategies are not yet fully developed in all eleven-year-olds and are further refined throughout adolescence.
Affiliation(s)
- Kay Thurley: Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Bernstein Center for Computational Neuroscience Munich, Munich, Germany
- Ulrike Schild: Developmental Psychology, University of Tübingen, Tübingen, Germany
7. Fujioka T, Ross B. Beta-band oscillations during passive listening to metronome sounds reflect improved timing representation after short-term musical training in healthy older adults. Eur J Neurosci 2017; 46:2339-2354. doi: 10.1111/ejn.13693.
Affiliation(s)
- Takako Fujioka: Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, 660 Lomita Ct., Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bernhard Ross: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
8. Karaminis T, Cicchini GM, Neil L, Cappagli G, Aagten-Murphy D, Burr D, Pellicano E. Central tendency effects in time interval reproduction in autism. Sci Rep 2016; 6:28570. doi: 10.1038/srep28570. PMID: 27349722; PMCID: PMC4923867.
Abstract
Central tendency, the tendency of judgements of quantities (lengths, durations, etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, requiring more integration to reduce noise. In this study we measure central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show weaker central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much weaker in autistic children than predicted by theoretical modelling, given their poor temporal resolution.
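The Bayesian account this abstract refers to treats each reproduction as a reliability-weighted average of the noisy sensory measurement and the prior (the mean of the presented range): the noisier the measurement, the harder the estimate is pulled toward the mean. A minimal sketch of that weighting, with illustrative variance values rather than the authors' fitted parameters:

```python
def bayes_estimate(sensory: float, prior_mean: float,
                   sensory_var: float, prior_var: float) -> float:
    """Posterior mean for a Gaussian likelihood and Gaussian prior: a weighted
    average whose sensory weight shrinks as sensory noise grows."""
    w = prior_var / (prior_var + sensory_var)  # weight on the sensory evidence
    return w * sensory + (1.0 - w) * prior_mean

# A 1.2 s interval judged against a prior centred on 1.0 s:
precise = bayes_estimate(1.2, 1.0, sensory_var=0.01, prior_var=0.04)  # ≈ 1.16
noisy = bayes_estimate(1.2, 1.0, sensory_var=0.16, prior_var=0.04)    # ≈ 1.04
```

This is why the hypothesis above is testable: from a child's measured temporal resolution (`sensory_var`) the model predicts how strong the regression to the mean should be, and autistic children's observed central tendency can be compared against that prediction.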
Affiliation(s)
- Themelis Karaminis: Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, UCL Institute of Education, University College London, London, WC1H 0NU, UK; School of Psychology, Plymouth University, Plymouth, PL4 8AA, UK
- Louise Neil: Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, UCL Institute of Education, University College London, London, WC1H 0NU, UK
- Giulia Cappagli: Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, UCL Institute of Education, University College London, London, WC1H 0NU, UK; Istituto Italiano di Tecnologia, Genova, 16163, Italy
- David Aagten-Murphy: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, 80802, Germany
- David Burr: Institute of Neuroscience, National Research Council (CNR), Pisa, 56124, Italy; School of Psychology, University of Western Australia, Crawley, Perth, Western Australia, 6009, Australia
- Elizabeth Pellicano: Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, UCL Institute of Education, University College London, London, WC1H 0NU, UK; School of Psychology, University of Western Australia, Crawley, Perth, Western Australia, 6009, Australia
9. Unimodal and cross-modal prediction is enhanced in musicians. Sci Rep 2016; 6:25225. doi: 10.1038/srep25225. PMID: 27142627; PMCID: PMC4855230.
Abstract
Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating to a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed for the testing of prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as advantage for compatible trials (disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.
10.
Abstract
The proposal that the processing of visual time might rely on a network of distributed mechanisms that are vision-specific and timescale-specific stands in contrast to the classical view of time perception as the product of a single supramodal clock. Evidence showing that some of these mechanisms have a sensory component that can be locally adapted is at odds with another traditional assumption, namely that time is completely divorced from space. Recent evidence suggests that multiple timing mechanisms exist across and within sensory modalities and that they operate in various neural regions. The current review summarizes this evidence and frames it into the broader scope of models for time perception in the visual domain.
Affiliation(s)
- Aurelio Bruno: Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Guido Marco Cicchini: Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Via Moruzzi 1, 56124 Pisa, Italy
11. Maes PJ. Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review. Front Psychol 2016; 7:308. doi: 10.3389/fpsyg.2016.00308. PMID: 26973587; PMCID: PMC4778011.
Abstract
In a previous article, we reviewed empirical evidence demonstrating action-based effects on music perception to substantiate the musical embodiment thesis (Maes et al., 2014). Evidence was largely based on studies demonstrating that music perception automatically engages motor processes, or that body states/movements influence music perception. Here, we argue that more rigorous evidence is needed before any decisive conclusion in favor of a "radical" musical embodiment thesis can be posited. In the current article, we provide a focused review of recent research to collect further evidence for the "radical" embodiment thesis that music perception is a dynamic process firmly rooted in the natural disposition of sounds and the human auditory and motor systems. We emphasize, however, that on top of these natural dispositions, long-term processes operate, rooted in repeated sensorimotor experiences and leading to learning, prediction, and error minimization. This approach sheds new light on the development of musical repertoires, and may refine our understanding of action-based effects on music perception as discussed in our previous article (Maes et al., 2014). Additionally, we discuss two of our recent empirical studies demonstrating that music performance relies on similar principles of sensorimotor dynamics and predictive processing.
Affiliation(s)
- Pieter-Jan Maes: Department of Art, Music, and Theatre Sciences, IPEM, Ghent University, Belgium
12. Sensorimotor integration is enhanced in dancers and musicians. Exp Brain Res 2015; 234:893-903. doi: 10.1007/s00221-015-4524-1. PMID: 26670906.
Abstract
Studying individuals with specialized training, such as dancers and musicians, provides an opportunity to investigate how intensive practice of sensorimotor skills affects behavioural performance across various domains. While several studies have found that musicians have improved motor, perceptual and sensorimotor integration skills compared to untrained controls, fewer studies have examined the effect of dance training on such skills. Moreover, no study has specifically compared the effects of dance versus music training on perceptual or sensorimotor performance. To this aim, in the present study, expert dancers, expert musicians and untrained controls were tested on a range of perceptual and sensorimotor tasks designed to discriminate performance profiles across groups. Dancers performed better than musicians and controls on a dance imitation task (involving whole-body movement), but musicians performed better than dancers and controls on a musical melody discrimination task as well as on a rhythm synchronization task (involving finger tapping). These results indicate that long-term intensive dance and music training are associated with distinct enhancements in sensorimotor skills. This novel work advances knowledge of the effects of long-term dance versus music training and has potential applications in therapies for motor disorders.
13. Examining the relationship between skilled music training and attention. Conscious Cogn 2015; 36:169-79. doi: 10.1016/j.concog.2015.06.014.
14. Barth H, Lesser E, Taggart J, Slusser E. Spatial estimation: a non-Bayesian alternative. Dev Sci 2014; 18:853-62. doi: 10.1111/desc.12264. PMID: 25440776.
Abstract
A large collection of estimation phenomena (e.g. biases arising when adults or children estimate remembered locations of objects in bounded spaces; Huttenlocher, Newcombe & Sandberg, 1994) are commonly explained in terms of complex Bayesian models. We provide evidence that some of these phenomena may be modeled instead by a simpler non-Bayesian alternative. Undergraduates and 9- to 10-year-olds completed a speeded linear position estimation task. Bias in both groups' estimates could be explained in terms of a simple psychophysical model of proportion estimation. Moreover, some individual data were not compatible with the requirements of the more complex Bayesian model.
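The abstract does not spell out its "simple psychophysical model of proportion estimation", but a common non-Bayesian form in this literature is the one-cycle power model (cf. Spence, 1990; Hollands & Dyre, 2000), in which bias arises from power-transformed magnitudes rather than from a prior. A hypothetical sketch, assuming that form:

```python
def estimated_proportion(p: float, beta: float) -> float:
    """One-cycle power model: the estimated proportion is a ratio of
    power-transformed parts, not a Bayesian posterior."""
    return p**beta / (p**beta + (1.0 - p)**beta)

# With beta < 1 the model overestimates small proportions and underestimates
# large ones -- a central-tendency-like bias with no prior involved:
assert estimated_proportion(0.2, 0.6) > 0.2
assert estimated_proportion(0.8, 0.6) < 0.8
assert estimated_proportion(0.5, 0.6) == 0.5  # unbiased at the midpoint
```

Fitting `beta` to individual data is one way such a one-parameter model can be compared against a more complex Bayesian alternative at the participant level.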
Affiliation(s)
- Hilary Barth: Department of Psychology, Wesleyan University, USA
- Ellen Lesser: Department of Psychology, Wesleyan University, USA
- Emily Slusser: Department of Psychology, Wesleyan University, USA; Department of Child and Adolescent Development, San Jose State University, USA
15. Keebler JR, Wiltshire TJ, Smith DC, Fiore SM, Bedwell JS. Shifting the paradigm of music instruction: implications of embodiment stemming from an augmented reality guitar learning system. Front Psychol 2014; 5:471. doi: 10.3389/fpsyg.2014.00471. PMID: 24999334; PMCID: PMC4034341.
Abstract
Musical instruction often includes materials that can act as a barrier to learning. New technologies using augmented reality may aid in reducing the initial difficulties involved in learning music by lowering these barriers characteristic of traditional instructional materials. Therefore, this set of studies examined a novel augmented reality guitar learning system (i.e., the Fretlight® guitar) in regards to current theories of embodied music cognition. Specifically, we examined the effects of using this system in comparison to a standard instructional material (i.e., diagrams). First, we review major theories related to musical embodiment and specify a niche within this research space we call embodied music technology for learning. We then describe two parallel experiments conducted to address the learning effects of this system. Experiment 1 examined short-term learning effects within one experimental session, while Experiment 2 examined both short-term and long-term effects across two sessions spaced at a 2-week interval. Analyses demonstrated that, for many of our dependent variables, all participants increased in performance across time. Further, the Fretlight® condition consistently led to significantly better outcomes via interactive effects, including significantly better long-term retention of the learned information across the 2-week interval. These results are discussed in the context of embodied cognition theory as it relates to music. Potential limitations and avenues for future research are described.
Affiliation(s)
- Joseph R Keebler: Training Research and Applied Cognitive Engineering Laboratory, Department of Psychology, Wichita State University, Wichita, KS, USA
- Travis J Wiltshire: Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA
- Dustin C Smith: Training Research and Applied Cognitive Engineering Laboratory, Department of Psychology, Wichita State University, Wichita, KS, USA
- Stephen M Fiore: Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA; Department of Philosophy, University of Central Florida, Orlando, FL, USA
- Jeffrey S Bedwell: Psychophysiology of Mental Illness Laboratory, Department of Psychology, University of Central Florida, Orlando, FL, USA
16. Exploring the reciprocal modulation of time and space in dancers and non-dancers. Exp Brain Res 2014; 232:3191-9. doi: 10.1007/s00221-014-4005-y.
17. Aagten-Murphy D, Iversen J, Williams C, Meck W. Novel Inversions in Auditory Sequences Provide Evidence for Spontaneous Subtraction of Time and Number. Timing Time Percept 2014. doi: 10.1163/22134468-00002028.
Abstract
Animals, including fish, birds, rodents, non-human primates, and pre-verbal infants are able to discriminate the duration and number of events without the use of language. In this paper, we present the results of six experiments exploring the capability of adult rats to count 2–6 sequentially presented white-noise stimuli. The investigation focuses on the animal’s ability to exhibit spontaneous subtraction following the presentation of novel stimulus inversions in the auditory signals being counted. Results suggest that a subtraction operation between two opposite sensory representations may be a general processing strategy used for the comparison of stimulus magnitudes. These findings are discussed within the context of a mode-control model of timing and counting that relies on an analog temporal-integration process for the addition and subtraction of sequential events.
Affiliation(s)
- David Aagten-Murphy: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- John R. Iversen: Swartz Center for Computational Neuroscience and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, USA
- Warren H. Meck: Department of Psychology and Neuroscience, Duke University, Durham, NC, USA