1. Clustering Algorithm in English Language Learning Pattern Matching under Big Data Framework. Comput Intell Neurosci 2022; 2022:1380046. [PMID: 36110905; PMCID: PMC9470339; DOI: 10.1155/2022/1380046]
Abstract
The Internet era has brought new challenges and opportunities for English learning and teaching. At the same time, basic education is fully implementing quality-oriented education and respecting students' individual differences. When the same teacher teaches the same content to the same class, some students perform well while others perform poorly, owing to both intellectual and nonintellectual factors, and this uneven performance makes teaching difficult. In view of the current state of university English teaching and the trend toward respecting students' individual development, this study examines the basic concept of English language learning pattern matching, its main features, and its practical application in university English teaching. A clustering algorithm based on a big data framework is proposed for English language learning pattern matching; it is fault-tolerant and can quickly acquire and process the large volumes of data generated in English teaching. By analyzing the characteristics of data mining applied to students' English learning behavior, the study explores how to cluster students' English learning data and how to process the resulting clusters. The method is highly adaptable, can be applied to practical English language learning pattern matching, and points to a path for reform and innovation in English teaching.
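The abstract does not specify which clustering algorithm is used; as a minimal, hypothetical illustration of the general idea (grouping learners by behaviour features), here is a small k-means sketch. The feature names and data are assumptions, not from the paper.

```python
# Toy sketch: cluster learners by two hypothetical behaviour features
# using plain k-means (the paper does not disclose its exact algorithm).
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster 2-D (or n-D) tuples into k groups; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical features: (hours studied per week, quiz accuracy)
learners = [(1.0, 0.45), (1.2, 0.50), (6.0, 0.88), (5.5, 0.91), (0.8, 0.40)]
centroids, clusters = kmeans(learners, k=2)
print(sum(len(c) for c in clusters))  # → 5 (every learner assigned)
```

A production pipeline on a big data framework would run an equivalent update step in parallel over partitions, but the per-iteration logic is the same.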
2. Athari P, Dey R, Rvachew S. Vocal imitation between mothers and infants. Infant Behav Dev 2021; 63:101531. [PMID: 33582572; DOI: 10.1016/j.infbeh.2021.101531]
Abstract
The aim of the present mixed cross-sectional and longitudinal study was to observe and describe some aspects of vocal imitation in natural mother-infant interaction. Specifically, maternal imitation of infant utterances was observed in relation to the imitative modeling, mirrored equivalence, and social guided learning models of infant speech development. Nine mother-infant dyads were audio-video recorded. Infants were recruited at different ages between 6 and 11 months and followed for 3 months, providing a quasi-longitudinal series of data from 6 through 14 months of age. It was observed that maternal imitation was more frequent than infant imitation even though vocal imitation was a rare maternal response. Importantly, mothers used a range of contingent and noncontingent vocal responses in interaction with their infants. Mothers responded to three-quarters of their infant's vocalizations, including speech-like and less mature vocalization types. The infants' phonetic repertoire expanded with age. Overall, the findings are most consistent with the social guided learning approach. Infants rarely imitated their mothers, suggesting a creative, self-motivated learning mechanism that requires further investigation.
Affiliation(s)
- Pegah Athari
- School of Communication Sciences and Disorders, McGill University, Canada
- Rajib Dey
- School of Communication Sciences and Disorders, McGill University, Canada
- Susan Rvachew
- School of Communication Sciences and Disorders, McGill University, Canada
3. Ritwika VPS, Pretzer GM, Mendoza S, Shedd C, Kello CT, Gopinathan A, Warlaumont AS. Exploratory dynamics of vocal foraging during infant-caregiver communication. Sci Rep 2020; 10:10469. [PMID: 32591549; PMCID: PMC7319970; DOI: 10.1038/s41598-020-66778-0]
Abstract
We investigated the hypothesis that infants search in an acoustic space for vocalisations that elicit adult utterances and vice versa, inspired by research on animal and human foraging. Infant-worn recorders were used to collect day-long audio recordings, and infant speech-related and adult vocalisation onsets and offsets were automatically identified. We examined vocalisation-to-vocalisation steps, focusing on inter-vocalisation time intervals and distances in an acoustic space defined by mean pitch and mean amplitude, measured from the child's perspective. Infant inter-vocalisation intervals were shorter immediately following a vocal response from an adult. Adult intervals were shorter following an infant response and adult inter-vocalisation pitch differences were smaller following the receipt of a vocal response from the infant. These findings are consistent with the hypothesis that infants and caregivers are foraging vocally for social input. Increasing infant age was associated with changes in inter-vocalisation step sizes for both infants and adults, and we found associations between response likelihood and acoustic characteristics. Future work is needed to determine the impact of different labelling methods and of automatic labelling errors on the results. The study represents a novel application of foraging theory, demonstrating how infant behaviour and infant-caregiver interaction can be characterised as foraging processes.
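The quantities the abstract describes (inter-vocalisation time intervals and step distances in a pitch-amplitude acoustic space) are simple to compute once vocalisations are segmented. The sketch below is not the authors' pipeline; the field names and example values are assumptions for illustration.

```python
# Hedged sketch: compute inter-vocalisation intervals and Euclidean
# step sizes in a (mean pitch, mean amplitude) acoustic space for a
# sequence of vocalisations sorted by onset time.
import math

def steps(vocs):
    """vocs: list of dicts with 'onset' (s), 'pitch' (Hz), 'amp' (dB).
    Returns (time_intervals, acoustic_distances) between successive vocalisations."""
    intervals, dists = [], []
    for prev, cur in zip(vocs, vocs[1:]):
        intervals.append(cur["onset"] - prev["onset"])
        dists.append(math.hypot(cur["pitch"] - prev["pitch"],
                                cur["amp"] - prev["amp"]))
    return intervals, dists

vocs = [
    {"onset": 0.0, "pitch": 300.0, "amp": 60.0},
    {"onset": 2.5, "pitch": 340.0, "amp": 63.0},
    {"onset": 3.1, "pitch": 320.0, "amp": 61.0},
]
iv, dist = steps(vocs)
print([round(x, 3) for x in iv])  # → [2.5, 0.6]
```

Comparing these step sizes conditioned on whether the previous vocalisation received a response is the core of the foraging-style analysis; in practice pitch and amplitude would also be normalised to comparable scales before distances are taken.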
Affiliation(s)
- V P S Ritwika
- University of California, Merced, Department of Physics, Merced, CA, 95343, USA
- Gina M Pretzer
- University of California, Merced, Cognitive and Information Sciences, Merced, CA, 95343, USA
- Sara Mendoza
- University of California, Merced, Cognitive and Information Sciences, Merced, CA, 95343, USA
- Christopher Shedd
- University of California, Merced, Department of Physics, Merced, CA, 95343, USA
- Christopher T Kello
- University of California, Merced, Cognitive and Information Sciences, Merced, CA, 95343, USA
- Ajay Gopinathan
- University of California, Merced, Department of Physics, Merced, CA, 95343, USA
- Anne S Warlaumont
- University of California, Los Angeles, Department of Communication, Los Angeles, CA, 90095, USA
4. Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280; DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom
- Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
5. Warlaumont AS, Richards JA, Gilkerson J, Messinger DS, Oller DK. The Social Feedback Hypothesis and Communicative Development in Autism Spectrum Disorder: A Response to Akhtar, Jaswal, Dinishak, and Stephan (2016). Psychol Sci 2016; 27:1531-1533. [PMID: 27664191; PMCID: PMC5647864; DOI: 10.1177/0956797616668558]
Affiliation(s)
- Anne S Warlaumont
- Cognitive and Information Sciences, University of California, Merced
- D Kimbrough Oller
- School of Communication Sciences and Disorders, University of Memphis
6. Moore RK, Marxer R, Thill S. Vocal Interactivity in-and-between Humans, Animals, and Robots. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00061]
7. Asada M. Modeling Early Vocal Development Through Infant–Caregiver Interaction: A Review. IEEE Trans Cogn Dev Syst 2016. [DOI: 10.1109/tcds.2016.2552493]
8. Warlaumont AS, Finnegan MK. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity. PLoS One 2016; 11:e0145096. [PMID: 26808148; PMCID: PMC4726623; DOI: 10.1371/journal.pone.0145096]
Abstract
At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant's nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model's frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one's own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop in infancy but also of how they may have evolved.
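The paper's model uses a spiking reservoir with dopamine-modulated STDP; reproducing that here would be far beyond a sketch. What can be illustrated compactly is the reward rule itself (reinforce only when a vocalisation is more salient than anything produced before), driving a much simpler stochastic search over a single "muscle activation" parameter. The `salience` function is a made-up stand-in, not the paper's auditory salience measure.

```python
# Toy illustration of the salience-gated reward rule: a change to the
# activation parameter is kept only when the resulting "vocalisation"
# beats the best salience achieved so far. Not the paper's spiking model.
import random

def salience(activation):
    # Hypothetical stand-in: salience peaks at an intermediate activation.
    return 1.0 - (activation - 0.6) ** 2

rng = random.Random(1)
activation = 0.1          # initial base muscle activation
best = float("-inf")      # most salient sound produced so far
for _ in range(200):
    trial = activation + rng.gauss(0.0, 0.05)  # exploratory perturbation
    if salience(trial) > best:                 # reward: new salience record
        best = salience(trial)
        activation = trial                     # reward-gated update
print(round(activation, 2))
```

Under this rule the parameter drifts toward the salience peak, mirroring in miniature how the model's reward signal (and, in the full model, dopamine-gated STDP) increases the rate of salient, syllabic output.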
Affiliation(s)
- Anne S. Warlaumont
- Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States of America
- Megan K. Finnegan
- Speech & Hearing Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States of America
9. Asada M, Endo N. Infant-caregiver interactions affect the early development of vocalization. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:5351-4. [PMID: 26737500; DOI: 10.1109/embc.2015.7319600]
Abstract
Vocal communication is a unique means of bilaterally exchanging messages in real time. The developmental origin of such communication is the vocal interaction between an infant and a caregiver, and one of the big mysteries is how the infant learns to vocalize the mother tongue of the caregiver. Many theories explain an infant's capability to imitate a caregiver on the basis of acoustic matching. However, the acoustic characteristics of infant and caregiver vocalizations are quite different, so acoustic matching alone cannot fully explain the imitation. Instead, the interaction itself may play an important role, but the mechanism is still unclear. In this article, we review studies that address this problem using constructive approaches based on cognitive developmental robotics.
10. de Greeff J, Belpaeme T. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction. PLoS One 2015; 10:e0138061. [PMID: 26422143; PMCID: PMC4589374; DOI: 10.1371/journal.pone.0138061]
Abstract
Social learning is a powerful method for cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology, and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems, and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance rather than teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot's bids while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.
Affiliation(s)
- Joachim de Greeff
- Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom; Interactive Intelligence Group, Delft University of Technology, Delft, the Netherlands
- Tony Belpaeme
- Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom