1.
Jing Y, Mingfang Z, Yafang C. Feasibility Analysis of the Application of Virtual Reality Technology in College English Culture Teaching. Journal of Information & Knowledge Management 2022. [DOI: 10.1142/s0219649222400202]
Abstract
Although the traditional English teaching mode has changed qualitatively, it has not yet broken through its two-dimensional limitation, which restricts students' creative thinking to a certain extent. The application of virtual reality technology in teaching is a qualitative leap in the development of educational informatisation. Based on data collected from questionnaires and in-depth interviews, this paper studies the feasibility of applying virtual reality immersion teaching in primary and secondary school English teaching. On this basis, the paper combines virtual reality technology with immersion teaching, drawing on its diversity, flexibility, interactivity and other characteristics to explore the prospects of virtual reality in English learning, and seeks a virtual reality immersion English teaching mode suitable for primary and secondary school children, so as to make up for the shortcomings of traditional language education and optimise the learning effect. The study provides a feasible reference for the future application of virtual reality technology in English teaching.
Affiliation(s)
- Yan Jing, School of Foreign Languages, Jiaozuo University, Jiaozuo 454000, P. R. China
- Zhou Mingfang, School of Marxism, Jiaozuo University, Jiaozuo 454000, P. R. China
- Chen Yafang, School of Foreign Languages, Jiaozuo University, Jiaozuo 454000, P. R. China
2.
Rabold J, Siebers M, Schmid U. Generating contrastive explanations for inductive logic programming based on a near miss approach. Mach Learn 2021. [DOI: 10.1007/s10994-021-06048-w]
Abstract
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Explanations are often given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses were proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. We introduce GeNME, an explanation generation algorithm for relational concepts learned with Inductive Logic Programming. The algorithm identifies near-miss examples from a given set of instances and ranks them by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME with the well-known family domain of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences for rule-based, example-based, and near-miss explanations in the family and arches domains.
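The near-miss idea described in this abstract can be illustrated in a few lines. The following toy Python sketch is an assumption-laden illustration, not the authors' GeNME implementation: it flattens instances to sets of ground literals, measures closeness as simple literal overlap with the learned rule, and reports the unmet literals as the contrastive explanation. All predicate and constant names are invented.

```python
# Illustrative sketch of near-miss ranking (not the paper's GeNME code):
# instances are sets of ground literals; closeness is literal overlap
# with the learned rule; the unmet literals form the contrastive
# explanation of why the near miss fails concept membership.

def rank_near_misses(rule, negatives):
    """Rank negative instances by closeness to the rule (shared literals),
    pairing each with the rule literals it fails to satisfy."""
    ranked = sorted(negatives, key=lambda neg: len(rule & neg), reverse=True)
    return [(neg, rule - neg) for neg in ranked]

# Toy family domain: grandfather(tom, eve) holds because tom fathered bob
# and bob is a parent of eve.
rule = {"father(tom, bob)", "parent(bob, eve)"}
negatives = [
    {"mother(ann, bob)", "parent(bob, eve)"},  # near miss: one literal differs
    {"likes(ann, eve)"},                       # structurally dissimilar
]

for neg, missing in rank_near_misses(rule, negatives):
    print(sorted(neg), "is not a grandfather case; missing:", sorted(missing))
```

The near miss (a mother rather than a father) ranks first because it satisfies all but one rule literal, which mirrors the paper's intuition that the most informative counterexample is the structurally closest one.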
3.
Ai L, Muggleton SH, Hocquette C, Gromowski M, Schmid U. Beneficial and harmful explanatory machine learning. Mach Learn 2021. [DOI: 10.1007/s10994-020-05941-0]
Abstract
Given the recent successes of Deep Learning in AI, there has been increased interest in the role of, and need for, explanations in machine-learned theories. A distinct notion in this context is Michie's definition of ultra-strong machine learning (USML). USML is demonstrated by a measurable increase in human performance on a task after the human is provided with a symbolic machine-learned theory for performing that task. A recent paper demonstrated the beneficial effect of a machine-learned logic theory for a classification task, yet to our knowledge no existing work has examined the potential harmfulness of the machine's involvement for human comprehension during learning. This paper investigates the explanatory effects of a machine-learned theory in the context of simple two-person games and proposes a framework, grounded in the cognitive science literature, for identifying when machine explanations are harmful. The approach involves a cognitive window consisting of two quantifiable bounds and is supported by empirical evidence collected from human trials. Our quantitative and qualitative results indicate that human learning aided by a symbolic machine-learned theory which satisfies the cognitive window achieves significantly higher performance than human self-learning. Results also demonstrate that human learning aided by a symbolic machine-learned theory that fails to satisfy this window leads to significantly worse performance than unaided human learning.