1. Kumar AP, Nayak A, K MS, Chaitanya, Ghosh K. A Novel Framework for the Generation of Multiple Choice Question Stems Using Semantic and Machine-Learning Techniques. International Journal of Artificial Intelligence in Education 2023. DOI: 10.1007/s40593-023-00333-6
Abstract:
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone. Each MCQ comprises a question called the "stem", a correct option called the "key", and alternative options called "distractors", whose construction demands expertise from MCQ developers. In addition, different kinds of MCQs, such as Wh-type, Fill-in-the-blank, Odd-one-out, and many more, are needed to assess understanding at different cognitive levels. Automatic Question Generation (AQG) for developing heterogeneous MCQ stems has generally followed two approaches: semantics-based and machine-learning-based. Questions generated via AQG techniques can be used only if they are grammatically correct. Semantics-based techniques can generate a range of grammatically correct MCQ types but require the semantics to be specified. In contrast, most machine-learning approaches have been able to generate only grammatically correct Fill-in-the-blank/Cloze questions by reusing the original text. This paper describes a technique that combines semantics-based and machine-learning-based techniques to generate grammatically correct MCQ stems of various types for a technical domain. Expert evaluation of the resultant MCQ stems showed them to be promising in terms of usefulness and grammatical correctness.
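To make the stem/key/distractor terminology concrete, here is a minimal Python sketch of the MCQ structure and of the Fill-in-the-blank (Cloze) generation that the abstract attributes to machine-learning approaches. The class, function, and example sentence are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class MCQ:
    stem: str                 # the question text shown to the examinee
    key: str                  # the single correct option
    distractors: list[str] = field(default_factory=list)  # plausible wrong options

def cloze_stem(sentence: str, answer: str) -> str:
    """Build a Fill-in-the-blank (Cloze) stem by blanking the answer span
    in a source sentence, i.e. by reusing the original text as the
    machine-learning approaches described in the abstract do."""
    return sentence.replace(answer, "_____", 1)

# Hypothetical usage with an invented source sentence:
sentence = "The transport layer provides end-to-end communication."
q = MCQ(stem=cloze_stem(sentence, "transport"),
        key="transport",
        distractors=["network", "session", "application"])
print(q.stem)  # The _____ layer provides end-to-end communication.
```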
2. Kusuma SF, Siahaan DO, Fatichah C. Automatic question generation with various difficulty levels based on knowledge ontology using a query template. Knowledge-Based Systems 2022. DOI: 10.1016/j.knosys.2022.108906
3. Baldwin P, Clauser BE. Historical Perspectives on Score Comparability Issues Raised by Innovations in Testing. Journal of Educational Measurement 2022. DOI: 10.1111/jedm.12318
4. An Ontology-Driven Learning Assessment Using the Script Concordance Test. Applied Sciences (Basel) 2022. DOI: 10.3390/app12031472
Abstract:
Assessing the level of domain-specific reasoning acquired by students is one of the major challenges in education, particularly in medical education. Given the importance of clinical reasoning in preclinical and clinical practice, students' learning achievements must be evaluated accordingly. Traditional ways of assessing clinical reasoning include long-case exams, oral exams, and objective structured clinical examinations. However, these traditional techniques are no longer sufficient, owing to their limited scalability and the difficulty of adopting them in online education. In recent decades, the script concordance test (SCT) has emerged as a promising assessment tool, particularly in medical education. The question is whether the usability of the SCT can be raised enough to meet current educational requirements by exploiting the opportunities that new technologies provide, particularly semantic knowledge graphs (SKGs) and ontologies. In this paper, an ontology-driven learning assessment is proposed using a novel automated SCT generation platform. The SCTonto ontology is adopted for knowledge representation in SCT question generation, with a focus on using electronic health record data for medical education. Direct and indirect strategies for generating Likert-type SCT scores are also described in detail. The proposed automatic question generation was evaluated against traditional, manually created SCTs, and the results showed that the time required for test creation was significantly reduced, confirming substantial scalability improvements over traditional approaches.
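The abstract does not spell out its direct and indirect scoring strategies, but the widely used aggregate scoring rule for SCT items is easy to sketch: an examinee's Likert choice earns credit proportional to the number of expert panelists who made the same choice, with the modal choice worth full credit. The function and panel data below are a hedged illustration of that standard rule, not the paper's implementation.

```python
from collections import Counter

def sct_item_score(examinee_choice: int, panel_choices: list[int]) -> float:
    """Aggregate SCT scoring: credit = (number of panelists who picked
    the examinee's answer) / (number who picked the modal answer)."""
    counts = Counter(panel_choices)
    modal_count = max(counts.values())
    return counts.get(examinee_choice, 0) / modal_count

# Hypothetical 5-point Likert item (-2..+2) judged by a 10-expert panel:
panel = [1, 1, 1, 1, 1, 0, 0, 2, 2, -1]
print(sct_item_score(1, panel))   # 1.0 -> modal answer, full credit
print(sct_item_score(0, panel))   # 0.4 -> partial credit
print(sct_item_score(-2, panel))  # 0.0 -> no panelist chose it
```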
5. Pelánek R, Effenberger T, Čechák J. Complexity and Difficulty of Items in Learning Systems. International Journal of Artificial Intelligence in Education 2021. DOI: 10.1007/s40593-021-00252-4
6. Smith S, Geist M. TERM model: The incorporation of mentorship as a test-item improvement strategy. Teaching and Learning in Nursing 2021. DOI: 10.1016/j.teln.2020.05.008
7. Kurdi G, Leo J, Parsia B, Sattler U, Al-Emari S. A Systematic Review of Automatic Question Generation for Educational Purposes. International Journal of Artificial Intelligence in Education 2019. DOI: 10.1007/s40593-019-00186-y
Abstract:
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows the adoption of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing) that require a large pool of questions. To reduce the expense of manual question construction and to satisfy the need for a continuous supply of new questions, automatic question generation (AQG) techniques were introduced. This review extends a previous review of the AQG literature published up to late 2014. It covers 93 papers, published between 2015 and early 2019, that tackle the automatic generation of questions for educational purposes. The aims of this review are to: provide an overview of the AQG community and its activities; summarise current trends and advances in AQG; highlight the changes the area has undergone in recent years; and suggest areas for improvement and future opportunities for AQG. As found previously, the current literature pays little attention to generating questions of controlled difficulty, enriching question forms and structures, automating template construction, improving presentation, and generating feedback. Our findings also suggest the need to further improve experimental reporting, harmonise evaluation metrics, and investigate other, more feasible evaluation methods.