Presentation of an Efficient Automatic Short Answer Grading Model Based on a Combination of Pseudo Relevance Feedback and Semantic Relatedness Measures
Subject Areas: H.3.8. Natural Language Processing
Hossein Sadr 1, Mojdeh Nazari Solimandarabi 2
1 - Department of Computer Engineering, Rasht Branch, Islamic Azad University, Rasht, Iran; Young Researchers and Elite Club, Rasht Branch, Islamic Azad University, Rasht, Iran
2 - Young Researchers and Elite Club, Rasht Branch, Islamic Azad University, Rasht, Iran
Keywords: Semantic Relatedness, Deep Learning, Short Answer Grading, Latent Semantic Analysis, Explicit Semantic Analysis, E-learning System
Abstract:
Automatic short answer grading (ASAG) is the automated process of assessing natural language answers using computational methods and machine learning algorithms. The development of large-scale smart education systems, on one hand, and the importance of assessment as a key factor in the learning process together with the challenges it faces, on the other, have significantly increased the need for highly flexible automated systems for assessing text-based exams. Generally, ASAG methods can be categorized into supervised and unsupervised approaches. Supervised approaches, such as machine learning and especially deep learning methods, require manually constructed training patterns. On the other hand, since in the assessment process a student's answer is compared with an ideal response and scored based on their similarity, semantic relatedness and similarity measures can serve as unsupervised approaches for this task. Because unsupervised approaches do not require labeled data, they are more applicable to real-world problems and face fewer limitations. Therefore, in this paper, various measures of semantic relatedness and similarity are extensively compared in the context of short answer grading. Furthermore, an approach is proposed for improving the performance of short answer grading systems based on semantic relatedness and similarity measures, which leverages the students' answers with the highest scores as pseudo relevance feedback. Empirical experiments show that using students' answers as feedback can considerably improve the precision of semantic relatedness and similarity measures in the automatic assessment of exams with short answers.
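As a rough illustration of the grading scheme outlined in the abstract, the sketch below scores each student answer by its similarity to the reference answer and then repeats the scoring after expanding the reference with the top-scoring answers, in the spirit of pseudo relevance feedback. This is not the authors' implementation: TF-IDF cosine similarity is only a hypothetical stand-in for the semantic relatedness measures (e.g., LSA, ESA) evaluated in the paper, and the function names, sample data, and top_k parameter are illustrative assumptions.

```python
# Minimal sketch of similarity-based grading with pseudo relevance feedback.
# TF-IDF cosine similarity stands in for the semantic relatedness measures
# (e.g., LSA, ESA) compared in the paper; data and parameters are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def grade(reference, student_answers):
    """Score each student answer by its cosine similarity to the reference."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([reference] + student_answers)
    return cosine_similarity(matrix[0:1], matrix[1:]).flatten()


def grade_with_feedback(reference, student_answers, top_k=2):
    """Expand the reference with the top-scoring answers and re-grade."""
    initial_scores = grade(reference, student_answers)
    top_indices = initial_scores.argsort()[::-1][:top_k]
    expanded_reference = " ".join(
        [reference] + [student_answers[i] for i in top_indices]
    )
    return grade(expanded_reference, student_answers)


if __name__ == "__main__":
    reference = "Photosynthesis converts sunlight, water and carbon dioxide into glucose and oxygen."
    answers = [
        "Plants use sunlight, water and CO2 to make glucose and release oxygen.",
        "It is how plants turn light energy into chemical energy stored as sugar.",
        "Photosynthesis happens at night in animal cells.",
    ]
    print("Initial scores:", grade(reference, answers))
    print("With feedback: ", grade_with_feedback(reference, answers))
```

Under these assumptions, expanding the reference with highly scored student answers enriches its vocabulary, so correct answers phrased differently from the original reference receive higher similarity scores on the second pass.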