Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary

Date

2021

Journal Title

Journal ISSN

Volume Title

Publisher

Springer Science and Business Media Deutschland GmbH

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

There are three main types of exam questions: true/false, multiple choice, and essay questions. An automatic grading system (AGS) is easy to implement for true/false and multiple-choice questions because their answers are specific, whereas essay answers are open-ended. An AGS for essays evaluates answers with a computer program and addresses the problems of manual grading, such as high cost, time consumption, growing student numbers, and pressure on teachers. This chapter presents an Arabic essay question grading technique based on inner product similarity, whose purpose is to retrieve the student answers that are most relevant to the teacher's model answer. A naive Bayes (NB) classifier is used because it is simple to implement and fast. The process starts with a preprocessing phase: tokenization splits the answers into small tokens, normalization replaces special letter shapes and removes diacritics, stop-word removal discards meaningless and useless words, and stemming extracts the stems and roots of the remaining words. The whole preprocessing phase is applied to both the student answers and the dataset. The naive Bayes classifier is then applied to the students' answers together with the dataset to obtain accurate results, and the Microsoft Word dictionary is used to compare and collect sufficient synonyms for both the students' answers and the model answers, so that semantically equivalent wording is credited. Finally, scores are produced with inner product similarity and compared against human scores, so that the accuracy and efficiency of the proposed technique can be measured using the mean absolute error (MAE) and the Pearson correlation result (PCR). According to the experimental results, using the MS Word dictionary improves automated Arabic essay question grading: the MAE improves by 0.041, accuracy is enhanced by 4.65%, and the PCR reaches 0.8250. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
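For illustration, the following is a minimal Python sketch of the pipeline named in the abstract (normalization, stop-word removal, dictionary-based synonym expansion, inner product similarity, and MAE/Pearson evaluation). It is not the authors' implementation: the stop-word list is a tiny placeholder, `synonym_lookup` stands in for the Microsoft Word dictionary query, and the stemming and naive Bayes steps are omitted for brevity.

import re
from collections import Counter
from math import sqrt

# Illustrative only: a real system would use a full Arabic stop-word list
# and an Arabic stemmer (omitted here).
DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")
STOP_WORDS = {"في", "من", "على", "إلى", "عن", "أن", "هذا"}

def normalize(text):
    """Normalization step: strip diacritics and unify letter shapes."""
    text = DIACRITICS.sub("", text)
    text = re.sub("[إأآ]", "ا", text)   # alif variants -> bare alif
    text = re.sub("ى", "ي", text)       # alif maqsura -> ya
    text = re.sub("ة", "ه", text)       # ta marbuta -> ha
    return text

def preprocess(answer):
    """Tokenization + normalization + stop-word removal (stemming omitted)."""
    return [t for t in normalize(answer).split() if t not in STOP_WORDS]

def expand_with_synonyms(tokens, synonym_lookup):
    """Synonym expansion; `synonym_lookup` is a stand-in dict for the
    Microsoft Word dictionary query, mapping a token to its synonyms."""
    expanded = list(tokens)
    for t in tokens:
        expanded.extend(synonym_lookup.get(t, []))
    return expanded

def inner_product_similarity(student_tokens, model_tokens):
    """Inner product of the two term-frequency vectors."""
    s, m = Counter(student_tokens), Counter(model_tokens)
    return sum(s[w] * m[w] for w in s.keys() & m.keys())

def mean_absolute_error(predicted, human):
    """MAE between system scores and human scores."""
    return sum(abs(p - h) for p, h in zip(predicted, human)) / len(human)

def pearson_correlation(predicted, human):
    """Pearson correlation between system scores and human scores."""
    n = len(human)
    mp, mh = sum(predicted) / n, sum(human) / n
    cov = sum((p - mp) * (h - mh) for p, h in zip(predicted, human))
    sp = sqrt(sum((p - mp) ** 2 for p in predicted))
    sh = sqrt(sum((h - mh) ** 2 for h in human))
    return cov / (sp * sh)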

Description

Keywords

Arabic essay questions grading, Inner product, Microsoft Word dictionary, Naive Bayes

Source

Signals and Communication Technology

WoS Q Value

Scopus Q Value

Q4

Volume

Issue

Citation

Hailat, M. M., Otair, M. A., Abualigah, L., Houssein, E. H., & Şahin, C. B. (2021). Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary. In Deep Learning Approaches for Spoken and Natural Language Processing (pp. 19-40). Springer, Cham.