Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary

dc.authorid: 0000-0002-2131-6368 (en_US)
dc.contributor.author: Hailat, Muath
dc.contributor.author: Otair, Mohammed
dc.contributor.author: Abualigah, Laith
dc.contributor.author: Houssein, Essam
dc.contributor.author: Batur Şahin, Canan
dc.date.accessioned: 2022-03-15T06:53:20Z
dc.date.available: 2022-03-15T06:53:20Z
dc.date.issued: 2021 (en_US)
dc.department: MTÖ Üniversitesi, Mühendislik ve Doğa Bilimleri Fakültesi, Yazılım Mühendisliği Bölümü (en_US)
dc.description.abstract: There are three main types of questions: true/false, multiple choice, and essay. Automatic grading systems (AGS) are easy to implement for multiple-choice and true/false questions because their answers are specific, unlike essay answers. AGS were developed to evaluate essay answers with a computer program, addressing the problems of manual grading: high cost, time consumption, growing numbers of students, and pressure on teachers. This chapter presents an Arabic essay question grading technique based on inner product similarity, which retrieves the student answers most relevant to the teachers' model answers. A naive Bayes (NB) classifier is used because it is simple to implement and fast. The process starts with a preprocessing phase: a tokenization step divides the answers into small tokens; a normalization step replaces special letter shapes and removes diacritics; a stop-word removal step discards meaningless and useless words; and finally a stemming step extracts the stem and root of each word. The whole preprocessing phase is applied to both the student answers and the dataset. The naive Bayes classifier is then applied to both the student answers and the dataset to obtain accurate results. After that, the Microsoft Word dictionary is used to obtain sufficient synonyms for both the student answers and the model answers, further improving the results. Finally, the scores produced by inner product similarity are compared with human scores, and the efficiency of the proposed technique is evaluated using mean absolute error (MAE) and the Pearson correlation result (PCR).
According to the experimental results, the approach yields positive results when the MS Word dictionary is used and improves automated Arabic essay question grading: the MAE improved by 0.041, accuracy increased by 4.65%, and the PCR reached 0.8250. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG. (en_US)
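The pipeline the abstract describes (tokenize, normalize, remove stop words, then score a student answer against the model answer with inner product similarity) can be sketched as below. This is a minimal illustration, not code from the chapter: the diacritic range, the tiny stop-word list, and the function names are assumptions, and the stemming and MS Word synonym-expansion steps are omitted.

```python
import re
from collections import Counter

# Illustrative assumptions: Arabic harakat range (fathatan..sukun) and a
# tiny sample stop-word list; a real system would use a fuller lexicon.
ARABIC_DIACRITICS = re.compile(r'[\u064B-\u0652]')
STOP_WORDS = {'من', 'في', 'على', 'إلى'}

def normalize(text):
    """Remove diacritics and unify common alef letter shapes."""
    text = ARABIC_DIACRITICS.sub('', text)
    return re.sub('[إأآ]', 'ا', text)

def preprocess(answer):
    """Tokenize, normalize, and drop stop words (stemming omitted here)."""
    tokens = normalize(answer).split()
    return [t for t in tokens if t not in STOP_WORDS]

def inner_product_similarity(student_answer, model_answer):
    """Inner product of term-frequency vectors over the shared vocabulary."""
    s = Counter(preprocess(student_answer))
    m = Counter(preprocess(model_answer))
    return sum(s[w] * m[w] for w in s.keys() & m.keys())
```

A higher inner product means more shared (weighted) terms between the student answer and the model answer, which is the relevance signal the chapter's grading step relies on before the MAE/PCR evaluation.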
dc.identifier.citation: Hailat, M. M., Otair, M. A., Abualigah, L., Houssein, E. H., & Şahin, C. B. (2021). Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary. In Deep Learning Approaches for Spoken and Natural Language Processing (pp. 19-40). Springer, Cham. (en_US)
dc.identifier.doi: 10.1007/978-3-030-79778-2_2
dc.identifier.endpage: 40 (en_US)
dc.identifier.issn: 1860-4862 (en_US)
dc.identifier.scopus: 2-s2.0-85122473630 (en_US)
dc.identifier.scopusquality: Q4 (en_US)
dc.identifier.startpage: 19 (en_US)
dc.identifier.uri: https://hdl.handle.net/20.500.12899/631
dc.indekslendigikaynak: Scopus (en_US)
dc.institutionauthor: Batur Şahin, Canan
dc.language.iso: en (en_US)
dc.publisher: Springer Science and Business Media Deutschland GmbH (en_US)
dc.relation.ispartof: Signals and Communication Technology (en_US)
dc.relation.publicationcategory: Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı [Article - International Refereed Journal - Institutional Faculty Member] (en_US)
dc.rights: info:eu-repo/semantics/closedAccess (en_US)
dc.subject: Arabic essay questions grading (en_US)
dc.subject: Inner product (en_US)
dc.subject: Microsoft Word dictionary (en_US)
dc.subject: Naive Bayes (en_US)
dc.title: Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary (en_US)
dc.title.alternative: Microsoft Word Sözlüğüne Dayalı Otomatik Arapça Deneme Soruları Derecelendirmesini İyileştirme (en_US)
dc.type: Article (en_US)
