Browse by author "Otair, Mohammed"
Now showing 1 - 1 of 1
Item: Improving Automated Arabic Essay Questions Grading Based on Microsoft Word Dictionary (Springer Science and Business Media Deutschland GmbH, 2021) Hailat, Muath; Otair, Mohammed; Abualigah, Laith; Houssein, Essam; Batur Şahin, Canan
There are three main types of questions: true/false, multiple choice, and essay. An automatic grading system (AGS) is easy to implement for multiple-choice and true/false questions because their answers are specific compared with essay answers. AGSs were developed to evaluate essay answers with a computer program, addressing the problems of manual grading such as high cost, time consumption, growing student numbers, and pressure on teachers. This chapter presents an Arabic essay question grading technique based on inner product similarity, used to retrieve the student answers most relevant to the teachers' model answers. A naive Bayes (NB) classifier is used because it is simple to implement and fast. The process starts with a preprocessing phase: a tokenization step divides answers into small tokens; a normalization step replaces special letter shapes and removes diacritics; a stop-word removal step discards meaningless and useless words; and a stemming step extracts the stem and root of each word. The entire preprocessing phase is applied to both the student answers and the dataset. The answers are then classified with the naive Bayes classifier to obtain accurate results for both the student answers and the dataset. After that, the Microsoft Word dictionary is used to compare and gather sufficient synonyms for both the student answers and the model answers in order to improve the results.
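The preprocessing phase described above (normalization, tokenization, stop-word removal) can be sketched as follows. This is an illustrative assumption, not the chapter's actual implementation: the stop-word list here is a tiny placeholder, and the normalization rules cover only the common Arabic letter-shape unifications (alef variants, teh marbuta, alef maqsura) plus diacritic removal.

```python
import re

# Hypothetical mini stop-word list for illustration; the chapter's
# actual stop-word resource is not specified in the abstract.
STOP_WORDS = {"في", "من", "على", "إلى", "أن"}

# Arabic diacritics (tashkeel) occupy the Unicode range U+064B–U+0652.
DIACRITICS = re.compile(r"[\u064B-\u0652]")

def normalize(text: str) -> str:
    """Remove diacritics and unify special letter shapes."""
    text = DIACRITICS.sub("", text)
    text = re.sub("[إأآ]", "ا", text)  # alef variants -> bare alef
    text = text.replace("ة", "ه")      # teh marbuta -> heh
    text = text.replace("ى", "ي")      # alef maqsura -> yeh
    return text

def preprocess(answer: str) -> list[str]:
    """Normalize, tokenize on whitespace, and drop stop words."""
    tokens = normalize(answer).split()
    return [t for t in tokens if t not in STOP_WORDS]
```

A stemming step (e.g. a light Arabic stemmer) would follow `preprocess` in the full pipeline; it is omitted here because the abstract does not name the stemmer used.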
Finally, the results produced by inner product similarity are compared with human scores, so the efficiency of the proposed technique can be evaluated using mean absolute error (MAE) and the Pearson correlation result (PCR). According to the experimental results, the approach yields positive results when the MS Word dictionary is used and improves automated Arabic essay question grading: MAE improved by 0.041, accuracy was enhanced by 4.65%, and PCR reached 0.8250. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
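The scoring and evaluation steps above can be sketched as follows; this is a minimal illustration assuming plain term-frequency vectors, not the chapter's exact formulation.

```python
import math
from collections import Counter

def inner_product(student: list[str], model: list[str]) -> float:
    """Inner product of the term-frequency vectors of two token lists."""
    s, m = Counter(student), Counter(model)
    return float(sum(s[t] * m[t] for t in s))

def mae(auto_scores: list[float], human_scores: list[float]) -> float:
    """Mean absolute error between automatic and human scores."""
    return sum(abs(a - h) for a, h in zip(auto_scores, human_scores)) / len(auto_scores)

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In the chapter's pipeline, `inner_product` would be applied to the preprocessed, synonym-expanded student and model answers, and `mae`/`pearson` would compare the resulting automatic scores against the human grades.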