The analysis of marking reliability through the approach of gauge repeatability and reproducibility (GR&R) study: a case of English-speaking test
Issued Date
2024-12-01
eISSN
2229-0443
Scopus ID
2-s2.0-85182491063
Journal Title
Language Testing in Asia
Volume
14
Issue
1
Rights Holder(s)
SCOPUS
Bibliographic Citation
Language Testing in Asia Vol.14 No.1 (2024)
Suggested Citation
Sureeyatanapas P., Panitanarak U., Kraisriwattana J., Sarootyanapat P., O’Connell D. The analysis of marking reliability through the approach of gauge repeatability and reproducibility (GR&R) study: a case of English-speaking test. Language Testing in Asia Vol.14 No.1 (2024). doi:10.1186/s40468-023-00271-z. Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/95798
Title
The analysis of marking reliability through the approach of gauge repeatability and reproducibility (GR&R) study: a case of English-speaking test
Abstract
Ensuring consistent and reliable scoring is paramount in education, especially in performance-based assessments. This study delves into the critical issue of marking consistency, focusing on speaking proficiency tests in English language learning, which often face greater reliability challenges. While existing literature has explored various methods for assessing marking reliability, this study is the first of its kind to introduce an alternative statistical tool, namely the gauge repeatability and reproducibility (GR&R) approach, to the educational context. The study encompasses both intra- and inter-rater reliabilities, with additional validation using the intraclass correlation coefficient (ICC). Using a case study approach involving three examiners evaluating 30 recordings of a speaking proficiency test, the GR&R method demonstrates greater effectiveness than the ICC approach in detecting reliability issues. Furthermore, this research identifies key factors influencing scoring inconsistencies, including group performance estimation, work presentation order, rubric complexity and clarity, the student’s chosen topic, accent familiarity, and recording quality. Importantly, it not only pinpoints these root causes but also suggests practical solutions, thereby enhancing the precision of the measurement system. The GR&R method can offer significant contributions to stakeholders in language proficiency assessment, including educational institutions, test developers and policymakers. It is also applicable to other cases of performance-based assessments. By addressing reliability issues, this study provides insights to enhance the fairness and accuracy of subjective judgements, ultimately benefiting overall performance comparisons and decision making.
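To make the approach concrete, the sketch below shows how an ANOVA-based gauge R&R analysis and an ICC(2,1) check could be computed for a crossed design of 30 recordings scored twice by each of 3 raters. This is a minimal illustration under assumed, simulated scores; the variable names, two-trial layout, and data values are not taken from the study, and the authors' actual computation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative crossed design: 30 recordings ("parts"), 3 raters ("operators"),
# each rater scoring every recording twice ("trials"). Scores are simulated here.
n_parts, n_ops, n_trials = 30, 3, 2
true_level = rng.normal(70, 8, size=n_parts)           # recording-to-recording variation
rater_bias = rng.normal(0, 1.5, size=n_ops)            # systematic rater differences
scores = (true_level[:, None, None]
          + rater_bias[None, :, None]
          + rng.normal(0, 2.0, size=(n_parts, n_ops, n_trials)))  # repeatability noise

def gauge_rr(y):
    """ANOVA-based gauge R&R for a balanced crossed design; y has shape (parts, operators, trials)."""
    p, o, r = y.shape
    grand = y.mean()
    part_m = y.mean(axis=(1, 2))
    op_m = y.mean(axis=(0, 2))
    cell_m = y.mean(axis=2)

    ms_p = o * r * np.sum((part_m - grand) ** 2) / (p - 1)
    ms_o = p * r * np.sum((op_m - grand) ** 2) / (o - 1)
    ms_po = r * np.sum((cell_m - part_m[:, None] - op_m[None, :] + grand) ** 2) / ((p - 1) * (o - 1))
    ms_e = np.sum((y - cell_m[:, :, None]) ** 2) / (p * o * (r - 1))

    var_repeat = ms_e                                    # within-rater (repeatability)
    var_inter = max(0.0, (ms_po - ms_e) / r)             # rater x recording interaction
    var_oper = max(0.0, (ms_o - ms_po) / (p * r))        # between-rater (reproducibility)
    var_part = max(0.0, (ms_p - ms_po) / (o * r))        # genuine recording-to-recording variation

    var_grr = var_repeat + var_inter + var_oper
    pct_grr = 100 * np.sqrt(var_grr / (var_grr + var_part))  # %GR&R on a study-variation basis
    return var_repeat, var_inter + var_oper, pct_grr

repeatability, reproducibility, pct_grr = gauge_rr(scores)
print(f"repeatability variance:   {repeatability:.2f}")
print(f"reproducibility variance: {reproducibility:.2f}")
print(f"%GR&R: {pct_grr:.1f}%  (<10% good, 10-30% marginal, >30% unacceptable by AIAG guidance)")

# ICC(2,1) on the rater-by-recording mean scores, for comparison with the GR&R result.
means = scores.mean(axis=2)                              # shape (recordings, raters)
n, k = means.shape
ms_rows = k * np.sum((means.mean(axis=1) - means.mean()) ** 2) / (n - 1)
ms_cols = n * np.sum((means.mean(axis=0) - means.mean()) ** 2) / (k - 1)
ms_err = np.sum((means - means.mean(axis=1, keepdims=True)
                 - means.mean(axis=0, keepdims=True) + means.mean()) ** 2) / ((n - 1) * (k - 1))
icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1): {icc21:.3f}")
```

In this framing, repeatability corresponds to intra-rater consistency, reproducibility to inter-rater consistency, and the %GR&R figure expresses how much of the observed score spread is attributable to the marking process rather than to genuine differences between examinees.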
