A Study on the Role of Item Response Theory in the Development of Computerized Adaptive Testing

https://doi.org/10.58451/ijebss.v3i8.295

Authors

  • Muhammad Gibran Alif Prasetya Universitas Negeri Semarang
  • Arif Widiyatmoko Universitas Negeri Semarang
  • Dyah Rini Indriyanti Universitas Negeri Semarang
  • Novi Ratna Dewi Universitas Negeri Semarang

Keywords:

computerized adaptive testing (CAT), item response theory (IRT), systematic literature review, assessment efficiency, machine learning

Abstract

The paradigm shift in educational assessment from conventional testing to Computerized Adaptive Testing (CAT) based on Item Response Theory (IRT) offers significant measurement efficiency, but introduces complexities related to validity and the integration of new technologies. This study aims to analyze trends, methodologies, and the role of IRT in the development and validation of CAT through a Systematic Literature Review (SLR). Following the PRISMA protocol, data were collected from the Scopus database for publications from 2020–2025, and 12 selected articles were analyzed using an NVivo-assisted thematic approach. The results show that the research focus has evolved from merely reducing the number of test items to integrating multimodal data, such as combining machine learning and physiological signals for ability estimation. Although CAT has been shown to dramatically improve test efficiency without reducing reliability, challenges related to test security (item preknowledge) and item bias (Differential Item Functioning) remain major obstacles. It is concluded that the future of CAT development depends on balancing algorithmic efficiency, multidimensional data integration, and system resilience against validity threats in order to create fair and transparent assessments.

Published

2026-01-05