
JEEHP : Journal of Educational Evaluation for Health Professions

Search results for "Test taking skills" (2 articles)
Review
Components of the item selection algorithm in computerized adaptive testing  
Kyung (Chris) Tyek Han
J Educ Eval Health Prof. 2018;15:7.   Published online March 24, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.7
  • 47,106 View
  • 540 Download
  • 25 Web of Science
  • 19 Crossref
Abstract
Computerized adaptive testing (CAT) greatly improves measurement efficiency in high-stakes testing operations through the selection and administration of test items with the difficulty level that is most relevant to each individual test taker. This paper explains the 3 components of a conventional CAT item selection algorithm: test content balancing, the item selection criterion, and item exposure control. Several noteworthy methodologies underlie each component. The test script method and constrained CAT method are used for test content balancing. Item selection criteria include the maximized Fisher information criterion, the b-matching method, the a-stratification method, the weighted likelihood information criterion, the efficiency balanced information criterion, and the Kullback-Leibler information criterion. The randomesque method, the Sympson-Hetter method, the unconditional and conditional multinomial methods, and the fade-away method are used for item exposure control. Several holistic approaches to CAT use automated test assembly methods, such as the shadow test approach and the weighted deviation model. Item usage and exposure count vary depending on the item selection criterion and exposure control method. Finally, other important factors to consider when determining an appropriate CAT design are the computer resource requirements, the size of the item pool, and the test length. The logic of CAT is now being adopted in the field of adaptive learning, which integrates the learning aspect and the (formative) assessment aspect of education into a continuous, individualized learning experience. Therefore, the algorithms and technologies described in this review may help medical health educators and high-stakes test developers to adopt CAT more actively and efficiently.
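Of the item selection criteria listed in the abstract, the maximized Fisher information criterion is the most common. A minimal sketch under the 2PL IRT model is shown below; the item pool and its (a, b) parameters are hypothetical, and the review itself covers several further criteria and exposure-control methods not shown here.

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability level theta:
    I(theta) = a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    """Maximized Fisher information criterion: choose the not-yet-administered
    item whose information is highest at the current theta estimate."""
    candidates = [(i, fisher_info(theta, a, b))
                  for i, (a, b) in enumerate(pool) if i not in administered]
    return max(candidates, key=lambda t: t[1])[0]

# Hypothetical pool of (discrimination a, difficulty b) item parameters
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(select_item(0.5, pool, set()))  # → 2 (high a, difficulty near theta)
```

In operational CAT, this greedy choice is typically tempered by exposure control (e.g., the randomesque method picks randomly among the few most informative items) so that the same highly discriminating items are not overused.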

Citations

Citations to this article as recorded by  
  • Efficiency of PROMIS MCAT Assessments for Orthopaedic Care
    Michael Bass, Scott Morris, Sheng Zhang
    Measurement: Interdisciplinary Research and Perspectives.2025; 23(2): 124.     CrossRef
  • Computerized Adaptive Testing Framework Based on Excitation Block and Gumbel-Softmax
    Chengsong Liu, Yan Wei
    IEEE Access.2025; 13: 3475.     CrossRef
  • From Development to Validation: Exploring the Efficiency of Numetrive, a Computerized Adaptive Assessment of Numerical Reasoning
    Marianna Karagianni, Ioannis Tsaousis
    Behavioral Sciences.2025; 15(3): 268.     CrossRef
  • Patient reported outcome measures: from the classics to AI
    Conrad J Harrison, Ryan W Trickett
    Journal of Hand Surgery (European Volume).2025; 50(6): 807.     CrossRef
  • A hybrid model based on learning automata and cuckoo search for optimizing test item selection in computerized adaptive testing
    Chanjuan Jin, Weiming Pan
    Scientific Reports.2025;[Epub]     CrossRef
  • Maximin criterion for item selection in computerized adaptive testing
    Jyun-Hong Chen, Hsiu-Yi Chao
    Behavior Research Methods.2025;[Epub]     CrossRef
  • Development and initial validation of DESCAQ: A Rasch‐based computer‐adaptive questionnaire for assessing digital eye strain
    Mariano González‐Pérez, Ana Barrio, Rosario Susi, Carlos Pérez‐Garmendia, Beatriz Antona
    Ophthalmic and Physiological Optics.2025; 45(6): 1326.     CrossRef
  • Development of a CAT based Diagnostic System for Assessing Basic Academic Skills in Undergraduate Students
    Woo-Jin Han, Jeongwook Choi, Dong-Gi Seo
    The Korean Association of General Education.2025; 19(3): 177.     CrossRef
  • Utilizing Real-Time Test Data to Solve Attenuation Paradox in Computerized Adaptive Testing to Enhance Optimal Design
    Jyun-Hong Chen, Hsiu-Yi Chao
    Journal of Educational and Behavioral Statistics.2024; 49(4): 630.     CrossRef
  • A Context-based Question Selection Model to Support the Adaptive Assessment of Learning: A study of online learning assessment in elementary schools in Indonesia
    Umi Laili Yuhana, Eko Mulyanto Yuniarno, Wenny Rahayu, Eric Pardede
    Education and Information Technologies.2024; 29(8): 9517.     CrossRef
  • A shortened test is feasible: Evaluating a large-scale multistage adaptive English language assessment
    Shangchao Min, Kyoungwon Bishop
    Language Testing.2024; 41(3): 627.     CrossRef
  • Implementing Computer Adaptive Testing for High-Stakes Assessment: A Shift for Examinations Council of Lesotho
    Musa Adekunle Ayanwale, Julia Chere-Masopha, Mapulane Mochekele, Malebohang Catherine Morena
    International Journal of New Education.2024;[Epub]     CrossRef
  • The Effects of Different Item Selection Methods on Test Information and Test Efficiency in Computer Adaptive Testing
    Merve ŞAHİN KÜRŞAD
    Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi.2023; 14(1): 33.     CrossRef
  • Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea
    Hyunjoo Pai
    Journal of Educational Evaluation for Health Professions.2023; 20: 8.     CrossRef
  • Remote Symptom Monitoring With Ecological Momentary Computerized Adaptive Testing: Pilot Cohort Study of a Platform for Frequent, Low-Burden, and Personalized Patient-Reported Outcome Measures
    Conrad Harrison, Ryan Trickett, Justin Wormald, Thomas Dobbs, Przemysław Lis, Vesselin Popov, David J Beard, Jeremy Rodrigues
    Journal of Medical Internet Research.2023; 25: e47179.     CrossRef
  • Evaluating a Computerized Adaptive Testing Version of a Cognitive Ability Test Using a Simulation Study
    Ioannis Tsaousis, Georgios D. Sideridis, Hannan M. AlGhamdi
    Journal of Psychoeducational Assessment.2021; 39(8): 954.     CrossRef
  • Developing Multistage Tests Using D-Scoring Method
    Kyung (Chris) T. Han, Dimiter M. Dimitrov, Faisal Al-Mashary
    Educational and Psychological Measurement.2019; 79(5): 988.     CrossRef
  • Conducting simulation studies for computerized adaptive testing using SimulCAT: an instructional piece
    Kyung (Chris) Tyek Han
    Journal of Educational Evaluation for Health Professions.2018; 15: 20.     CrossRef
  • Updates from 2018: Being indexed in Embase, becoming an affiliated journal of the World Federation for Medical Education, implementing an optional open data policy, adopting principles of transparency and best practice in scholarly publishing, and appreci
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2018; 15: 36.     CrossRef
Research Article
Exploration of examinees’ traits that affect the score of Korean Medical Licensing Examination  
Mi Kyoung Yim
J Educ Eval Health Prof. 2015;12:5.   Published online March 16, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.5
  • 30,228 View
  • 167 Download
  • 2 Crossref
Abstract
Purpose
This study aimed to identify the effects of five examinee variables on the score of the Korean Medical Licensing Examination (KMLE) over three consecutive years, from 2011 to 2013.
Methods
The number of examinees was 3,364 in 2011, 3,177 in 2012, and 3,287 in 2013. Five characteristics of examinees were set as independent variables: gender, age, graduation status, written test result (pass or fail), and city of medical school. A regression model was established, with the score of the written test as the dependent variable and the examinees’ traits as independent variables.
Results
The regression coefficients of all variables except city of medical school were statistically significant. The effects of the variables in the three examinations appeared in the following order: written test result, graduation status, age, gender, and city of medical school.
Conclusion
Analysis of the written test scores of the KMLE revealed that female, younger, and first-time examinees performed better.
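The regression design described in the Methods can be sketched as follows. The data below are entirely hypothetical toy values (the study's real dataset covers thousands of examinees and more variables, including graduation status, written test result, and city of medical school); binary traits are dummy-coded, and an ordinary least squares fit stands in for whatever statistical package the authors used.

```python
import numpy as np

# Hypothetical toy data: written test score regressed on examinee traits.
# Columns: intercept, gender (1 = male), age, first-time examinee (1 = yes).
X = np.array([
    [1, 0, 24, 1],
    [1, 1, 27, 0],
    [1, 0, 25, 0],
    [1, 1, 30, 1],
    [1, 0, 26, 1],
], dtype=float)
y = np.array([78.0, 65.0, 72.0, 68.0, 75.0])  # written test scores

# Ordinary least squares fit; beta holds the regression coefficients,
# whose signs indicate the direction of each trait's effect on the score.
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "gender", "age", "first_time"], beta.round(2))))
```

With a dummy-coded design like this, a negative coefficient on gender (male) and age, and a positive coefficient on first-time status, would correspond to the study's conclusion that female, younger, first-time examinees scored higher.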

Citations

Citations to this article as recorded by  
  • Gender bias in the medical school admission system in Japan
    Kayo Fukami, Kae Okoshi, Yasuko Tomizawa
    SN Social Sciences.2022;[Epub]     CrossRef
  • A Comparative Study of Predictive Factors for Passing the National Physical Therapy Examination using Logistic Regression Analysis and Decision Tree Analysis
    So Hyun Kim, Sung Hyoun Cho
    Physical Therapy Rehabilitation Science.2022; 11(3): 285.     CrossRef
