
JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS

Research article
Comparison of the level of cognitive processing between case-based items and non-case-based items on the Interuniversity Progress Test of Medicine in the Netherlands  
Dario Cecilio-Fernandes, Wouter Kerdijk, Andreas Johannes Bremers, Wytze Aalders, René Anton Tio
J Educ Eval Health Prof. 2018;15:28.   Published online December 12, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.28
  • 18,823 View
  • 195 Download
  • 8 Web of Science
  • 8 Crossref
Abstract
Purpose
It is assumed that case-based questions require higher-order cognitive processing, whereas questions that are not case-based require lower-order cognitive processing. In this study, we investigated to what extent case-based and non-case-based questions followed this assumption based on Bloom’s taxonomy.
Methods
In this study, 4,800 questions from the Interuniversity Progress Test of Medicine were classified based on whether they were case-based and on the level of Bloom’s taxonomy that they involved. Lower-order questions require students to remember and/or have a basic understanding of knowledge. Higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether questions were case-based and the required level of cognitive processing.
Results
Our results demonstrated that 98.1% of case-based questions required higher-level cognitive processing, compared with 33.7% of non-case-based questions. The phi coefficient demonstrated a significant but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient = 0.55, P < 0.001).
Conclusion
Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes they elicit in order to meet the desired balance in a test, taking the learning objectives and the test difficulty into account.
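The phi coefficient the authors report is the Pearson correlation for two dichotomous variables (case-based vs. non-case-based, higher- vs. lower-order). A minimal sketch of the computation, using hypothetical counts rather than the study's data:

```python
def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table:
                  higher-order  lower-order
    case-based         a             b
    non-case-based     c             d
    """
    num = a * d - b * c
    den = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return num / den

# Hypothetical counts for illustration only, not the study's data:
phi = phi_coefficient(520, 10, 1440, 2830)
print(round(phi, 2))  # → 0.41
```

A phi of 1 would mean every case-based question is higher-order and every non-case-based question is lower-order; the moderate value reported in the study reflects the 33.7% of non-case-based questions that were nonetheless higher-order.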

Citations

Citations to this article as recorded by  
  • Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing
    Filipe Manuel Vidal Falcão, Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa
    Education and Information Technologies.2024; 29(4): 4505.     CrossRef
  • Identifying the response process validity of clinical vignette-type multiple choice questions: An eye-tracking study
    Francisco Carlos Specian Junior, Thiago Martins Santos, John Sandars, Eliana Martorano Amaral, Dario Cecilio-Fernandes
    Medical Teacher.2023; 45(8): 845.     CrossRef
  • Relationship between medical programme progress test performance and surgical clinical attachment timing and performance
    Andy Wearn, Vanshay Bindra, Bradley Patten, Benjamin P. T. Loveday
    Medical Teacher.2023; 45(8): 877.     CrossRef
  • Analysis of Orthopaedic In-Training Examination Trauma Questions: 2017 to 2021
    Lilah Fones, Daryl C. Osbahr, Daniel E. Davis, Andrew M. Star, Atif K. Ahmed, Arjun Saxena
    JAAOS: Global Research and Reviews.2023;[Epub]     CrossRef
  • Use of Sociodemographic Information in Clinical Vignettes of Multiple-Choice Questions for Preclinical Medical Students
    Kelly Carey-Ewend, Amir Feinberg, Alexis Flen, Clark Williamson, Carmen Gutierrez, Samuel Cykert, Gary L. Beck Dallaghan, Kurt O. Gilliland
    Medical Science Educator.2023; 33(3): 659.     CrossRef
  • What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy
    Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, Michael Hortsch, Larry D. Gruppen, Sally A. Santen
    Medical Teacher.2021; 43(5): 575.     CrossRef
  • Accommodations in the common first-year entrance examination for health studies (PACES): between social justice and an emerging ethic of professional collegiality? [in French]
    R. Pougnet, L. Pougnet
    Éthique & Santé.2020; 17(4): 250.     CrossRef
  • Knowledge of dental faculty in gulf cooperation council states of multiple-choice questions’ item writing flaws
    Mawlood Kowash, Hazza Alhobeira, Iyad Hussein, Manal Al Halabi, Saif Khan
    Medical Education Online.2020;[Epub]     CrossRef
Corrigendum
Funding information of the article entitled “Post-hoc simulation study of computerized adaptive testing for the Korean Medical Licensing Examination”
Dong Gi Seo, Jeongwook Choi
J Educ Eval Health Prof. 2018;15:27.   Published online December 4, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.27    [Epub ahead of print]
Corrects: J Educ Eval Health Prof 2018;15(0):14
  • 17,011 View
  • 174 Download
Research article
Linear programming method to construct equated item sets for the implementation of periodical computer-based testing for the Korean Medical Licensing Examination  
Dong Gi Seo, Myeong Gi Kim, Na Hui Kim, Hye Sook Shin, Hyun Jung Kim
J Educ Eval Health Prof. 2018;15:26.   Published online October 18, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.26
  • 20,486 View
  • 278 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to identify the best way of developing equivalent item sets and to propose a stable and effective management plan for periodical licensing examinations.
Methods
Five pre-equated item sets were developed based on the predicted correct answer rate of each item using linear programming. These pre-equated item sets were compared to the ones that were developed with a random item selection method based on the actual correct answer rate (ACAR) and difficulty from item response theory (IRT). The results with and without common items were also compared in the same way. ACAR and the IRT difficulty were used to determine whether there was a significant difference between the pre-equating conditions.
Results
There was a statistically significant difference in IRT difficulty among the results from different pre-equated conditions. The predicted correct answer rate was divided using 2 or 3 difficulty categories, and the ACAR and IRT difficulty parameters of the 5 item sets were equally constructed. Comparing the item set conditions with and without common items, including common items did not make a significant contribution to the equating of the 5 item sets.
Conclusion
This study suggested that the linear programming method is applicable to construct equated-item sets that reflect each content area. The suggested best method to construct equated item sets is to divide the predicted correct answer rate using 2 or 3 difficulty categories, regardless of common items. If pre-equated item sets are required to construct a test based on the actual data, several methods should be considered by simulation studies to determine which is optimal before administering a real test.
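To illustrate the balancing goal (equal mean predicted correct-answer rates across the 5 item sets), the sketch below uses a simple serpentine assignment as a greedy stand-in for the authors' linear-programming formulation; item IDs and predicted rates are hypothetical:

```python
import random

def build_equated_sets(items, n_sets=5):
    """Distribute items so each set has a similar mean predicted
    correct-answer rate: sort by predicted rate, then deal items out
    in serpentine (snake) order. A simplified stand-in for the paper's
    linear-programming method, not the authors' actual model.
    `items` is a list of (item_id, predicted_rate) pairs."""
    ordered = sorted(items, key=lambda x: x[1], reverse=True)
    sets = [[] for _ in range(n_sets)]
    for i, item in enumerate(ordered):
        pass_no, pos = divmod(i, n_sets)
        idx = pos if pass_no % 2 == 0 else n_sets - 1 - pos
        sets[idx].append(item)
    return sets

random.seed(0)
pool = [(f"Q{i}", random.uniform(0.3, 0.9)) for i in range(100)]
sets = build_equated_sets(pool)
means = [sum(rate for _, rate in s) / len(s) for s in sets]
print([round(m, 3) for m in means])  # means should be nearly equal
```

A true linear-programming approach would additionally constrain content-area coverage per set, which the serpentine heuristic ignores.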

Citations

Citations to this article as recorded by  
  • Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2022; 19: 2.     CrossRef
  • Reading Comprehension Tests for Children: Test Equating and Specific Age-Interval Reports
    Patrícia Silva Lúcio, Fausto Coutinho Lourenço, Hugo Cogo-Moreira, Deborah Bandalos, Carolina Alves Ferreira de Carvalho, Adriana de Souza Batista Kida, Clara Regina Brandão de Ávila
    Frontiers in Psychology.2021;[Epub]     CrossRef
Brief report
Benefits of focus group discussions beyond online surveys in course evaluations by medical students in the United States: a qualitative study  
Katharina Brandl, Soniya V. Rabadia, Alexander Chang, Jess Mandel
J Educ Eval Health Prof. 2018;15:25.   Published online October 16, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.25
  • 22,222 View
  • 342 Download
  • 3 Web of Science
  • 5 Crossref
Abstract
In addition to online questionnaires, many medical schools use supplemental evaluation tools such as focus groups to evaluate their courses. Although some benefits of using focus groups in program evaluation have been described, it is unknown whether these in-person data collection methods provide sufficient additional information beyond online evaluations to justify them. In this study, we analyzed recommendations gathered from student evaluation team (SET) focus group meetings and assessed whether these items were captured in open-ended comments within the online evaluations. Our results indicate that online evaluations captured only 49% of the recommendations identified via SETs. Surveys of course directors identified that 74% of the recommendations exclusively identified via the SETs were implemented within their courses. Our results indicate that SET meetings provided information not easily captured in online evaluations and that these recommendations resulted in actual course changes.
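The coverage figure (the share of SET recommendations also found in online comments) is a simple set intersection; a sketch with hypothetical recommendation labels, not the study's data:

```python
# Hypothetical recommendation identifiers, for illustration only.
set_recs = {"pace", "slides", "quiz_timing", "lab_prep", "grading_rubric"}
online_recs = {"pace", "slides", "exam_format"}

captured = set_recs & online_recs          # recommendations found in both
coverage = len(captured) / len(set_recs)   # share of SET recs captured online
print(f"{coverage:.0%} of SET recommendations also appeared online")
```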

Citations

Citations to this article as recorded by  
  • Focus groups as a qualitative research tool in physiotherapy: implications and expectations [in Portuguese]
    Dartel Ferrari de Lima, Adelar Aparecido Sampaio
    Revista Pesquisa Qualitativa.2023; 11(27): 361.     CrossRef
  • Educational attainment for at-risk high school students: closing the gap
    Karen Miner-Romanoff
    SN Social Sciences.2023;[Epub]     CrossRef
  • Student evaluations of teaching and the development of a comprehensive measure of teaching effectiveness for medical schools
    Constantina Constantinou, Marjo Wijnen-Meijer
    BMC Medical Education.2022;[Epub]     CrossRef
  • National Security Law Education in Hong Kong: Qualitative Evaluation Based on the Perspective of the Students
    Daniel T. L. Shek, Xiaoqin Zhu, Diya Dou, Xiang Li
    International Journal of Environmental Research and Public Health.2022; 20(1): 553.     CrossRef
  • Mentoring as a transformative experience
    Wendy A. Hall, Sarah Liva
    Mentoring & Tutoring: Partnership in Learning.2021; 29(1): 6.     CrossRef
Case report
Dental students’ learning attitudes and perceptions of YouTube as a lecture video hosting platform in a flipped classroom in Korea  
Chang Wan Seo, A Ra Cho, Jung Chul Park, Hag Yeon Cho, Sun Kim
J Educ Eval Health Prof. 2018;15:24.   Published online October 11, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.24
  • 27,762 View
  • 382 Download
  • 13 Web of Science
  • 16 Crossref
Abstract
Purpose
The aim of this study was to confirm the applicability of YouTube as a delivery platform of lecture videos for dental students and to assess their learning attitudes towards the flipped classroom model.
Methods
Learning experiences after using the YouTube platform to deliver preliminary video lectures in a flipped classroom were assessed by 69 second-year students (52 males, 17 females) at Dankook University College of Dentistry, Korea, who attended periodontology lectures during 2 consecutive semesters of the 2016 academic year. The instructor uploaded the lecture videos to YouTube before each class. At the end of the second semester, the students were surveyed using a questionnaire devised by the authors.
Results
Of the students, 53 (76.8%) always watched the lecture before the class, 48 (69.6%) used their smartphones, and 66 (95.7%) stated that they watched the lectures at home. The majority of the students replied that the video lectures were easier to understand than face-to-face lectures (82.6%) and that they would like to view the videos again after graduation (73.9%).
Conclusion
Our results indicate that YouTube is an applicable platform to deliver video lectures and to expose students to increased learning opportunities.

Citations

Citations to this article as recorded by  
  • YouTube as a source of information about rubber dam: quality and content analysis
    Gülsen Kiraz, Arzu Kaya Mumcu, Safa Kurnaz
    Restorative Dentistry & Endodontics.2024;[Epub]     CrossRef
  • Use of social media by dental students: A comparative study
    Rand Al-Obaidi
    Clinical Epidemiology and Global Health.2024; 26: 101559.     CrossRef
  • Evaluating video‐based lectures on YouTube for dental education
    Ryan T. Gross, Nare Ghaltakhchyan, Eleanor M. Nanney, Tate H. Jackson, Christopher A. Wiesen, Paul Mihas, Adam M. Persky, Sylvia A. Frazier‐Bowers, Laura A. Jacox
    Orthodontics & Craniofacial Research.2023; 26(S1): 210.     CrossRef
  • Learning of paediatric dentistry with the flipped classroom model
    Nuria E. Gallardo, Antonia M. Caleya, Maria Esperanza Sánchez, Gonzalo Feijóo
    European Journal of Dental Education.2022; 26(2): 302.     CrossRef
  • Effects of Video Length on a Flipped English Classroom
    Zhonggen Yu, Mingle Gao
    SAGE Open.2022; 12(1): 215824402110684.     CrossRef
  • An Evaluation of the Usefulness of YouTube® Videos on Crown Preparation
    Syed Rashid Habib, Aleshba Saba Khan, Mohsin Ali, Essam Abdulla Abutheraa, Ahmad Khaled Alkhrayef, Faisal Jibrin Aljibrin, Nawaf Saad Almutairi, Ammar A. Siddiqui
    BioMed Research International.2022; 2022: 1.     CrossRef
  • Perceptions of Students on Distance Education and E-Learning in Dentistry Education: Challenges and Opportunities
    Ayşe Toraman, Ebru Sağlam, Serhat Köseoğlu
    Journal of Biotechnology and Strategic Health Research.2022; 6(2): 101.     CrossRef
  • Social media as a learning tool for the budding periodontist: A questionnaire survey
    Riddhi Awasthi, Balaji Manohar, S Vinay, Santosh Kumar
    Advances in Human Biology.2022; 12(3): 286.     CrossRef
  • YouTube and Education: A Scoping Review
    Abdulhadi Shoufan, Fatma Mohamed
    IEEE Access.2022; 10: 125576.     CrossRef
  • Use of the YouTube® platform by dental students: a scoping review [in Spanish]
    María Luján Méndez Bauer, Stella de los Angeles Bauer Walter
    Universitas Odontologica.2022;[Epub]     CrossRef
  • Social media as a learning tool: Dental students’ perspectives
    Mona T. Rajeh, Shahinaz N. Sembawa, Afnan A. Nassar, Seba A. Al Hebshi, Khalid T. Aboalshamat, Mohammed K. Badri
    Journal of Dental Education.2021; 85(4): 513.     CrossRef
  • Social Media Usage among Dental Undergraduate Students—A Comparative Study
    Eswara Uma, Pentti Nieminen, Shani Ann Mani, Jacob John, Emilia Haapanen, Marja-Liisa Laitala, Olli-Pekka Lappalainen, Eby Varghase, Ankita Arora, Kanwardeep Kaur
    Healthcare.2021; 9(11): 1408.     CrossRef
  • Does forced-shift to online learning affect university brand image in South Korea? Role of perceived harm and international students’ learning engagement
    Umer Zaman, Murat Aktan, Hasnan Baber, Shahid Nawaz
    Journal of Marketing for Higher Education.2021; : 1.     CrossRef
  • Flipped Classroom Experiences in Clinical Dentistry – A Strategic Mini-Review
    Abdullah Aljabr
    The Open Dentistry Journal.2021; 15(1): 717.     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
  • Attitudes toward Social Media among Practicing Dentists and Dental Students in Clinical Years in Saudi Arabia
    Khalid Aboalshamat, Sharifah Alkiyadi, Sarah Alsaleh, Rana Reda, Sharifa Alkhaldi, Arwa Badeeb, Najwa Gabb
    The Open Dentistry Journal.2019; 13(1): 143.     CrossRef
Research articles
Agreement between 2 raters’ evaluations of a traditional prosthodontic practical exam integrated with directly observed procedural skills in Egypt  
Ahmed Khalifa Khalifa, Salah Hegazy
J Educ Eval Health Prof. 2018;15:23.   Published online September 27, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.23
  • 23,766 View
  • 203 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods
A sample of 76 students was monitored by 2 raters to evaluate the process and the final registered maxillomandibular relation for a completely edentulous patient at Mansoura Dental School, Egypt on a practical exam of bachelor’s students from May 15 to June 28, 2017. Each registered relation was evaluated from a total of 60 marks subdivided into 3 score categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The marks of OPO and VDR for both raters were compared using the graph method to measure reliability through Bland and Altman analysis. The reliability of the CRR marks was evaluated by the Krippendorff alpha ratio.
Results
The results revealed highly similar marks between raters for OPO (mean= 18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (mean= 17.4 and 17.1 for examiners 1 and 2, respectively), with close limits of agreement (2.7 and −2.2). There was a strong correlation (Krippendorff alpha ratio, 0.92; 95% confidence interval, 0.79–0.99) between the raters in the evaluation of CRR.
Conclusion
The 2 raters’ evaluation of a clinical traditional practical exam integrated with DOPS showed no significant differences in the evaluations of candidates at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.
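Bland-Altman limits of agreement, as used here for the OPO and VDR marks, are the mean inter-rater difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch with illustrative marks, not the study's data:

```python
from statistics import mean, stdev

def bland_altman_limits(rater_a, rater_b):
    """Return (bias, lower limit, upper limit) for two raters' marks.
    Limits of agreement = bias ± 1.96 × SD of the paired differences."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical marks out of 20 for 8 candidates (illustration only):
rater1 = [18, 17, 19, 16, 18, 17, 20, 15]
rater2 = [18, 16, 19, 17, 18, 18, 19, 15]
bias, lower, upper = bland_altman_limits(rater1, rater2)
print(round(bias, 2), round(lower, 2), round(upper, 2))
```

Narrow limits around a near-zero bias, as in the study's OPO results, indicate that the two raters can be used interchangeably for that score category.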

Citations

Citations to this article as recorded by  
  • In‐person and virtual assessment of oral radiology skills and competences by the Objective Structured Clinical Examination
    Fernanda R. Porto, Mateus A. Ribeiro, Luciano A. Ferreira, Rodrigo G. Oliveira, Karina L. Devito
    Journal of Dental Education.2023; 87(4): 505.     CrossRef
  • Evaluation agreement between peer assessors, supervisors, and parents in assessing communication and interpersonal skills of students of pediatric dentistry
    Jin Asari, Maiko Fujita-Ohtani, Kuniomi Nakamura, Tomomi Nakamura, Yoshinori Inoue, Shigenari Kimoto
    Pediatric Dental Journal.2023; 33(2): 133.     CrossRef
Learning through multiple lenses: analysis of self, peer, near-peer, and faculty assessments of a clinical history-taking task in Australia  
Kylie Fitzgerald, Brett Vaughan
J Educ Eval Health Prof. 2018;15:22.   Published online September 18, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.22
  • 23,278 View
  • 284 Download
  • 4 Web of Science
  • 5 Crossref
Abstract
Purpose
Peer assessment provides a framework for developing expected skills and receiving feedback appropriate to the learner’s level. Near-peer (NP) assessment may elevate expectations and motivate learning. Feedback from peers and NPs may be a sustainable way to enhance student assessment feedback. This study analysed relationships among self, peer, NP, and faculty marking of an assessment and students’ attitudes towards marking by those various groups.
Methods
A cross-sectional study design was used. Year 2 osteopathy students (n= 86) were invited to perform self and peer assessments of a clinical history-taking and communication skills assessment. NPs and faculty also marked the assessment. Year 2 students also completed a questionnaire on their attitudes to peer/NP marking. Descriptive statistics and the Spearman rho coefficient were used to evaluate relationships across marker groups.
Results
Year 2 students (n= 9), NPs (n= 3), and faculty (n= 5) were recruited. Correlations between self and peer (r= 0.38) and self and faculty (r= 0.43) marks were moderate. A weak correlation was observed between self and NP marks (r= 0.25). Perceptions of peer and NP marking varied, with over half of the cohort suggesting that peer or NP assessments should not contribute to their grade.
Conclusion
Framing peer and NP assessment as another feedback source may offer a sustainable method for enhancing feedback without overloading faculty resources. Multiple sources of feedback may assist in developing assessment literacy and calibrating students’ self-assessment capability. The small number of students recruited suggests limited acceptability of peer and NP assessment; further work is required to increase its acceptability.
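The Spearman rho used to relate the marker groups is the Pearson correlation of the rank vectors of the two mark lists. A self-contained sketch with hypothetical self and faculty marks (not the study's data):

```python
def _ranks(xs):
    """Average ranks, 1-based; ties share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                     # extend over a tie group
        avg = (i + j) / 2 + 1          # mean 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical self vs. faculty marks for 6 students:
self_marks = [55, 60, 72, 48, 80, 65]
faculty_marks = [50, 58, 70, 52, 75, 68]
print(round(spearman_rho(self_marks, faculty_marks), 2))  # → 0.94
```

Because rho depends only on rank order, it is robust to marker groups that use systematically harsher or more lenient scales, which is why it suits comparisons across self, peer, NP, and faculty markers.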

Citations

Citations to this article as recorded by  
  • The extent and quality of evidence for osteopathic education: A scoping review
    Andrew MacMillan, Patrick Gauthier, Luciane Alberto, Arabella Gaunt, Rachel Ives, Chris Williams, Dr Jerry Draper-Rodi
    International Journal of Osteopathic Medicine.2023; 49: 100663.     CrossRef
  • History and physical exam: a retrospective analysis of a clinical opportunity
    David McLinden, Krista Hailstone, Sue Featherston
    BMC Medical Education.2023;[Epub]     CrossRef
  • How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy
    Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez
    Educational Psychology Review.2023;[Epub]     CrossRef
  • Evaluating the Academic Performance of Mustansiriyah Medical College Teaching Staff vs. Final-Year Students Failure Rates
    Wassan Nori, Wisam Akram , Saad Mubarak Rasheed, Nabeeha Najatee Akram, Taqi Mohammed Jwad Taher, Mustafa Ali Kassim Kassim, Alexandru Cosmin Pantazi
    Al-Rafidain Journal of Medical Sciences ( ISSN 2789-3219 ).2023; 5(1S): S151.     CrossRef
  • History-taking level and its influencing factors among nursing undergraduates based on the virtual standardized patient testing results: Cross sectional study
    Jingrong Du, Xiaowen Zhu, Juan Wang, Jing Zheng, Xiaomin Zhang, Ziwen Wang, Kun Li
    Nurse Education Today.2022; 111: 105312.     CrossRef
Opinion
How to evaluate learning in a flipped classroom
Sun Kim
J Educ Eval Health Prof. 2018;15:21.   Published online September 13, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.21
  • 20,746 View
  • 286 Download
  • 2 Web of Science
  • 2 Crossref

Citations

Citations to this article as recorded by  
  • The role of web-based flipped learning in EFL learners’ critical thinking and learner engagement
    Ya Pang
    Frontiers in Psychology.2022;[Epub]     CrossRef
  • Qualitative Data Requirements in the Divayana Evaluation Model
    Dewa Gede Hendra Divayana, Ni Ketut Widiartini, I Gede Ratnaya
    International Journal of Qualitative Methods.2022; 21: 160940692211348.     CrossRef
Software report
Conducting simulation studies for computerized adaptive testing using SimulCAT: an instructional piece  
Kyung (Chris) Tyek Han
J Educ Eval Health Prof. 2018;15:20.   Published online August 17, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.20
  • 28,880 View
  • 243 Download
  • 7 Web of Science
  • 7 Crossref
Abstract
Computerized adaptive testing (CAT) technology is widely used in a variety of licensing and certification examinations administered to health professionals in the United States. Many more countries worldwide are expected to adopt CAT for their national licensing examinations for health professionals due to its reduced test time and more accurate estimation of a test-taker’s performance ability. Continuous improvements to CAT algorithms promote the stability and reliability of the results of such examinations. For this reason, conducting simulation studies is a critically important component of evaluating the design of CAT programs and their implementation. This report introduces the principles of SimulCAT, a software program developed for conducting CAT simulation studies. The key evaluation criteria for CAT simulation studies are explained and some guidelines are offered for practitioners and test developers. A step-by-step tutorial example of a SimulCAT run is also presented. The SimulCAT program supports most of the methods used for the 3 key components of item selection in CAT: the item selection criterion, item exposure control, and content balancing. Methods for determining the test length (fixed or variable) and score estimation algorithms are also covered. The simulation studies presented include output files for the response string, item use, standard error of estimation, Newton-Raphson iteration information, theta estimation, the full response matrix, and the true standard error of estimation. In CAT simulations, one condition cannot be generalized to another; therefore, it is recommended that practitioners perform CAT simulation studies in each stage of CAT development.
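The components the report names — item selection, ability (theta) estimation, and standard error of estimation — can be sketched in a toy simulation. The following is an illustrative Rasch-model CAT with maximum-information item selection and Newton-Raphson theta updates, not SimulCAT's implementation; the item bank and seed are made up:

```python
import math
import random

def prob(theta, b):
    """Rasch model: probability of a correct response to item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_theta(theta, items, responses, steps=10):
    """Newton-Raphson maximum-likelihood update of the ability estimate."""
    for _ in range(steps):
        grad = sum(u - prob(theta, b) for b, u in zip(items, responses))
        info = sum(prob(theta, b) * (1 - prob(theta, b)) for b in items)
        if info < 1e-9:
            break
        theta += grad / info
        theta = max(-4.0, min(4.0, theta))  # bound the estimate
    return theta

def simulate_cat(true_theta, bank, test_length=20, rng=random.Random(1)):
    theta, used, resp = 0.0, [], []
    pool = list(bank)
    for _ in range(test_length):
        # Maximum-information selection: under the Rasch model the most
        # informative item is the one with difficulty closest to theta.
        b = min(pool, key=lambda d: abs(d - theta))
        pool.remove(b)
        u = 1 if rng.random() < prob(true_theta, b) else 0  # simulated answer
        used.append(b)
        resp.append(u)
        theta = update_theta(theta, used, resp)
    se = 1.0 / math.sqrt(sum(prob(theta, b) * (1 - prob(theta, b))
                             for b in used))
    return theta, se

# Hypothetical 200-item bank with difficulties evenly spread over [-3, 3]:
bank = [-3.0 + 6.0 * i / 199 for i in range(200)]
theta_hat, se = simulate_cat(true_theta=1.0, bank=bank)
print(round(theta_hat, 2), round(se, 2))
```

A production simulation would add the other components the report covers — item exposure control, content balancing, and variable-length stopping rules — and, as the report advises, would be rerun under each design condition rather than generalized from one.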

Citations

Citations to this article as recorded by  
  • Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea
    Hyunjoo Pai
    Journal of Educational Evaluation for Health Professions.2023; 20: 8.     CrossRef
  • Students’ perceptions of Computerised Adaptive Testing in higher education
    Proya Ramgovind, Shamola Pramjeeth
    The Independent Journal of Teaching and Learning.2023; 18(2): 109.     CrossRef
  • Preliminary Development of an Item Bank and an Adaptive Test in Mathematical Knowledge for University Students
    Fernanda Belén Ghio, Manuel Bruzzone, Luis Rojas-Torres, Marcos Cupani
    European Journal of Science and Mathematics Education.2022; 10(3): 352.     CrossRef
  • Evaluating a Computerized Adaptive Testing Version of a Cognitive Ability Test Using a Simulation Study
    Ioannis Tsaousis, Georgios D. Sideridis, Hannan M. AlGhamdi
    Journal of Psychoeducational Assessment.2021; 39(8): 954.     CrossRef
  • Exploring Counselor‐Client Agreement on Clients’ Work Capacity in Established and Consultative Dyads
    Uma Chandrika Millner, Diane Brandt, Leighton Chan, Alan Jette, Elizabeth Marfeo, Pengsheng Ni, Elizabeth Rasch, E. Sally Rogers
    Journal of Employment Counseling.2020; 57(3): 98.     CrossRef
  • Development of a Computerized Adaptive Testing for Internet Addiction
    Yong Zhang, Daxun Wang, Xuliang Gao, Yan Cai, Dongbo Tu
    Frontiers in Psychology.2019;[Epub]     CrossRef
  • Updates from 2018: Being indexed in Embase, becoming an affiliated journal of the World Federation for Medical Education, implementing an optional open data policy, adopting principles of transparency and best practice in scholarly publishing, and appreci
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2018; 15: 36.     CrossRef
Research article
A novel tool for evaluating non-cognitive traits of doctor of physical therapy learners in the United States  
Marcus Roll, Lara Canham, Paul Salamh, Kyle Covington, Corey Simon, Chad Cook
J Educ Eval Health Prof. 2018;15:19.   Published online August 17, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.19
  • 28,285 View
  • 365 Download
  • 6 Web of Science
  • 7 Crossref
Abstract
Purpose
The primary aim of this study was to develop a survey addressing an individual’s non-cognitive traits, such as emotional intelligence, interpersonal skills, social intelligence, psychological flexibility, and grit. Such a tool would provide beneficial information for the continued development of admissions standards and would help better capture the full breadth of experience and capabilities of applicants applying to doctor of physical therapy (DPT) programs.
Methods
This was a cross-sectional survey study involving learners in DPT programs at 3 academic institutions in the United States. A survey was developed based on established non-proprietary, non-cognitive measures affiliated with success and resilience. The survey was assessed for face validity, and exploratory factor analysis (EFA) was used to identify subgroups of factors based on responses to the items.
Results
A total of 298 participants (90.3%) completed all elements of the survey. EFA yielded 39 items for dimensional assessment with regression coefficients < 0.4. Within the 39 items, 3 latent constructs were identified: adaptability (16 items), intuitiveness (12 items), and engagement (11 items).
Conclusion
Following further examination and refinement, this preliminary non-cognitive assessment survey could play a valuable role in DPT admissions decisions.

Citations

Citations to this article as recorded by  
  • A Systematic Review of Variables Used in Physical Therapist Education Program Admissions Part 2: Noncognitive Variables
    Andrea N. Bowens
    Journal of Physical Therapy Education.2024;[Epub]     CrossRef
  • Predictors of Success on the National Physical Therapy Examination in 2 US Accelerated-Hybrid Doctor of Physical Therapy Programs
    Breanna Reynolds, Casey Unverzagt, Alex Koszalinski, Roberta Gatlin, Jill Seale, Kendra Gagnon, Kareaion Eaton, Shane L. Koppenhaver
    Journal of Physical Therapy Education.2022; 36(3): 225.     CrossRef
  • Grit, Resilience, Mindset, and Academic Success in Physical Therapist Students: A Cross-Sectional, Multicenter Study
    Marlena Calo, Belinda Judd, Lucy Chipchase, Felicity Blackstock, Casey L Peiris
    Physical Therapy.2022;[Epub]     CrossRef
  • Predicting graduate student performance – A case study
    Jinghua Nie, Ashrafee Hossain
    Journal of Further and Higher Education.2021; 45(4): 524.     CrossRef
  • Examining Demographic and Preadmission Factors Predictive of First Year and Overall Program Success in a Public Physical Therapist Education Program
    Katy Mitchell, Jennifer Ellison, Elke Schaumberg, Peggy Gleeson, Christina Bickley, Anna Naiki, Severin Travis
    Journal of Physical Therapy Education.2021; 35(3): 203.     CrossRef
  • Doctor of Physical Therapy Student Grit as a Predictor of Academic Success: A Pilot Study
    Rebecca Bliss, Erin Jacobson
    Health Professions Education.2020; 6(4): 522.     CrossRef
  • Personality-oriented job analysis to identify non-cognitive factors predictive of performance in a doctor of physical therapy program in the United States
    Maureen Conard, Kristin Schweizer
    Journal of Educational Evaluation for Health Professions.2018; 15: 34.     CrossRef
Brief report
The implementation and evaluation of an e-Learning training module for objective structured clinical examination raters in Canada  
Karima Khamisa, Samantha Halman, Isabelle Desjardins, Mireille St. Jean, Debra Pugh
J Educ Eval Health Prof. 2018;15:18.   Published online August 6, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.18
  • 22,829 View
  • 253 Download
  • 3 Web of Science
  • 4 Crossref
Abstract
Improving the reliability and consistency of objective structured clinical examination (OSCE) raters’ marking poses a continual challenge in medical education. The purpose of this study was to evaluate an e-Learning training module for OSCE raters who participated in the assessment of third-year medical students at the University of Ottawa, Canada. The effects of online training and those of traditional in-person (face-to-face) orientation were compared. Of the 90 physicians recruited as raters for this OSCE, 60 consented to participate (67.7%) in the study in March 2017. Of the 60 participants, 55 rated students during the OSCE, while the remaining 5 were back-up raters. The number of raters in the online training group was 41, while that in the traditional in-person training group was 19. Of those with prior OSCE experience (n= 18) who participated in the online group, 13 (68%) reported that they preferred this format to the in-person orientation. The total average time needed to complete the online module was 15 minutes. Furthermore, 89% of the participants felt the module provided clarity in the rater training process. There was no significant difference in the number of missing ratings based on the type of orientation that raters received. Our study indicates that online OSCE rater training is comparable to traditional face-to-face orientation.

Citations

Citations to this article as recorded by  
  • Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 11.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
  • No observed effect of a student-led mock objective structured clinical examination on subsequent performance scores in medical students in Canada
    Lorenzo Madrazo, Claire Bo Lee, Meghan McConnell, Karima Khamisa, Debra Pugh
    Journal of Educational Evaluation for Health Professions.2019; 16: 14.     CrossRef
  • Objective structured clinical examination as a measure of the practical training of future physicians [article in Ukrainian]
    M. M. Korda, A. H. Shulhai, N. V. Pasyaka, N. V. Petrenko, N. V. Haliyash, N. A. Bilkevich
    Медична освіта.2019; (3): 19.     CrossRef
Opinions
Proposal for improving the education and licensing examination for medical record administrators in Korea
Hyunchun Park, Hyunkyung Lee, Yookyung Boo
J Educ Eval Health Prof. 2018;15:16.   Published online July 11, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.16
  • 23,681 View
  • 200 Download
PDF Supplementary Material
Trainee perceptions of a group-based standardized patient training for challenging behavioral health scenarios in the United States
Rachel A. Petts, Jeffrey D. Shahidullah, Paul W. Kettlewell, Kathryn Dehart
J Educ Eval Health Prof. 2018;15:15.   Published online June 11, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.15
  • 27,852 View
  • 216 Download
  • 2 Web of Science
PDF Supplementary Material
Research article
Post-hoc simulation study of computerized adaptive testing for the Korean Medical Licensing Examination  
Dong Gi Seo, Jeongwook Choi
J Educ Eval Health Prof. 2018;15:14.   Published online May 17, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.14
Correction in: J Educ Eval Health Prof 2018;15(0):27
  • 36,275 View
  • 321 Download
  • 8 Web of Science
  • 7 Crossref
Abstract PDF Supplementary Material
Purpose
Computerized adaptive testing (CAT) has been adopted in licensing examinations because, as many studies have shown, it improves the efficiency and accuracy of testing. This simulation study investigated CAT scoring and item selection methods for the Korean Medical Licensing Examination (KMLE).
Methods
This study used a post-hoc (real data) simulation design. The item bank used in this study included all items from the January 2017 KMLE. All CAT algorithms in this study were implemented using the ‘catR’ package in R.
Results
In terms of accuracy, the Rasch and 2-parameter logistic (2PL) models performed better than the 3PL model. The ‘modal a posteriori’ and ‘expected a posteriori’ methods provided more accurate estimates than maximum likelihood estimation or weighted likelihood estimation. Furthermore, maximum posterior weighted information and minimum expected posterior variance performed better than other item selection methods. In terms of efficiency, the Rasch model is recommended to reduce test length.
Conclusion
Before implementing live CAT, a simulation study should be performed under varied test conditions, and based on its results, specific scoring and item selection methods should be predetermined.
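The post-hoc simulation workflow this abstract describes (an item bank, an ability-estimation method, and an item selection rule iterated to a fixed test length) can be sketched briefly. The sketch below is a generic Python illustration with hypothetical item parameters, not the authors’ catR (R) implementation; it pairs EAP scoring with maximum-information item selection under the Rasch model, one of the combinations the study evaluated.

```python
import numpy as np

# Hypothetical item bank of Rasch difficulties; the study itself used the
# January 2017 KMLE items, which are not reproduced here.
rng = np.random.default_rng(42)
N_ITEMS = 200
difficulty = rng.normal(0.0, 1.0, N_ITEMS)

def p_correct(theta, b):
    """Rasch model: P(correct | ability theta, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap_theta(responses, b_admin, grid=np.linspace(-4.0, 4.0, 121)):
    """Expected a posteriori (EAP) ability estimate with a N(0,1) prior."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for u, b in zip(responses, b_admin):
        p = p_correct(grid, b)
        like *= p ** u * (1.0 - p) ** (1 - u)
    post = prior * like
    return float(np.sum(grid * post) / np.sum(post))

def run_cat(true_theta, test_length=30):
    """Administer items by maximum Fisher information, rescoring by EAP."""
    available = list(range(N_ITEMS))
    administered, responses = [], []
    theta = 0.0  # start at the prior mean
    for _ in range(test_length):
        # Rasch item information is p(1 - p); pick the most informative item.
        p = p_correct(theta, difficulty[available])
        item = available.pop(int(np.argmax(p * (1.0 - p))))
        administered.append(item)
        # In a post-hoc design, the examinee's real recorded answer would be
        # looked up here; this sketch generates one from the model instead.
        responses.append(int(rng.random() < p_correct(true_theta, difficulty[item])))
        theta = eap_theta(responses, difficulty[administered])
    return theta

est = run_cat(true_theta=1.0)
```

Under the Rasch model the information function reduces to p(1 − p), which keeps both estimation and selection simple; a full study like this one would additionally vary the model (2PL, 3PL), the estimator (MLE, WLE, MAP, EAP), and the selection rule across conditions.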

Citations

Citations to this article as recorded by  
  • Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea
    Hyunjoo Pai
    Journal of Educational Evaluation for Health Professions.2023; 20: 8.     CrossRef
  • Developing Computerized Adaptive Testing for a National Health Professionals Exam: An Attempt from Psychometric Simulations
    Lingling Xu, Zhehan Jiang, Yuting Han, Haiying Liang, Jinying Ouyang
    Perspectives on Medical Education.2023;[Epub]     CrossRef
  • Optimizing Computer Adaptive Test Performance: A Hybrid Simulation Study to Customize the Administration Rules of the CAT-EyeQ in Macular Edema Patients
    T. Petra Rausch-Koster, Michiel A. J. Luijten, Frank D. Verbraak, Ger H. M. B. van Rens, Ruth M. A. van Nispen
    Translational Vision Science & Technology.2022; 11(11): 14.     CrossRef
  • The accuracy and consistency of mastery for each content domain using the Rasch and deterministic inputs, noisy “and” gate diagnostic classification models: a simulation study and a real-world analysis using data from the Korean Medical Licensing Examination
    Dong Gi Seo, Jae Kum Kim
    Journal of Educational Evaluation for Health Professions.2021; 18: 15.     CrossRef
  • Linear programming method to construct equated item sets for the implementation of periodical computer-based testing for the Korean Medical Licensing Examination
    Dong Gi Seo, Myeong Gi Kim, Na Hui Kim, Hye Sook Shin, Hyun Jung Kim
    Journal of Educational Evaluation for Health Professions.2018; 15: 26.     CrossRef
  • Funding information of the article entitled “Post-hoc simulation study of computerized adaptive testing for the Korean Medical Licensing Examination”
    Dong Gi Seo, Jeongwook Choi
    Journal of Educational Evaluation for Health Professions.2018; 15: 27.     CrossRef
  • Updates from 2018: Being indexed in Embase, becoming an affiliated journal of the World Federation for Medical Education, implementing an optional open data policy, adopting principles of transparency and best practice in scholarly publishing, and appreci
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2018; 15: 36.     CrossRef
Opinion
Experiences with a graduate course on sex and gender medicine in Korea
Seon Mee Park, Nayoung Kim, Hee Young Paik
J Educ Eval Health Prof. 2018;15:13.   Published online May 4, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.13
  • 35,304 View
  • 275 Download
  • 6 Web of Science
  • 5 Crossref
PDF Supplementary Material

Citations

Citations to this article as recorded by  
  • A roadmap for sex- and gender-disaggregated health research
    Sanne A. E. Peters, Mark Woodward
    BMC Medicine.2023;[Epub]     CrossRef
  • How to Integrate Sex and Gender Medicine into Medical and Allied Health Profession Undergraduate, Graduate, and Post-Graduate Education: Insights from a Rapid Systematic Literature Review and a Thematic Meta-Synthesis
    Rola Khamisy-Farah, Nicola Luigi Bragazzi
    Journal of Personalized Medicine.2022; 12(4): 612.     CrossRef
  • Work–Life Conflict and Its Health Effects on Korean Gastroenterologists According to Age and Sex
    Eun Sun Jang, Seon Mee Park, Young Sook Park, Jong Chan Lee, Nayoung Kim
    Digestive Diseases and Sciences.2020; 65(1): 86.     CrossRef
  • Escaso conocimiento entre los profesionales sanitarios sobre las diferencias de género en la asociación entre la diabetes tipo 2 y la enfermedad cardiovascular
    P. Buil-Cosiales, C. Gómez-García, X. Cos, J. Franch-Nadal, B. Vlacho, J.M. Millaruelo
    Medicina de Familia. SEMERGEN.2020; 46(2): 90.     CrossRef
  • Outcomes of gender-sensitivity educational interventions for healthcare providers: A systematic review
    Sally Lindsay, Mana Rezai, Kendall Kolne, Victoria Osten
    Health Education Journal.2019; 78(8): 958.     CrossRef
