-
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study
-
Minkyung Oh, Bo Young Yoon
-
J Educ Eval Health Prof. 2024;21:27. Published online October 2, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.27
-
Abstract
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. This study aimed to develop new performance evaluation indicators for the program to better assess its long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick's 4 levels, the context/input/process/product [CIPP] evaluation model, and the Organization for Economic Cooperation and Development's Development Assistance Committee [OECD DAC] criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and checking consistency through the consistency index (CI) and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
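To make the AHP consistency check concrete, the sketch below computes priority weights and the consistency ratio for a small pairwise comparison matrix. The matrix values are hypothetical and for illustration only, not data from the study.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
n = A.shape[0]

# The principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lambda_max = eigvals.real[k]
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index and consistency ratio (Saaty).
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index table
CR = CI / RI

print(f"weights = {np.round(weights, 3)}, CR = {CR:.3f}")  # CR < 0.1 => acceptable
```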
Results
The literature review led to a combined evaluation model comprising 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio (CVR) values exceeding 0.444, the minimum acceptable value for an 18-member panel. The AHP analysis assigned weights to each indicator, and CR values below 0.1 confirmed consistency. The final set of evaluation indicators was adopted as the new evaluation tool after a workshop with KOFIH.
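For reference, the 0.444 cutoff follows from Lawshe's content validity ratio formula. A minimal sketch, with a made-up vote count, shows how an 18-member panel yields exactly that critical value:

```python
# Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2), range -1 to +1.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# With 18 panelists, an item rated "essential" by 13 of them (hypothetical count):
cvr = content_validity_ratio(13, 18)
print(f"CVR = {cvr:.3f}")  # 0.444, the minimum acceptable value for N = 18
```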
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program's effectiveness and international healthcare collaboration.
-
Medical students’ thought process while solving problems in 3 different types of clinical assessments in Korea: clinical performance examination, multimedia case-based assessment, and modified essay question
-
Sejin Kim, Ikseon Choi, Bo Young Yoon, Min Jeong Kwon, Seok-jin Choi, Sang Hyun Kim, Jong-Tae Lee, Byoung Doo Rhee
-
J Educ Eval Health Prof. 2019;16:10. Published online May 9, 2019
-
DOI: https://doi.org/10.3352/jeehp.2019.16.10
-
Abstract
Purpose
This study aimed to explore students’ cognitive patterns while solving clinical problems in 3 different types of assessments—clinical performance examination (CPX), multimedia case-based assessment (CBA), and modified essay question (MEQ)—and thereby to understand how different types of assessments stimulate different patterns of thinking.
Methods
A total of 6 test-performance cases from 2 fourth-year medical students were used in this cross-case study. Data were collected through one-on-one interviews using a stimulated recall protocol, in which students watched videos of themselves taking each assessment and were asked to elaborate on what they had been thinking. The unit of analysis was the smallest phrase or sentence in the participants' narratives that represented a meaningful cognitive occurrence. The narrative data were reorganized chronologically and then analyzed according to the hypothetico-deductive reasoning framework for clinical reasoning.
Results
Both participants demonstrated similar proportional frequencies of clinical reasoning patterns on the same clinical assessments. The results also revealed that the 3 assessment types may stimulate different patterns of clinical reasoning. For example, the CPX strongly promoted the participants' reasoning related to inquiry strategy, while the MEQ strongly promoted hypothesis generation. Similarly, data analysis and synthesis by the participants were more strongly stimulated by the CBA than by the other assessment types.
Conclusion
This study found that different assessment designs stimulated different patterns of thinking during problem-solving. This finding can contribute to the search for ways to improve current clinical assessments. Importantly, the research method used in this study can be utilized as an alternative way to examine the validity of clinical assessments.