
JEEHP : Journal of Educational Evaluation for Health Professions

4 results for "Song Yi Park"
Research articles
Acceptability of the 8-case objective structured clinical examination of medical students in Korea using generalizability theory: a reliability study  
Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
J Educ Eval Health Prof. 2022;19:26.   Published online September 8, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.26
  • 2,385 View
  • 211 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
This study investigated whether the reliability was acceptable when the number of cases in the objective structured clinical examination (OSCE) decreased from 12 to 8 using generalizability theory (GT).
Methods
This psychometric study analyzed the OSCE data of 439 fourth-year medical students conducted in the Busan and Gyeongnam areas of South Korea from July 12 to 15, 2021. The generalizability study (G-study) considered 3 facets—students (p), cases (c), and items (i)—and designed the analysis as p×(i:c) due to items being nested in a case. The acceptable generalizability (G) coefficient was set to 0.70. The G-study and decision study (D-study) were performed using G String IV ver. 6.3.8 (Papawork, Hamilton, ON, Canada).
Results
All G coefficients except for July 14 (0.69) were above 0.70. The major sources of variance components (VCs) were items nested in cases (i:c), from 51.34% to 57.70%, and residual error (pi:c), from 39.55% to 43.26%. The proportion of VCs in cases was negligible, ranging from 0% to 2.03%.
Conclusion
Although the number of cases decreased in the 2021 Busan and Gyeongnam OSCE, the reliability remained acceptable. In the D-study, reliability was maintained at 0.70 or higher with at least 21 items/case in 8 cases, or at least 18 items/case in 9 cases. Moreover, according to the G-study, increasing the number of items nested in cases, rather than the number of cases, could further improve reliability. The consortium needs to maintain a case bank with various items to implement a reliable blueprinting combination for the OSCE.

Citations to this article as recorded by:
  • Applying the Generalizability Theory to Identify the Sources of Validity Evidence for the Quality of Communication Questionnaire
    Flávia Del Castanhel, Fernanda R. Fonseca, Luciana Bonnassis Burg, Leonardo Maia Nogueira, Getúlio Rodrigues de Oliveira Filho, Suely Grosseman
    American Journal of Hospice and Palliative Medicine®. 2024;41(7):792. CrossRef
Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea  
Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
J Educ Eval Health Prof. 2021;18:25.   Published online September 27, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.25
  • 5,559 View
  • 299 Download
  • 2 Web of Science
  • 3 Crossref
Abstract
Purpose
Setting standards is critical in the health professions. However, appropriate standard-setting methods are not always applied when setting cut scores in performance assessments. The aim of this study was to compare the cut scores obtained when the standard-setting approach was changed from the norm-referenced method to the borderline group method (BGM) and the borderline regression method (BRM) in an objective structured clinical examination (OSCE) in medical school.
Methods
This was an explorative study to model the implementation of the BGM and BRM. A total of 107 fourth-year medical students attended the OSCE at 7 stations for encountering standardized patients (SPs) and at 1 station for performing skills on a manikin on July 15th, 2021. Thirty-two physician examiners evaluated the performance by completing a checklist and global rating scales.
Results
The cut score of the norm-referenced method was lower than that of the BGM (P<0.01) and the BRM (P<0.02). There was no significant difference in the cut score between the BGM and BRM (P=0.40). The station with the highest standard deviation and the highest proportion of borderline examinees showed the largest difference in cut scores across standard-setting methods.
Conclusion
Cut scores prefixed by the norm-referenced method, which does not consider station content or examinee performance, can vary with station difficulty and content, undermining the appropriateness of standard-setting decisions. If there is adequate consensus on the criteria for the borderline group, standard setting with the BRM could be applied as a practical and defensible method to determine the cut score for an OSCE.
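The BRM described above fits a linear regression of each examinee's checklist score on the examiner's global rating, then takes the predicted score at the borderline rating as the station cut score. A minimal sketch with made-up station data (the ratings, scores, and borderline point are illustrative assumptions, not the study's data):

```python
import numpy as np

# Borderline regression method (BRM) sketch: regress checklist scores on
# global ratings, then read off the predicted score at the "borderline"
# rating (here rating = 2 on a hypothetical 1-5 scale).
global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist_scores = np.array([8, 10, 12, 14, 15, 16, 18, 19, 22, 23])

# Least-squares line: polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)

borderline_rating = 2
cut_score = slope * borderline_rating + intercept
print(f"station cut score = {cut_score:.1f}")
```

Unlike a prefixed norm-referenced cut, this cut score moves with station difficulty, because it is derived from how examinees actually performed at that station.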

Citations to this article as recorded by:
  • Analyzing the Quality of Objective Structured Clinical Examination in Alborz University of Medical Sciences
    Suleiman Ahmadi, Amin Habibi, Mitra Rahimzadeh, Shahla Bahrami
    Alborz University Medical Journal. 2023;12(4):485. CrossRef
  • Possibility of using the yes/no Angoff method as a substitute for the percent Angoff method for estimating the cutoff score of the Korean Medical Licensing Examination: a simulation study
    Janghee Park
    Journal of Educational Evaluation for Health Professions. 2022;19:23. CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions. 2021;18:28. CrossRef
Agreement between medical students’ peer assessments and faculty assessments in advanced resuscitation skills examinations in South Korea  
Jinwoo Jeong, Song Yi Park, Kyung Hoon Sun
J Educ Eval Health Prof. 2021;18:4.   Published online March 25, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.4
  • 5,094 View
  • 285 Download
Abstract
Purpose
In medical education, peer assessment is considered to be an effective learning strategy. Although several studies have examined agreement between peer and faculty assessments regarding basic life support (BLS), few studies have done so for advanced resuscitation skills (ARS) such as intubation and defibrillation. Therefore, this study aimed to determine the degree of agreement between medical students’ and faculty assessments of ARS examinations.
Methods
This retrospective explorative study was conducted during the emergency medicine (EM) clinical clerkship of fourth-year medical students from April to July 2020. A faculty assessor (FA) and a peer assessor (PA) assessed each examinee’s resuscitation skills (including BLS, intubation, and defibrillation) using a checklist that consisted of 20 binary items (performed or not performed) and 1 global proficiency rating on a 5-point Likert scale. After receiving feedback and training, each examinee served as the PA for the next examinee. All 54 students participated in peer assessment. The assessments of 44 FA/PA pairs were analyzed using the intraclass correlation coefficient (ICC) and Gwet’s first-order agreement coefficient.
Results
The PA scores were higher than the FA scores (mean±standard deviation, 20.2±2.5 [FA] vs. 22.3±2.4 [PA]; P<0.001). The agreement was poor to moderate for the overall checklist (ICC, 0.55; 95% confidence interval [CI], 0.31 to 0.73; P<0.01), BLS (ICC, 0.19; 95% CI, -0.11 to 0.46; P<0.10), intubation (ICC, 0.51; 95% CI, 0.26 to 0.70; P<0.01), and defibrillation (ICC, 0.49; 95% CI, 0.23 to 0.68; P<0.01).
Conclusion
Senior medical students’ ARS assessments showed poor to moderate agreement with faculty assessments. If peer assessment is planned in skills education, comprehensive preparation and sufficient assessor training should be provided in advance.
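Alongside the ICC, the study used Gwet’s first-order agreement coefficient (AC1), which corrects observed agreement for chance while remaining stable when item prevalence is highly skewed. A minimal sketch for two raters scoring binary checklist items (the rating vectors are illustrative, not the study's data):

```python
# Gwet's AC1 for two raters on binary items (performed = 1, not = 0).
# The faculty/peer rating vectors below are illustrative only.

def gwet_ac1(ratings_a, ratings_b):
    """AC1 = (pa - pe) / (1 - pe) for two raters, binary ratings."""
    n = len(ratings_a)
    # Observed proportion of agreement.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the mean marginal proportion of "performed".
    pi = (sum(ratings_a) + sum(ratings_b)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

faculty = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # faculty assessor (FA)
peer    = [1, 1, 1, 1, 1, 1, 0, 1, 1, 0]  # peer assessor (PA)
print(f"AC1 = {gwet_ac1(faculty, peer):.2f}")
```

Because its chance-agreement term shrinks as ratings become one-sided, AC1 avoids the paradoxically low values that kappa-type statistics can produce on high-prevalence checklist items.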
Brief Report
Clinical performance of medical students in Korea in a whole-task emergency station in the objective structured clinical examination with a standardized patient complaining of palpitations  
Song Yi Park, Hyun-Hee Kong, Min-Jeong Kim, Yoo Sang Yoon, Sang-Hwa Lee, Sunju Im, Ji-Hyun Seo
J Educ Eval Health Prof. 2020;17:42.   Published online December 16, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.42
  • 4,833 View
  • 132 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
This study assessed the clinical performance of 150 third-year medical students in Busan, Korea in a whole-task emergency objective structured clinical examination station that simulated a patient with palpitations visiting the emergency department. The examination was conducted from November 25 to 27, 2019. Clinical performance was assessed as the number and percentage of students who performed history-taking (HT), a physical examination (PE), an electrocardiography (ECG) study, patient education (Ed), and clinical reasoning (CR), which were items on the checklist. It was found that 18.0% of students checked the patient’s pulse, 51.3% completed an ECG study, and 57.9% explained the results to the patient. A sizable proportion (38.0%) of students did not even attempt an ECG study. In a whole-task emergency station, students showed good performance on HT and CR, but unsatisfactory results for PE, the ECG study, and Ed. Clinical skills educational programs for these students should focus more on PE, timely diagnostic tests, and sufficient Ed.

Citations to this article as recorded by:
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions. 2021;18:28. CrossRef
  • Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea
    Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
    Journal of Educational Evaluation for Health Professions. 2021;18:25. CrossRef
