JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS

Author index

Janghee Park 6 Articles
Evaluation of medical school faculty members’ educational performance in Korea in 2022 through analysis of the promotion regulations: a mixed methods study  
Hye Won Jang, Janghee Park
J Educ Eval Health Prof. 2023;20:7.   Published online February 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.7
  • 3,049 View
  • 133 Download
Abstract
Purpose
To ensure faculty members’ active participation in education in response to growing demand, medical schools should clearly describe educational activities in their promotion regulations. This study analyzed how medical education activities were evaluated in the promotion regulations of Korean medical schools as of 2022.
Methods
Data were collected from promotion regulations retrieved by searching the websites of 22 medical schools/universities in August 2022. To categorize educational activities and evaluation methods, the Association of American Medical Colleges framework for educational activities was utilized. Correlations between medical schools’ characteristics and the evaluation of medical educational activities were analyzed.
Results
We defined 6 categories, including teaching, development of education products, education administration and service, scholarship in education, student affairs, and others, and 20 activities with 57 sub-activities. The average number of included activities was highest in the development of education products category and lowest in the scholarship in education category. The weight adjustment factors of medical educational activities were the characteristics of the target subjects and faculty members, the number of involved faculty members, and the difficulty of activities. Private medical schools tended to have more educational activities in the regulations than public medical schools. The greater the number of faculty members, the greater the number of educational activities in the education administration and service categories.
Conclusion
Medical schools in Korea included various medical education activities and their evaluation methods in their promotion regulations. This study provides basic data for improving the reward system for medical faculty members’ educational efforts.
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 2,559 View
  • 206 Download
  • 6 Web of Science
  • 6 Crossref
Abstract
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 second-year pre-medical students who took a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies, and ChatGPT was used to provide feedback on their solutions. A survey was administered from May 17 to 19, 2023 to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and its strengths and challenges.
Results
The students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with using ChatGPT during class. The most common response concerning the appropriate context for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” Satisfaction with ChatGPT’s feedback, including its correctness, usefulness, and ethics, differed significantly depending on whether ChatGPT was used during class, but not according to gender or previous experience with ChatGPT. The most appreciated advantages were “providing answers to questions” and “summarizing information,” and the most serious disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students
    Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao
    JMIR Medical Education.2024; 10: e52483.     CrossRef
  • Integration of ChatGPT Into a Course for Medical Students: Explorative Study on Teaching Scenarios, Students’ Perception, and Applications
    Anita V Thomae, Claudia M Witt, Jürgen Barth
    JMIR Medical Education.2024; 10: e50545.     CrossRef
  • A cross sectional investigation of ChatGPT-like large language models application among medical students in China
    Guixia Pan, Jing Ni
    BMC Medical Education.2024;[Epub]     CrossRef
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Possibility of using the yes/no Angoff method as a substitute for the percent Angoff method for estimating the cutoff score of the Korean Medical Licensing Examination: a simulation study  
Janghee Park
J Educ Eval Health Prof. 2022;19:23.   Published online August 31, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.23
  • 3,096 View
  • 171 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
The percent Angoff (PA) method has been recommended as a reliable method to set the cutoff score instead of a fixed cut point of 60% in the Korean Medical Licensing Examination (KMLE). The yes/no Angoff (YNA) method, which is easy for panelists to judge, can be considered as an alternative because the KMLE has many items to evaluate. This study aimed to compare the cutoff score and the reliability depending on whether the PA or the YNA standard-setting method was used in the KMLE.
Methods
The materials were the open-access PA data of the KMLE. The PA data were converted to YNA data in 5 categories, in which the probabilities for a “yes” decision by panelists were 50%, 60%, 70%, 80%, and 90%. SPSS for descriptive analysis and G-string for generalizability theory were used to present the results.
Results
The PA method and the YNA method that counted 60% as “yes” estimated similar cutoff scores. Those cutoff scores were deemed acceptable based on the results of the Hofstee method. The highest reliability coefficients estimated by the generalizability test came from the PA method and from the YNA method with “yes” probabilities of 70%, 80%, 60%, and 50%, in descending order. The panelists’ specialty was the main source of error variance, and the size of the error was similar regardless of the standard-setting method.
Conclusion
These results show that the PA method was more reliable than the YNA method in estimating the cutoff score of the KMLE. However, the YNA method with a 60% probability for deciding “yes” can also be used as a substitute for the PA method in estimating the cutoff score of the KMLE.
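The PA-to-YNA conversion and cut-score estimation described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors’ analysis code, and the panelist ratings below are hypothetical values invented for the example.

```python
# Illustrative sketch of percent Angoff (PA) vs. yes/no Angoff (YNA)
# cut-score estimation. All ratings are hypothetical, not the study's data.

def pa_cut_score(ratings):
    """PA cut score: mean of all panelists' item ratings (0-100 scale,
    each rating = estimated chance a borderline examinee answers correctly)."""
    flat = [r for panelist in ratings for r in panelist]
    return sum(flat) / len(flat)

def yna_cut_score(ratings, threshold=60):
    """YNA cut score: a PA rating >= threshold converts to 'yes' (100),
    otherwise 'no' (0); the cut score is the mean of these decisions."""
    flat = [100 if r >= threshold else 0
            for panelist in ratings for r in panelist]
    return sum(flat) / len(flat)

# Each inner list: one panelist's ratings for 5 items (hypothetical).
ratings = [
    [55, 70, 80, 40, 65],
    [60, 75, 85, 45, 50],
]

print(pa_cut_score(ratings))                 # mean PA rating
print(yna_cut_score(ratings, threshold=60))  # percent of 'yes' decisions at 60%
```

With a threshold of 60%, the two estimates land close together, mirroring the abstract’s finding that the 60% YNA variant approximates the PA cut score.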

Citations

Citations to this article as recorded by  
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study
    Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 33.     CrossRef
Similarity of the cut score in test sets with different item amounts using the modified Angoff, modified Ebel, and Hofstee standard-setting methods for the Korean Medical Licensing Examination  
Janghee Park, Mi Kyoung Yim, Na Jin Kim, Duck Sun Ahn, Young-Min Kim
J Educ Eval Health Prof. 2020;17:28.   Published online October 5, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.28
  • 6,893 View
  • 195 Download
  • 7 Web of Science
  • 6 Crossref
Abstract
Purpose
The Korean Medical Licensing Examination (KMLE) typically contains a large number of items. The purpose of this study was to investigate whether the cut score differs between standard-setting based on all items of the exam and standard-setting based on only a subset of items.
Methods
We divided the item sets from the 3 most recent KMLEs into 4 subsets per year, each containing 25% of the items, based on their item content categories, discrimination index, and difficulty index. The entire panel of 15 members assessed all items (360 items, 100%) of the year 2017. Using the same method, each item set in split-half set 1 contained 184 items (51%) of the year 2018, and each item set in split-half set 2 contained 182 items (51%) of the year 2019. We used the modified Angoff, modified Ebel, and Hofstee methods in the standard-setting process.
Results
A cut score difference of less than 1% was observed when the same method was applied to stratified item subsets containing 25%, 51%, or 100% of the entire set. Higher rater reliability was observed when fewer items were rated.
Conclusion
When the entire item set was divided into equivalent subsets, assessing the exam with a portion of the item set (90 out of 360 items) yielded cut scores similar to those derived from the entire item set, and panelists’ individual assessments correlated more highly with the overall assessments.
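The subset construction summarized in this abstract can be sketched in the spirit described: stratify the item bank so each subset spans the full range of item properties. The round-robin dealing below is one plausible way to do this; it is a hypothetical illustration with made-up difficulty values, not the authors’ actual procedure.

```python
# Illustrative sketch (hypothetical data): building equivalent 25% subsets
# by sorting items on difficulty and dealing them round-robin, so each
# subset covers the full difficulty range.

def stratified_subsets(items, n_subsets=4):
    """Sort items by difficulty, then deal them round-robin across
    n_subsets so every subset spans easy through hard items."""
    ordered = sorted(items, key=lambda item: item["difficulty"])
    subsets = [[] for _ in range(n_subsets)]
    for i, item in enumerate(ordered):
        subsets[i % n_subsets].append(item)
    return subsets

# A small hypothetical item bank with difficulty indices.
items = [{"id": i, "difficulty": d}
         for i, d in enumerate([0.3, 0.9, 0.5, 0.7, 0.2, 0.8, 0.4, 0.6])]

subsets = stratified_subsets(items)
print([len(s) for s in subsets])  # [2, 2, 2, 2]
```

Each resulting subset mixes easier and harder items, which is what makes a 25% subset a defensible stand-in for the whole exam in standard-setting.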

Citations

Citations to this article as recorded by  
  • Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2022; 19: 2.     CrossRef
  • Possibility of using the yes/no Angoff method as a substitute for the percent Angoff method for estimating the cutoff score of the Korean Medical Licensing Examination: a simulation study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 23.     CrossRef
  • Equal Z standard-setting method to estimate the minimum number of panelists for a medical school’s objective structured clinical examination in Taiwan: a simulation study
    Ying-Ying Yang, Pin-Hsiang Huang, Ling-Yu Yang, Chia-Chang Huang, Chih-Wei Liu, Shiau-Shian Huang, Chen-Huan Chen, Fa-Yauh Lee, Shou-Yen Kao, Boaz Shulruf
    Journal of Educational Evaluation for Health Professions.2022; 19: 27.     CrossRef
  • Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study
    Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 33.     CrossRef
  • Presidential address: Quarantine guidelines to protect examinees from coronavirus disease 2019, clinical skills examination for dental licensing, and computer-based testing for medical, dental, and oriental medicine licensing
    Yoon-Seong Lee
    Journal of Educational Evaluation for Health Professions.2021; 18: 1.     CrossRef
  • Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea
    Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
    Journal of Educational Evaluation for Health Professions.2021; 18: 25.     CrossRef
Comparison of standard-setting methods for the Korean Radiological Technologist Licensing Examination: Angoff, Ebel, bookmark, and Hofstee  
Janghee Park, Duck-Sun Ahn, Mi Kyoung Yim, Jaehyoung Lee
J Educ Eval Health Prof. 2018;15:32.   Published online December 26, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.32
  • 19,597 View
  • 255 Download
  • 11 Web of Science
  • 9 Crossref
Abstract
Purpose
This study aimed to compare the possible standard-setting methods for the Korean Radiological Technologist Licensing Examination, which has a fixed cut score, and to suggest the most appropriate method.
Methods
Six radiological technology professors set standards for 250 items on the Korean Radiological Technologist Licensing Examination administered in December 2016 using the Angoff, Ebel, bookmark, and Hofstee methods.
Results
With a maximum percentile score of 100, the cut score for the examination was 71.27 using the Angoff method, 62.2 using the Ebel method, 64.49 using the bookmark method, and 62 using the Hofstee method. Based on the Hofstee method, an acceptable cut score for the examination would be between 52.83 and 70, but the cut score was 71.27 using the Angoff method.
Conclusion
The above results suggest that the best standard-setting method for determining the cut score would be a panel discussion using the modified Angoff or Ebel method, with the rated results verified by the Hofstee method. Since no standard-setting method has yet been adopted for the Korean Radiological Technologist Licensing Examination, this study can provide practical guidance for introducing a standard-setting process.
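The verification step recommended in this conclusion can be sketched as a simple check: compare each method’s cut score against the Hofstee acceptable range reported in the abstract (52.83 to 70). The cut scores below are the abstract’s own figures; the checking logic is an illustrative sketch, not the authors’ procedure.

```python
# Sketch of verifying method cut scores against the Hofstee acceptable
# range (52.83-70) reported in the abstract.

HOFSTEE_LOW, HOFSTEE_HIGH = 52.83, 70.0

cut_scores = {
    "Angoff": 71.27,
    "Ebel": 62.2,
    "bookmark": 64.49,
    "Hofstee": 62.0,
}

def acceptable(score, low=HOFSTEE_LOW, high=HOFSTEE_HIGH):
    """A cut score passes verification if it lies within the Hofstee range."""
    return low <= score <= high

for method, score in cut_scores.items():
    status = "acceptable" if acceptable(score) else "outside range"
    print(f"{method}: {score} -> {status}")
```

Run against these figures, only the Angoff cut score (71.27) falls outside the Hofstee range, matching the abstract’s observation.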

Citations

Citations to this article as recorded by  
  • Setting standards for a diagnostic test of aviation English for student pilots
    Maria Treadaway, John Read
    Language Testing.2024; 41(3): 557.     CrossRef
  • Third Year Veterinary Student Academic Encumbrances and Tenacity: Navigating Clinical Skills Curricula and Assessment
    Saundra H. Sample, Elpida Artemiou, Darlene J. Donszelmann, Cindy Adams
    Journal of Veterinary Medical Education.2024;[Epub]     CrossRef
  • The challenges inherent with anchor-based approaches to the interpretation of important change in clinical outcome assessments
    Kathleen W. Wyrwich, Geoffrey R. Norman
    Quality of Life Research.2023; 32(5): 1239.     CrossRef
  • Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study
    Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 33.     CrossRef
  • Comparison of the validity of bookmark and Angoff standard setting methods in medical performance tests
    Majid Yousefi Afrashteh
    BMC Medical Education.2021;[Epub]     CrossRef
  • Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea
    Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
    Journal of Educational Evaluation for Health Professions.2021; 18: 25.     CrossRef
  • Using the Angoff method to set a standard on mock exams for the Korean Nursing Licensing Examination
    Mi Kyoung Yim, Sujin Shin
    Journal of Educational Evaluation for Health Professions.2020; 17: 14.     CrossRef
  • Performance of the Ebel standard-setting method for the spring 2019 Royal College of Physicians and Surgeons of Canada internal medicine certification examination consisting of multiple-choice questions
    Jimmy Bourque, Haley Skinner, Jonathan Dupré, Maria Bacchus, Martha Ainslie, Irene W. Y. Ma, Gary Cole
    Journal of Educational Evaluation for Health Professions.2020; 17: 12.     CrossRef
  • Similarity of the cut score in test sets with different item amounts using the modified Angoff, modified Ebel, and Hofstee standard-setting methods for the Korean Medical Licensing Examination
    Janghee Park, Mi Kyoung Yim, Na Jin Kim, Duck Sun Ahn, Young-Min Kim
    Journal of Educational Evaluation for Health Professions.2020; 17: 28.     CrossRef
Proposal for a Modified Dreyfus and Miller Model with simplified competency level descriptions for performing self-rated surveys  
Janghee Park
J Educ Eval Health Prof. 2015;12:54.   Published online November 30, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.54
  • 37,311 View
  • 352 Download
  • 10 Web of Science
  • 11 Crossref
Abstract
In competency-based education, it is important to frequently evaluate the degree of competency achieved by establishing and specifying competency levels. To self-appraise one’s own competency level, one needs a simple, clear, and accurate description for each competency level. This study aimed at developing competency stages that can be used in surveys and conceptualizing clear and precise competency level descriptions. In this paper, the author intends to conceptualize a simple competency level description through a literature review. The author modified the most widely quoted competency level models—Dreyfus’ Five-stage Model and Miller’s Pyramid—and classified competency levels into the following: The Modified Dreyfus Model comprises absolute beginner, beginner, advanced beginner, competent, proficient, and expert, while the Modified Miller Model uses the levels of knows little, knows and knows how, exercised does, selected does, experienced does, and intuitive does. The author also provided a simple and clear description of competency levels. The precise description of competency levels developed in this study is expected to be useful in determining one’s competency level in surveys.

Citations

Citations to this article as recorded by  
  • Long-Term Retention of Advanced Cardiovascular Life Support Knowledge and Confidence in Doctor of Pharmacy Students
    Susan E. Smith, Andrea N. Sikora, Michael Fulford, Kelly C. Rogers
    American Journal of Pharmaceutical Education.2024; 88(1): 100609.     CrossRef
  • Impact of fully guided implant planning software training on the knowledge acquisition and satisfaction of dental undergraduate students
    Shishir Ram Shetty, Colin Alexander Murray, Sausan Al Kawas, Sara Jaser, Natheer Al-Rawi, Wael Talaat, Sangeetha Narasimhan, Sunaina Shetty, Pooja Adtani, Shruthi Hegde
    Medical Education Online.2023;[Epub]     CrossRef
  • Assessment of a support garment in parastomal bulging from a patient perspective: a qualitative study
    Trine Borglit, Marianne Krogsgaard, Stine Zeberg Theisen, Mette Juel Rothmann
    International Journal of Qualitative Studies on Health and Well-being.2022;[Epub]     CrossRef
  • Milestones 2.0: An advancement in competency-based assessment for dermatology
    Kiran Motaparthi, Laura Edgar, William D. Aughenbaugh, Anna L. Bruckner, Alexa Leone, Erin F. Mathes, Andrea Murina, Ronald P. Rapini, David Rubenstein, Ashley Wysong, Erik J. Stratman
    Clinics in Dermatology.2022; 40(6): 776.     CrossRef
  • Sandbox of Competence: A Conceptual Model for Assessing Professional Competence
    Alcides Luiz Neto, Luciano Ferreira da Silva, Renato Penha
    Administrative Sciences.2022; 12(4): 182.     CrossRef
  • Preparation for Challenging Cases: What Differentiates Expert From Novice Surgeons?
    Iman Ghaderi, Lev Korovin, Timothy M. Farrell
    Journal of Surgical Education.2021; 78(2): 450.     CrossRef
  • Rethinking Competence: A Nexus of Educational Models in the Context of Lifelong Learning
    Dalia Bajis, Betty Chaar, Rebekah Moles
    Pharmacy.2020; 8(2): 81.     CrossRef
  • Meeting Personal Health Care Needs in Primary Care: A Response From the Athletic Training Profession
    Wade Green, Eric Sauers
    Athletic Training Education Journal.2020; 15(4): 278.     CrossRef
  • Dreyfus scale-based feedback increased medical students’ satisfaction with the complex cluster part of a interviewing and physical examination course and improved skills readiness in Taiwan
    Shiau-Shian Huang, Chia-Chang Huang, Ying-Ying Yang, Shuu-Jiun Wang, Boaz Shulruf, Chen-Huan Chen
    Journal of Educational Evaluation for Health Professions.2019; 16: 30.     CrossRef
  • Fitness for purpose in anaesthesiology: a review
    Nicola Kalafatis, Thomas Sommerville, Pragasan Dean Gopalan
    Southern African Journal of Anaesthesia and Analgesia.2018; 24(6): 148.     CrossRef
  • Confidence in Procedural Skills before and after a Two-Year Master’s Programme in Family Medicine in Gezira State, Sudan
    K. G. Mohamed, S. Hunskaar, S. H. Abdelrahman, E. M. Malik
    Advances in Medicine.2017; 2017: 1.     CrossRef
