Research articles

GPT-4o’s competency in answering the simulated written European Board of Interventional Radiology exam compared to a medical student and experts in Germany and its ability to generate exam items on interventional radiology: a descriptive study
Sebastian Ebel, Constantin Ehrengut, Timm Denecke, Holger Gößmann, Anne Bettina Beeskow
J Educ Eval Health Prof. 2024;21:21. Published online August 20, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.21
858 views, 277 downloads, 2 Web of Science citations, 2 Crossref citations

Abstract
Purpose
This study aimed to determine whether ChatGPT-4o, a generative artificial intelligence (AI) platform, could pass a simulated written European Board of Interventional Radiology (EBIR) exam, and whether GPT-4o could be used to train medical students and interventional radiologists at different levels of expertise by generating exam items on interventional radiology.
Methods
GPT-4o was asked to answer 370 simulated exam items of the Cardiovascular and Interventional Radiology Society of Europe (CIRSE) for EBIR preparation (CIRSE Prep). Subsequently, GPT-4o was requested to generate exam items on interventional radiology topics at levels of difficulty suitable for medical students and the EBIR exam. Those generated items were answered by 4 participants, including a medical student, a resident, a consultant, and an EBIR holder. The correctly answered items were counted. One investigator checked the answers and items generated by GPT-4o for correctness and relevance. This work was done from April to July 2024.
Results
GPT-4o correctly answered 248 of the 370 CIRSE Prep items (67.0%). For 50 CIRSE Prep items, the medical student answered 46.0% correctly, the resident 42.0%, the consultant 50.0%, and the EBIR holder 74.0%. All participants answered 82.0% to 92.0% of the 50 GPT-4o-generated items at the student level correctly. For the 50 GPT-4o items at the EBIR level, the medical student answered 32.0% correctly, the resident 44.0%, the consultant 48.0%, and the EBIR holder 66.0%. All participants could pass the GPT-4o-generated items at the student level, while the EBIR holder could pass the GPT-4o-generated items at the EBIR level. Two items (1.3%) of the 150 generated by GPT-4o were assessed as implausible.
Conclusion
GPT-4o could pass the simulated written EBIR exam and create exam items of varying difficulty to train medical students and interventional radiologists.

Citations
Citations to this article as recorded by
- From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
Markus Kipp
Information.2024; 15(9): 543. CrossRef
- Performance of ChatGPT and Bard on the medical licensing examinations varies across different cultures: a comparison study
Yikai Chen, Xiujie Huang, Fangjie Yang, Haiming Lin, Haoyu Lin, Zhuoqun Zheng, Qifeng Liang, Jinhai Zhang, Xinxin Li
BMC Medical Education.2024;[Epub] CrossRef

Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis
Min-Kyeong Kim, Hae Won Kim
J Educ Eval Health Prof. 2024;21:20. Published online August 16, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.20
Correction in: J Educ Eval Health Prof 2024;21(0):35
869 views, 290 downloads, 1 Crossref citation

Abstract
Purpose
This study aimed to explore how the grading system affected medical students’ academic performance based on their perceptions of the learning environment and intrinsic motivation in the context of changing from norm-referenced A–F grading to criterion-referenced honors/pass/fail grading.
Methods
The study involved 238 second-year medical students from 2014 (n=127, A–F grading) and 2015 (n=111, honors/pass/fail grading) at Yonsei University College of Medicine in Korea. Scores on the Dundee Ready Education Environment Measure, the Academic Motivation Scale, and the Basic Medical Science Examination were used to measure overall learning environment perceptions, intrinsic motivation, and academic performance, respectively. Serial mediation analysis was conducted to examine the pathways between the grading system and academic performance, focusing on the mediating roles of student perceptions and intrinsic motivation.
Results
The honors/pass/fail grading class students reported more positive perceptions of the learning environment, higher intrinsic motivation, and better academic performance than the A–F grading class students. Mediation analysis demonstrated a serial mediation effect between the grading system and academic performance through learning environment perceptions and intrinsic motivation. Student perceptions and intrinsic motivation did not independently mediate the relationship between the grading system and performance.
Conclusion
Reducing the number of grades and eliminating rank-based grading might have created an affirming learning environment that fulfills basic psychological needs and reinforces the intrinsic motivation linked to academic performance. The cumulative effect of these 2 mediators suggests that a comprehensive approach should be used to understand student performance.
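To make the serial mediation logic concrete, here is a minimal sketch of the two-mediator pathway (grading system → learning environment perceptions → intrinsic motivation → performance) estimated with ordinary least squares on simulated data. All variable names and path coefficients are illustrative assumptions, not the study's data or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 238  # echoes the study's sample size; everything else is simulated

# Grading system: 0 = A-F class, 1 = honors/pass/fail class.
grading = rng.integers(0, 2, size=n).astype(float)
# Hypothetical causal chain with assumed path coefficients.
perception = 0.5 * grading + rng.normal(size=n)      # path a1
motivation = 0.6 * perception + rng.normal(size=n)   # path d21
performance = 0.4 * motivation + rng.normal(size=n)  # path b2

def last_coef(y, predictors):
    """Coefficient of the last predictor in an OLS regression of y."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit().params[-1]

a1 = last_coef(perception, [grading])
d21 = last_coef(motivation, [grading, perception])
b2 = last_coef(performance, [grading, perception, motivation])

# Serial indirect effect of the grading change on performance via both mediators.
print(f"serial indirect effect a1*d21*b2 = {a1 * d21 * b2:.3f}")
```

In practice, the significance of such an indirect effect is usually assessed with bootstrapped confidence intervals rather than a single point estimate.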
Citations
Citations to this article as recorded by
- Erratum: Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis
Journal of Educational Evaluation for Health Professions.2024; 21: 35. CrossRef
Special article on the 20th anniversary of the journal

Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study
Dong Gi Seo, Jeongwook Choi, Jinha Kim
J Educ Eval Health Prof. 2024;21:18. Published online July 9, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.18

Abstract
Purpose
This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under 2 stopping rules (standard error of measurement [SEM]=0.3 and 0.25) using both real and simulated data in medical examinations in Korea.
Methods
This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated in R using parameters estimated from a real item bank. Outcome variables included the number of examinees passing or failing under each stopping rule (SEM=0.25 and SEM=0.30), the number of items administered, and the correlation between ability estimates. The consistency of the real CAT results was evaluated by comparing pass/fail decisions under the 2 stopping rules against a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules.
Results
Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r=0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average item numbers between real and simulated data.
Conclusion
The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT.
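As an illustration of how an SEM-based stopping rule behaves, here is a minimal Rasch-model CAT sketch (the item bank, maximum-information selection, and Newton-Raphson estimator are assumptions for illustration, not the study's implementation). Under the Rasch model the test information at θ is the sum of p(1−p) over administered items, and SEM = 1/√information, so the stricter 0.25 rule simply demands more items than the 0.30 rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(true_theta, bank_b, sem_stop=0.30, max_items=100):
    """Administer items until the SEM of the ability estimate drops to sem_stop."""
    administered, responses = [], []
    theta, sem = 0.0, np.inf
    available = list(range(len(bank_b)))
    while len(administered) < max_items and sem > sem_stop:
        # Maximum-information selection: under Rasch, the most informative
        # remaining item is the one with difficulty closest to theta.
        j = min(available, key=lambda k: abs(bank_b[k] - theta))
        available.remove(j)
        administered.append(j)
        responses.append(rng.random() < rasch_p(true_theta, bank_b[j]))
        # A few Newton-Raphson steps toward the ML ability estimate.
        for _ in range(10):
            p = rasch_p(theta, bank_b[administered])
            info = np.sum(p * (1.0 - p))
            theta = float(np.clip(theta + np.sum(np.array(responses) - p) / info, -4, 4))
        p = rasch_p(theta, bank_b[administered])
        sem = 1.0 / np.sqrt(np.sum(p * (1.0 - p)))  # SEM = 1/sqrt(information)
    return theta, sem, len(administered)

bank = rng.uniform(-3, 3, size=500)  # hypothetical item-bank difficulties
for rule in (0.30, 0.25):
    theta_hat, sem, n = run_cat(true_theta=0.5, bank_b=bank, sem_stop=rule)
    print(f"stop at SEM<={rule}: theta_hat={theta_hat:+.2f}, SEM={sem:.2f}, "
          f"items used={n}, decision={'pass' if theta_hat >= 0.0 else 'fail'}")
```

Since each Rasch item contributes at most 0.25 information, the 0.30 rule needs roughly 1/0.09/0.25 ≈ 45 well-targeted items and the 0.25 rule roughly 64, which is the efficiency-precision trade-off the study quantifies.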
Review

Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
J Educ Eval Health Prof. 2024;21:6. Published online March 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.6
5,334 views, 543 downloads, 11 Web of Science citations, 15 Crossref citations

Abstract
Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, the literature also highlights challenges related to academic integrity, data accuracy, and potential harm to learning, and it offers recommendations for using ChatGPT, including the establishment of usage guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.

Citations
Citations to this article as recorded by
- AI-assisted patient education: Challenges and solutions in pediatric kidney transplantation
MZ Ihsan, Dony Apriatama, Pithriani, Riza Amalia
Patient Education and Counseling.2025; 131: 108575. CrossRef
- Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the technology acceptance model
Anne-Kathrin Kleine, Insa Schaffernak, Eva Lermer
Computers in Human Behavior: Artificial Humans.2025; 3: 100113. CrossRef
- Chatbots in neurology and neuroscience: Interactions with students, patients and neurologists
Stefano Sandrone
Brain Disorders.2024; 15: 100145. CrossRef
- ChatGPT in education: unveiling frontiers and future directions through systematic literature review and bibliometric analysis
Buddhini Amarathunga
Asian Education and Development Studies.2024; 13(5): 412. CrossRef
- Evaluating the performance of ChatGPT-3.5 and ChatGPT-4 on the Taiwan plastic surgery board examination
Ching-Hua Hsieh, Hsiao-Yun Hsieh, Hui-Ping Lin
Heliyon.2024; 10(14): e34851. CrossRef
- Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172
Ken Masters, Anne Herrmann-Werner, Teresa Festl-Wietek, David Taylor
Medical Teacher.2024; 46(10): 1258. CrossRef
- A Comparative Analysis of ChatGPT and Medical Faculty Graduates in Medical Specialization Exams: Uncovering the Potential of Artificial Intelligence in Medical Education
Gülcan Gencer, Kerem Gencer
Cureus.2024;[Epub] CrossRef
- Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review
Sang-Jun Kim
Science Editing.2024; 11(2): 96. CrossRef
- Innovation Off the Bat: Bridging the ChatGPT Gap in Digital Competence among English as a Foreign Language Teachers
Gulsara Urazbayeva, Raisa Kussainova, Aikumis Aibergen, Assel Kaliyeva, Gulnur Kantayeva
Education Sciences.2024; 14(9): 946. CrossRef
- Exploring the perceptions of Chinese pre-service teachers on the integration of generative AI in English language teaching: Benefits, challenges, and educational implications
Ji Young Chung, Seung-Hoon Jeong
Online Journal of Communication and Media Technologies.2024; 14(4): e202457. CrossRef
- Unveiling the bright side and dark side of AI-based ChatGPT: a bibliographic and thematic approach
Chandan Kumar Tiwari, Mohd. Abass Bhat, Abel Dula Wedajo, Shagufta Tariq Khan
Journal of Decision Systems.2024; : 1. CrossRef
- Artificial Intelligence in Medical Education and Mentoring in Rehabilitation Medicine
Julie K. Silver, Mustafa Reha Dodurgali, Nara Gavini
American Journal of Physical Medicine & Rehabilitation.2024; 103(11): 1039. CrossRef
- The Potential of Artificial Intelligence Tools for Reducing Uncertainty in Medicine and Directions for Medical Education
Sauliha Rabia Alli, Soaad Qahhār Hossain, Sunit Das, Ross Upshur
JMIR Medical Education.2024; 10: e51446. CrossRef
- A Systematic Literature Review of Empirical Research on Applying Generative Artificial Intelligence in Education
Xin Zhang, Peng Zhang, Yuan Shen, Min Liu, Qiong Wang, Dragan Gašević, Yizhou Fan
Frontiers of Digital Education.2024; 1(3): 223. CrossRef
- Artificial intelligence in medical problem-based learning: opportunities and challenges
Yaoxing Chen, Hong Qi, Yu Qiu, Juan Li, Liang Zhu, Xiaoling Gao, Hao Wang, Gan Jiang
Global Medical Education.2024;[Epub] CrossRef
Research articles

Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5. Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5

Abstract
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.

Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
Janghee Park
J Educ Eval Health Prof. 2023;20:29. Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
3,135 views, 230 downloads, 6 Web of Science citations, 8 Crossref citations

Abstract
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
Students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with the use of ChatGPT during class. The most common response concerning the appropriate context for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” Satisfaction with ChatGPT’s feedback (its correctness, usefulness, and ethics) differed significantly depending on whether ChatGPT was used during class, but not according to gender or previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the most serious disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations
Citations to this article as recorded by
- Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
- Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students
Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao
JMIR Medical Education.2024; 10: e52483. CrossRef
- Integration of ChatGPT Into a Course for Medical Students: Explorative Study on Teaching Scenarios, Students’ Perception, and Applications
Anita V Thomae, Claudia M Witt, Jürgen Barth
JMIR Medical Education.2024; 10: e50545. CrossRef
- A cross sectional investigation of ChatGPT-like large language models application among medical students in China
Guixia Pan, Jing Ni
BMC Medical Education.2024;[Epub] CrossRef
- A Pilot Study of Medical Student Opinions on Large Language Models
Alan Y Xu, Vincent S Piranio, Skye Speakman, Chelsea D Rosen, Sally Lu, Chris Lamprecht, Robert E Medina, Maisha Corrielus, Ian T Griffin, Corinne E Chatham, Nicolas J Abchee, Daniel Stribling, Phuong B Huynh, Heather Harrell, Benjamin Shickel, Meghan Bre
Cureus.2024;[Epub] CrossRef
- The intent of ChatGPT usage and its robustness in medical proficiency exams: a systematic review
Tatiana Chaiban, Zeinab Nahle, Ghaith Assi, Michelle Cherfane
Discover Education.2024;[Epub] CrossRef
- ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099. CrossRef
- Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef

Development and validation of the student ratings in clinical teaching scale in Australia: a methodological study
Pin-Hsiang Huang, Anthony John O’Sullivan, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:26. Published online September 5, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.26

Abstract
Purpose
This study aimed to devise a valid measurement for assessing clinical students’ perceptions of teaching practices.
Methods
A new tool was developed based on a meta-analysis encompassing effective clinical teaching-learning factors. Seventy-nine items were generated using a frequency (never to always) scale. The tool was administered to year 2, 3, and 6 medical students at the University of New South Wales. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted to establish the tool’s construct validity and goodness of fit, and Cronbach’s α was used to assess reliability.
Results
In total, 352 students (44.2%) completed the questionnaire. The EFA identified 4 factors: student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). The CFA showed acceptable goodness of fit (chi-square P<0.01, comparative fit index=0.930, Tucker-Lewis index=0.917, root mean square error of approximation=0.069, standardized root mean square residual=0.06).
Conclusion
The established tool—Student Ratings in Clinical Teaching (STRICT)—is a valid and reliable tool that demonstrates how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate the biases of acquiescence and social desirability. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.
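As a concrete illustration of the reliability step, here is a minimal sketch of Cronbach’s α computed on simulated ratings; the respondent count echoes the study, but the item count, rating scale, and data are assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 352 respondents rating a 5-item factor on a 1-5 scale,
# generated so the items share a common component, as items of one factor would.
rng = np.random.default_rng(0)
common = rng.normal(size=(352, 1))
noise = rng.normal(scale=0.8, size=(352, 5))
scores = np.clip(np.round(3 + common + noise), 1, 5)

print(f"alpha = {cronbach_alpha(scores):.2f}")  # around 0.85-0.90 with this setup
```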

Experience of introducing an electronic health records station in an objective structured clinical examination to evaluate medical students’ communication skills in Canada: a descriptive study
Kuan-chin Jean Chen, Ilona Bartman, Debra Pugh, David Topps, Isabelle Desjardins, Melissa Forgie, Douglas Archibald
J Educ Eval Health Prof. 2023;20:22. Published online July 4, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.22
3,933 views, 152 downloads, 1 Web of Science citation, 1 Crossref citation

Abstract
Purpose
There is limited literature on the assessment of electronic medical record (EMR)-related competencies. To address this gap, this study explored the feasibility of an EMR objective structured clinical examination (OSCE) station for evaluating medical students’ communication skills, using psychometric analyses and standardized patients’ (SPs) perspectives on EMR use in an OSCE.
Methods
An OSCE station that incorporated the use of an EMR was developed and pilot-tested in March 2020. Students’ communication skills were assessed by SPs and physician examiners. Students’ scores were compared between the EMR station and 9 other stations. A psychometric analysis, including the item-total correlation, was done. SPs participated in a post-OSCE focus group to discuss their perception of EMRs’ effect on communication.
Results
Ninety-nine 3rd-year medical students participated in a 10-station OSCE that included the EMR station. The EMR station had an acceptable item-total correlation (0.217). Students who leveraged graphical displays in counseling received higher OSCE station scores from the SPs (P=0.041). The thematic analysis of SPs’ perceptions of students’ EMR use from the focus group revealed the following domains of themes: technology, communication, case design, ownership of health information, and timing of EMR usage.
Conclusion
This study demonstrated the feasibility of incorporating EMR in assessing learner communication skills in an OSCE. The EMR station had acceptable psychometric characteristics. Some medical students were able to efficiently use the EMRs as an aid in patient counseling. Teaching students how to be patient-centered even in the presence of technology may promote engagement.
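For readers unfamiliar with the statistic, the corrected item-total correlation relates one station’s scores to the sum of the remaining stations; a minimal sketch on simulated scores follows (the examinee and station counts echo the study, but the score scale and spread are illustrative assumptions).

```python
import numpy as np

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total correlations for an (n_examinees, n_stations) matrix:
    each station is correlated with the total of the other stations."""
    totals = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])

# Illustrative data: 99 examinees x 10 stations, scores driven by one ability.
rng = np.random.default_rng(3)
ability = rng.normal(size=(99, 1))
stations = 60 + 10 * ability + rng.normal(scale=12, size=(99, 10))

print(np.round(corrected_item_total(stations), 3))
```

A clearly positive value, such as the 0.217 reported above, indicates that the station ranks examinees broadly consistently with the rest of the examination.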
Citations
Citations to this article as recorded by
- Usage and perception of electronic medical records (EMR) among medical students in southwestern Nigeria
A. A. Adeyeye, A. O. Ajose, O. M. Oduola, B. A. Akodu, A. Olufadeji
Discover Public Health.2024;[Epub] CrossRef

What impacts students’ satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study
Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:2. Published online January 18, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.2
2,410 views, 156 downloads, 2 Web of Science citations, 1 Crossref citation

Abstract
Purpose
This study evaluated the validity of student feedback derived from Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction in the Medicine program.
Methods
Data from the MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors that most impact students’ overall satisfaction with the program.
Results
A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion
MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. The key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
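A minimal sketch of the hierarchical (two-step) regression logic used here, with simulated stand-ins for the demographic covariates and MedSEQ domains; all variable names and effect sizes are assumptions, and only 3 of the 8 domains are mocked up.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1719  # echoes the number of respondents; the data are simulated

# Illustrative stand-ins: 2 demographic covariates and 3 experience domains.
demo = rng.normal(size=(n, 2))
domains = rng.normal(size=(n, 3))
satisfaction = (0.10 * demo[:, 0] + 0.33 * domains[:, 0]
                + 0.15 * domains[:, 1] + 0.15 * domains[:, 2]
                + rng.normal(scale=0.9, size=n))

# Step 1: demographics only.
m1 = sm.OLS(satisfaction, sm.add_constant(demo)).fit()
# Step 2: demographics plus experience domains.
m2 = sm.OLS(satisfaction, sm.add_constant(np.hstack([demo, domains]))).fit()

print(f"R2 step 1: {m1.rsquared:.3f}")
print(f"R2 step 2: {m2.rsquared:.3f}")
print(f"delta R2 attributable to domains: {m2.rsquared - m1.rsquared:.3f}")
```

The jump in R² between the two steps is the share of satisfaction variance attributable to the experience domains over and above demographics, mirroring the 3.8% versus 40% comparison reported above.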
Citations
Citations to this article as recorded by
- Mental health and quality of life across 6 years of medical training: A year-by-year analysis
Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti
International Journal of Social Psychiatry.2024; 70(2): 298. CrossRef
Brief report

Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study
Sun Huh
J Educ Eval Health Prof. 2023;20:1. Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
14,697 views, 1,109 downloads, 180 Web of Science citations, 85 Crossref citations

Abstract
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations
Citations to this article as recorded by
- Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
JMIR Formative Research.2024; 8: e49964. CrossRef
- Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
Aesthetic Surgery Journal.2024; 44(3): 329. CrossRef
- Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
Shamima Yesmin
Science & Technology Libraries.2024; 43(4): 355. CrossRef
- Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
International Endodontic Journal.2024; 57(1): 108. CrossRef
- Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
ACM Transactions on Computing Education.2024; 24(1): 1. CrossRef
- A systematic review of ChatGPT use in K‐12 education
Peng Zhang, Gemma Tur
European Journal of Education.2024;[Epub] CrossRef
- Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
Biochemistry and Molecular Biology Education.2024; 52(2): 237. CrossRef
- Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
Michelle W. T. Cheng, Iris H. Y. YIM
Discover Education.2024;[Epub] CrossRef
- Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
Firas Haddad, Joanna S Saade
JMIR Medical Education.2024; 10: e50842. CrossRef
- Assessment of Artificial Intelligence Platforms With Regard to Medical Microbiology Knowledge: An Analysis of ChatGPT and Gemini
Jai Ranjan, Absar Ahmad, Monalisa Subudhi, Ajay Kumar
Cureus.2024;[Epub] CrossRef
- A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
Mor Saban, Ilana Dubovi
Journal of Advanced Nursing.2024;[Epub] CrossRef
- Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
Annika Meyer, Janik Riese, Thomas Streichert
JMIR Medical Education.2024; 10: e50965. CrossRef
- From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
Journal of Computer Assisted Learning.2024; 40(4): 1428. CrossRef
- A scoping review of artificial intelligence in medical education: BEME Guide No. 84
Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
Medical Teacher.2024; 46(4): 446. CrossRef
- Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları
Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040. CrossRef
- Tracking ChatGPT Research: Insights From the Literature and the Web
Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
IEEE Access.2024; 12: 30518. CrossRef
- Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
YooKyung Lee, So Yun Kim
Obstetrics & Gynecology Science.2024; 67(2): 153. CrossRef
- Application of generative language models to orthopaedic practice
Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
BMJ Open.2024; 14(3): e076484. CrossRef
- Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
- The advent of ChatGPT: Job Made Easy or Job Loss to Data Analysts
Abiola Timothy Owolabi, Oluwaseyi Oluwadamilare Okunlola, Emmanuel Taiwo Adewuyi, Janet Iyabo Idowu, Olasunkanmi James Oladapo
WSEAS TRANSACTIONS ON COMPUTERS.2024; 23: 24. CrossRef
- ChatGPT in dentomaxillofacial radiology education
Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal
Journal of Health Sciences and Medicine.2024; 7(2): 224. CrossRef
- Performance of ChatGPT on the Korean National Examination for Dental Hygienists
Soo-Myoung Bae, Hye-Rim Jeon, Gyoung-Nam Kim, Seon-Hui Kwak, Hyo-Jin Lee
Journal of Dental Hygiene Science.2024; 24(1): 62. CrossRef
- Medical knowledge of ChatGPT in public health, infectious diseases, COVID-19 pandemic, and vaccines: multiple choice questions examination based performance
Sultan Ayoub Meo, Metib Alotaibi, Muhammad Zain Sultan Meo, Muhammad Omair Sultan Meo, Mashhood Hamid
Frontiers in Public Health.2024;[Epub] CrossRef
- Unlock the potential for Saudi Arabian higher education: a systematic review of the benefits of ChatGPT
Eman Faisal
Frontiers in Education.2024;[Epub] CrossRef
- Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery?
Adnan Kasapovic, Thaer Ali, Mari Babasiz, Jessica Bojko, Martin Gathen, Robert Kaczmarczyk, Jonas Roos
Cureus.2024;[Epub] CrossRef
- Exploring the Profile of University Assessments Flagged as Containing AI-Generated Material
Daniel Gooch, Kevin Waugh, Mike Richards, Mark Slaymaker, John Woodthorpe
ACM Inroads.2024; 15(2): 39. CrossRef
- Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom’s Taxonomy
Ambadasu Bharatha, Nkemcho Ojeh, Ahbab Mohammad Fazle Rabbi, Michael Campbell, Kandamaran Krishnamurthy, Rhaheem Layne-Yarde, Alok Kumar, Dale Springer, Kenneth Connell, Md Anwarul Majumder
Advances in Medical Education and Practice.2024; Volume 15: 393. CrossRef
- The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
Sun Huh
Journal of Educational Evaluation for Health Professions.2024; 21: 9. CrossRef
- ChatGPT, a Friend or a Foe in Medical Education: A Review of Strengths, Challenges, and Opportunities
Mahdi Zarei, Maryam Zarei, Sina Hamzehzadeh, Sepehr Shakeri Bavil Oliyaei, Mohammad-Salar Hosseini
Shiraz E-Medical Journal.2024;[Epub] CrossRef
- Augmenting intensive care unit nursing practice with generative AI: A formative study of diagnostic synergies using simulation‐based clinical cases
Chedva Levin, Moriya Suliman, Etti Naimi, Mor Saban
Journal of Clinical Nursing.2024;[Epub] CrossRef
- Artificial intelligence chatbots for the nutrition management of diabetes and the metabolic syndrome
Farah Naja, Mandy Taktouk, Dana Matbouli, Sharfa Khaleel, Ayah Maher, Berna Uzun, Maryam Alameddine, Lara Nasreddine
European Journal of Clinical Nutrition.2024; 78(10): 887. CrossRef
- Large language models in healthcare: from a systematic review on medical examinations to a comparative analysis on fundamentals of robotic surgery online test
Andrea Moglia, Konstantinos Georgiou, Pietro Cerveri, Luca Mainardi, Richard M. Satava, Alfred Cuschieri
Artificial Intelligence Review.2024;[Epub] CrossRef
- Is ChatGPT Enhancing Youth’s Learning, Engagement and Satisfaction?
Christina Sanchita Shah, Smriti Mathur, Sushant Kr. Vishnoi
Journal of Computer Information Systems.2024; : 1. CrossRef
- Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum
Annika Meyer, Ari Soleman, Janik Riese, Thomas Streichert
Clinical Chemistry and Laboratory Medicine (CCLM).2024;[Epub] CrossRef
- Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis
Hye Kyung Jin, Ha Eun Lee, EunYoung Kim
BMC Medical Education.2024;[Epub] CrossRef
- Role of ChatGPT in Dentistry: A Review
Pratik Surana, Priyanka P. Ostwal, Shruti Vishal Dev, Jayesh Tiwari, Kadire Shiva Charan Yadav, Gajji Renuka
Research Journal of Pharmacy and Technology.2024; : 3489. CrossRef
- Exploring the Current Applications and Effectiveness of ChatGPT in Nursing: An Integrative Review
Yuan Luo, Yiqun Miao, Yuhan Zhao, Jiawei Li, Ying Wu
Journal of Advanced Nursing.2024;[Epub] CrossRef
- A Scoping Review on the Educational Applications of Generative AI in Primary and Secondary Education
Solmoe Ahn, Jeongyoon Lee, Jungmin Park, Soyoung Jung, Jihoon Song
The Journal of Korean Association of Computer Education.2024; 27(6): 11. CrossRef
- Performance of GPT-3.5 and GPT-4 on the Korean Pharmacist Licensing Examination: Comparison Study
Hye Kyung Jin, EunYoung Kim
JMIR Medical Education.2024; 10: e57451. CrossRef
- ChatGPT-Produced Content as a Resource in the Language Education Classroom: A Guiding Hand
Rod E. Case, Leping Liu
Computers in the Schools.2024; : 1. CrossRef
- Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
Cureus.2023;[Epub] CrossRef
- Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
Sun Huh
Journal of Educational Evaluation for Health Professions.2023; 20: 5. CrossRef
- Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
Sun Huh
Science Editing.2023; 10(1): 1. CrossRef
- Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
Cureus.2023;[Epub] CrossRef
- Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
Arindam Ghosh, Aritri Bir
Cureus.2023;[Epub] CrossRef
- Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
Cureus.2023;[Epub] CrossRef
- ChatGPT for Future Medical and Dental Research
Bader Fatani
Cureus.2023;[Epub] CrossRef
- ChatGPT in Dentistry: A Comprehensive Review
Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
Cureus.2023;[Epub] CrossRef
- Can we trust AI chatbots’ answers about disease diagnosis and patient care?
Sun Huh
Journal of the Korean Medical Association.2023; 66(4): 218. CrossRef
- Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
JMIR Medical Education.2023; 9: e48291. CrossRef
- Early applications of ChatGPT in medical practice, education and research
Sam Sedaghat
Clinical Medicine.2023; 23(3): 278. CrossRef
- A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
璇 师
Advances in Education.2023; 13(05): 2617. CrossRef
- Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
JMIR Medical Education.2023; 9: e48002. CrossRef
- ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271. CrossRef
- Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
Mayank Agarwal, Priyanka Sharma, Ayan Goswami
Cureus.2023;[Epub] CrossRef
- The Intersection of ChatGPT, Clinical Medicine, and Medical Education
Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
JMIR Medical Education.2023; 9: e47274. CrossRef
- The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
Tarık TALAN, Yusuf KALINKARA
Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33. CrossRef
- Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
Journal of Educational Evaluation for Health Professions.2023; 20: 17. CrossRef
- Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
JMIR Medical Education.2023; 9: e48433. CrossRef
- Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
Cureus.2023;[Epub] CrossRef
- ChatGPT Performs on the Chinese National Medical Licensing Examination
Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
Journal of Medical Systems.2023;[Epub] CrossRef
- Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
ICST Transactions on Scalable Information Systems.2023;[Epub] CrossRef
- Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
Cureus.2023;[Epub] CrossRef
- Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
Carl Preiksaitis, Christian Rose
JMIR Medical Education.2023; 9: e48785. CrossRef
- Exploring the impact of language models, such as ChatGPT, on student learning and assessment
Araz Zirar
Review of Education.2023;[Epub] CrossRef
- Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
Shani Rosen, Mor Saban
European Radiology.2023; 34(5): 2826. CrossRef
- ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
Mohd Afjal
Library Hi Tech.2023;[Epub] CrossRef
- The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
Hasan B Ilgaz, Zehra Çelik
Cureus.2023;[Epub] CrossRef
- Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
Cureus.2023;[Epub] CrossRef
- Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
Sun Huh
Child Health Nursing Research.2023; 29(4): 249. CrossRef
- Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
Cureus.2023;[Epub] CrossRef
- Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
Aleksandra Ignjatović, Lazar Stevanović
Journal of Educational Evaluation for Health Professions.2023; 20: 28. CrossRef
- Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
Krishna Mohan Surapaneni
JMIR Medical Education.2023; 9: e47191. CrossRef
- Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
Journal of Educational Evaluation for Health Professions.2023; 20: 30. CrossRef
- ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
Frontiers in Medicine.2023;[Epub] CrossRef
- Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
Janghee Park
Journal of Educational Evaluation for Health Professions.2023; 20: 29. CrossRef
- FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING
Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
Wiadomości Lekarskie.2023; 76(11): 2345. CrossRef
- Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
JMIR Medical Education.2023; 9: e50658. CrossRef
- Below average ChatGPT performance in medical microbiology exam compared to university students
Malik Sallam, Khaled Al-Salahat
Frontiers in Education.2023;[Epub] CrossRef
- ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614. CrossRef
- ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98. CrossRef
- Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
Journal of Educational Evaluation for Health Professions.2023; 20: 38. CrossRef
- Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
Sang-Jun Kim
Science Editing.2023; 11(1): 3. CrossRef
- Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef
- What will ChatGPT revolutionize in the financial industry?
Hassnian Ali, Ahmet Faruk Aysan
Modern Finance.2023; 1(1): 116. CrossRef
Reviews

Factors associated with medical students’ scores on the National Licensing Exam in Peru: a systematic review
Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2022;19:38. Published online December 29, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.38
4,591 views, 322 downloads, 2 Crossref citations

Abstract
Purpose
This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods
A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results
In total, 38,500 participants were enrolled in 12 studies. Most (11/12) studies were cross-sectional, except for one case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. Better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in individual studies, such as medical school, internship setting, age, gender, socioeconomic status, simulation tests, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion
Performance on the ENAM is multifactorial; our model gives students a locus of control over what they can do to improve their scores (i.e., implement self-regulated learning strategies) and gives faculty, health policymakers, and managers a framework for improving ENAM scores (i.e., design remediation programs to improve GPA and integrate anxiety-management courses into the curriculum).

Citations
Citations to this article as recorded by
- Medical Student’s Attitudes towards Implementation of National Licensing Exam (NLE) – A Qualitative Exploratory Study
Saima Bashir, Rehan Ahmed Khan
Pakistan Journal of Health Sciences.2024; : 153. CrossRef
- Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study
Javier A Flores-Cohaila, Abigaíl García-Vicente, Sonia F Vizcarra-Jiménez, Janith P De la Cruz-Galán, Jesús D Gutiérrez-Arratia, Blanca Geraldine Quiroga Torres, Alvaro Taype-Rondan
JMIR Medical Education.2023; 9: e48039. CrossRef

Medical students’ satisfaction level with e-learning during the COVID-19 pandemic and its related factors: a systematic review
Mahbubeh Tabatabaeichehr, Samane Babaei, Mahdieh Dartomi, Peiman Alesheikh, Amir Tabatabaee, Hamed Mortazavi, Zohreh Khoshgoftar
J Educ Eval Health Prof. 2022;19:37. Published online December 20, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.37
3,658 views, 244 downloads, 12 Web of Science citations, 13 Crossref citations

Abstract
Purpose
This review investigated medical students’ satisfaction level with e-learning during the coronavirus disease 2019 (COVID-19) pandemic and its related factors.
Methods
A comprehensive systematic search was performed of international literature databases, including Scopus, PubMed, Web of Science, and Persian databases such as Iranmedex and Scientific Information Database using keywords extracted from Medical Subject Headings such as “Distance learning,” “Distance education,” “Online learning,” “Online education,” and “COVID-19” from the earliest date to July 10, 2022. The quality of the studies included in this review was evaluated using the appraisal tool for cross-sectional studies (AXIS tool).
Results
A total of 15,473 medical science students were enrolled in 24 studies. The level of satisfaction with e-learning during the COVID-19 pandemic among medical science students was 51.8%. Factors such as age, gender, clinical year, experience with e-learning before COVID-19, level of study, adaptation content of course materials, interactivity, understanding of the content, active participation of the instructor in the discussion, multimedia use in teaching sessions, adequate time dedicated to the e-learning, stress perception, and convenience had significant relationships with the satisfaction of medical students with e-learning during the COVID-19 pandemic.
Conclusion
Given that online education and e-learning are now unavoidable, educational managers and policymakers should draw on the studies in this field to choose the online education methods that best increase medical students’ satisfaction with e-learning.

Citations
Citations to this article as recorded by
- Factors affecting medical students’ satisfaction with online learning: a regression analysis of a survey
Özlem Serpil Çakmakkaya, Elif Güzel Meydanlı, Ali Metin Kafadar, Mehmet Selman Demirci, Öner Süzer, Muhlis Cem Ar, Muhittin Onur Yaman, Kaan Can Demirbaş, Mustafa Sait Gönen
BMC Medical Education.2024;[Epub] CrossRef
- A comparative study on the effectiveness of online and in-class team-based learning on student performance and perceptions in virtual simulation experiments
Jing Shen, Hongyan Qi, Ruhuan Mei, Cencen Sun
BMC Medical Education.2024;[Epub] CrossRef
- Pharmacy Students’ Attitudes Toward Distance Learning After the COVID-19 Pandemic: Cross-Sectional Study From Saudi Arabia
Saud Alsahali, Salman Almutairi, Salem Almutairi, Saleh Almofadhi, Mohammed Anaam, Mohammed Alshammari, Suhaj Abdulsalim, Yasser Almogbel
JMIR Formative Research.2024; 8: e54500. CrossRef
- Effects of the First Wave of the COVID-19 Pandemic on the Work Readiness of Undergraduate Nursing Students in China: A Mixed-Methods Study
Lifang He, Jean Rizza Dela Cruz
Risk Management and Healthcare Policy.2024; Volume 17: 559. CrossRef
- Online learning satisfaction and participation in flipped classroom and case-based learning for medical students
Irma Uliano Effting Zoch de Moura, Valentina Coutinho Baldoto Gava Chakr
Revista Brasileira de Educação Médica.2024;[Epub] CrossRef
- Medical education during the coronavirus disease 2019 pandemic: an umbrella review
Seyed Aria Nejadghaderi, Zohreh Khoshgoftar, Asra Fazlollahi, Mohammad Javad Nasiri
Frontiers in Medicine.2024;[Epub] CrossRef
- Exploration of the Education and Teaching Management Model for Medical International Students in China
兴亮 代
Advances in Education.2024; 14(08): 390. CrossRef
- Virtual global health education partnerships for health professional students: a scoping review
Nora K. Lenhard, Crystal An, Divya Jasthi, Veronica Laurel-Vargas, Ilon Weinstein, Suet K. Lam
Global Health Promotion.2024;[Epub] CrossRef
- Applying the Panarchy Framework to Examining Post-Pandemic Adaptation in the Undergraduate Medical Education Environment: A Qualitative Study
Gowda Parameshwara Prashanth, Ciraj Ali Mohammed
Teaching and Learning in Medicine.2024; : 1. CrossRef
- Identifying group metacognition associated with medical students’ teamwork satisfaction in an online small group tutorial context
Chia-Ter Chao, Yen-Lin Chiu, Chiao-Ling Tsai, Mong-Wei Lin, Chih-Wei Yang, Chiao-Chi Ho, Chiun Hsu, Huey-Ling Chen
BMC Medical Education.2024;[Epub] CrossRef
- Medical students’ perceptions of the post-COVID-19 educational environment in Oman
Gowda Parameshwara Prashanth, Ciraj Ali Mohammed
Learning Environments Research.2024;[Epub] CrossRef
- Physician Assistant Students’ Perception of Online Didactic Education: A Cross-Sectional Study
Daniel L Anderson, Jeffrey L Alexander
Cureus.2023;[Epub] CrossRef
- Mediating Role of PERMA Wellbeing in the Relationship between Insomnia and Psychological Distress among Nursing College Students
Qian Sun, Xiangyu Zhao, Yiming Gao, Di Zhao, Meiling Qi
Behavioral Sciences.2023; 13(9): 764. CrossRef
Brief report

Self-directed learning quotient and common learning types of pre-medical students in Korea by the Multi-Dimensional Learning Strategy Test 2nd edition: a descriptive study
Sun Kim, A Ra Cho, Chul Woon Chung
J Educ Eval Health Prof. 2022;19:32. Published online November 28, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.32

Abstract
This study aimed to find the self-directed learning quotient and common learning types of pre-medical students through the confirmation of 4 characteristics of learning strategies, including personality, motivation, emotion, and behavior. Response data were collected from 277 of 294 target first-year pre-medical students from 2019 to 2021, using the Multi-Dimensional Learning Strategy Test 2nd edition. The learning types, in order of frequency, were the self-directed type (44.0%), stagnant type (33.9%), latent type (14.4%), and conscientiousness type (7.6%). The self-directed learning index was distributed as follows: high (29.2%), moderate (24.6%), somewhat high (21.7%), somewhat low (14.4%), and low (10.1%). This study confirmed that many students lacked self-directed learning capabilities for learning strategies. In addition, the difficulties experienced by individual students differed, and the variables underlying those difficulties were also diverse. These findings may inform the development of programs that help students increase their self-directed learning capability.
Research articles

Is online objective structured clinical examination teaching an acceptable replacement in post-COVID-19 medical education in the United Kingdom?: a descriptive study
Vashist Motkur, Aniket Bharadwaj, Nimalesh Yogarajah
J Educ Eval Health Prof. 2022;19:30. Published online November 7, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.30
2,489 views, 150 downloads, 2 Web of Science citations, 2 Crossref citations

Abstract
Purpose
Coronavirus disease 2019 (COVID-19) restrictions resulted in an increased emphasis on virtual communication in medical education. This study assessed the acceptability of virtual teaching in an online objective structured clinical examination (OSCE) series and its role in future education.
Methods
Six surgical OSCE stations were designed, covering common surgical topics, with specific tasks testing data interpretation, clinical knowledge, and communication skills. These were delivered via Zoom to students who participated in student/patient/examiner role-play. Feedback was collected by asking students to compare online teaching with previous experiences of in-person teaching. Descriptive statistics were used for Likert response data, and thematic analysis for free-text items.
Results
Sixty-two students provided feedback, with 81% of respondents finding online instructions preferable to paper equivalents. Furthermore, 65% and 68% found online teaching more efficient and accessible, respectively, than in-person teaching. Only 34% found communication with each other easier online, and 40% preferred online OSCE teaching to in-person teaching. Students also provided positive and negative free-text comments.
Conclusion
The data suggested that students were generally unwilling for online teaching to completely replace in-person teaching. The success of online teaching depended on the clinical skill being addressed; some skills were less amenable to a virtual setting. However, online OSCE teaching could play a role alongside in-person teaching.

Citations
Citations to this article as recorded by
- Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
Advances in Health Sciences Education.2024; 29(3): 949. CrossRef
- Should Virtual Objective Structured Clinical Examination (OSCE) Teaching Replace or Complement Face-to-Face Teaching in the Post-COVID-19 Educational Environment: An Evaluation of an Innovative National COVID-19 Teaching Programme
Charles Gamble, Alice Oatham, Raj Parikh
Cureus.2023;[Epub] CrossRef

Acceptability of the 8-case objective structured clinical examination of medical students in Korea using generalizability theory: a reliability study
Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
J Educ Eval Health Prof. 2022;19:26. Published online September 8, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.26
3,101 views, 224 downloads, 1 Web of Science citation, 1 Crossref citation

Abstract
Purpose
This study used generalizability theory (GT) to investigate whether reliability remained acceptable when the number of cases in the objective structured clinical examination (OSCE) decreased from 12 to 8.
Methods
This psychometric study analyzed data from an OSCE administered to 439 fourth-year medical students in the Busan and Gyeongnam areas of South Korea from July 12 to 15, 2021. The generalizability study (G-study) considered 3 facets, namely students (p), cases (c), and items (i), and used a p×(i:c) design because items were nested within cases. The acceptable generalizability (G) coefficient was set to 0.70. The G-study and decision study (D-study) were performed using G String IV ver. 6.3.8 (Papawork, Hamilton, ON, Canada).
Results
All G coefficients were above 0.70, except on July 14 (0.69). The major sources of variance components (VCs) were items nested in cases (i:c), ranging from 51.34% to 57.70%, and residual error (pi:c), ranging from 39.55% to 43.26%. The proportion of VCs attributable to cases was negligible, ranging from 0% to 2.03%.
Conclusion
Although the number of cases decreased in the 2021 Busan and Gyeongnam OSCE, reliability remained acceptable. In the D-study, reliability was maintained at 0.70 or higher with more than 21 items per case across 8 cases, or more than 18 items per case across 9 cases. According to the G-study, however, increasing the number of items nested within cases, rather than the number of cases, could further improve reliability. The consortium needs to maintain a case bank with various items to implement a reliable blueprinting combination for the OSCE.
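To make the D-study logic concrete, the following sketch shows how the relative G coefficient responds to the number of cases and items per case in a p×(i:c) design; the variance components are hypothetical values chosen for illustration, not the study's estimates.

```python
# Relative G coefficient for a p x (i:c) design:
# G = var_p / (var_p + var_pc / n_cases + var_pi_c / (n_cases * n_items)).

def g_coefficient(var_p, var_pc, var_pi_c, n_cases, n_items):
    relative_error = var_pc / n_cases + var_pi_c / (n_cases * n_items)
    return var_p / (var_p + relative_error)

# Hypothetical variance components: persons, person x case, person x (item:case).
var_p, var_pc, var_pi_c = 0.015, 0.001, 1.0

for n_cases in (8, 9):
    for n_items in (15, 18, 21):
        g = g_coefficient(var_p, var_pc, var_pi_c, n_cases, n_items)
        print(f"{n_cases} cases x {n_items} items/case: G = {g:.2f}")
```

With a negligible person×case component, as reported in this study, G is driven mainly by the total item count (cases × items per case), which is why adding items within cases can substitute for adding cases.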
Citations
Citations to this article as recorded by
- Applying the Generalizability Theory to Identify the Sources of Validity Evidence for the Quality of Communication Questionnaire
Flávia Del Castanhel, Fernanda R. Fonseca, Luciana Bonnassis Burg, Leonardo Maia Nogueira, Getúlio Rodrigues de Oliveira Filho, Suely Grosseman
American Journal of Hospice and Palliative Medicine®.2024; 41(7): 792. CrossRef