Background ChatGPT is an artificial intelligence (AI)-based large language model (LLM) capable of responding in multiple languages and generating nuanced, highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.
Purpose We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
Purpose This study aimed to evaluate the impact of a transcultural nursing course on enhancing the cultural competency of graduate nursing students in Korea. We hypothesized that participants’ cultural competency would significantly improve in areas such as communication, biocultural ecology and family, dietary habits, death rituals, spirituality, equity, and empowerment and intermediation after completing the course. Furthermore, we assessed the participants’ overall satisfaction with the course.
Methods A before-and-after study was conducted with graduate nursing students at Hallym University, Chuncheon, Korea, from March to June 2023. A transcultural nursing course was developed based on Giger & Haddad’s transcultural nursing model and Purnell’s theoretical model of cultural competence. Data were collected using a cultural competence scale for registered nurses developed by Kim et al. A total of 18 students participated, and the paired t-test was employed to compare pre- and post-intervention scores.
Results The study revealed significant improvements in all 7 categories of cultural nursing competence (P<0.01). Specifically, the mean differences in scores (pre–post) ranged from 0.74 to 1.09 across the categories. Additionally, participants expressed high satisfaction with the course, with an average score of 4.72 out of a maximum of 5.0.
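The pre/post comparison reported above rests on a paired t-test. A minimal sketch (with made-up scores, not the study’s data): the statistic is the mean pre-to-post difference divided by its standard error.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-test: t statistic and degrees of freedom for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical pre/post scores for illustration only
pre = [3.1, 2.8, 3.5, 2.9, 3.0, 3.2]
post = [4.0, 3.9, 4.2, 3.8, 4.1, 4.0]
t, df = paired_t(pre, post)
```

The resulting t is then compared against the t distribution with n−1 degrees of freedom to obtain the P-value.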
Conclusion The transcultural nursing course effectively enhanced the cultural competency of graduate nursing students. Such courses are imperative to ensure quality care for the increasing multicultural population in Korea.
Purpose The present study was conducted to determine the effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery using phacoemulsification at Kermanshah University of Medical Sciences in Iran.
Methods This was a randomized controlled study conducted among 36 students training to become operating room nurses. The control group only received routine training, and the intervention group received motion-graphic video-based training on the scrub nurse’s performance in cataract surgery in addition to the educator’s training. The performance of the students in both groups as scrub nurses was measured through a researcher-made checklist in a pre-test and a post-test.
Results The mean scores for performance in the pre-test and post-test were 17.83 and 26.44 in the control group and 18.33 and 50.94 in the intervention group, respectively, and a significant difference was identified between the mean scores of the pre- and post-test in both groups (P=0.001). The intervention also led to a significant increase in the mean performance score in the intervention group compared to the control group (P=0.001).
Conclusion Considering the significant difference in the performance score of the intervention group compared to the control group, motion-graphic video-based training had a positive effect on the performance of operating room nurse students, and such training can be used to improve clinical training.
ChatGPT (GPT-3.5) has entered higher education, and there is a need to determine how to use it effectively. This descriptive study compared the ability of GPT-3.5 and teachers to answer questions from dental students and to construct detailed intended learning outcomes. When the responses were rated on a Likert scale, GPT-3.5 answered the dental students’ questions in a similar or even more elaborate way than the answers previously provided by a teacher. GPT-3.5 was also asked to construct detailed intended learning outcomes for a course in microbial pathogenesis; when these were rated on a Likert scale, they were largely found to be irrelevant. Since students are using GPT-3.5, it is important that instructors learn how to make the best use of it, both to advise students and to benefit from its potential.
Citations
Citations to this article as recorded by
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review Xiaojun Xu, Yixiao Chen, Jing Miao Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study Hyunju Lee, Soobin Park Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef
Purpose This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results The students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with the use of ChatGPT during class. The most common response concerning the appropriate context for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” Satisfaction with ChatGPT’s feedback (including correctness, usefulness, and ethics) differed significantly depending on whether or not ChatGPT was used during class, but not according to gender or previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the most serious disadvantage was “producing information without supporting evidence.”
Conclusion The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.
Citations to this article as recorded by
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review Xiaojun Xu, Yixiao Chen, Jing Miao Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao JMIR Medical Education.2024; 10: e52483. CrossRef
ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah Journal of Multidisciplinary Healthcare.2023; 16: 4099. CrossRef
Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study Hyunju Lee, Soobin Park Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef
Purpose This study aimed to devise a valid measurement for assessing clinical students’ perceptions of teaching practices.
Methods A new tool was developed based on a meta-analysis encompassing effective clinical teaching-learning factors. Seventy-nine items were generated using a frequency (never to always) scale. The tool was administered to University of New South Wales year 2, 3, and 6 medical students. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted to establish the tool’s construct validity and goodness of fit, and Cronbach’s α was used for reliability.
Results In total, 352 students (44.2%) completed the questionnaire. The EFA identified 4 factors: student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). The CFA showed acceptable goodness of fit (chi-square P<0.01, comparative fit index=0.930, Tucker-Lewis index=0.917, root mean square error of approximation=0.069, standardized root mean square residual=0.06).
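The reliability coefficients above are Cronbach’s α values, which can be computed directly from an item-by-respondent score matrix. A minimal sketch with hypothetical ratings (using population variance; some packages use sample variance instead):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item scores.
    items: one inner list per item, aligned across respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item, 4-respondent ratings (not the study's data)
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]])
```

Values above roughly 0.7, as reported for this tool, are conventionally taken to indicate acceptable internal consistency.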
Conclusion The established tool—Student Ratings in Clinical Teaching (STRICT)—is a valid and reliable tool that demonstrates how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate the biases of acquiescence and social desirability. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.
Purpose There is limited literature related to the assessment of electronic medical record (EMR)-related competencies. To address this gap, this study explored the feasibility of an EMR objective structured clinical examination (OSCE) station to evaluate medical students’ communication skills by psychometric analyses and standardized patients’ (SPs) perspectives on EMR use in an OSCE.
Methods An OSCE station that incorporated the use of an EMR was developed and pilot-tested in March 2020. Students’ communication skills were assessed by SPs and physician examiners. Students’ scores were compared between the EMR station and 9 other stations. A psychometric analysis, including item-total correlation, was performed. SPs participated in a post-OSCE focus group to discuss their perception of EMRs’ effect on communication.
Results Ninety-nine 3rd-year medical students participated in a 10-station OSCE that included the EMR station. The EMR station had an acceptable item-total correlation (0.217). Students who leveraged graphical displays in counseling received higher OSCE station scores from the SPs (P=0.041). The thematic analysis of SPs’ perceptions of students’ EMR use from the focus group revealed the following domains of themes: technology, communication, case design, ownership of health information, and timing of EMR usage.
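An item-total correlation such as the 0.217 reported here is the correlation between one station’s scores and the total score; a sketch of the corrected form (correlating against the total of the remaining stations, an assumption about which variant was used), with hypothetical scores:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def corrected_item_total(item, items):
    """Correlate one station's scores with the sum of the remaining stations."""
    rest = [sum(row) - i for row, i in zip(zip(*items), item)]
    return pearson(item, rest)

# Hypothetical scores: 3 stations x 4 examinees (not the study's data)
stations = [[1, 2, 3, 4], [2, 1, 4, 3], [1, 3, 2, 4]]
r = corrected_item_total(stations[0], stations)
```

A low but positive value, as in the study, suggests the station measures something related to, yet distinct from, the other stations.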
Conclusion This study demonstrated the feasibility of incorporating EMR in assessing learner communication skills in an OSCE. The EMR station had acceptable psychometric characteristics. Some medical students were able to efficiently use the EMRs as an aid in patient counseling. Teaching students how to be patient-centered even in the presence of technology may promote engagement.
Purpose The aim of this study was to identify factors influencing the learning transfer of nursing students in a non-face-to-face educational environment through structural equation modeling and suggest ways to improve the transfer of learning.
Methods In this cross-sectional study, data were collected via online surveys from February 9 to March 1, 2022, from 218 nursing students in Korea. Learning transfer, learning immersion, learning satisfaction, learning self-efficacy, self-directed learning ability, and information technology utilization ability were analyzed using IBM SPSS for Windows ver. 22.0 and AMOS ver. 22.0.
Results The assessment of structural equation modeling showed adequate model fit, with normed χ2=1.74 (P<0.024), goodness-of-fit index=0.97, adjusted goodness-of-fit index=0.93, comparative fit index=0.98, root mean square residual=0.02, Tucker-Lewis index=0.97, normed fit index=0.96, and root mean square error of approximation=0.06. In a hypothetical model analysis, 9 out of 11 pathways of the hypothetical structural model for learning transfer in nursing students were statistically significant. Learning self-efficacy and learning immersion of nursing students directly affected learning transfer, and subjective information technology utilization ability, self-directed learning ability, and learning satisfaction were variables with indirect effects. The explanatory power of immersion, satisfaction, and self-efficacy for learning transfer was 44.4%.
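Two of the fit indices reported above follow directly from the model chi-square: the normed χ² is χ²/df, and the root mean square error of approximation can be computed under one common formula (using n−1 in the denominator; some software uses n). A sketch with illustrative values, since the abstract does not report the degrees of freedom:

```python
import math

def normed_chi2(chi2, df):
    """Chi-square divided by degrees of freedom; values below ~2-3 suggest fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Root mean square error of approximation (one common n - 1 formula)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative chi-square and df only; n=218 matches the study's sample size
nc = normed_chi2(150.0, 100)
r = rmsea(150.0, 100, 218)
```

RMSEA values at or below about 0.06 to 0.08, like the 0.06 reported here, are conventionally read as acceptable fit.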
Conclusion The assessment of structural equation modeling indicated an acceptable fit. It is necessary to improve the transfer of learning through the development of a self-directed program for learning ability improvement, including the use of information technology in nursing students’ learning environment in non-face-to-face conditions.
Citations to this article as recorded by
Flow in Relation to Academic Achievement in Online-Learning: A Meta-Analysis Study Da Xing, Yunjung Lee, Gyun Heo Measurement: Interdisciplinary Research and Perspectives.2024; : 1. CrossRef
The Mediating Effect of Perceived Institutional Support on Inclusive Leadership and Academic Loyalty in Higher Education Olabode Gbobaniyi, Shalini Srivastava, Abiodun Kolawole Oyetunji, Chiemela Victor Amaechi, Salmia Binti Beddu, Bajpai Ankita Sustainability.2023; 15(17): 13195. CrossRef
Transfer of Learning of New Nursing Professionals: Exploring Patterns and the Effect of Previous Work Experience Helena Roig-Ester, Paulina Elizabeth Robalino Guerra, Carla Quesada-Pallarès, Andreas Gegenfurtner Education Sciences.2023; 14(1): 52. CrossRef
Purpose This study aimed to examine students’ performance on and perspectives of an objective structured practical examination (OSPE) for the assessment of laboratory and preclinical skills in biomedical laboratory science (BLS). It also aimed to investigate the perception, acceptability, and usefulness of the OSPE from the students’ and examiners’ points of view.
Methods This was a longitudinal study to implement an OSPE in BLS. The student group consisted of 198 BLS students enrolled in semester 4, 2015–2019 at Karolinska University Hospital Huddinge, Sweden. Fourteen teachers evaluated the performance by completing a checklist and global rating scales. A student survey questionnaire was administered to the participants to evaluate the student perspective. To assess quality, 4 independent observers were included to monitor the examiners.
Results Almost 50% of the students passed the initial OSPE, and 73% passed the repeat OSPE. There was a statistically significant difference between the first and second attempts (P<0.01), but not between the first and third attempts (P=0.09). The student survey questionnaire was completed by 99 of the 198 students (50%), and only 63 students (32%) responded to the free-text questions. According to these responses, some stations were perceived as more difficult, although the students considered the assessment to be valid. The observers found that the assessment protocols and examiners’ instructions ensured the objectivity of the examination.
Conclusion The OSPE introduced into the education of biomedical laboratory scientists was a reliable and useful examination of practical skills.
Purpose This study aimed to assess the effect of simulation teaching in critical care courses in a nursing study program on the quality of chest compressions of cardiopulmonary resuscitation (CPR).
Methods An observational cross-sectional study was conducted at the Faculty of Health Studies at the Technical University of Liberec. The success rate of CPR was tested in exams comparing 2 groups of students (66 individuals in total) who had completed either half a year (group 1: intermediate exam with model simulation) or 1.5 years (group 2: final theoretical critical care exam with model simulation) of undergraduate nursing critical care education, taught entirely with a Laerdal SimMan 3G simulator. The quality of CPR was evaluated according to 4 components: compression depth, compression rate, time of correct frequency, and time of correct chest release.
Results Compression depth was significantly higher in group 2 than in group 1 (P=0.016). There were no significant differences in the compression rate (P=0.210), time of correct frequency (P=0.586), or time of correct chest release (P=0.514).
Conclusion Nursing students who completed the final critical care exam showed an improvement in compression depth during CPR after 2 additional semesters of critical care teaching compared to those who completed the intermediate exam. The above results indicate that regularly scheduled CPR training is necessary during critical care education for nursing students.
Purpose This study was conducted to evaluate a smartphone-based online electronic logbook used to assess the clinical skills of nurse anesthesia students in Iran.
Methods This randomized controlled study was conducted after tool development at Ahvaz Jundishapur University of Medical Sciences in Ahvaz, Iran from January 2022 to December 2022. The online electronic logbook involved in this study was an Android-compatible application used to evaluate the clinical skills of nurse anesthesia students. In the implementation phase, the online electronic logbook was piloted for 3 months in anesthesia training in comparison with a paper logbook. For this purpose, 49 second- and third-year anesthesia nursing students selected using the census method were assigned to intervention (online electronic logbook) and control (paper logbook) groups. The online electronic logbook and paper logbook were compared in terms of student satisfaction and learning outcomes.
Results A total of 39 students participated in the study. The mean satisfaction score of the intervention group was significantly higher than that of the control group (P=0.027). The mean score of learning outcomes was also significantly higher for the intervention than the control group (P=0.028).
Conclusion Smartphone technology can provide a platform for improving the evaluation of the clinical skills of nursing anesthesia students, leading to increased satisfaction and improved learning outcomes.
Purpose This study evaluated the validity of student feedback derived from Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction in the Medicine program.
Methods Data from MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors that most impact students’ overall satisfaction with the program.
Results A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
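The hierarchical regression result above (3.8% of variance explained by demographics alone vs. 40% once the 8 MedSEQ domains are added) is an R² increment between nested models. A minimal sketch on synthetic data, with all variable names and values hypothetical:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

# Synthetic data: 2 "demographic" predictors, 3 "experience domain" predictors
rng = np.random.default_rng(0)
demo = rng.normal(size=(100, 2))
domains = rng.normal(size=(100, 3))
y = 0.2 * demo[:, 0] + domains @ np.array([0.5, 0.4, 0.3]) + rng.normal(size=100)

r2_base = r_squared(demo, y)                          # demographics only
r2_full = r_squared(np.column_stack([demo, domains]), y)  # + domains
delta = r2_full - r2_base  # variance attributable to the added block
```

Because the full model nests the base model, the training R² can only increase; the increment is what the abstract attributes to the students’ experience domains.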
Conclusion MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. Key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
Citations to this article as recorded by
Mental health and quality of life across 6 years of medical training: A year-by-year analysis Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti International Journal of Social Psychiatry.2024; 70(2): 298. CrossRef
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
Citations to this article as recorded by
Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera JMIR Formative Research.2024; 8: e49964. CrossRef
Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas Aesthetic Surgery Journal.2024; 44(3): 329. CrossRef
Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire International Endodontic Journal.2024; 57(1): 108. CrossRef
Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch ACM Transactions on Computing Education.2024; 24(1): 1. CrossRef
Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan Biochemistry and Molecular Biology Education.2024; 52(2): 237. CrossRef
Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas Michelle W. T. Cheng, Iris H. Y. YIM Discover Education.2024;[Epub] CrossRef
Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study Firas Haddad, Joanna S Saade JMIR Medical Education.2024; 10: e50842. CrossRef
A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing Mor Saban, Ilana Dubovi Journal of Advanced Nursing.2024;[Epub] CrossRef
Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study Annika Meyer, Janik Riese, Thomas Streichert JMIR Medical Education.2024; 10: e50965. CrossRef
From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi Journal of Computer Assisted Learning.2024;[Epub] CrossRef
A scoping review of artificial intelligence in medical education: BEME Guide No. 84 Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma Medical Teacher.2024; : 1. CrossRef
University Students’ Experiences with ChatGPT 3.5: Fairy Tale Variants Written with Artificial Intelligence Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040. CrossRef
Tracking ChatGPT Research: Insights From the Literature and the Web Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou IEEE Access.2024; 12: 30518. CrossRef
Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article YooKyung Lee, So Yun Kim Obstetrics & Gynecology Science.2024; 67(2): 153. CrossRef
Application of generative language models to orthopaedic practice Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher BMJ Open.2024; 14(3): e076484. CrossRef
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review Xiaojun Xu, Yixiao Chen, Jing Miao Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
The advent of ChatGPT: Job Made Easy or Job Loss to Data Analysts Abiola Timothy Owolabi, Oluwaseyi Oluwadamilare Okunlola, Emmanuel Taiwo Adewuyi, Janet Iyabo Idowu, Olasunkanmi James Oladapo WSEAS TRANSACTIONS ON COMPUTERS.2024; 23: 24. CrossRef
ChatGPT in dentomaxillofacial radiology education Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal Journal of Health Sciences and Medicine.2024; 7(2): 224. CrossRef
Performance of ChatGPT on the Korean National Examination for Dental Hygienists Soo-Myoung Bae, Hye-Rim Jeon, Gyoung-Nam Kim, Seon-Hui Kwak, Hyo-Jin Lee Journal of Dental Hygiene Science.2024; 24(1): 62. CrossRef
Medical knowledge of ChatGPT in public health, infectious diseases, COVID-19 pandemic, and vaccines: multiple choice questions examination based performance Sultan Ayoub Meo, Metib Alotaibi, Muhammad Zain Sultan Meo, Muhammad Omair Sultan Meo, Mashhood Hamid Frontiers in Public Health.2024;[Epub] CrossRef
Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal Cureus.2023;[Epub] CrossRef
Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers Sun Huh Journal of Educational Evaluation for Health Professions.2023; 20: 5. CrossRef
Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic Sun Huh Science Editing.2023; 10(1): 1. CrossRef
Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta Cureus.2023;[Epub] CrossRef
Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry Arindam Ghosh, Aritri Bir Cureus.2023;[Epub] CrossRef
Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy Cureus.2023;[Epub] CrossRef
ChatGPT for Future Medical and Dental Research Bader Fatani Cureus.2023;[Epub] CrossRef
ChatGPT in Dentistry: A Comprehensive Review Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed Cureus.2023;[Epub] CrossRef
Can we trust AI chatbots’ answers about disease diagnosis and patient care? Sun Huh Journal of the Korean Medical Association.2023; 66(4): 218. CrossRef
Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh JMIR Medical Education.2023; 9: e48291. CrossRef
Early applications of ChatGPT in medical practice, education and research Sam Sedaghat Clinical Medicine.2023; 23(3): 278. CrossRef
A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology Xuan Shi Advances in Education.2023; 13(05): 2617. CrossRef
Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi JMIR Medical Education.2023; 9: e48002. CrossRef
ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271. CrossRef
Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology Mayank Agarwal, Priyanka Sharma, Ayan Goswami Cureus.2023;[Epub] CrossRef
The Intersection of ChatGPT, Clinical Medicine, and Medical Education Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali JMIR Medical Education.2023; 9: e47274. CrossRef
The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course Tarık Talan, Yusuf Kalınkara Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33. CrossRef
Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu Journal of Educational Evaluation for Health Professions.2023; 20: 17. CrossRef
Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal Cureus.2023;[Epub] CrossRef
ChatGPT Performs on the Chinese National Medical Licensing Examination Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li Journal of Medical Systems.2023;[Epub] CrossRef
Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023 Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez ICST Transactions on Scalable Information Systems.2023;[Epub] CrossRef
Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed Cureus.2023;[Epub] CrossRef
Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review Carl Preiksaitis, Christian Rose JMIR Medical Education.2023; 9: e48785. CrossRef
Exploring the impact of language models, such as ChatGPT, on student learning and assessment Araz Zirar Review of Education.2023;[Epub] CrossRef
Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system Shani Rosen, Mor Saban European Radiology.2023;[Epub] CrossRef
Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU Shamima Yesmin Science & Technology Libraries.2023; : 1. CrossRef
ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential Mohd Afjal Library Hi Tech.2023;[Epub] CrossRef
The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard Hasan B Ilgaz, Zehra Çelik Cureus.2023;[Epub] CrossRef
Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination? Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett Cureus.2023;[Epub] CrossRef
Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article Sun Huh Child Health Nursing Research.2023; 29(4): 249. CrossRef
Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac Cureus.2023;[Epub] CrossRef
Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study Aleksandra Ignjatović, Lazar Stevanović Journal of Educational Evaluation for Health Professions.2023; 20: 28. CrossRef
Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study Krishna Mohan Surapaneni JMIR Medical Education.2023; 9: e47191. CrossRef
A systematic review of ChatGPT use in K‐12 education Peng Zhang, Gemma Tur European Journal of Education.2023;[Epub] CrossRef
Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Journal of Educational Evaluation for Health Professions.2023; 20: 30. CrossRef
ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer Frontiers in Medicine.2023;[Epub] CrossRef
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study Janghee Park Journal of Educational Evaluation for Health Professions.2023; 20: 29. CrossRef
From Text to Diagnose: ChatGPT's Efficacy in Medical Decision-Making Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval Wiadomości Lekarskie.2023; 76(11): 2345. CrossRef
Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich JMIR Medical Education.2023; 9: e50658. CrossRef
Below average ChatGPT performance in medical microbiology exam compared to university students Malik Sallam, Khaled Al-Salahat Frontiers in Education.2023;[Epub] CrossRef
ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614. CrossRef
ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98. CrossRef
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review Tae Won Kim Journal of Educational Evaluation for Health Professions.2023; 20: 38. CrossRef
Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review Sang-Jun Kim Science Editing.2023; 11(1): 3. CrossRef
Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study Hyunju Lee, Soobin Park Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef
Purpose This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results In total, 38,500 participants were enrolled across 12 studies. Eleven of the 12 studies were cross-sectional, and the remaining study was a case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. Better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in individual studies, including medical school, internship setting, age, gender, socioeconomic status, simulation exams, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion Performance on the ENAM is a multifactorial phenomenon. Our model gives students a locus of control over what they can do to improve their scores (e.g., implementing self-regulated learning strategies), and it gives faculty, health policymakers, and managers a framework for improving ENAM scores (e.g., designing remediation programs to raise GPA and integrating anxiety-management courses into the curriculum).
Citations
Citations to this article as recorded by
Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study Javier A Flores-Cohaila, Abigaíl García-Vicente, Sonia F Vizcarra-Jiménez, Janith P De la Cruz-Galán, Jesús D Gutiérrez-Arratia, Blanca Geraldine Quiroga Torres, Alvaro Taype-Rondan JMIR Medical Education.2023; 9: e48039. CrossRef