
JEEHP : Journal of Educational Evaluation for Health Professions

Most-downloaded articles

93 most-downloaded articles

The most-downloaded articles are drawn from articles published since 2022, ranked by downloads over the last 3 months.

Reviews
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review  
Xiaojun Xu, Yixiao Chen, Jing Miao
J Educ Eval Health Prof. 2024;21:6.   Published online March 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.6
  • 1,217 View
  • 286 Download
  • 1 Crossref
Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.

Citations to this article as recorded by  
  • Chatbots in neurology and neuroscience: interactions with students, patients and neurologists
    Stefano Sandrone
    Brain Disorders.2024; : 100145.     CrossRef
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
J Educ Eval Health Prof. 2023;20:38.   Published online December 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.38
  • 2,575 View
  • 393 Download
  • 3 Web of Science
  • 4 Crossref
This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance in medical exams and the United States Medical Licensing Examination. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks such as writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content such as essays, news articles, and business plans. Unlike conventional search engines, which are keyword-based and non-interactive, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Artificial Intelligence: Fundamentals and Breakthrough Applications in Epilepsy
    Wesley Kerr, Sandra Acosta, Patrick Kwan, Gregory Worrell, Mohamad A. Mikati
    Epilepsy Currents.2024;[Epub]     CrossRef
  • A Developed Graphical User Interface-Based on Different Generative Pre-trained Transformers Models
    Ekrem Küçük, İpek Balıkçı Çiçek, Zeynep Küçükakçalı, Cihan Yetiş, Cemil Çolak
    ODÜ Tıp Dergisi.2024; 11(1): 18.     CrossRef
  • Art or Artifact: Evaluating the Accuracy, Appeal, and Educational Value of AI-Generated Imagery in DALL·E 3 for Illustrating Congenital Heart Diseases
    Mohamad-Hani Temsah, Abdullah N. Alhuzaimi, Mohammed Almansour, Fadi Aljamaan, Khalid Alhasan, Munirah A. Batarfi, Ibraheem Altamimi, Amani Alharbi, Adel Abdulaziz Alsuhaibani, Leena Alwakeel, Abdulrahman Abdulkhaliq Alzahrani, Khaled B. Alsulaim, Amr Jam
    Journal of Medical Systems.2024;[Epub]     CrossRef
Research article
Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study  
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
J Educ Eval Health Prof. 2024;21:8.   Published online April 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.8
  • 448 View
  • 163 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
This study aimed to identify challenges and potential improvements in Korea's medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.
Methods
A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance, and the chi-square test were used to analyze and compare opinions on medical education accreditation across the categories of participants (a minimal example of such a comparison follows this abstract).
Results
Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. Administrative staff were most optimistic about implementing standards. However, the accreditation process encountered challenges, especially in duplicating content and preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.
Conclusion
This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.
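As a minimal illustration of the group comparisons described above, the following Python sketch runs a chi-square test of independence on stakeholder-by-response counts. The counts are wholly hypothetical, and this is not the authors' analysis code; it only shows the shape of such a test.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of responses to one accreditation item, by stakeholder group
#                  agree  neutral  disagree
table = np.array([[40, 10, 5],    # site visit committee members
                  [30, 15, 10],   # self-assessment committee members
                  [25, 20, 12]])  # administrative staff

# Test whether the response pattern differs across the 3 groups
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square={chi2:.2f}, dof={dof}, P={p:.3f}")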

Citations to this article as recorded by  
  • The new placement of 2,000 entrants at Korean medical schools in 2025: is the government’s policy evidence-based?
    Sun Huh
    The Ewha Medical Journal.2024;[Epub]     CrossRef
Review
Attraction and achievement as 2 attributes of gamification in healthcare: an evolutionary concept analysis  
Hyun Kyoung Kim
J Educ Eval Health Prof. 2024;21:10.   Published online April 11, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.10
  • 378 View
  • 162 Download
This study conducted a conceptual analysis of gamification in healthcare utilizing Rogers’ evolutionary concept analysis methodology to identify its attributes and provide a method for its applications in the healthcare field. Gamification has recently been used as a health intervention and education method, but the concept is used inconsistently and confusingly. A literature review was conducted to derive definitions, surrogate terms, antecedents, influencing factors, attributes (characteristics with dimensions and features), related concepts, consequences, implications, and hypotheses from various academic fields. A total of 56 journal articles in English and Korean, retrieved between August 2 and August 7, 2023, from databases such as PubMed Central, the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery Digital Library, the Research Information Sharing Service, and the Korean Studies Information Service System, using the keywords “gamification” and “healthcare,” were analyzed. Gamification in healthcare is defined as the application of game elements in health-related contexts to improve health outcomes. The attributes of this concept were categorized into 2 main areas: attraction and achievement. These categories encompass various strategies for synchronization, enjoyable engagement, visual rewards, and goal-reinforcing frames. Through a multidisciplinary analysis of the concept’s attributes and influencing factors, this paper provides practical strategies for implementing gamification in health interventions. When developing a gamification strategy, healthcare providers can reference this analysis to ensure the game elements are used both appropriately and effectively.
Research article
Development and psychometric evaluation of a 360-degree evaluation instrument to assess medical students’ performance in clinical settings at the emergency medicine department in Iran: a methodological study  
Golnaz Azami, Sanaz Aazami, Boshra Ebrahimy, Payam Emami
J Educ Eval Health Prof. 2024;21:7.   Published online April 1, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.7
  • 579 View
  • 123 Download
Background
In the Iranian context, no 360-degree evaluation tool has been developed to assess the performance of prehospital medical emergency students in clinical settings. This article describes the development of a 360-degree evaluation tool and presents its first psychometric evaluation.
Methods
There were 2 steps in this study: step 1 involved developing the instrument (i.e., generating the items) and step 2 constituted the psychometric evaluation of the instrument. We performed exploratory and confirmatory factor analyses and also evaluated the instrument’s face, content, and convergent validity and reliability.
Results
The instrument contains 55 items across 6 domains: leadership, management, and teamwork (19 items); consciousness and responsiveness (14 items); clinical and interpersonal communication skills (8 items); integrity (7 items); knowledge and accountability (4 items); and loyalty and transparency (3 items). The instrument was confirmed to be a valid measure, as the 6 domains had eigenvalues over Kaiser’s criterion of 1 and in combination explained 60.1% of the variance (Bartlett’s test of sphericity: χ²(1,485)=19,867.99, P<0.01). Furthermore, this study provided evidence for the instrument’s convergent validity and internal consistency (α=0.98), suggesting its suitability for assessing student performance (a computational sketch of these criteria follows this abstract).
Conclusion
We found good evidence for the validity and reliability of the instrument. Our instrument can be used to make future evaluations of student performance in the clinical setting more structured, transparent, informative, and comparable.
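As context for the psychometric criteria reported above (eigenvalues over 1, percentage of variance explained, and internal consistency), here is a minimal Python sketch on synthetic data. It is illustrative only and does not reproduce the authors' analysis; the data, seed, and item count are assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # hypothetical respondents-by-items matrix
X += X[:, [0]] * 0.8            # induce a common factor so items correlate

R = np.corrcoef(X, rowvar=False)           # item correlation matrix
eigenvalues = np.linalg.eigvalsh(R)[::-1]  # sorted in descending order
retained = int((eigenvalues > 1).sum())    # Kaiser's eigenvalue-over-1 criterion
explained = eigenvalues[:retained].sum() / eigenvalues.sum() * 100

# Cronbach's alpha for internal consistency
k = X.shape[1]
item_var = X.var(axis=0, ddof=1).sum()
total_var = X.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)

print(f"Factors retained: {retained}, variance explained: {explained:.1f}%, alpha={alpha:.2f}")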
Review
How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication)  
Seung-Kwon Myung
J Educ Eval Health Prof. 2023;20:24.   Published online August 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.24
  • 3,659 View
  • 366 Download
  • 1 Web of Science
  • 2 Crossref
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review provides readers with guidance to help them read, understand, and evaluate such articles.
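The pooling and heterogeneity assessment described above can be illustrated with a short sketch. The effect sizes and standard errors below are hypothetical; the code shows inverse-variance (fixed-effect) pooling plus Cochran's Q and the I² statistic, the quantities that a forest plot and a heterogeneity assessment summarize.

import numpy as np

# Hypothetical log odds ratios and standard errors from 5 studies
effects = np.array([0.30, 0.10, 0.45, 0.25, 0.05])
se = np.array([0.12, 0.20, 0.15, 0.10, 0.25])

weights = 1.0 / se**2  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q and I^2 quantify between-study heterogeneity
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f}), Q={Q:.2f}, I2={I2:.1f}%")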

Citations to this article as recorded by  
  • The Role of BIM in Managing Risks in Sustainability of Bridge Projects: A Systematic Review with Meta-Analysis
    Dema Munef Ahmad, László Gáspár, Zsolt Bencze, Rana Ahmad Maya
    Sustainability.2024; 16(3): 1242.     CrossRef
  • The impact of indoor carbon dioxide exposure on human brain activity: A systematic review and meta-analysis based on studies utilizing electroencephalogram signals
    Nan Zhang, Chao Liu, Caixia Hou, Wenhao Wang, Qianhui Yuan, Weijun Gao
    Building and Environment.2024; 259: 111687.     CrossRef
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 5,792 View
  • 708 Download
  • 3 Web of Science
  • 4 Crossref
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore models in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-Based Analysis) model. We present a handy resource describing each model's structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models are practical frameworks for educators to adopt, but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations to this article as recorded by  
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback in Medical Education—Its Importance and How to Do It
    Tarik Babar, Omer A. Awan
    Academic Radiology.2024;[Epub]     CrossRef
  • Comparison of the effects of apprenticeship training by sandwich feedback and traditional methods on final-semester operating room technology students’ perioperative competence and performance: a randomized, controlled trial
    Azam Hosseinpour, Morteza Nasiri, Fatemeh Keshmiri, Tayebeh Arabzadeh, Hossein Sharafi
    BMC Medical Education.2024;[Epub]     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
Research articles
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 1,006 View
  • 176 Download
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough for preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study  
Hiroyasu Sato, Katsuhiko Ogasawara
J Educ Eval Health Prof. 2024;21:4.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.4
  • 1,176 View
  • 180 Download
Purpose
The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.
Methods
This study targeted the 107th JNLEP, conducted in 2022; all 344 items were input into the GPT-4 model. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates by category, subject, and the presence or absence of diagrams. The accuracy rates were compared against the main passing criterion (overall accuracy rate ≥62.9%); a minimal sketch of these accuracy calculations follows this abstract.
Results
The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.
Conclusion
Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.
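A minimal sketch of the accuracy bookkeeping this study describes, using hypothetical item-level results and the ≥62.9% passing criterion quoted above. The item vectors are invented for illustration and are far shorter than the real 344-item examination.

import numpy as np

# Hypothetical per-item results: 1 = correct, 0 = incorrect, with a diagram flag
correct = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0])
has_diagram = np.array([0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1], dtype=bool)

overall = correct.mean() * 100
diagram_acc = correct[has_diagram].mean() * 100
text_acc = correct[~has_diagram].mean() * 100
passed = overall >= 62.9  # main passing criterion from the abstract

print(f"Overall {overall:.1f}% (pass: {passed}), "
      f"with diagrams {diagram_acc:.1f}%, without {text_acc:.1f}%")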
Importance, performance frequency, and predicted future importance of dietitians’ jobs by practicing dietitians in Korea: a survey study
Cheongmin Sohn, Sooyoun Kwon, Won Gyoung Kim, Kyung-Eun Lee, Sun-Young Lee, Seungmin Lee
J Educ Eval Health Prof. 2024;21:1.   Published online January 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.1
  • 1,099 View
  • 219 Download
Purpose
This study aimed to explore the perceptions held by practicing dietitians of the importance of their tasks performed in current work environments, the frequency at which those tasks are performed, and predictions about the importance of those tasks in future work environments.
Methods
This was a cross-sectional survey study. An online survey was administered to 350 practicing dietitians. They were asked to assess the importance, performance frequency, and predicted changes in the importance of 27 tasks using a 5-point scale. Descriptive statistics were calculated, and the means of the variables were compared across categorized work environments using analysis of variance (illustrated in the sketch following this abstract).
Results
The importance scores of all surveyed tasks were higher than 3.0, except for the marketing management task. Self-development, nutrition education/counseling, menu planning, food safety management, and documentation/data management were all rated higher than 4.0. The highest performance frequency score was related to documentation/data management. The importance scores of all duties, except for professional development, differed significantly by workplace. As for predictions about the future importance of the tasks surveyed, dietitians responded that the importance of all 27 tasks would either remain at current levels or increase in the future.
Conclusion
Twenty-seven tasks were confirmed to represent dietitians’ job functions in various workplaces. These tasks can be used to improve the test specifications of the Korean Dietitian Licensing Examination and the curriculum of dietetic education programs.
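As a minimal illustration of the analysis of variance mentioned in the Methods, the following sketch compares mean 5-point importance ratings across 3 workplace types. The ratings, group labels, and sample sizes are wholly hypothetical, not the study's data.

import numpy as np
from scipy.stats import f_oneway

# Hypothetical 5-point importance ratings for one task in 3 workplace types
rng = np.random.default_rng(2)
schools = np.clip(rng.normal(4.2, 0.6, 40).round(), 1, 5)
hospitals = np.clip(rng.normal(3.8, 0.7, 40).round(), 1, 5)
industry = np.clip(rng.normal(3.5, 0.8, 40).round(), 1, 5)

# One-way ANOVA: does mean importance differ by workplace?
f_stat, p_value = f_oneway(schools, hospitals, industry)
print(f"F={f_stat:.2f}, P={p_value:.4f}")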
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study  
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-pyng Chung
J Educ Eval Health Prof. 2024;21:3.   Published online February 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.3
  • 615 View
  • 157 Download
Purpose
Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into 6 elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.
Methods
Messick’s unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from 3 pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018–2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).
Results
Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019). A minimal sketch of such agreement and correlation computations follows this abstract.
Conclusion
Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
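For context, inter-rater agreement and the relation to another variable can be computed along the following lines. The ratings are synthetic, and weighted kappa stands in here for whatever agreement index the authors actually used; none of this reproduces the study's data or code.

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings by 2 trained faculty raters on 48 videos (1-5 scale)
rng = np.random.default_rng(1)
rater_a = rng.integers(1, 6, size=48)
rater_b = np.clip(rater_a + rng.integers(-1, 2, size=48), 1, 5)

# Agreement between raters (quadratic-weighted kappa)
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa (inter-rater agreement): {kappa:.2f}")

# Relation to another variable: correlate overall scores with a milestone rating
overall = rater_a + rng.normal(0, 1, size=48)
milestone = 0.3 * overall + rng.normal(0, 1, size=48)
r, p = pearsonr(overall, milestone)
print(f"Pearson r={r:.2f}, P={p:.3f}")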
Reviews
Can an artificial intelligence chatbot be the author of a scholarly article?  
Ju Yoen Lee
J Educ Eval Health Prof. 2023;20:6.   Published online February 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.6
  • 8,531 View
  • 675 Download
  • 38 Web of Science
  • 42 Crossref
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations to this article as recorded by  
  • Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
    Graham Kendall, Jaime A. Teixeira da Silva
    Learned Publishing.2024; 37(1): 55.     CrossRef
  • Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
    Brady D. Lund, K.T. Naheem
    Learned Publishing.2024; 37(1): 13.     CrossRef
  • The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
    İlhan Bahşi, Ayşe Balat
    Journal of Craniofacial Surgery.2024; 35(1): 274.     CrossRef
  • Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
    Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
    The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106.     CrossRef
  • Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
    Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
    International Orthopaedics.2024; 48(1): 151.     CrossRef
  • A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
    Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
    Strength & Conditioning Journal.2024;[Epub]     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2024; 12: 1398.     CrossRef
  • The Use of Artificial Intelligence in Writing Scientific Review Articles
    Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
    Current Osteoporosis Reports.2024; 22(1): 115.     CrossRef
  • Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
    Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
    Current Osteoporosis Reports.2024; 22(1): 217.     CrossRef
  • GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
    Aras Bozkurt
    Open Praxis.2024; 16(1): 1.     CrossRef
  • An integrative decision-making framework to guide policies on regulating ChatGPT usage
    Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
    PeerJ Computer Science.2024; 10: e1845.     CrossRef
  • Artificial Intelligence and Its Role in Medical Research
    Anurag Gola, Ambarish Das, Amar B. Gumataj, S. Amirdhavarshini, J. Venkatachalam
    Current Medical Issues.2024; 22(2): 97.     CrossRef
  • From advancements to ethics: Assessing ChatGPT’s role in writing research paper
    Vasu Gupta, Fnu Anamika, Kinna Parikh, Meet A Patel, Rahul Jain, Rohit Jain
    Turkish Journal of Internal Medicine.2024; 6(2): 74.     CrossRef
  • Yapay Zekânın Edebiyatta Kullanım Serüveni [The adventure of artificial intelligence use in literature]
    Nesime Ceyhan Akça, Serap Aslan Cobutoğlu, Özlem Yeşim Özbek, Mehmet Furkan Akça
    RumeliDE Dil ve Edebiyat Araştırmaları Dergisi.2024; (39): 283.     CrossRef
  • ChatGPT's Gastrointestinal Tumor Board Tango: A limping dance partner?
    Ughur Aghamaliyev, Javad Karimbayli, Clemens Giessen-Jung, Ilmer Matthias, Kristian Unger, Dorian Andrade, Felix O. Hofmann, Maximilian Weniger, Martin K. Angele, C. Benedikt Westphalen, Jens Werner, Bernhard W. Renz
    European Journal of Cancer.2024; 205: 114100.     CrossRef
  • Gout and Gout-Related Comorbidities: Insight and Limitations from Population-Based Registers in Sweden
    Panagiota Drivelegka, Lennart TH Jacobsson, Mats Dehlin
    Gout, Urate, and Crystal Deposition Disease.2024; 2(2): 144.     CrossRef
  • Artificial intelligence in academic cardiothoracic surgery
    Adham AHMED, Irbaz HAMEED
    The Journal of Cardiovascular Surgery.2024;[Epub]     CrossRef
  • The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2024; 21: 9.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
    William Castillo-González
    Metaverse Basic and Applied Research.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • Chatbots in Medical Research
    Punit Sharma
    Clinical Nuclear Medicine.2023; 48(9): 838.     CrossRef
  • Potential applications of ChatGPT in dermatology
    Nicolas Kluger
    Journal of the European Academy of Dermatology and Venereology.2023;[Epub]     CrossRef
  • The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
    Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
    Research in Social and Administrative Pharmacy.2023; 19(8): 1236.     CrossRef
  • ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
    Nicholas A. Deebel, Ryan Terlecki
    Urology.2023; 177: 29.     CrossRef
  • Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
    Thomas Ritz
    Biological Psychology.2023; 181: 108590.     CrossRef
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
    Mohammad Hosseini, David B Resnik, Kristi Holmes
    Research Ethics.2023; 19(4): 449.     CrossRef
  • How trustworthy is ChatGPT? The case of bibliometric analyses
    Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Engineering.2023;[Epub]     CrossRef
  • Disclosing use of Artificial Intelligence: Promoting transparency in publishing
    Parvaiz A. Koul
    Lung India.2023; 40(5): 401.     CrossRef
  • ChatGPT in medical research: challenging time ahead
    Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
    Medico-Legal Journal.2023; 91(4): 223.     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2023; 12: 1398.     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • ChatGPT in medical writing: A game-changer or a gimmick?
    Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
    Perspectives in Clinical Research.2023;[Epub]     CrossRef
  • Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
    Fiona M. P. Pham
    Open Journal of Anesthesiology.2023; 13(07): 140.     CrossRef
  • ChatGPT as an innovative tool for increasing sales in online stores
    Michał Orzoł, Katarzyna Szopik-Depczyńska
    Procedia Computer Science.2023; 225: 3450.     CrossRef
  • Intelligent Plagiarism as a Misconduct in Academic Integrity
    Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
    Acta Médica Portuguesa.2023; 37(1): 1.     CrossRef
  • Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
    Ekrem Solmaz
    European Journal of Therapeutics.2023;[Epub]     CrossRef
  • May Artificial Intelligence Be a Co-Author on an Academic Paper?
    Ayşe Balat, İlhan Bahşi
    European Journal of Therapeutics.2023; 29(3): e12.     CrossRef
  • Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
    Briefings in Bioinformatics.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14: 614.     CrossRef
  • Editorial policies on the use of generative artificial intelligence in article writing and peer-review in the Journal of Educational Evaluation for Health Professions
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 40.     CrossRef
  • Should We Wait for Major Frauds to Unveil to Plan an AI Use License?
    Istemihan Coban
    European Journal of Therapeutics.2023; 30(2): 198.     CrossRef
Factors associated with medical students’ scores on the National Licensing Exam in Peru: a systematic review  
Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2022;19:38.   Published online December 29, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.38
  • 3,334 View
  • 294 Download
  • 1 Crossref
Purpose
This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods
A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results
In total, 38,500 participants were enrolled in 12 studies. Eleven of the 12 studies were cross-sectional; one was a case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. A better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in various studies, such as medical school, internship setting, age, gender, socioeconomic status, simulation tests, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion
The ENAM is a multifactorial phenomenon; our model gives students a locus of control over what they can do to improve their scores (i.e., implement self-regulated learning strategies), and it gives faculty, health policymakers, and managers a framework for improving ENAM scores (i.e., designing remediation programs to improve GPA and integrating anxiety-management courses into the curriculum).

Citations to this article as recorded by  
  • Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study
    Javier A Flores-Cohaila, Abigaíl García-Vicente, Sonia F Vizcarra-Jiménez, Janith P De la Cruz-Galán, Jesús D Gutiérrez-Arratia, Blanca Geraldine Quiroga Torres, Alvaro Taype-Rondan
    JMIR Medical Education.2023; 9: e48039.     CrossRef
Prevalence of burnout and related factors in nursing faculty members: a systematic review  
Marziyeh Hosseini, Mitra Soltanian, Camellia Torabizadeh, Zahra Hadian Shirazi
J Educ Eval Health Prof. 2022;19:16.   Published online July 14, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.16
  • 4,805 View
  • 435 Download
  • 8 Web of Science
  • 10 Crossref
Purpose
The current study aimed to identify the prevalence of burnout and related factors in nursing faculty members through a systematic review of the literature.
Methods
A comprehensive search of electronic databases, including Scopus, PubMed, Web of Science, Iranmedex, and Scientific Information Database was conducted via keywords extracted from Medical Subject Headings, including burnout and nursing faculty, for studies published from database inception to April 1, 2022. The quality of the included studies in this review was assessed using the appraisal tool for cross-sectional studies.
Results
A total of 2,551 nursing faculty members were enrolled in 11 studies. The mean burnout score of nursing faculty members on the Maslach Burnout Inventory (MBI) was 59.28 out of 132. Burnout was also reported on the 3 MBI subscales: emotional exhaustion, 21.24 (standard deviation [SD]=9.70) out of 54; depersonalization, 5.88 (SD=4.20) out of 30; and personal accomplishment, 32.16 (SD=6.45) out of 48. Several factors had significant relationships with burnout in nursing faculty members, including gender, level of education, hours of work, number of classroom students taught, full-time work, job pressure, perceived stress, subjective well-being, marital status, job satisfaction, work setting satisfaction, workplace empowerment, collegial support, management style, fulfillment of self-expectation, communication style, humor, and academic position.
Conclusion
Overall, the mean burnout scores in nursing faculty members were moderate. Therefore, health policymakers and managers can reduce the likelihood of burnout in nursing faculty members by using psychosocial interventions and support.

Citations to this article as recorded by  
  • Strategies to promote nurse educator well-being and prevent burnout: An integrative review
    Allan Lovern, Lindsay Quinlan, Stephanie Brogdon, Cora Rabe, Laura S. Bonanno
    Teaching and Learning in Nursing.2024; 19(2): 185.     CrossRef
  • ALS Health care provider wellness
    Gregory Hansen, Sarah Burton-MacLeod, Kerri Lynn Schellenberg
    Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration.2024; 25(3-4): 299.     CrossRef
  • Cuidando al profesorado: resultados de un programa a distancia de autocuidado para educadores de profesiones de la salud [Caring for faculty: results of a distance self-care program for health professions educators]
    Denisse Zúñiga, Guadalupe Echeverría, Pía Nitsche, Nuria Pedrals, Attilio Rigotti, Marisol Sirhan, Klaus Puschel, Marcela Bitran
    Educación Médica.2024; 25(1): 100871.     CrossRef
  • Civility and resilience practices to address chronic workplace stress in nursing academia
    Teresa M. Stephens, Cynthia M. Clark
    Teaching and Learning in Nursing.2024; 19(2): 119.     CrossRef
  • Burnout among Chinese live streamers: Prevalence and correlates
    Shi Chen, Hanqin Wang, Shang Yang, Fushen Zhang, Xiao Gao, Ziwei Liu, Jenny Wilkinson
    PLOS ONE.2024; 19(5): e0301984.     CrossRef
  • Holistic Wellness Support Systems for Nursing Faculty
    Ipuna Estavillo Black, LaTricia Perry, Hyunhwa Lee
    Nursing Education Perspectives.2024;[Epub]     CrossRef
  • The state of mental health, burnout, mattering and perceived wellness culture in Doctorally prepared nursing faculty with implications for action
    Bernadette Mazurek Melnyk, Lee Ann Strait, Cindy Beckett, Andreanna Pavan Hsieh, Jeffery Messinger, Randee Masciola
    Worldviews on Evidence-Based Nursing.2023; 20(2): 142.     CrossRef
  • Pressures in the Ivory Tower: An Empirical Study of Burnout Scores among Nursing Faculty
    Sheila A. Boamah, Michael Kalu, Rosain Stennett, Emily Belita, Jasmine Travers
    International Journal of Environmental Research and Public Health.2023; 20(5): 4398.     CrossRef
  • Understanding and Fostering Mental Health and Well-Being among University Faculty: A Narrative Review
    Dalal Hammoudi Halat, Abderrezzaq Soltani, Roua Dalli, Lama Alsarraj, Ahmed Malki
    Journal of Clinical Medicine.2023; 12(13): 4425.     CrossRef
  • A mixed-methods study of the effectiveness and perceptions of a course design institute for health science educators
    Julie Speer, Quincy Conley, Derek Thurber, Brittany Williams, Mitzi Wasden, Brenda Jackson
    BMC Medical Education.2022;[Epub]     CrossRef
