
JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS

Most-downloaded articles

109 most-downloaded articles

The most-downloaded articles are drawn from articles published since 2022, based on downloads during the last 3 months.

Review
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
J Educ Eval Health Prof. 2023;20:38.   Published online December 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.38
  • 7,686 View
  • 907 Download
  • 8 Web of Science
  • 11 Crossref
This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance on medical exams, including the United States Medical Licensing Examination. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. Unlike conventional search engines, which are keyword-based and non-interactive, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.

Citations to this article, as recorded by Crossref
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions. 2024;21:6.
  • Artificial Intelligence: Fundamentals and Breakthrough Applications in Epilepsy
    Wesley Kerr, Sandra Acosta, Patrick Kwan, Gregory Worrell, Mohamad A. Mikati
    Epilepsy Currents. 2024; [Epub].
  • A Developed Graphical User Interface-Based on Different Generative Pre-trained Transformers Models
    Ekrem Küçük, İpek Balıkçı Çiçek, Zeynep Küçükakçalı, Cihan Yetiş, Cemil Çolak
    ODÜ Tıp Dergisi. 2024;11(1):18.
  • Art or Artifact: Evaluating the Accuracy, Appeal, and Educational Value of AI-Generated Imagery in DALL·E 3 for Illustrating Congenital Heart Diseases
    Mohamad-Hani Temsah, Abdullah N. Alhuzaimi, Mohammed Almansour, Fadi Aljamaan, Khalid Alhasan, Munirah A. Batarfi, Ibraheem Altamimi, Amani Alharbi, Adel Abdulaziz Alsuhaibani, Leena Alwakeel, Abdulrahman Abdulkhaliq Alzahrani, Khaled B. Alsulaim, Amr Jam
    Journal of Medical Systems. 2024; [Epub].
  • Authentic assessment in medical education: exploring AI integration and student-as-partners collaboration
    Syeda Sadia Fatima, Nabeel Ashfaque Sheikh, Athar Osama
    Postgraduate Medical Journal. 2024;100(1190):959.
  • Comparative performance analysis of large language models: ChatGPT-3.5, ChatGPT-4 and Google Gemini in glucocorticoid-induced osteoporosis
    Linjian Tong, Chaoyang Zhang, Rui Liu, Jia Yang, Zhiming Sun
    Journal of Orthopaedic Surgery and Research. 2024; [Epub].
  • Can AI-Generated Clinical Vignettes in Japanese Be Used Medically and Linguistically?
    Yasutaka Yanagita, Daiki Yokokawa, Shun Uchida, Yu Li, Takanori Uehara, Masatomi Ikusaka
    Journal of General Internal Medicine. 2024; [Epub].
  • ChatGPT vs. sleep disorder specialist responses to common sleep queries: Ratings by experts and laypeople
    Jiyoung Kim, Seo-Young Lee, Jee Hyun Kim, Dong-Hyeon Shin, Eun Hye Oh, Jin A Kim, Jae Wook Cho
    Sleep Health. 2024; [Epub].
  • Technology integration into Chinese as a foreign language learning in higher education: An integrated bibliometric analysis and systematic review (2000–2024)
    Binze Xu
    Language Teaching Research. 2024; [Epub].
  • The Transformative Power of Generative Artificial Intelligence for Achieving the Sustainable Development Goal of Quality Education
    Prema Nedungadi, Kai-Yu Tang, Raghu Raman
    Sustainability. 2024;16(22):9779.
  • The Development and Validation of an Artificial Intelligence Chatbot Dependence Scale
    Xing Zhang, Mingyue Yin, Mingyang Zhang, Zhaoqian Li, Hansen Li
    Cyberpsychology, Behavior, and Social Networking. 2024; [Epub].
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 10,029 View
  • 1,043 Download
  • 5 Web of Science
  • 5 Crossref
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a handy resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks for educators to adopt but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations to this article, as recorded by Crossref
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy. 2024;13:100408.
  • Feedback in Medical Education—Its Importance and How to Do It
    Tarik Babar, Omer A. Awan
    Academic Radiology. 2024; [Epub].
  • Comparison of the effects of apprenticeship training by sandwich feedback and traditional methods on final-semester operating room technology students’ perioperative competence and performance: a randomized, controlled trial
    Azam Hosseinpour, Morteza Nasiri, Fatemeh Keshmiri, Tayebeh Arabzadeh, Hossein Sharafi
    BMC Medical Education. 2024; [Epub].
  • Evaluating the Quality of Narrative Feedback for Entrustable Professional Activities in a Surgery Residency Program
    Rosephine Del Fernandes, Ingrid de Vries, Laura McEwen, Steve Mann, Timothy Phillips, Boris Zevin
    Annals of Surgery. 2024;280(6):916.
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling. 2023;115:107849.
Research article
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study  
Minkyung Oh, Bo Young Yoon
J Educ Eval Health Prof. 2024;21:27.   Published online October 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.27
  • 336 View
  • 178 Download
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, the context/input/process/product evaluation model, and the Organisation for Economic Co-operation and Development’s Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
Results
The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.
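For readers unfamiliar with the two statistics this abstract relies on, the following base-R sketch shows how an AHP consistency ratio and a Lawshe content validity ratio are computed. The pairwise comparison matrix and the expert counts are hypothetical illustrations, not the study's data; the random index is Saaty's standard table value for a 4x4 matrix.

# Hypothetical 4x4 pairwise comparison matrix for the 4 evaluation areas
A <- matrix(c(1,   3,   2,   4,
              1/3, 1,   1/2, 2,
              1/2, 2,   1,   3,
              1/4, 1/2, 1/3, 1), nrow = 4, byrow = TRUE)

# Priority weights: principal eigenvector, normalized to sum to 1
ev <- eigen(A)
w  <- Re(ev$vectors[, 1]); w <- w / sum(w)

# Saaty's consistency index and ratio: CI = (lambda_max - n) / (n - 1)
n          <- nrow(A)
lambda_max <- Re(ev$values[1])
CI <- (lambda_max - n) / (n - 1)
RI <- 0.90            # Saaty's random index for n = 4
CR <- CI / RI         # acceptable consistency if CR < 0.1

# Lawshe content validity ratio for one Delphi item: CVR = (n_e - N/2) / (N/2)
N <- 18; n_e <- 14    # hypothetical: 14 of the 18 experts rate the item essential
CVR <- (n_e - N / 2) / (N / 2)   # 0.556, above the 0.444 threshold for N = 18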
Educational/Faculty development material
The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study  
Elio Stefan Arruzza, Carla Marie Evangelista, Minh Chau
J Educ Eval Health Prof. 2024;21:29.   Published online October 31, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.29
  • 510 View
  • 166 Download
  • 1 Web of Science
  • 1 Crossref
This study investigated the performance of ChatGPT-4.0o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4.0o. The model was prompted to identify any positioning errors, justify its assessment, and suggest improvements. A panel of radiographers assessed the responses against established positioning criteria on a grading scale of 1–5. In only 20% of projections did ChatGPT-4.0o correctly recognize all errors with justifications and offer correct suggestions for improvement. The most common score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, accuracy was low, with most projections receiving only partially correct solutions. The findings reinforce the importance of robust radiography education and clinical experience.

Citations to this article, as recorded by Crossref
  • Conversational LLM Chatbot ChatGPT-4 for Colonoscopy Boston Bowel Preparation Scoring: An Artificial Intelligence-to-Head Concordance Analysis
    Raffaele Pellegrino, Alessandro Federico, Antonietta Gerarda Gravina
    Diagnostics. 2024;14(22):2537.
Research article
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study  
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
J Educ Eval Health Prof. 2024;21:24.   Published online September 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.24
  • 693 View
  • 211 Download
Purpose
This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.
Methods
A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
Results
The mean score on the Personal Problem-Solving Inventory decreased from pretest (131.42±16.95) to posttest (127.24±12.11); because lower scores on this inventory indicate stronger perceived problem-solving ability, this suggests an improvement in the participants’ problem-solving skills (t(172)=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a statistically significant increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t(172)=-2.26 and P=0.025.
Conclusion
The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.
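As a point of reference for the statistics reported above, a paired-sample t-test on 173 students has n - 1 = 172 degrees of freedom. A minimal R illustration on simulated scores (hypothetical, not the study's data):

# Simulated pre/post scores for 173 students, for illustration only
set.seed(1)
pre  <- rnorm(173, mean = 131.4, sd = 17.0)
post <- pre - rnorm(173, mean = 4.2, sd = 21.5)   # scores drop ~4 points on average

t.test(pre, post, paired = TRUE)   # paired t-test; df = 173 - 1 = 172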
Software report
The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration  
Hwanggyu Lim, Kyungseok Kang
J Educ Eval Health Prof. 2024;21:23.   Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.23
  • 769 View
  • 189 Download
Computerized adaptive testing (CAT) has become a widely adopted test design for high-stakes licensing and certification exams, particularly in the health professions in the United States, due to its ability to tailor test difficulty in real time, reducing testing time while providing precise ability estimates. A key component of CAT is item response theory (IRT), which facilitates the dynamic selection of items based on examinees' ability levels during a test. Accurate estimation of item and ability parameters is essential for successful CAT implementation, necessitating convenient and reliable software to ensure precise parameter estimation. This paper introduces the irtQ R package (http://CRAN.R-project.org/), which simplifies IRT-based analysis and item calibration under unidimensional IRT models. While it does not directly simulate CAT, it provides essential tools to support CAT development, including parameter estimation using marginal maximum likelihood estimation via the expectation-maximization algorithm, pretest item calibration through fixed item parameter calibration and fixed ability parameter calibration methods, and examinee ability estimation. The package also enables users to compute item and test characteristic curves and information functions necessary for evaluating the psychometric properties of a test. This paper illustrates the key features of the irtQ package through examples using simulated datasets, demonstrating its utility in IRT applications such as test data analysis and ability scoring. By providing a user-friendly environment for IRT analysis, irtQ significantly enhances the capacity for efficient adaptive testing research and operations. Finally, the paper highlights additional core functionalities of irtQ, emphasizing its broader applicability to the development and operation of IRT-based assessments.
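As a concrete illustration of the item and test characteristic curves and information functions mentioned above, here is a base-R sketch of the underlying 2PL formulas with hypothetical item parameters. It shows the quantities irtQ computes, but deliberately avoids assuming the package's exact function signatures; consult the irtQ manual for those.

# 2PL item characteristic curve: P(theta) = 1 / (1 + exp(-D * a * (theta - b)))
icc <- function(theta, a, b, D = 1.702) 1 / (1 + exp(-D * a * (theta - b)))

theta <- seq(-4, 4, by = 0.1)
a <- c(0.8, 1.2, 1.7)     # hypothetical discrimination parameters
b <- c(-1.0, 0.0, 1.0)    # hypothetical difficulty parameters

P    <- sapply(1:3, function(j) icc(theta, a[j], b[j]))   # item characteristic curves
info <- sapply(1:3, function(j) {                          # item information functions
  p <- icc(theta, a[j], b[j])
  (1.702 * a[j])^2 * p * (1 - p)                           # 2PL Fisher information
})

TCC <- rowSums(P)     # test characteristic curve (expected raw score)
TIF <- rowSums(info)  # test information function
plot(theta, TIF, type = "l", xlab = "theta", ylab = "Test information")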
Research article
Reliability of a workplace-based assessment for the United States general surgical trainees’ intraoperative performance using multivariate generalizability theory: a psychometric study
Ting Sun, Stella Yun Kim, Brigitte Kristin Smith, Yoon Soo Park
J Educ Eval Health Prof. 2024;21:26.   Published online September 24, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.26
  • 511 View
  • 147 Download
Purpose
The System for Improving and Measuring Procedure Learning (SIMPL), a smartphone-based operative assessment application, was developed to assess the intraoperative performance of surgical residents. This study aims to examine the reliability of the SIMPL assessment and determine the optimal number of procedures for a reliable assessment.
Methods
In this retrospective observational study, we analyzed data collected between 2015 and 2023 from 4,616 residents across 94 General Surgery Residency programs in the United States that utilized the SIMPL smartphone application. We employed multivariate generalizability theory and initially conducted generalizability studies to estimate the variance components associated with procedures. We then performed decision studies to estimate the reliability coefficient and the minimum number of procedures required for a reproducible assessment.
Results
We estimated that the reliability of the assessment of surgical trainees’ intraoperative autonomy and performance using SIMPL exceeded 0.70. Additionally, the optimal number of procedures required for a reproducible assessment was 10, 17, 15, and 17 for postgraduate year (PGY) 2, PGY 3, PGY 4, and PGY 5, respectively. Notably, the study highlighted that the assessment of residents in their senior years necessitated a larger number of procedures compared to those in their junior years.
Conclusion
The study demonstrated that the SIMPL assessment is reliably effective for evaluating the intraoperative performance of surgical trainees. Adjusting the number of procedures based on the trainees’ training stage enhances the assessment process’s accuracy and effectiveness.
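The decision-study logic behind these numbers can be sketched in a few lines of R. With procedures as the facet of generalization, the projected reliability for n procedures is G(n) = V_person / (V_person + V_residual / n). The variance components below are hypothetical, chosen only to show how a minimum n is obtained; the study's multivariate model is more elaborate.

# Decision-study sketch: reliability as a function of the number of procedures
Vp <- 0.35   # hypothetical universe-score (person) variance
Ve <- 1.40   # hypothetical residual variance
G  <- function(n) Vp / (Vp + Ve / n)

# Smallest n reaching a target reliability of 0.70
n_min <- ceiling((0.70 / (1 - 0.70)) * (Ve / Vp))   # here, 10 procedures
G(n_min)                                            # check: >= 0.70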
Reviews
Immersive simulation in nursing and midwifery education: a systematic review  
Lahoucine Ben Yahya, Aziz Naciri, Mohamed Radid, Ghizlane Chemsi
J Educ Eval Health Prof. 2024;21:19.   Published online August 8, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.19
  • 1,765 View
  • 313 Download
Purpose
Immersive simulation is an innovative training approach in health education that enhances student learning. This study examined its impact on engagement, motivation, and academic performance in nursing and midwifery students.
Methods
A comprehensive systematic search was meticulously conducted in 4 reputable databases—Scopus, PubMed, Web of Science, and Science Direct—following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The research protocol was pre-registered in the PROSPERO registry, ensuring transparency and rigor. The quality of the included studies was assessed using the Medical Education Research Study Quality Instrument.
Results
Out of 90 identified studies, 11 were included in the present review, involving 1,090 participants. Four out of 5 studies observed high post-test engagement scores in the intervention groups. Additionally, 5 out of 6 studies that evaluated motivation found higher post-test motivational scores in the intervention groups than in control groups using traditional approaches. Furthermore, among the 8 out of 11 studies that evaluated academic performance during immersive simulation training, 5 reported significant differences (P<0.001) in favor of the students in the intervention groups.
Conclusion
Immersive simulation, as demonstrated by this study, has a significant potential to enhance student engagement, motivation, and academic performance, surpassing traditional teaching methods. This potential underscores the urgent need for future research in various contexts to better integrate this innovative educational approach into nursing and midwifery education curricula, inspiring hope for improved teaching methods.
The legality and appropriateness of keeping Korean Medical Licensing Examination items confidential: a comparative analysis and review of court rulings  
Jae Sun Kim, Dae Un Hong, Ju Yoen Lee
J Educ Eval Health Prof. 2024;21:28.   Published online October 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.28
  • 267 View
  • 122 Download
This study examines the legality and appropriateness of keeping the multiple-choice question items of the Korean Medical Licensing Examination (KMLE) confidential. Through an analysis of cases from the United States, Canada, and Australia, where medical licensing exams are conducted using item banks and computer-based testing, we found that exam items are kept confidential to ensure fairness and prevent cheating. In Korea, the Korea Health Personnel Licensing Examination Institute (KHPLEI) has been disclosing KMLE questions despite concerns over exam integrity. Korean courts have consistently ruled that multiple-choice question items prepared by public institutions are non-public information under Article 9(1)(v) of the Korea Official Information Disclosure Act (KOIDA), which exempts disclosure if it significantly hinders the fairness of exams or research and development. The Constitutional Court of Korea has upheld this provision. Given the time and cost involved in developing high-quality items and the need to accurately assess examinees’ abilities, there are compelling reasons to keep KMLE items confidential. As a public institution responsible for selecting qualified medical practitioners, KHPLEI should establish its disclosure policy based on a balanced assessment of public interest, without influence from specific groups. We conclude that KMLE questions qualify as non-public information under KOIDA, and KHPLEI may choose to maintain their confidentiality to ensure exam fairness and efficiency.
Insights into undergraduate medical student selection tools: a systematic review and meta-analysis  
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
J Educ Eval Health Prof. 2024;21:22.   Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.22
  • 655 View
  • 186 Download
Purpose
Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods
A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010 and which include empirical data linking performance in selection tools with assessment and dropout outcomes of undergraduate entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, or commentaries were excluded. Effect sizes (ESs) of the predictability of academic and clinical performance within and by the end of the medicine program were extracted, and the pooled ESs were presented.
Results
Sixty-seven out of 2,212 articles were included, yielding 236 ESs. Previous academic achievement predicted medical program academic performance (Cohen’s d=0.697 early in the program; 0.619 at the end of the program) and clinical exams (0.545 at the end of the program). Among aptitude tests, verbal reasoning and quantitative reasoning predicted academic achievement early in the program and in the last years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and last years (0.550 and 0.371, respectively). Panel interviews, multiple mini-interviews, and situational judgement tests (SJTs) did not yield statistically significant pooled ESs.
Conclusion
Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, along with topics such as selection algorithms, interview features (e.g., the content of the questions), and the way interviewers’ reports are used, warrants further research.
How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication)  
Seung-Kwon Myung
J Educ Eval Health Prof. 2023;20:24.   Published online August 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.24
  • 8,599 View
  • 669 Download
  • 7 Web of Science
  • 7 Crossref
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review provides readers with guidance for reading, understanding, and evaluating these articles.
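To make the assessment steps concrete, the following base-R sketch pools hypothetical study effect sizes by inverse-variance weighting and computes Cochran's Q and the I² heterogeneity statistic, the quantities a reviewer reads off a forest plot. The data are invented for illustration.

# Hypothetical effect sizes (Cohen's d) and standard errors from 5 studies
d  <- c(0.42, 0.31, 0.58, 0.12, 0.47)
se <- c(0.10, 0.15, 0.12, 0.20, 0.09)

w       <- 1 / se^2                 # inverse-variance weights
pooled  <- sum(w * d) / sum(w)      # fixed-effect pooled estimate
se_pool <- sqrt(1 / sum(w))         # standard error of the pooled estimate

Q  <- sum(w * (d - pooled)^2)       # Cochran's Q statistic
df <- length(d) - 1
I2 <- max(0, (Q - df) / Q) * 100    # % of variability due to heterogeneity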

Citations to this article, as recorded by Crossref
  • Testing the distinction between sadism and psychopathy: A metanalysis
    Bruno Bonfá-Araujo, Gisele Magarotto Machado, Ariela Raissa Lima-Costa, Fernanda Otoni, Mahnoor Nadeem, Peter K. Jonason
    Personality and Individual Differences. 2025;235:112973.
  • The Role of BIM in Managing Risks in Sustainability of Bridge Projects: A Systematic Review with Meta-Analysis
    Dema Munef Ahmad, László Gáspár, Zsolt Bencze, Rana Ahmad Maya
    Sustainability. 2024;16(3):1242.
  • The association between long noncoding RNA ABHD11-AS1 and malignancy prognosis: a meta-analysis
    Guangyao Lin, Tao Ye, Jing Wang
    BMC Cancer. 2024; [Epub].
  • The impact of indoor carbon dioxide exposure on human brain activity: A systematic review and meta-analysis based on studies utilizing electroencephalogram signals
    Nan Zhang, Chao Liu, Caixia Hou, Wenhao Wang, Qianhui Yuan, Weijun Gao
    Building and Environment. 2024;259:111687.
  • Efficacy of mechanical debridement with adjunct antimicrobial photodynamic therapy against peri-implant subgingival oral yeasts colonization: A systematic review and meta-analysis
    Dena Ali, Jenna Alsalman
    Photodiagnosis and Photodynamic Therapy. 2024;50:104399.
  • The effectiveness and usability of online, group-based interventions for people with severe obesity: a systematic review and meta-analysis
    Madison Milne-Ives, Lorna Burns, Dawn Swancutt, Raff Calitri, Ananya Ananthakrishnan, Helene Davis, Jonathan Pinkney, Mark Tarrant, Edward Meinert
    International Journal of Obesity. 2024; [Epub].
  • Non-invasive brain stimulation enhances motor and cognitive performances during dual tasks in patients with Parkinson’s disease: a systematic review and meta-analysis
    Hajun Lee, Beom Jin Choi, Nyeonju Kang
    Journal of NeuroEngineering and Rehabilitation. 2024; [Epub].
Educational/Faculty development material
The 6 degrees of curriculum integration in medical education in the United States  
Julie Youm, Jennifer Christner, Kevin Hittle, Paul Ko, Cinda Stone, Angela D. Blood, Samara Ginzburg
J Educ Eval Health Prof. 2024;21:15.   Published online June 13, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.15
  • 2,077 View
  • 371 Download
Despite explicit expectations and accreditation requirements for an integrated curriculum, there is little clarity around an accepted common definition, best practices for implementation, or criteria for successful curriculum integration. To address this lack of consensus, we reviewed the literature and herein propose a definition of curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond “horizontal” (1-dimensional) and “vertical” (2-dimensional) integration, and we propose a model of “6 degrees of curriculum integration” to expand the 2-dimensional concept for future designs of medical education programs and to best prepare learners to meet the needs of patients. These 6 degrees include: interdisciplinary, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge- and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration toward this holistic and interconnected representation of curriculum integration.
Research articles
Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis  
Min-Kyeong Kim, Hae Won Kim
J Educ Eval Health Prof. 2024;21:20.   Published online August 16, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.20
Correction in: J Educ Eval Health Prof 2024;21(0):35
  • 672 View
  • 280 Download
  • 1 Crossref
Purpose
This study aimed to explore how the grading system affected medical students’ academic performance based on their perceptions of the learning environment and intrinsic motivation in the context of changing from norm-referenced A–F grading to criterion-referenced honors/pass/fail grading.
Methods
The study involved 238 second-year medical students from 2014 (n=127, A–F grading) and 2015 (n=111, honors/pass/fail grading) at Yonsei University College of Medicine in Korea. Scores on the Dundee Ready Education Environment Measure, the Academic Motivation Scale, and the Basic Medical Science Examination were used to measure overall learning environment perceptions, intrinsic motivation, and academic performance, respectively. Serial mediation analysis was conducted to examine the pathways between the grading system and academic performance, focusing on the mediating roles of student perceptions and intrinsic motivation.
Results
The honors/pass/fail grading class students reported more positive perceptions of the learning environment, higher intrinsic motivation, and better academic performance than the A–F grading class students. Mediation analysis demonstrated a serial mediation effect between the grading system and academic performance through learning environment perceptions and intrinsic motivation. Student perceptions and intrinsic motivation did not independently mediate the relationship between the grading system and performance.
Conclusion
Reducing the number of grades and eliminating rank-based grading might have created an affirming learning environment that fulfills basic psychological needs and reinforces the intrinsic motivation linked to academic performance. The cumulative effect of these 2 mediators suggests that a comprehensive approach should be used to understand student performance.
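A serial mediation model of this kind can be specified, for example, with the lavaan R package. The sketch below is a generic illustration on simulated data with hypothetical variable names (X = grading system, M1 = environment perceptions, M2 = intrinsic motivation, Y = performance); it is not the authors' actual code or software.

library(lavaan)

# Simulated data shaped like the study design (hypothetical values)
set.seed(2)
n  <- 238
X  <- rbinom(n, 1, 0.47)                        # 0 = A-F class, 1 = H/P/F class
M1 <- 0.4 * X + rnorm(n)                        # environment perceptions
M2 <- 0.3 * X + 0.5 * M1 + rnorm(n)             # intrinsic motivation
Y  <- 0.1 * X + 0.2 * M1 + 0.4 * M2 + rnorm(n)  # academic performance
dat <- data.frame(X, M1, M2, Y)

model <- '
  M1 ~ a1 * X                   # grading system -> environment perceptions
  M2 ~ a2 * X + d21 * M1        # -> intrinsic motivation
  Y  ~ c  * X + b1 * M1 + b2 * M2
  serial := a1 * d21 * b2       # serial indirect effect X -> M1 -> M2 -> Y
'
fit <- sem(model, data = dat)
parameterEstimates(fit)         # inspect the "serial" defined parameter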

Citations to this article, as recorded by Crossref
  • Erratum: Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis
    Journal of Educational Evaluation for Health Professions. 2024;21:35.
GPT-4o’s competency in answering the simulated written European Board of Interventional Radiology exam compared to a medical student and experts in Germany and its ability to generate exam items on interventional radiology: a descriptive study
Sebastian Ebel, Constantin Ehrengut, Timm Denecke, Holger Gößmann, Anne Bettina Beeskow
J Educ Eval Health Prof. 2024;21:21.   Published online August 20, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.21
  • 593 View
  • 257 Download
  • 1 Web of Science
  • 2 Crossref
Purpose
This study aimed to determine whether ChatGPT-4o, a generative artificial intelligence (AI) platform, was able to pass a simulated written European Board of Interventional Radiology (EBIR) exam and whether GPT-4o can be used to train medical students and interventional radiologists of different levels of expertise by generating exam items on interventional radiology.
Methods
GPT-4o was asked to answer 370 simulated exam items of the Cardiovascular and Interventional Radiology Society of Europe (CIRSE) for EBIR preparation (CIRSE Prep). Subsequently, GPT-4o was requested to generate exam items on interventional radiology topics at levels of difficulty suitable for medical students and the EBIR exam. Those generated items were answered by 4 participants, including a medical student, a resident, a consultant, and an EBIR holder. The correctly answered items were counted. One investigator checked the answers and items generated by GPT-4o for correctness and relevance. This work was done from April to July 2024.
Results
GPT-4o correctly answered 248 of the 370 CIRSE Prep items (67.0%). For 50 CIRSE Prep items, the medical student answered 46.0% correctly, the resident 42.0%, the consultant 50.0%, and the EBIR holder 74.0%. All participants answered 82.0% to 92.0% of the 50 GPT-4o-generated student-level items correctly. For the 50 GPT-4o items at the EBIR level, the medical student answered 32.0% correctly, the resident 44.0%, the consultant 48.0%, and the EBIR holder 66.0%. All participants passed the GPT-4o-generated student-level items, while only the EBIR holder passed the GPT-4o-generated EBIR-level items. Two of the 150 items (1.3%) generated by GPT-4o were assessed as implausible.
Conclusion
GPT-4o could pass the simulated written EBIR exam and create exam items of varying difficulty to train medical students and interventional radiologists.

Citations to this article, as recorded by Crossref
  • From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
    Markus Kipp
    Information. 2024;15(9):543.
  • Performance of ChatGPT and Bard on the medical licensing examinations varies across different cultures: a comparison study
    Yikai Chen, Xiujie Huang, Fangjie Yang, Haiming Lin, Haoyu Lin, Zhuoqun Zheng, Qifeng Liang, Jinhai Zhang, Xinxin Li
    BMC Medical Education. 2024; [Epub].
Training satisfaction and future employment consideration among physician and nursing trainees at rural Veterans Affairs facilities in the United States during COVID-19: a time-series before and after study  
Heather Northcraft, Tiffany Radcliff, Anne Reid Griffin, Jia Bai, Aram Dobalian
J Educ Eval Health Prof. 2024;21:25.   Published online September 24, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.25
  • 450 View
  • 120 Download
Purpose
The coronavirus disease 2019 (COVID-19) pandemic limited healthcare professional education and training opportunities in rural communities. Because the US Department of Veterans Affairs (VA) has robust programs to train clinicians in the United States, this study examined VA trainee perspectives regarding pandemic-related training in rural and urban areas and interest in future employment with the VA.
Methods
Survey responses were collected nationally from VA physician and nursing trainees before and during the COVID-19 pandemic (2018 to 2021). Logistic regression models tested the associations of pandemic timing (pre-pandemic or pandemic), trainee program (physician or nurse), and their interaction with trainee satisfaction and the likelihood of considering future VA employment in rural and urban areas.
Results
While physician trainees at urban facilities reported decreases in overall training satisfaction and corresponding decreases in the likelihood of considering future VA employment from pre-pandemic to pandemic, rural physician trainees showed no changes in either outcome. In contrast, while nursing trainees at both urban and rural sites had decreases in training satisfaction associated with the pandemic, there was no corresponding effect on the likelihood of future employment by nurses at either urban or rural VA sites.
Conclusion
The study’s findings suggest differences in the training experiences of physicians and nurses at rural sites, as well as between physician trainees at urban and rural sites. Understanding these nuances can inform the development of targeted approaches to address the ongoing provider shortages that rural communities in the United States are facing.
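The modeling approach described in the Methods corresponds to a logistic regression with an interaction term. A self-contained R sketch on simulated data follows; the variable names and effect sizes are hypothetical, not the study's.

# Simulated survey data (hypothetical): one row per trainee response
set.seed(7)
n <- 400
trainees <- data.frame(
  period  = factor(sample(c("pre", "pandemic"), n, replace = TRUE)),
  program = factor(sample(c("physician", "nurse"), n, replace = TRUE))
)
# Build in a satisfaction drop during the pandemic for physician trainees only
p <- with(trainees,
          plogis(1 - 0.8 * (period == "pandemic") * (program == "physician")))
trainees$satisfied <- rbinom(n, 1, p)

fit <- glm(satisfied ~ period * program, family = binomial, data = trainees)
summary(fit)       # interaction term: does the pandemic effect differ by program?
exp(coef(fit))     # coefficients expressed as odds ratios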
