Research articles
-
Inter-rater reliability and content validity of the measurement tool for portfolio assessments used in the Introduction to Clinical Medicine course at Ewha Womans University College of Medicine: a methodological study
-
Dong-Mi Yoo, Jae Jin Han
-
J Educ Eval Health Prof. 2024;21:39. Published online December 10, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.39
-
-
Abstract
Purpose
This study aimed to examine the reliability and validity of a measurement tool for portfolio assessments in medical education. Specifically, it investigated the consistency of scoring among raters and the appropriateness of the assessment criteria as judged by an expert panel.
Methods
A cross-sectional observational study was conducted from September to December 2018 for the Introduction to Clinical Medicine course at the Ewha Womans University College of Medicine. Data were collected for 5 randomly selected portfolios scored by a gold-standard rater and 6 trained raters. An expert panel assessed the validity of 12 assessment items using the content validity index (CVI). Statistical analysis included Pearson correlation coefficients for rater alignment, the intraclass correlation coefficient (ICC) for inter-rater reliability, and the CVI for item-level validity.
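As a rough illustration of the statistics named above (not the authors' code), the following Python sketch computes Pearson correlations against a gold-standard rater, a two-way random-effects ICC(2,1), and an item-level CVI from hypothetical scores:
```python
# Hypothetical data: 5 portfolios (rows) x 7 raters (columns, column 0 = gold standard).
import numpy as np
from scipy.stats import pearsonr

scores = np.array([
    [78, 80, 75, 70, 82, 60, 77],
    [85, 84, 80, 88, 86, 70, 83],
    [62, 65, 60, 58, 70, 55, 64],
    [90, 88, 92, 85, 87, 75, 89],
    [71, 73, 68, 66, 75, 58, 70],
], dtype=float)

gold = scores[:, 0]
for r in range(1, scores.shape[1]):
    rho, _ = pearsonr(gold, scores[:, r])
    print(f"Rater {r} vs gold standard: r = {rho:.4f}")

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"ICC(2,1), trained raters: {icc_2_1(scores[:, 1:]):.4f}")

# Item-level CVI: proportion of experts rating an item 3 or 4 on a 4-point
# relevance scale (hypothetical ratings for one item, 8 experts).
expert_ratings = np.array([4, 3, 4, 4, 3, 2, 4, 3])
i_cvi = np.mean(expert_ratings >= 3)
print(f"I-CVI = {i_cvi:.2f}")
```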
Results
Rater 1 had the highest Pearson correlation with the gold-standard rater (0.8916), while Rater 5 had the lowest (0.4203). The ICC for all raters was 0.3821, improving to 0.4415 after excluding Raters 1 and 5, a 15.6% increase in reliability. Most assessment items met the CVI threshold of ≥0.75, with some achieving a perfect score (CVI=1.0); however, items such as “sources” and “level and degree of performance” showed lower validity (CVI=0.72).
Conclusion
The present measurement tool for portfolio assessments demonstrated moderate reliability and strong validity, supporting its use as a credible tool. Further faculty training is needed to make portfolio assessment more reliable.
-
Development and validation of a measurement tool to assess student perceptions of using real patients in physical therapy education at the Rocky Mountain University, the United States: a methodological study
-
Stacia Hall Thompson, Hina Garg, Mary Shotwell, Michelle Webb
-
J Educ Eval Health Prof. 2024;21:30. Published online November 7, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.30
-
-
Abstract
Purpose
This study aimed to develop and validate the Student Perceptions of Real Patient Use in Physical Therapy Education (SPRP-PTE) survey to assess physical therapy student (SPT) perceptions regarding real patient use in didactic education.
Methods
This cross-sectional observational study developed a 48-item survey and tested the survey on 130 SPTs. Face and content validity were determined by an expert review and content validity index (CVI). Construct validity and internal consistency reliability were determined via exploratory factor analysis (EFA) and Cronbach’s α.
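A minimal sketch of the internal-consistency step, assuming hypothetical Likert responses rather than the study data; an EFA of the full item pool would additionally require a dedicated package such as factor_analyzer:
```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(130, 1))                       # one underlying construct
items = 3.5 + latent + 0.6 * rng.normal(size=(130, 10))  # 130 students x 10 items

def cronbach_alpha(x: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```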
Results
Three main constructs were identified (value, satisfaction, and confidence), each having 4 subconstruct components (overall, cognitive, psychomotor, and affective learning). Expert review demonstrated adequate face and content validity (CVI=96%). The initial EFA of the 48-item survey revealed items with inconsistent loadings and low correlations, leading to the removal of 18 items. An EFA of the 30-item survey demonstrated 1-factor loadings for all survey constructs except satisfaction and the entire survey. All constructs had adequate internal consistency (Cronbach’s α>0.85).
Conclusion
The SPRP-PTE survey provides a reliable and valid way to assess student perceptions of real patient use. Future studies are encouraged to validate the SPRP-PTE survey further.
Educational/Faculty development material
-
The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study
-
Elio Stefan Arruzza, Carla Marie Evangelista, Minh Chau
-
J Educ Eval Health Prof. 2024;21:29. Published online October 31, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.29
-
-
Abstract
This study investigated the performance of ChatGPT-4.0o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4.0o. The model was prompted to identify any positioning errors, justify them, and offer improvements. A panel of radiographers assessed the responses for radiographic quality against established positioning criteria on a 1–5 grading scale. ChatGPT-4.0o correctly recognized all errors with justifications and offered correct suggestions for improvement in only 20% of projections. The most common score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, accuracy was low, with most projections receiving only partially correct solutions. The findings reinforce the importance of robust radiography education and clinical experience.
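A small sketch of how such panel grades could be tallied into the mode, per-score percentages, and mean score; the grade list below is hypothetical, not the study data:
```python
from collections import Counter
from statistics import mean

grades = [3, 2, 4, 3, 1, 5, 3, 2, 3, 4, 2, 3, 1, 3, 4,
          2, 3, 5, 2, 3, 4, 1, 2, 3, 4, 2, 5, 3, 2, 4]  # hypothetical, n=30

counts = Counter(grades)
for score in sorted(counts):
    print(f"score {score}: {counts[score]} cases ({counts[score] / len(grades):.0%})")
print(f"mean score = {mean(grades):.1f}")
```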
-
Citations
Citations to this article as recorded by
- Conversational LLM Chatbot ChatGPT-4 for Colonoscopy Boston Bowel Preparation Scoring: An Artificial Intelligence-to-Head Concordance Analysis
Raffaele Pellegrino, Alessandro Federico, Antonietta Gerarda Gravina
Diagnostics.2024; 14(22): 2537. CrossRef
- Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study
Minh Chau, Elio Stefan Arruzza, Kelly Spuur
Journal of Educational Evaluation for Health Professions.2024; 21: 34. CrossRef
Research articles
-
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study
-
Minkyung Oh, Bo Young Yoon
-
J Educ Eval Health Prof. 2024;21:27. Published online October 2, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.27
-
-
Abstract
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, context/input/process/product evaluation model, Organization for Economic Cooperation and Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
Results
The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.
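The two quantitative criteria reported above, Lawshe's content validity ratio (CVR) from the Delphi rounds and the AHP consistency index and ratio (CI, CR), can be illustrated with a brief sketch using hypothetical numbers:
```python
import numpy as np

# CVR = (n_e - N/2) / (N/2); n_e = experts rating the indicator "essential".
N_experts = 18
n_essential = 14
cvr = (n_essential - N_experts / 2) / (N_experts / 2)
print(f"CVR = {cvr:.3f} (threshold for 18 panelists: 0.444)")

# AHP: weights from the principal eigenvector of a hypothetical 4x4 pairwise
# comparison matrix over the 4 evaluation areas; CR = CI / RI, RI = 0.90 for n = 4.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()
n = A.shape[0]
ci = (lam_max - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(weights, 3))
print(f"lambda_max = {lam_max:.3f}, CI = {ci:.3f}, CR = {cr:.3f} (CR < 0.1 is acceptable)")
```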
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.
-
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study
-
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
-
J Educ Eval Health Prof. 2024;21:24. Published online September 23, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.24
-
-
Abstract
Purpose
This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.
Methods
A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
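A minimal sketch of the paired-sample t-test described above, using simulated pretest and posttest scores rather than the study data:
```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
n = 173
pretest = rng.normal(131.4, 17.0, n)           # hypothetical pretest scores
posttest = pretest - rng.normal(4.0, 12.0, n)  # hypothetical posttest scores

t_stat, p_value = ttest_rel(pretest, posttest)
print(f"t({n - 1}) = {t_stat:.2f}, P = {p_value:.3f}")
```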
Results
The mean score of the Personal Problem-Solving Inventory posttest (127.24±12.11) was lower than the pretest score (131.42±16.95), suggesting an improvement in participants’ problem-solving skills, since lower scores on this inventory indicate better perceived problem-solving (t172=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t172=-2.26 and P=0.025.
Conclusion
The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.
Review
-
Insights into undergraduate medical student selection tools: a systematic review and meta-analysis
-
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
-
J Educ Eval Health Prof. 2024;21:22. Published online September 12, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.22
-
Correction in: J Educ Eval Health Prof 2024;21(0):41
-
Abstract
Purpose
Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods
A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010 onward, and including empirical data linking performance on selection tools with assessment and dropout outcomes of undergraduate-entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, and commentaries were excluded. Effect sizes (ESs) of the predictability of academic and clinical performance within and by the end of the medicine program were extracted, and the pooled ESs were presented.
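As a sketch of the pooling step (with invented effect sizes, not those extracted in this review), Cohen's d values can be combined with fixed-effect, inverse-variance weights:
```python
import numpy as np

def d_variance(d: float, n1: int, n2: int) -> float:
    """Approximate sampling variance of Cohen's d for two independent groups."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# (d, n1, n2) per study -- hypothetical values
studies = [(0.70, 120, 115), (0.62, 200, 180), (0.55, 90, 95), (0.48, 150, 140)]

d = np.array([s[0] for s in studies])
w = np.array([1 / d_variance(*s) for s in studies])
pooled = (w * d).sum() / w.sum()
se_pooled = np.sqrt(1 / w.sum())
print(f"pooled d = {pooled:.3f}, 95% CI = "
      f"[{pooled - 1.96 * se_pooled:.3f}, {pooled + 1.96 * se_pooled:.3f}]")
```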
Results
Sixty-seven out of 2,212 articles were included, yielding 236 ESs. Previous academic achievement predicted academic performance in the medical program (Cohen’s d=0.697 early in the program; 0.619 at the end of the program) and clinical exams (0.545 at the end of the program). Among aptitude tests, verbal reasoning and quantitative reasoning predicted academic achievement in the early program and in the last years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and last years (0.550 and 0.371, respectively). Neither panel interviews, multiple mini-interviews, nor situational judgement tests (SJTs) yielded a statistically significant pooled ES.
Conclusion
Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, along with topics such as selection algorithms, interview features (e.g., the content of the questions), and the way interviewers’ reports are used, warrants further research.
-
Citations
Citations to this article as recorded by
- Notice of Retraction and Replacement: Insights into undergraduate medical student selection tools: a systematic review and meta-analysis
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
Journal of Educational Evaluation for Health Professions.2024; 21: 41. CrossRef
Research articles
-
Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis
-
Min-Kyeong Kim, Hae Won Kim
-
J Educ Eval Health Prof. 2024;21:20. Published online August 16, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.20
-
Correction in: J Educ Eval Health Prof 2024;21(0):35
-
Abstract
Purpose
This study aimed to explore how the grading system affected medical students’ academic performance based on their perceptions of the learning environment and intrinsic motivation in the context of changing from norm-referenced A–F grading to criterion-referenced honors/pass/fail grading.
Methods
The study involved 238 second-year medical students from 2014 (n=127, A–F grading) and 2015 (n=111, honors/pass/fail grading) at Yonsei University College of Medicine in Korea. Scores on the Dundee Ready Education Environment Measure, the Academic Motivation Scale, and the Basic Medical Science Examination were used to measure overall learning environment perceptions, intrinsic motivation, and academic performance, respectively. Serial mediation analysis was conducted to examine the pathways between the grading system and academic performance, focusing on the mediating roles of student perceptions and intrinsic motivation.
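A compact sketch of the serial mediation logic described above, using simulated data: grading system (X) → learning environment perceptions (M1) → intrinsic motivation (M2) → academic performance (Y), with the serial indirect effect estimated as a1 × d21 × b2 from three regressions (in practice, its significance would be tested with bootstrapped confidence intervals):
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 238
x = rng.integers(0, 2, n).astype(float)                  # 0 = A-F, 1 = honors/pass/fail
m1 = 0.5 * x + rng.normal(size=n)                        # environment perceptions
m2 = 0.4 * m1 + 0.1 * x + rng.normal(size=n)             # intrinsic motivation
y = 0.5 * m2 + 0.2 * m1 + 0.1 * x + rng.normal(size=n)   # academic performance

a1 = sm.OLS(m1, sm.add_constant(x)).fit().params[1]
fit_m2 = sm.OLS(m2, sm.add_constant(np.column_stack([x, m1]))).fit()
d21 = fit_m2.params[2]
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m1, m2]))).fit()
b2 = fit_y.params[3]

print(f"serial indirect effect a1*d21*b2 = {a1 * d21 * b2:.3f}")
```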
Results
The honors/pass/fail grading class students reported more positive perceptions of the learning environment, higher intrinsic motivation, and better academic performance than the A–F grading class students. Mediation analysis demonstrated a serial mediation effect between the grading system and academic performance through learning environment perceptions and intrinsic motivation. Student perceptions and intrinsic motivation did not independently mediate the relationship between the grading system and performance.
Conclusion
Reducing the number of grades and eliminating rank-based grading might have created an affirming learning environment that fulfills basic psychological needs and reinforces the intrinsic motivation linked to academic performance. The cumulative effect of these 2 mediators suggests that a comprehensive approach should be used to understand student performance.
-
Citations
Citations to this article as recorded by
- Erratum: Impact of a change from A–F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis
Journal of Educational Evaluation for Health Professions.2024; 21: 35. CrossRef
Special article on the 20th anniversary of the journal
-
Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study
-
Dong Gi Seo, Jeongwook Choi, Jinha Kim
-
J Educ Eval Health Prof. 2024;21:18. Published online July 9, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.18
-
-
Abstract
Purpose
This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under 2 stopping rules (standard error of measurement [SEM]=0.3 and 0.25) using both real and simulated data in medical examinations in Korea.
Methods
This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated in R using parameters estimated from a real item bank. Outcome variables included the number of examinees passing or failing under SEM values of 0.25 and 0.30, the number of items administered, and the correlation between ability estimates. The consistency of the real CAT results was evaluated by examining pass/fail agreement based on a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules.
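A simplified sketch (not the authors' CAT engine) of a Rasch-based adaptive testing loop that stops once the SEM falls to a chosen threshold, using a simulated item bank and examinee:
```python
import numpy as np

rng = np.random.default_rng(7)
bank = rng.normal(0.0, 1.0, 500)   # hypothetical Rasch item difficulties
theta_true = 0.4                   # simulated examinee ability
SEM_STOP = 0.30                    # stopping rule (0.25 for the stricter design)

def prob(theta: float, b: float) -> float:
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta, administered, responses = 0.0, [], []
available = list(range(len(bank)))
while True:
    # select the most informative remaining item (information = p * (1 - p))
    info = [prob(theta, bank[i]) * (1 - prob(theta, bank[i])) for i in available]
    item = available.pop(int(np.argmax(info)))
    administered.append(item)
    responses.append(rng.random() < prob(theta_true, bank[item]))

    # one-step Newton-Raphson update of the ability estimate
    p = np.array([prob(theta, bank[i]) for i in administered])
    u = np.array(responses, dtype=float)
    theta += np.clip((u - p).sum() / max((p * (1 - p)).sum(), 1e-6), -1.0, 1.0)

    # SEM = 1 / sqrt(test information) at the updated ability estimate
    info_total = sum(prob(theta, bank[i]) * (1 - prob(theta, bank[i])) for i in administered)
    sem = 1.0 / np.sqrt(info_total)
    if sem <= SEM_STOP or len(administered) >= 100:
        break

print(f"items administered: {len(administered)}, theta = {theta:.2f}, SEM = {sem:.2f}")
```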
Results
Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r=0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average numbers of items administered for the real and simulated data.
Conclusion
The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT.
-
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study
-
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
-
J Educ Eval Health Prof. 2024;21:17. Published online July 8, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.17
-
-
Abstract
Purpose
This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.
Methods
In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.
Results
GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.00001). GPT-4 (vs. GPT-3.5) had higher accuracy for urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 on recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) items, whereas the difference for the higher-complexity problem-solving items was not significant (41.8% vs. 34.5%, P=0.56).
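As a sketch of the overall accuracy comparison above, the counts below are approximated from the reported percentages (44.4% and 30.9% of 700 items) and compared with a chi-square test; the authors' exact test procedure is not specified in the abstract:
```python
from scipy.stats import chi2_contingency

n_items = 700
gpt4_correct = round(0.444 * n_items)   # ~311 correct
gpt35_correct = round(0.309 * n_items)  # ~216 correct
table = [
    [gpt4_correct, n_items - gpt4_correct],
    [gpt35_correct, n_items - gpt35_correct],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.2e}")
```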
Conclusions
ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology’s Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.
-
Citations
Citations to this article as recorded by
- From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
Markus Kipp
Information.2024; 15(9): 543. CrossRef
- Artificial Intelligence can Facilitate Application of Risk Stratification Algorithms to Bladder Cancer Patient Case Scenarios
Max S Yudovich, Ahmad N Alzubaidi, Jay D Raman
Clinical Medicine Insights: Oncology.2024;[Epub] CrossRef
Educational/Faculty development material
-
The 6 degrees of curriculum integration in medical education in the United States
-
Julie Youm, Jennifer Christner, Kevin Hittle, Paul Ko, Cinda Stone, Angela D. Blood, Samara Ginzburg
-
J Educ Eval Health Prof. 2024;21:15. Published online June 13, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.15
-
-
Abstract
Despite explicit expectations and accreditation requirements for an integrated curriculum, there is still a lack of clarity around an accepted common definition, best practices for implementation, and criteria for successful curriculum integration. To address the lack of consensus surrounding integration, we reviewed the literature and herein propose a definition of curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond “horizontal” (1-dimensional) and “vertical” (2-dimensional) integration and propose a model of “6 degrees of curriculum integration” to expand the 2-dimensional concept for future designs of medical education programs and to best prepare learners to meet the needs of patients. These 6 degrees are: interdisciplinary, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration toward this holistic and interconnected representation of curriculum integration.
Research articles
-
Redesigning a faculty development program for clinical teachers in Indonesia: a before-and-after study
-
Rita Mustika, Nadia Greviana, Dewi Anggraeni Kusumoningrum, Anyta Pinasthika
-
J Educ Eval Health Prof. 2024;21:14. Published online June 13, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.14
-
-
Abstract
Purpose
Faculty development (FD) is important to support teaching, including for clinical teachers. The Faculty of Medicine Universitas Indonesia (FMUI) has conducted a clinical teacher training program developed by its medical education department since 2008, both for FMUI teachers and for those at other centers in Indonesia. However, participation is often challenging due to clinical, administrative, and research obligations. The coronavirus disease 2019 pandemic amplified the need to transform this program. This study aimed to redesign and evaluate an FD program for clinical teachers that focuses on their needs and current situation.
Methods
A 5-step design thinking framework (empathizing, defining, ideating, prototyping, and testing) was used with a pre/post-test design. Design thinking made it possible to develop a participant-focused program, while the pre/post-test design enabled an assessment of the program’s effectiveness.
Results
Seven medical educationalists and 4 senior and 4 junior clinical teachers participated in a group discussion in the empathize phase of design thinking. The research team developed a prototype of a 3-day blended learning course, with an asynchronous component using the Moodle learning management system and a synchronous component using the Zoom platform. Pre/post-testing was conducted in 2 rounds, with 107 and 330 participants, respectively. Evaluation of the first round provided feedback for improving the prototype for the second round.
Conclusion
Design thinking enabled an innovative-creative process of redesigning FD that emphasized participants’ needs. The pre/post-testing showed that the program was effective. Combining asynchronous and synchronous learning expands access and increases flexibility. This approach could also apply to other FD programs.
-
Development of examination objectives for the Korean paramedic and emergency medical technician examination: a survey study
-
Tai-hwan Uhm, Heakyung Choi, Seok Hwan Hong, Hyungsub Kim, Minju Kang, Keunyoung Kim, Hyejin Seo, Eunyoung Ki, Hyeryeong Lee, Heejeong Ahn, Uk-jin Choi, Sang Woong Park
-
J Educ Eval Health Prof. 2024;21:13. Published online June 12, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.13
-
-
Abstract
Purpose
The duties of paramedics and emergency medical technicians (P&EMTs) are continuously changing due to developments in medical systems. This study presents evaluation goals for P&EMTs by analyzing their work, especially the tasks that new P&EMTs (those with less than 3 years’ experience) find difficult, to foster the training of P&EMTs who can adapt to emergency situations after graduation.
Methods
A questionnaire was created based on prior job analyses of P&EMTs. The survey questions were reviewed through focus group interviews, from which 253 task elements were derived. A survey on the frequency, importance, and difficulty of these task elements was conducted from July 10, 2023 to October 13, 2023 across the 6 occupations in which P&EMTs were employed.
Results
The P&EMTs’ most common tasks involved obtaining patients’ medical histories and measuring vital signs, whereas the most important task was cardiopulmonary resuscitation (CPR). The task elements that the P&EMTs found most difficult were newborn delivery and infant CPR. New paramedics reported that treating patients with fractures, poisoning, and childhood fever was difficult, while new EMTs reported that they had difficulty keeping diaries, managing ambulances, and controlling infection.
Conclusion
Communication was the most important item for P&EMTs, whereas CPR was the most important skill. It is important for P&EMTs to have knowledge of all tasks; however, they also need to master frequently performed tasks and those that pose difficulties in the field. Deriving goals for evaluating P&EMTs could inform changes to their education, thereby making it possible to train more capable P&EMTs.
-
Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis
-
Naotaka Sugimura, Katsuhiko Ogasawara
-
J Educ Eval Health Prof. 2024;21:12. Published online June 11, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.12
-
-
Abstract
Purpose
This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.
Methods
We used mixed methods. First, error events related to medications and related factors were qualitatively extracted from 119 actual incident reports filed in 2022 in the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using a Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of some factors were set to 0%.
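A minimal sketch of the scenario analysis described above, assuming the pgmpy package and hypothetical probabilities (not the authors' model): build a small network of contributing factors and a medication-error event, then set one factor's probability to 0% and compare the event's posterior probability:
```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("FiveRightsNotConfirmed", "WrongDrugEvent"),
                         ("InsufficientKnowledge", "WrongDrugEvent")])

cpd_rights = TabularCPD("FiveRightsNotConfirmed", 2, [[0.7], [0.3]])  # [absent, present]
cpd_knowledge = TabularCPD("InsufficientKnowledge", 2, [[0.8], [0.2]])
cpd_event = TabularCPD(
    "WrongDrugEvent", 2,
    # P(event | rights, knowledge); columns: (absent, absent), (absent, present),
    # (present, absent), (present, present)
    [[0.98, 0.90, 0.80, 0.60],   # event does not occur
     [0.02, 0.10, 0.20, 0.40]],  # event occurs
    evidence=["FiveRightsNotConfirmed", "InsufficientKnowledge"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_rights, cpd_knowledge, cpd_event)
assert model.check_model()

infer = VariableElimination(model)
baseline = infer.query(["WrongDrugEvent"])
scenario = infer.query(["WrongDrugEvent"], evidence={"FiveRightsNotConfirmed": 0})
print(baseline)   # event probability with the current factor priors
print(scenario)   # event probability when the factor is eliminated (prior = 0%)
```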
Results
There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were factors that were significantly associated with the occurrence of medical errors.
Conclusion
This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.
-
Revised evaluation objectives of the Korean Dentist Clinical Skill Test: a survey study and focus group interviews
-
Jae-Hoon Kim, Young J Kim, Deuk-Sang Ma, Se-Hee Park, Ahran Pae, June-Sung Shim, Il-Hyung Yang, Ui-Won Jung, Byung-Joon Choi, Yang-Hyun Chun
-
J Educ Eval Health Prof. 2024;21:11. Published online May 30, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.11
-
-
Abstract
Purpose
This study aimed to propose a revision of the evaluation objectives of the Korean Dentist Clinical Skill Test by analyzing the opinions of those involved in the examination after a review of those objectives.
Methods
The clinical skill test objectives were reviewed based on the national-level dental practitioner competencies, dental school educational competencies, and the third dental practitioner job analysis. Current and former examinees were surveyed about their perceptions of the evaluation objectives. The validity of the 22 evaluation objectives, and perceptions of their overlap with areas of dental specialty, were surveyed on a 5-point Likert scale among professors who had participated in the clinical skill test and dental school faculty members. Additionally, focus group interviews were conducted with experts on the examination.
Results
It was necessary to consider including competency assessments for “emergency rescue skills” and “planning and performing prosthetic treatment.” There were no significant differences between current and former examinees in their perceptions of the clinical skill test’s objectives. The professors who participated in the examination and dental school faculty members recognized that most of the objectives were valid. However, some responded that “oromaxillofacial cranial nerve examination,” “temporomandibular disorder palpation test,” and “space management for primary and mixed dentition” were not feasible as evaluation objectives and overlapped with dental specialty areas.
Conclusion
When revising the Korean Dentist Clinical Skill Test’s objectives, it is advisable to consider incorporating competency assessments related to “emergency rescue skills” and “planning and performing prosthetic treatment.”
Review
-
Attraction and achievement as 2 attributes of gamification in healthcare: an evolutionary concept analysis
-
Hyun Kyoung Kim
-
J Educ Eval Health Prof. 2024;21:10. Published online April 11, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.10
-
-
Abstract
This study conducted a conceptual analysis of gamification in healthcare utilizing Rogers’ evolutionary concept analysis methodology to identify its attributes and provide a method for its application in the healthcare field. Gamification has recently been used as a health intervention and education method, but the concept is used inconsistently and confusingly. A literature review was conducted to derive definitions, surrogate terms, antecedents, influencing factors, attributes (characteristics with dimensions and features), related concepts, consequences, implications, and hypotheses from various academic fields. A total of 56 journal articles in English and Korean, retrieved between August 2 and August 7, 2023, from databases such as PubMed Central, the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery Digital Library, the Research Information Sharing Service, and the Korean Studies Information Service System using the keywords “gamification” and “healthcare,” were analyzed. Gamification in healthcare is defined as the application of game elements in health-related contexts to improve health outcomes. The attributes of this concept were categorized into 2 main areas: attraction and achievement. These categories encompass various strategies for synchronization, enjoyable engagement, visual rewards, and goal-reinforcing frames. Through a multidisciplinary analysis of the concept’s attributes and influencing factors, this paper provides practical strategies for implementing gamification in health interventions. When developing a gamification strategy, healthcare providers can refer to this analysis to ensure that game elements are used both appropriately and effectively.
-
Citations
Citations to this article as recorded by
- Short-Term Impact of Digital Mental Health Interventions on Psychological Well-Being and Blood Sugar Control in Type 2 Diabetes Patients in Riyadh
Abdulaziz M. Alodhialah, Ashwaq A. Almutairi, Mohammed Almutairi
Healthcare.2024; 12(22): 2257. CrossRef