Purpose The aim of this study was to identify factors influencing the learning transfer of nursing students in a non-face-to-face educational environment through structural equation modeling and suggest ways to improve the transfer of learning.
Methods In this cross-sectional study, data were collected via online surveys from February 9 to March 1, 2022, from 218 nursing students in Korea. Learning transfer, learning immersion, learning satisfaction, learning self-efficacy, self-directed learning ability, and information technology utilization ability were analyzed using IBM SPSS for Windows ver. 22.0 and AMOS ver. 22.0.
Results The assessment of structural equation modeling showed adequate model fit, with normed χ2=1.74 (P=0.024), goodness-of-fit index=0.97, adjusted goodness-of-fit index=0.93, comparative fit index=0.98, root mean square residual=0.02, Tucker-Lewis index=0.97, normed fit index=0.96, and root mean square error of approximation=0.06. In the hypothetical model analysis, 9 out of 11 pathways of the hypothetical structural model for learning transfer in nursing students were statistically significant. Nursing students’ learning self-efficacy and learning immersion directly affected learning transfer, while subjective information technology utilization ability, self-directed learning ability, and learning satisfaction had indirect effects. The explanatory power of immersion, satisfaction, and self-efficacy for learning transfer was 44.4%.
Conclusion The assessment of structural equation modeling indicated an acceptable fit. It is necessary to improve the transfer of learning through the development of a self-directed program for learning ability improvement, including the use of information technology in nursing students’ learning environment in non-face-to-face conditions.
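As a quick plausibility check on the fit statistics reported above, the RMSEA point estimate can be recovered from the normed chi-square and the sample size. This is a minimal sketch assuming the conventional point-estimate formula; the function name is ours, not part of the study:

```python
import math

def rmsea_from_normed_chi2(normed_chi2, n):
    """RMSEA point estimate from the normed chi-square (chi2/df) and
    sample size n: sqrt(max(chi2/df - 1, 0) / (n - 1))."""
    return math.sqrt(max(normed_chi2 - 1.0, 0.0) / (n - 1))

# Values reported in the abstract: normed chi2 = 1.74, n = 218 students.
print(round(rmsea_from_normed_chi2(1.74, 218), 2))  # -> 0.06
```

The result agrees with the reported root mean square error of approximation of 0.06, which supports the internal consistency of the reported fit indices.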
Purpose This study aimed to evaluate students’ performance on, and perspectives regarding, an objective structured practical examination (OSPE) for the assessment of laboratory and preclinical skills in biomedical laboratory science (BLS). It also aimed to investigate the perception, acceptability, and usefulness of the OSPE from the students’ and examiners’ points of view.
Methods This was a longitudinal study to implement an OSPE in BLS. The student group consisted of 198 BLS students enrolled in semester 4 between 2015 and 2019 at Karolinska University Hospital Huddinge, Sweden. Fourteen teachers evaluated student performance using a checklist and global rating scales. A survey questionnaire was administered to the participants to evaluate the student perspective. To assess quality, 4 independent observers were included to monitor the examiners.
Results Almost 50% of the students passed the initial OSPE, and 73% passed the repeat OSPE. There was a statistically significant difference between the first and second attempts (P<0.01) but not between the first and third attempts (P=0.09). The student survey questionnaire was completed by 99 of the 198 students (50%), and only 63 students (32%) responded to the free-text questions. According to these responses, some stations were perceived as more difficult, although students considered the assessment to be valid. The observers found that the assessment protocols and examiners’ instructions ensured the objectivity of the examination.
Conclusion The introduction of an OSPE into the education of biomedical laboratory scientists provided a reliable and useful examination of practical skills.
Purpose The present study aimed to investigate the effect of a mini-clinical evaluation exercise (CEX) assessment on improving the clinical skills of nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
Methods This study started on November 1, 2022, and ended on December 1, 2022. It was conducted among 50 nurse anesthesia students divided into intervention and control groups. The intervention group’s clinical skills were evaluated 4 times using the mini-CEX method. In contrast, the same skills were evaluated in the control group based on the conventional method—that is, general supervision by the instructor during the internship and a summative evaluation based on a checklist at the end of the course. The intervention group students also filled out a questionnaire to measure their satisfaction with the mini-CEX method.
Results The mean score of the students in both the control and intervention groups increased significantly on the post-test (P<0.0001), but the improvement in the scores of the intervention group was significantly greater compared with the control group (P<0.0001). The overall mean score for satisfaction in the intervention group was 76.3 out of a maximum of 95.
Conclusion The findings of this study showed that using mini-CEX as a formative evaluation method to evaluate clinical skills had a significant effect on the improvement of nurse anesthesia students’ clinical skills, and they had a very favorable opinion about this evaluation method.
A virtual point-of-care ultrasound (POCUS) education program was initiated to introduce handheld ultrasound technology to Georgetown Public Hospital Corporation in Guyana, a low-resource setting. We studied ultrasound competency and participant satisfaction in a cohort of 20 physicians-in-training through the urology clinic. The program consisted of a training phase, where they learned how to use the Butterfly iQ ultrasound, and a mentored implementation phase, where they applied their skills in the clinic. The assessment was through written exams and an objective structured clinical exam (OSCE). Fourteen students completed the program. The written exam scores were 3.36/5 in the training phase and 3.57/5 in the mentored implementation phase, and all students earned 100% on the OSCE. Students expressed satisfaction with the program. Our POCUS education program demonstrates the potential to teach clinical skills in low-resource settings and the value of virtual global health partnerships in advancing POCUS and minimally invasive diagnostics.
Purpose This study aimed to assess the effect of simulation teaching in critical care courses in a nursing study program on the quality of chest compressions of cardiopulmonary resuscitation (CPR).
Methods An observational cross-sectional study was conducted at the Faculty of Health Studies at the Technical University of Liberec. The success rate of CPR was tested in exams comparing 2 groups of students, totaling 66 different individuals, who completed half a year (group 1: intermediate exam with model simulation) or 1.5 years (group 2: final theoretical critical care exam with model simulation) of undergraduate nursing critical care education taught completely with a Laerdal SimMan 3G simulator. The quality of CPR was evaluated according to 4 components: compression depth, compression rate, time of correct frequency, and time of correct chest release.
Results Compression depth was significantly higher in group 2 than in group 1 (P=0.016). There were no significant differences in the compression rate (P=0.210), time of correct frequency (P=0.586), or time of correct chest release (P=0.514).
Conclusion Nursing students who completed the final critical care exam showed an improvement in compression depth during CPR after 2 additional semesters of critical care teaching compared to those who completed the intermediate exam. The above results indicate that regularly scheduled CPR training is necessary during critical care education for nursing students.
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
Purpose Orthopedic manual therapy (OMT) education varies significantly between philosophies. Although the literature now offers a more comprehensive understanding of the contextual, patient-specific, and technique factors that interact to influence outcomes, most OMT training paradigms continue to emphasize the mechanical basis of OMT application. The purpose of this study was to establish consensus on modifications and adaptations to training paradigms that need to occur within OMT education to align with current evidence.
Methods A 3-round Delphi survey instrument designed to identify foundational knowledge to include in, and omit from, OMT education was completed by 28 educators working in high-level manual therapy education programs internationally. Round 1 consisted of open-ended questions to identify content in each area. Rounds 2 and 3 allowed participants to rank the themes identified in round 1.
Results Consensus was reached on 25 content areas to include within OMT education, 1 content area to omit from OMT education, and 34 knowledge components that should be present in those providing OMT. Support was seen for education promoting an understanding of the complex psychological, neurophysiological, and biomechanical systems as they relate to both evaluation and treatment effects. While some concepts were more consistently supported, there was significant variability in responses, which is largely expected to be related to previous training.
Conclusion The results of this study indicate that manual therapy educators understand evidence-based practice, as support for all 3 tiers of evidence was represented. The results should guide OMT training program development and modification.
Purpose A nutrition support nurse is a member of the nutrition support team and a health care professional who plays a significant role in all aspects of nutritional care. This study aimed to investigate ways to improve the quality of tasks performed by nutrition support nurses through a survey questionnaire in Korea.
Methods An online survey was conducted between October 12 and November 31, 2018. The questionnaire consisted of 36 items categorized into 5 subscales: nutrition-focused support care, education and counseling, consultation and coordination, research and quality improvement, and leadership. The importance–performance analysis method was used to examine the relationship between the importance and performance of nutrition support nurses’ tasks.
Results A total of 101 nutrition support nurses participated in this survey. The importance (5.56±0.78) and performance (4.50±1.06) of nutrition support nurses’ tasks showed a significant difference (t=11.27, P<0.001). Education, counseling/consultation, and participation in developing their processes and guidelines were identified as low-performance activities compared with their importance.
Conclusion To provide effective nutrition support, nutrition support nurses should obtain the necessary qualifications or competencies through education programs based on their practice. Greater awareness of nutrition support nurses’ participation in research and quality improvement activities is also required for role development.
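The importance–performance analysis (IPA) method mentioned in the abstract above classifies tasks into action quadrants by comparing each task’s mean importance and performance ratings against the grand means. The sketch below illustrates the technique with hypothetical ratings (the task names echo the study’s subscales, but the numbers are illustrative only, not the study’s data):

```python
# Illustrative importance-performance analysis (IPA) quadrant classification.
# Each task maps to (mean importance, mean performance) -- hypothetical values.
tasks = {
    "education and counseling": (5.9, 4.1),
    "nutrition-focused support care": (5.8, 5.2),
    "research and quality improvement": (4.9, 3.8),
    "leadership": (5.0, 4.9),
}

# Grand means serve as the quadrant cross-hairs.
imp_mean = sum(i for i, _ in tasks.values()) / len(tasks)    # 5.4
perf_mean = sum(p for _, p in tasks.values()) / len(tasks)   # 4.5

def quadrant(imp, perf):
    """Classify a task by its position relative to the grand means."""
    if imp >= imp_mean and perf < perf_mean:
        return "concentrate here"       # important but underperformed
    if imp >= imp_mean and perf >= perf_mean:
        return "keep up the good work"
    if imp < imp_mean and perf < perf_mean:
        return "low priority"
    return "possible overkill"

for task, (imp, perf) in tasks.items():
    print(f"{task}: {quadrant(imp, perf)}")
```

With these illustrative numbers, "education and counseling" lands in the "concentrate here" quadrant, mirroring the study’s finding that education and counseling were low-performance activities relative to their importance.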
Purpose This study evaluated the validity of student feedback derived from the Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction with the Medicine program.
Methods Data from MedSEQ administered to students in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regression was used to identify the factors that most strongly impacted students’ overall satisfaction with the program.
Results A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. The key factors impacting students’ satisfaction were the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
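The incremental variance attributed to the 8 MedSEQ domains in the regression results above can be checked with simple arithmetic, as a minimal sketch (values taken from the reported R² figures):

```python
# Variance in overall satisfaction explained by each regression model,
# as reported: demographics-only R^2 = 3.8%, demographics + 8 domains = 40%.
r2_demographics = 0.038
r2_full = 0.40

# Incremental variance attributable to student experience across the 8 domains.
delta_r2 = r2_full - r2_demographics
print(f"{delta_r2:.1%}")  # -> 36.2%
```

The difference reproduces the 36.2% figure reported in the abstract.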
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
Purpose This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results In total, 38,500 participants were enrolled in 12 studies. Most studies (11/12) were cross-sectional, except for one case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. Better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in various studies, such as medical school, internship setting, age, gender, socioeconomic status, simulation tests, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion The ENAM is a multifactorial phenomenon; our model gives students a locus of control on what they can do to improve their score (i.e., implement self-regulated learning strategies) and faculty, health policymakers, and managers a framework to improve the ENAM score (i.e., design remediation programs to improve GPA and integrate anxiety-management courses into the curriculum).
Purpose This review investigated medical students’ satisfaction level with e-learning during the coronavirus disease 2019 (COVID-19) pandemic and its related factors.
Methods A comprehensive systematic search was performed of international literature databases, including Scopus, PubMed, Web of Science, and Persian databases such as Iranmedex and Scientific Information Database using keywords extracted from Medical Subject Headings such as “Distance learning,” “Distance education,” “Online learning,” “Online education,” and “COVID-19” from the earliest date to July 10, 2022. The quality of the studies included in this review was evaluated using the appraisal tool for cross-sectional studies (AXIS tool).
Results A total of 15,473 medical science students were enrolled in 24 studies. The level of satisfaction with e-learning during the COVID-19 pandemic among medical science students was 51.8%. Factors such as age, gender, clinical year, experience with e-learning before COVID-19, level of study, adaptation content of course materials, interactivity, understanding of the content, active participation of the instructor in the discussion, multimedia use in teaching sessions, adequate time dedicated to the e-learning, stress perception, and convenience had significant relationships with the satisfaction of medical students with e-learning during the COVID-19 pandemic.
Conclusion Given the inevitability of online education and e-learning, it is suggested that educational managers and policymakers choose the best online education methods for medical students, drawing on the various studies in this field, to increase students’ satisfaction with e-learning.
Purpose This study aimed to suggest a more suitable study design and the corresponding reporting guidelines in the papers published in the Journal of Educational Evaluation for Health Professionals from January 2021 to September 2022.
Methods Among 59 papers published in the Journal of Educational Evaluation for Health Professions from January 2021 to September 2022, research articles, review articles, and brief reports were selected. The following were analyzed: first, the percentage of articles describing the study design in the title, abstract, or methods; second, the proportion of articles describing reporting guidelines; third, the types of study designs and corresponding reporting guidelines; and fourth, suggestions of more suitable study designs based on the study design algorithms for medical literature on interventions, systematic reviews and other review types, and epidemiological studies.
Results Out of 45 articles, 44 (97.8%) described their study designs. Of these 44, 19 articles could have been described with more suitable study designs, mainly before-and-after studies, diagnostic research, and non-randomized trials. Of the 18 reporting guidelines mentioned, 8 (44.4%) were considered perfect matches. STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) was used for descriptive studies, before-and-after studies, and randomized controlled trials; however, its use in these cases should be reconsidered.
Conclusion For some articles, more suitable study designs and reporting guidelines were suggested than those declared. Education and training on study designs and reporting guidelines for researchers are needed, and reporting guideline policies for descriptive studies should also be implemented.
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standard of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models used in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a handy resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is suitable. These feedback models represent practical frameworks for educators to adopt but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.