Changing medical students’ perception of the evaluation culture: Is it possible?
Jorie M. Colbert-Getz, Steven Baumann
J Educ Eval Health Prof. 2016;13:8. Published online February 15, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.8
Views: 28,910 · Downloads: 182 · Web of Science citations: 1 · Crossref citations: 2
Abstract
Student feedback is a critical component of the teacher-learner cycle. However, there is no gold standard course or clerkship evaluation form, and there is limited research on the impact of changing the evaluation process. Results from a focus group and a pre-implementation feedback survey, coupled with best practices in survey design, were used to improve all course/clerkship evaluations for academic year 2013-2014. In spring 2014 we asked all students at the University of Utah School of Medicine, United States of America, to complete the same feedback survey (post-implementation survey). We assessed the evaluation climate with 3 measures on the feedback survey: overall satisfaction with the evaluation process, how often students gave effort to the process, and how often they used shortcuts. Scores from these measures were compared between 2013 and 2014 with Mann-Whitney U-tests. Response rates were 79% (254) for 2013 and 52% (179) for 2014. Students’ overall satisfaction scores were significantly higher (more positive) post-implementation than pre-implementation (P<0.001). There was no change in how often students gave effort to completing evaluations (P=0.981) and no change in how often they used shortcuts to complete evaluations (P=0.956). We were able to change overall satisfaction with the medical school evaluation culture, but not how often students gave effort to completing evaluations or how often they used shortcuts. To ensure accurate evaluation results, we will need to focus our efforts on the time needed to complete course evaluations across all four years.
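The comparison described in this abstract rests on a Mann-Whitney U-test of ordinal satisfaction scores between the 2013 and 2014 cohorts. The sketch below shows how such a comparison could be run in Python with SciPy; the 5-point scale and the ratings themselves are synthetic assumptions, not the study's survey data.

```python
# A minimal sketch, assuming 5-point Likert-type satisfaction ratings;
# the data below are synthetic, not the study's responses.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical ratings (1 = very dissatisfied, 5 = very satisfied)
pre_2013 = rng.integers(1, 6, size=254)   # 254 pre-implementation respondents
post_2014 = rng.integers(2, 6, size=179)  # 179 post-implementation respondents

# Two-sided Mann-Whitney U-test comparing the two score distributions
u_stat, p_value = mannwhitneyu(pre_2013, post_2014, alternative="two-sided")
print(f"U = {u_stat:.1f}, P = {p_value:.4f}")
```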
-
Citations
Citations to this article as recorded by
- Investigating the impact of multimodal training on surgical informed consent in final year medical students: A quasi-experimental study. Ashok Ninan Oommen, Ronnie Thomas, Shyama Sasidharan, Srikanth Muraleedhar, Unnikishnan Uttumadathil Gopinathan, Rejana Rachel Joy, Yadu Krishnan Girish Babu. Journal of Medical Education Development. 2024; 17(55): 10. CrossRef
- Benefits of focus group discussions beyond online surveys in course evaluations by medical students in the United States: a qualitative study. Katharina Brandl, Soniya V. Rabadia, Alexander Chang, Jess Mandel. Journal of Educational Evaluation for Health Professions. 2018; 15: 25. CrossRef
-
Emergency medicine and internal medicine trainees’ smartphone use in clinical settings in the United States
Sonja E. Raaum, Christian Arbelaez, Carlos Eduardo Vallejo, Andres M. Patino, Jorie M. Colbert-Getz, Caroline K. Milne
J Educ Eval Health Prof. 2015;12:48. Published online October 29, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.48
Views: 27,625 · Downloads: 151 · Web of Science citations: 5 · Crossref citations: 6
Abstract
Purpose: Smartphone technology offers a multitude of applications (apps) that provide a wide range of functions for healthcare professionals. Medical trainees are early adopters of this technology, but how they use smartphones in clinical care remains unclear. Our objective was to further characterize smartphone use by medical trainees at two United States academic institutions, as well as their prior training in the clinical use of smartphones.
Methods: In 2014, we surveyed 347 internal medicine and emergency medicine resident physicians at the University of Utah and Brigham and Women’s Hospital about their smartphone use and prior training experiences. Scores (0%–100%) were calculated to assess the frequency of their use of general features (email, text) and patient-specific apps, and the results were compared according to resident level and program using the Mann-Whitney U-test.
Results: A total of 184 residents responded (response rate, 53.0%). The average score for using general features, 14.4/20 (72.2%), was significantly higher than the average score for using patient-specific features and apps, 14.1/44 (33.0%; P<0.001). The average score for the use of general features was significantly higher for year 3–4 residents, 15.0/20 (75.1%), than for year 1–2 residents, 14.1/20 (70.5%; P=0.035), and for internal medicine residents, 14.9/20 (74.6%), in comparison to emergency medicine residents, 12.9/20 (64.3%; P=0.001). The average score reflecting the use of patient-specific apps was significantly higher for year 3–4 residents, 16.1/44 (36.5%), than for year 1–2 residents, 13.7/44 (31.1%; P=0.044). Only 21.7% of respondents had received prior training in clinical smartphone use.
Conclusion: Residents used smartphones for general features more frequently than for patient-specific features, but patient-specific use increased with training. Few residents had received prior training in the clinical use of smartphones.
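The Methods reduce each usage checklist to a raw score out of a fixed maximum, report it as a percentage, and test group differences with the Mann-Whitney U-test. The sketch below illustrates that pipeline on synthetic data; the 20-point maximum comes from the abstract, but the simulated scores and group sizes are assumptions, not the study's results.

```python
# A minimal sketch on synthetic data, not the study's survey responses.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

MAX_GENERAL = 20  # maximum score for general-feature items (from the abstract)

# Hypothetical raw general-feature scores for year 1-2 and year 3-4 residents
junior = rng.normal(14.1, 2.5, size=90).clip(0, MAX_GENERAL)
senior = rng.normal(15.0, 2.5, size=94).clip(0, MAX_GENERAL)

# Report each group's mean as a percentage of the maximum, as in the Results
junior_pct = 100 * junior.mean() / MAX_GENERAL
senior_pct = 100 * senior.mean() / MAX_GENERAL

# Compare the two score distributions with a two-sided Mann-Whitney U-test
u_stat, p_value = mannwhitneyu(junior, senior, alternative="two-sided")
print(f"year 1-2: {junior_pct:.1f}%, year 3-4: {senior_pct:.1f}%, P = {p_value:.3f}")
```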
-
Citations
Citations to this article as recorded by
- Prevalence and patterns of mobile device usage among physicians in clinical practice: A systematic review. Judith Kraushaar, Sabine Bohnet-Joschko. Health Informatics Journal. 2023;[Epub]. CrossRef
- SMARTPHONE MEDICAL APPLICATION USE AND ASSOCIATED FACTORS AMONG PHYSICIAN AT REFERRAL HOSPITALS IN AMHARA REGION NORTH ETHIOPIA: A CROSS-SECTIONAL STUDY, 2019 (Preprint). Gizaw Hailiye, Binyam Cheklu Tilahun, Habtamu Alganeh Guadie, Ashenafi Tazebew Amare. JMIR mHealth and uHealth. 2020;[Epub]. CrossRef
- Online webinar training to analyse complex atrial fibrillation maps: A randomized trial. João Mesquita, Natasha Maniar, Tina Baykaner, Albert J. Rogers, Mark Swerdlow, Mahmood I. Alhusseini, Fatemah Shenasa, Catarina Brizido, Daniel Matos, Pedro Freitas, Ana Rita Santos, Gustavo Rodrigues, Claudia Silva, Miguel Rodrigo, Yan Dong, Paul Clopton. PLOS ONE. 2019; 14(7): e0217988. CrossRef
- Learning strategies among adult CHD fellows. Jouke P. Bokma, Joshua A. Daily, Adrienne H. Kovacs, Erwin N. Oechslin, Helmut Baumgartner, Paul Khairy, Barbara J.M. Mulder, Gruschen R. Veldtman. Cardiology in the Young. 2019; 29(11): 1356. CrossRef
- Incorporating sleep medicine content into medical school through neuroscience core curricula. Rachel Marie E. Salas, Roy E. Strowd, Imran Ali, Madhu Soni, Logan Schneider, Joseph Safdieh, Bradley V. Vaughn, Alon Y. Avidan, Jane B. Jeffery, Charlene E. Gamaldo. Neurology. 2018; 91(13): 597. CrossRef
- E-Scripts and Cell Phones. Susie T. Harris, Paul D. Bell, Elizabeth A. Baker. The Health Care Manager. 2017; 36(4): 320. CrossRef
-
Developing a situational judgment test blueprint for assessing the non-cognitive skills of applicants to the University of Utah School of Medicine, the United States
Jorie M. Colbert-Getz, Karly Pippitt, Benjamin Chan
J Educ Eval Health Prof. 2015;12:51. Published online October 31, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.51
Views: 31,738 · Downloads: 222 · Web of Science citations: 10 · Crossref citations: 5
Abstract
Purpose: The situational judgment test (SJT) shows promise for assessing the non-cognitive skills of medical school applicants, but it has only been used in Europe. Since the admissions processes and education levels of applicants to medical school differ between the United States and Europe, it is necessary to obtain validity evidence for the SJT based on a sample of United States applicants.
Methods: Ninety SJT items were developed, and Kane’s validity framework was used to create a test blueprint. A total of 489 applicants selected for the assessment/interview day at the University of Utah School of Medicine during the 2014-2015 admissions cycle completed one of five SJTs, which assessed professionalism, coping with pressure, communication, patient focus, and teamwork. Item difficulty, each item’s discrimination index, internal consistency, and the categorization of items by two experts were used to create the test blueprint.
Results: The majority of item scores were within an acceptable range of difficulty, as measured by the difficulty index (0.50-0.85), and had fair to good discrimination. However, internal consistency was low for each domain, and 63% of items appeared to assess multiple domains. The concordance of categorization between the two educational experts ranged from 24% to 76% across the five domains.
Conclusion: The results of this study will help medical school admissions departments determine how to begin constructing an SJT. Further testing with a more representative sample is needed to determine whether the SJT is a useful assessment tool for measuring the non-cognitive skills of medical school applicants.
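Item difficulty and discrimination, the indices that drive the blueprint decisions above, are standard classical-test-theory statistics. The sketch below computes both on synthetic dichotomously scored items; the 0/1 scoring, the item count, and the upper/lower 27% grouping are generic assumptions rather than the study's actual SJT scoring rules.

```python
# A minimal item-analysis sketch on synthetic data, assuming 0/1 item scoring.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 0/1 item scores: 489 applicants x 18 items on one SJT form
scores = rng.integers(0, 2, size=(489, 18))

# Difficulty index: proportion of examinees answering each item correctly
difficulty = scores.mean(axis=0)

# Discrimination index: proportion correct in the top 27% of total scorers
# minus the proportion correct in the bottom 27%
totals = scores.sum(axis=1)
n_group = int(0.27 * len(totals))
order = np.argsort(totals)
low, high = scores[order[:n_group]], scores[order[-n_group:]]
discrimination = high.mean(axis=0) - low.mean(axis=0)

# Flag items inside the 0.50-0.85 difficulty range cited in the abstract
acceptable = (difficulty >= 0.50) & (difficulty <= 0.85)
print(f"{acceptable.sum()} of {len(difficulty)} items fall in the 0.50-0.85 difficulty range")
```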
-
Citations
Citations to this article as recorded by
- New Advances in Physician Assistant Admissions: The History of Situational Judgement Tests and the Development of CASPer. Shalon R. Buchs, M. Jane McDaniel. Journal of Physician Assistant Education. 2021; 32(2): 87. CrossRef
- The association between Situational Judgement Test (SJT) scores and professionalism concerns in undergraduate medical education. Gurvinder S. Sahota, Jaspal S. Taggar. Medical Teacher. 2020; 42(8): 937. CrossRef
- Exploring Behavioral Competencies for Effective Medical Practice in Nigeria. Adanna Chukwuma, Uche Obi, Ifunanya Agu, Chinyere Mbachu. Journal of Medical Education and Curricular Development. 2020;[Epub]. CrossRef
- Situational judgment test validity: an exploratory model of the participant response process using cognitive and think-aloud interviews. Michael D. Wolcott, Nikki G. Lobczowski, Jacqueline M. Zeeman, Jacqueline E. McLaughlin. BMC Medical Education. 2020;[Epub]. CrossRef
- Computerized test versus personal interview as admission methods for graduate nursing studies: A retrospective cohort study. Koren Hazut, Pnina Romem, Smadar Malkin, Ilana Livshiz‐Riven. Nursing & Health Sciences. 2016; 18(4): 503. CrossRef