1Department of Pediatrics, Harvard Medical School and Massachusetts General Hospital, Boston, MA, USA
2Center for Educator Development, Advancement, and Research and Department of Family and Community Medicine, Saint Louis University School of Medicine, Saint Louis, MO, USA
3Department of Pediatrics, Cleveland Clinic Lerner College of Medicine at Case Western Reserve University, Cleveland, OH, USA
4Department of Medical Education, University of Illinois College of Medicine, Chicago, IL, USA
5Department of Pediatrics and Internal Medicine, The Warren Alpert Medical School of Brown University and Hasbro Children’s Hospital, Providence, RI, USA
6Department of Pediatrics, The Warren Alpert Medical School of Brown University and Hasbro Children’s Hospital, Providence, RI, USA
7Department of Pediatrics, Geisel School of Medicine at Dartmouth and Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
© 2024 Korea Health Personnel Licensing Examination Institute
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Authors’ contributions
Conceptualization: ASFV, KD, KAG, EYC. Data curation: KD. Methodology/formal analysis/validation: ASFV, KD, KAG, YSP, EYC. Project administration: ASFV, KD, KAG, EYC. Funding acquisition: ASFV, KD, KAG, EYC. Writing–original draft: ASFV. Writing–review & editing: ASFV, KD, KAG, YSP, JB, AH, DW, DAH, SESV, KAS, EYC.
Conflict of interest
Yoon Soo Park at the University of Illinois College of Medicine was an editorial board member of the Journal of Educational Evaluation for Health Professions from 2015 to 2020. He was not involved in the peer review process of this article. Otherwise, no potential conflict of interest relevant to this article was reported.
Funding
This study was funded by the Association of American Medical Colleges (FundRef ID: 10.13039/10005435) Northeastern Group on Educational Affairs (NEGEA) collaborative research grant (2016). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Data availability
Data files are available from Harvard Dataverse: https://doi.org/10.7910/DVN/2JF1E7
Dataset 1. Raw research data generated and analyzed in the current study.
| Interrater reliability | Pre-training | Post-training |
|---|---|---|
| Exact agreement (%) | 26 | 62 |
| Kappa | –0.09 | 0.43 |
| Weighted kappa (intraclass correlation) | 0.04 | 0.64 |
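For readers working from the Dataverse files, the sketch below shows one way the three statistics above can be computed from paired rater scores. It is illustrative only: the rating arrays are invented, and scikit-learn's `cohen_kappa_score` with quadratic weights stands in for the weighted kappa/intraclass correlation; it is not the authors' original analysis.

```python
# Illustrative sketch: exact agreement, Cohen's kappa, and quadratically
# weighted kappa for two raters scoring the same presentations.
# The arrays below are invented examples on a 0-2 rating scale,
# not the study data (see the Harvard Dataverse link above).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([2, 1, 1, 0, 2, 1, 2, 0, 1, 2])
rater_b = np.array([2, 1, 0, 0, 2, 2, 2, 1, 1, 2])

exact_agreement = 100 * np.mean(rater_a == rater_b)  # % identical scores
kappa = cohen_kappa_score(rater_a, rater_b)          # chance-corrected agreement
# Quadratic weights give partial credit to near-misses; for ordinal scales
# this is closely related to an intraclass correlation.
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

print(f"Exact agreement: {exact_agreement:.0f}%")
print(f"Kappa: {kappa:.2f}, weighted kappa: {weighted_kappa:.2f}")
```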
| Relate sub-element | Program A | Program B | Program C | Overall |
|---|---|---|---|---|
| Created a respectful and open climate | 1.60±0.35 | 1.63±0.23 | 1.66±0.28 | 1.61±0.33 |
| Clearly communicated the importance of the topic and encouraged participant engagement throughout the presentation | 0.98±0.50 | 1.38±0.52 | 1.35±0.42 | 1.10±0.51 |
| Set and communicated learner-centered, clear objectives appropriate for the time allotted | 1.32±0.80 | 0.75±0.71 | 0.34±0.68 | 1.05±0.86 |
| Demonstrated appropriate knowledge of the topic and used appropriate references | 1.21±0.50 | 1.31±0.53 | 1.34±0.42 | 1.25±0.48 |
| Tailored presentation level to participants’ understanding of the material | 1.27±0.49 | 0.75±0.38 | 1.18±0.55 | 1.20±0.51 |
| Explained concepts and interrelationships clearly | 1.41±0.46 | 1.54±0.35 | 1.61±0.34 | 1.47±0.43 |
| Used effective questioning and interactive techniques to promote learning and probed for supporting evidence or participants’ thought processes | 1.09±0.51 | 0.54±0.47 | 1.42±0.50 | 1.12±0.55 |
| Made efficient use of teaching time with appropriate pace and time spent on each objective and each component of the session | 1.48±0.57 | 0.88±0.23 | 1.07±0.58 | 1.34±0.59 |
| Content was logically organized with smooth transitions to assist comprehension and retention | 1.70±0.50 | 0.88±0.35 | 1.14±0.47 | 1.50±0.56 |
| Summarized key concepts and lessons learned | 0.58±0.70 | 0.63±0.92 | 0.41±0.73 | 0.54±0.72 |
| Explicitly encouraged further learning | 0.14±0.43 | 0.13±0.35 | 0.14±0.35 | 0.14±0.40 |

Values are presented as mean±standard deviation. The number of resident subjects for each group is not included to preserve the confidentiality of the program identities. Relate, Resident-led Large Group Teaching Assessment Instrument.
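As a rough illustration of how the mean±standard deviation cells above can be produced from item-level ratings, the pandas sketch below aggregates a hypothetical long-format file. The column names (`program`, `sub_element`, `score`) are assumptions for the example, not the actual Dataverse schema.

```python
# Illustrative sketch: collapse item-level Relate ratings into the
# mean +/- SD by program layout of the table above. The data frame and
# its column names are hypothetical, not the study data.
import pandas as pd

ratings = pd.DataFrame({
    "program": ["A", "A", "B", "B", "C", "C"],
    "sub_element": ["Created a respectful and open climate"] * 6,
    "score": [1.5, 1.7, 1.6, 1.7, 1.7, 1.6],
})

summary = (
    ratings
    .groupby(["sub_element", "program"])["score"]
    .agg(["mean", "std"])   # one mean +/- SD cell per sub-element and program
    .round(2)
)
print(summary)
```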
| Effect (meaning associated with the effect) | df | VC | VC (%) |
|---|---|---|---|
| Learner (true differences between learners) | 12 | 0.005 | 2.9 |
| Occasion: learner (variation in learner performance by occasion) | 32 | 0.006 | 3.9 |
| Rater (variation in rater severity) | 1 | 0.000 | 0 |
| Item (variation in item difficulty) | 22 | 0.042 | 26.2 |
| Learner×rater (variation in learner performance by rater) | 12 | 0.002 | 1.5 |
| Learner×item (variation in learner performance by item) | 264 | 0.018 | 11.0 |
| (Occasion×rater): learner (variation in learner performance by rater and occasion) | 32 | 0.009 | 5.6 |
| (Occasion×item): learner (variation in learner performance by occasion and item) | 704 | 0.021 | 13.3 |
| Rater×item (variation in rater severity by item) | 22 | 0.001 | 0.6 |
| Learner×rater×item (variation in learner performance by rater and item) | 264 | 0.006 | 3.9 |
| Residual error (unexplained error) | 704 | 0.050 | 31.2 |
df, degrees of freedom; VC, variance component. Generalizability study using an (occasion: learner)×(rater×item) design; the Φ coefficient (reliability for absolute decisions) was 0.50.
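In generalizability theory, the Φ coefficient reported in the footnote is the ratio of true learner variance to true variance plus absolute error variance, where every other component in the table contributes to the error term divided by the number of conditions over which scores are averaged. The sketch below reconstructs that computation from the rounded components above, assuming 2 raters and 23 items (from the df column, since df = n − 1) and 3 occasions per learner; the occasion count is an assumption (occasions appear unbalanced across learners), and with components rounded to three decimals the output only approximates the reported 0.50.

```python
# Illustrative sketch: phi (absolute-decision) coefficient for the
# (occasion: learner) x (rater x item) design, built from the variance
# components (VCs) in the table above. n_raters and n_items follow from
# the df column; n_occasions is an assumption, and the published VCs are
# rounded, so this only approximates the reported phi of 0.50.
vc = {
    "learner": 0.005,
    "occ:learner": 0.006,
    "rater": 0.000,
    "item": 0.042,
    "learner_x_rater": 0.002,
    "learner_x_item": 0.018,
    "occ_x_rater:learner": 0.009,
    "occ_x_item:learner": 0.021,
    "rater_x_item": 0.001,
    "learner_x_rater_x_item": 0.006,
    "residual": 0.050,
}
n_r, n_i, n_o = 2, 23, 3  # raters, items, occasions (n_o assumed)

# Absolute error variance: every non-learner component, divided by the
# number of conditions over which each is averaged.
abs_error = (
    vc["occ:learner"] / n_o
    + vc["rater"] / n_r
    + vc["item"] / n_i
    + vc["learner_x_rater"] / n_r
    + vc["learner_x_item"] / n_i
    + vc["occ_x_rater:learner"] / (n_o * n_r)
    + vc["occ_x_item:learner"] / (n_o * n_i)
    + vc["rater_x_item"] / (n_r * n_i)
    + vc["learner_x_rater_x_item"] / (n_r * n_i)
    + vc["residual"] / (n_o * n_r * n_i)
)

phi = vc["learner"] / (vc["learner"] + abs_error)
print(f"Phi = {phi:.2f}")
```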