JEEHP : Journal of Educational Evaluation for Health Professions

6 "Canada"
Research article
Performance of the Ebel standard-setting method for the spring 2019 Royal College of Physicians and Surgeons of Canada internal medicine certification examination consisting of multiple-choice questions  
Jimmy Bourque, Haley Skinner, Jonathan Dupré, Maria Bacchus, Martha Ainslie, Irene W. Y. Ma, Gary Cole
J Educ Eval Health Prof. 2020;17:12.   Published online April 20, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.12
  • 6,404 Views
  • 168 Downloads
  • 8 Web of Science
  • 7 Crossref
Abstract
Purpose
This study aimed to assess the performance of the Ebel standard-setting method for the spring 2019 Royal College of Physicians and Surgeons of Canada internal medicine certification examination consisting of multiple-choice questions. Specifically, the following parameters were evaluated: inter-rater agreement, the correlations between Ebel scores and item facility indices, the impact of raters’ knowledge of correct answers on the Ebel score, and the effects of raters’ specialty on inter-rater agreement and Ebel scores.
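To make the method concrete, here is a minimal Python sketch of the Ebel cut-score arithmetic: each rater places each item in a difficulty-by-relevance cell, the panel assigns each cell the expected proportion of items a borderline candidate would answer correctly, and the cut score is the mean of those values across items. All cells and values below are invented for illustration; they are not the Royal College panel's actual values.

# Hypothetical illustration of the Ebel standard-setting arithmetic;
# the expected-proportion-correct values are invented for this example.

# Panel-agreed expected proportion correct for a borderline candidate,
# indexed by (difficulty, relevance) cell.
expected_correct = {
    ("easy", "essential"): 0.90,   ("easy", "important"): 0.85,
    ("medium", "essential"): 0.70, ("medium", "important"): 0.60,
    ("hard", "essential"): 0.50,   ("hard", "important"): 0.40,
}

# One rater's classification of each item into a (difficulty, relevance) cell.
item_ratings = [
    ("easy", "essential"), ("medium", "important"),
    ("hard", "essential"), ("medium", "essential"),
]

# The Ebel cut score is the mean expected proportion correct across items.
cut_score = sum(expected_correct[cell] for cell in item_ratings) / len(item_ratings)
print(f"Ebel cut score: {cut_score:.2%}")  # 67.50% for these invented values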
Methods
Data were drawn from a Royal College of Physicians and Surgeons of Canada certification exam. The Ebel method was applied to 203 multiple-choice questions by 49 raters. Facility indices came from 194 candidates. We computed the Fleiss kappa and the Pearson correlations between Ebel scores and item facility indices. We investigated differences in the Ebel score according to whether correct answers were provided or not and differences between internists and other specialists using the t-test.
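The analyses named above can be sketched in a few lines of Python on simulated data (Fleiss' kappa via statsmodels; the Pearson correlation and t-test via scipy). The rating categories, score ranges, and group sizes below are assumptions for illustration, not the study's data.

# Sketch of the agreement, correlation, and group-comparison analyses
# on simulated data (203 items, 49 raters, 3 difficulty categories).
import numpy as np
from scipy.stats import pearsonr, ttest_ind
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)

# Simulated difficulty ratings: rows = items, columns = raters,
# values = category index (0 = easy, 1 = medium, 2 = hard).
ratings = rng.integers(0, 3, size=(203, 49))

# Fleiss' kappa requires an items-by-categories count table.
table, _ = aggregate_raters(ratings)
print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))

# Pearson correlation between per-item Ebel scores and facility indices.
ebel_scores = rng.uniform(0.4, 0.9, size=203)
facility = rng.uniform(0.3, 1.0, size=203)
r, p = pearsonr(ebel_scores, facility)
print(f"Pearson r = {r:.2f} (P = {p:.3f})")

# t-test comparing mean Ebel scores of internists vs. other specialists.
internists = rng.uniform(0.5, 0.8, size=25)
others = rng.uniform(0.5, 0.8, size=24)
t, p = ttest_ind(internists, others)
print(f"t = {t:.2f} (P = {p:.3f})")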
Results
The Fleiss kappa was below 0.15 for both facility and relevance. The correlation between Ebel scores and facility indices was low when correct answers were provided and negligible when they were not. The Ebel score was the same whether the correct answers were provided or not. Inter-rater agreement and Ebel scores were not significantly different between internists and other specialists.
Conclusion
Inter-rater agreement and correlations between item Ebel scores and facility indices were consistently low; furthermore, raters’ knowledge of the correct answers and raters’ specialty had no effect on Ebel scores in the present setting.

Citations to this article as recorded by Crossref
  • Competency Standard Derivation for Point-of-Care Ultrasound Image Interpretation for Emergency Physicians
    Maya Harel-Sterling, Charisse Kwan, Jonathan Pirie, Mark Tessaro, Dennis D. Cho, Ailish Coblentz, Mohamad Halabi, Eyal Cohen, Lynne E. Nield, Martin Pusic, Kathy Boutis
    Annals of Emergency Medicine.2023; 81(4): 413.     CrossRef
  • The effects of a land-based home exercise program on surfing performance in recreational surfers
    Jerry-Thomas Monaco, Richard Boergers, Thomas Cappaert, Michael Miller, Jennifer Nelson, Meghan Schoenberger
    Journal of Sports Sciences.2023; 41(4): 358.     CrossRef
  • Medical specialty certification exams studied according to the Ottawa Quality Criteria: a systematic review
    Daniel Staudenmann, Noemi Waldner, Andrea Lörwald, Sören Huwendiek
    BMC Medical Education.2023;[Epub]     CrossRef
  • A Target Population Derived Method for Developing a Competency Standard in Radiograph Interpretation
    Michelle S. Lee, Martin V. Pusic, Mark Camp, Jennifer Stimec, Andrew Dixon, Benoit Carrière, Joshua E. Herman, Kathy Boutis
    Teaching and Learning in Medicine.2022; 34(2): 167.     CrossRef
  • Pediatric Musculoskeletal Radiographs: Anatomy and Fractures Prone to Diagnostic Error Among Emergency Physicians
    Winny Li, Jennifer Stimec, Mark Camp, Martin Pusic, Joshua Herman, Kathy Boutis
    The Journal of Emergency Medicine.2022; 62(4): 524.     CrossRef
  • Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study
    Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 33.     CrossRef
  • Image interpretation: Learning analytics–informed education opportunities
    Elana Thau, Manuela Perez, Martin V. Pusic, Martin Pecaric, David Rizzuti, Kathy Boutis
    AEM Education and Training.2021;[Epub]     CrossRef
Brief report
Potential of feedback during objective structured clinical examination to evoke an emotional response in medical students in Canada  
Dalia Limor Karol, Debra Pugh
J Educ Eval Health Prof. 2020;17:5.   Published online February 18, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.5
  • 6,562 Views
  • 164 Downloads
  • 3 Web of Science
  • 2 Crossref
Abstract
Feedback has been shown to be an important driver of learning. However, many factors, such as the emotional reactions feedback evokes, may influence its effect. This study aimed to explore medical students’ perspectives on the verbal feedback they receive during an objective structured clinical examination (OSCE), their emotional reactions to this feedback, and its impact on their subsequent performance. To do this, medical students enrolled at 4 Canadian medical schools were invited to complete a web-based survey about their experiences. One hundred fifty-eight participants completed the survey. Twenty-nine percent of respondents reported having experienced emotional reactions to verbal feedback received in an OSCE setting; the most common were embarrassment and anxiety. Some students (n=20) reported that the feedback they received negatively affected their subsequent OSCE performance. This study demonstrates that feedback provided during an OSCE can evoke an emotional response in students and potentially affect subsequent performance.

Citations to this article as recorded by Crossref
  • Memory, credibility and insight: How video-based feedback promotes deeper reflection and learning in objective structured clinical exams
    Alexandra Makrides, Peter Yeates
    Medical Teacher.2022; 44(6): 664.     CrossRef
  • Objective structured clinical examination in fundamentals of nursing and obstetric care as method of verification and assessing the degree of achievement of learning outcomes
    Lucyna Sochocka, Teresa Niechwiadowicz-Czapka, Mariola Wojtal, Monika Przestrzelska, Iwona Kiersnowska, Katarzyna Szwamel
    Pielegniarstwo XXI wieku / Nursing in the 21st Century.2021; 20(3): 190.     CrossRef
Brief reports
No observed effect of a student-led mock objective structured clinical examination on subsequent performance scores in medical students in Canada  
Lorenzo Madrazo, Claire Bo Lee, Meghan McConnell, Karima Khamisa, Debra Pugh
J Educ Eval Health Prof. 2019;16:14.   Published online May 27, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.14
  • 14,120 Views
  • 203 Downloads
  • 9 Web of Science
  • 8 Crossref
Abstract
Student-led, peer-assisted mock objective structured clinical examinations (MOSCEs) have been used in various settings to help students prepare for subsequent higher-stakes, faculty-run OSCEs. MOSCE participants have generally valued feedback from peers and reported benefits to learning. Our study investigated whether participation in a peer-assisted MOSCE affected subsequent OSCE performance. To determine whether mean OSCE scores differed depending on whether medical students participated in the MOSCE, we conducted a between-subjects analysis of variance, with cohort (2016 vs. 2017) and MOSCE participation (MOSCE vs. no MOSCE) as independent variables and the mean OSCE score as the dependent variable. Participation in the MOSCE had no influence on mean OSCE scores (P=0.19). There was a significant correlation between mean MOSCE scores and mean OSCE scores (Pearson r=0.52, P<0.001). Although previous studies have described self-reported benefits from participation in student-led MOSCEs, participation was not associated with objective benefits in this study.
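A minimal sketch of that between-subjects analysis of variance on mock scores; the statsmodels formula interface here is a stand-in for illustration, as the abstract does not say which software the authors used.

# Two-way between-subjects ANOVA on mock data: cohort and MOSCE
# participation as independent variables, mean OSCE score as dependent.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "cohort": ["2016"] * 4 + ["2017"] * 4,
    "mosce":  ["yes", "yes", "no", "no"] * 2,
    "osce":   [71.2, 69.8, 70.5, 68.9, 73.1, 72.4, 70.0, 71.7],
})

model = smf.ols("osce ~ C(cohort) * C(mosce)", data=df).fit()
print(anova_lm(model, typ=2))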

Citations to this article as recorded by Crossref
  • Impact of familiarity with the format of the exam on performance in the OSCE of undergraduate medical students – an interventional study
    Hannes Neuwirt, Iris E. Eder, Philipp Gauckler, Lena Horvath, Stefan Koeck, Maria Noflatscher, Benedikt Schaefer, Anja Simeon, Verena Petzer, Wolfgang M. Prodinger, Christoph Berendonk
    BMC Medical Education.2024;[Epub]     CrossRef
  • Perceived and actual value of Student‐led Objective Structured Clinical Examinations
    Brandon Stretton, Adam Montagu, Aline Kunnel, Jenni Louise, Nathan Behrendt, Joshua Kovoor, Stephen Bacchi, Josephine Thomas, Ellen Davies
    The Clinical Teacher.2024;[Epub]     CrossRef
  • Benefits of semiology taught using near-peer tutoring are sustainable
    Benjamin Gripay, Thomas André, Marie De Laval, Brice Peneau, Alexandre Secourgeon, Nicolas Lerolle, Cédric Annweiler, Grégoire Justeau, Laurent Connan, Ludovic Martin, Loïc Bière
    BMC Medical Education.2022;[Epub]     CrossRef
  • Identification des facteurs associés à la réussite aux examens cliniques objectifs et structurés dans la faculté de médecine de Rouen [Identification of factors associated with success in objective structured clinical examinations at the Rouen Faculty of Medicine]
    M. Leclercq, M. Vannier, Y. Benhamou, A. Liard, V. Gilard, I. Auquit-Auckbur, H. Levesque, L. Sibert, P. Schneider
    La Revue de Médecine Interne.2022; 43(5): 278.     CrossRef
  • Evaluation of the Experience of Peer-led Mock Objective Structured Practical Examination for First- and Second-year Medical Students
    Faisal Alsaif, Lamia Alkuwaiz, Mohammed Alhumud, Reem Idris, Lina Neel, Mansour Aljabry, Mona Soliman
    Advances in Medical Education and Practice.2022; Volume 13: 987.     CrossRef
  • The use of a formative OSCE to prepare emergency medicine residents for summative OSCEs: a mixed-methods cohort study
    Magdalene Hui Min Lee, Dong Haur Phua, Kenneth Wei Jian Heng
    International Journal of Emergency Medicine.2021;[Epub]     CrossRef
  • Tutor–Student Partnership in Practice OSCE to Enhance Medical Education
    Eve Cosker, Valentin Favier, Patrice Gallet, Francis Raphael, Emmanuelle Moussier, Louise Tyvaert, Marc Braun, Eva Feigerlova
    Medical Science Educator.2021; 31(6): 1803.     CrossRef
  • Peers as OSCE assessors for junior medical students – a review of routine use: a mixed methods study
    Simon Schwill, Johanna Fahrbach-Veeser, Andreas Moeltner, Christiane Eicher, Sonia Kurczyk, David Pfisterer, Joachim Szecsenyi, Svetla Loukanova
    BMC Medical Education.2020;[Epub]     CrossRef
MEDTalks: a student-driven program to enhance undergraduate student understanding and interest in medical schools in Canada  
Jayson Azzi, Dalia Karol, Tayler Bailey, Christopher Jerome Ramnanan
J Educ Eval Health Prof. 2019;16:13.   Published online May 22, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.13
  • 13,479 Views
  • 187 Downloads
  • 1 Web of Science
  • 1 Crossref
Abstract
Given the lack of programs geared toward educating undergraduate students about medical school, the purpose of this study was to evaluate whether a medical student–driven initiative, MEDTalks, enhanced undergraduate students’ understanding of medical school in Canada and stimulated their interest in pursuing medicine. The MEDTalks program, which ran between January and April 2018 at the University of Ottawa, consisted of 5 teaching sessions, each including large-group lectures, small-group case-based learning, physical skills tutorials, and anatomy lab demonstrations, to mimic a typical medical school curriculum. At the end of the program, undergraduate student learners were invited to complete a feedback questionnaire. Twenty-nine participants provided feedback, of whom 25 reported that MEDTalks gave them exposure to the University of Ottawa medical program; 27 said that it gave them a greater understanding of the teaching structure; and 25 responded that it increased their interest in attending medical school. The MEDTalks program successfully fostered a greater understanding of medical school among undergraduate students and helped stimulate their interest in pursuing medical studies.

Citations to this article as recorded by Crossref
  • Assessing the Impact of Early Undergraduate Exposure to the Medical School Curriculum
    Christiana M. Cornea, Gary Beck Dallaghan, Thomas Koonce
    Medical Science Educator.2022; 32(1): 103.     CrossRef
Technical reports
Calibrating the Medical Council of Canada’s Qualifying Examination Part I using an integrated item response theory framework: a comparison of models and designs  
Andre F. De Champlain, Andre-Philippe Boulais, Andrew Dallas
J Educ Eval Health Prof. 2016;13:6.   Published online January 20, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.6
  • 32,931 Views
  • 194 Downloads
  • 4 Web of Science
  • 4 Crossref
Abstract
Purpose
The aim of this research was to compare different methods of calibrating multiple choice question (MCQ) and clinical decision making (CDM) components for the Medical Council of Canada’s Qualifying Examination Part I (MCCQEI) based on item response theory.
Methods
Our data consisted of test results from 8,213 first-time applicants to the MCCQEI from the spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple-choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All 3 mixed-format calibrations (dichotomous MCQ responses and polytomous CDM case scores) were conducted using PARSCALE 4.
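The dichotomous calibrations rest on the two-parameter logistic (2-PL) item response function, P(correct) = 1 / (1 + exp(-a(theta - b))). The numpy sketch below illustrates that model together with a crude grid-search ability estimate; it is not BILOG-MG or PARSCALE, and all item parameters are invented.

# Minimal illustration of the 2-PL model underlying the dichotomous
# calibrations (invented parameters; not the MCCQEI item bank).
import numpy as np

def twopl_prob(theta, a, b):
    # P(correct) given ability theta, discrimination a, difficulty b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, responses, a, b):
    # Log-likelihood of a 0/1 response vector at ability theta.
    p = twopl_prob(theta, a, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])  # difficulties
responses = np.array([1, 1, 1, 0, 0])     # one candidate's answers

# Crude maximum-likelihood ability estimate over a theta grid.
grid = np.linspace(-4, 4, 801)
theta_hat = grid[np.argmax([log_likelihood(t, responses, a, b) for t in grid])]
print(f"Estimated ability: {theta_hat:.2f}")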
Results
The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499, or 0.02). In all 3 polytomous models, whether the MCQs were anchored or run concurrently with the CDM cases, the results suggested very poor fit. All IRT abilities estimated from the dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods, but also with respect to the actual decisions reported to candidates. The largest difference in pass rates was 4.78%, which occurred between the mixed-format concurrent 2-PL graded response model (pass rate=80.43%) and the dichotomous anchored 1-PL calibration (pass rate=85.21%).
Conclusion
Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit of the item response matrix than more complex, polytomous calibrations.

Citations to this article as recorded by Crossref
  • Plus ça change, plus c’est pareil: Making a continued case for the use of MCQs in medical education
    Debra Pugh, André De Champlain, Claire Touchie
    Medical Teacher.2019; 41(5): 569.     CrossRef
  • Identifying the Essential Portions of the Skill Acquisition Process Using Item Response Theory
    Saseem Poudel, Yusuke Watanabe, Yo Kurashima, Yoichi M. Ito, Yoshihiro Murakami, Kimitaka Tanaka, Hiroshi Kawase, Toshiaki Shichinohe, Satoshi Hirano
    Journal of Surgical Education.2019; 76(4): 1101.     CrossRef
  • FUZZY CLASSIFICATION OF DICHOTOMOUS TEST ITEMS AND SOCIAL INDICATORS DIFFERENTIATION PROPERTY
    Aleksandras Krylovas, Natalja Kosareva, Julija Karaliūnaitė
    Technological and Economic Development of Economy.2018; 24(4): 1755.     CrossRef
  • Analysis of the suitability of the Korean Federation of Science and Technology Societies journal evaluation tool
    Geum‐Hee Jeong, Sun Huh
    Learned Publishing.2016; 29(3): 193.     CrossRef
Best-fit model of exploratory and confirmatory factor analysis of the 2010 Medical Council of Canada Qualifying Examination Part I clinical decision-making cases  
André F. De Champlain
J Educ Eval Health Prof. 2015;12:11.   Published online April 15, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.11
  • 33,349 Views
  • 214 Downloads
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada Qualifying Examination Part I (MCCQE1) clinical decision-making (CDM) cases. The outcomes of this study have important implications for a range of domains, including scoring and test development.
Methods
The examinees included all first-time Canadian medical graduates and international medical graduates who took the MCCQE1 in spring or fall 2010. The fit of one- to five-factor exploratory models was assessed for the item response matrix of the 2010 CDM cases. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling software program Mplus was used for all analyses.
Results
Out of the five exploratory factor analytic models that were evaluated, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to pediatrics, and the third factor loaded on psychiatry cases. Among the five confirmatory factor analysis models examined in this study, the three- and four-factor lifespan period models and the five-factor discipline model provided the best fit.
Conclusion
The results suggest that knowledge of broad disciplinary domains best accounts for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using the criteria of discipline and age.
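A minimal sketch of comparing one- to five-factor exploratory models by goodness of fit, using scikit-learn's FactorAnalysis on simulated scores; the study itself used Mplus, and the data and dimensions below are invented for illustration.

# Compare exploratory factor models by mean log-likelihood on simulated
# scores: 500 examinees x 20 cases generated with a 3-factor structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
loadings = rng.normal(size=(20, 3))
X = rng.normal(size=(500, 3)) @ loadings.T + rng.normal(scale=0.5, size=(500, 20))

for k in range(1, 6):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(X)
    print(f"{k}-factor model: mean log-likelihood = {fa.score(X):.2f}")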

Citations to this article as recorded by Crossref
  • Exploratory Factor Analysis of a Computerized Case-Based F-Type Testlet Variant
    Yavuz Selim Kıyak, Işıl İrem Budakoğlu, Dilara Bakan Kalaycıoğlu, Özlem Coşkun
    Medical Science Educator.2023; 33(5): 1191.     CrossRef
  • The key-features approach to assess clinical decisions: validity evidence to date
    G. Bordage, G. Page
    Advances in Health Sciences Education.2018; 23(5): 1005.     CrossRef
