Background In the Iranian context, no 360-degree evaluation tool has been developed to assess the clinical performance of prehospital emergency medical students. This article describes the development of such a 360-degree evaluation tool and presents its first psychometric evaluation.
Methods This study comprised 2 steps: step 1 involved developing the instrument (i.e., generating the items), and step 2 involved its psychometric evaluation. We performed exploratory and confirmatory factor analyses and also evaluated the instrument's face, content, and convergent validity and its reliability.
Results The instrument contains 55 items across 6 domains: leadership, management, and teamwork (19 items); consciousness and responsiveness (14 items); clinical and interpersonal communication skills (8 items); integrity (7 items); knowledge and accountability (4 items); and loyalty and transparency (3 items). The instrument was confirmed to be a valid measure: all 6 domains had eigenvalues above Kaiser's criterion of 1 and together explained 60.1% of the variance (Bartlett's test of sphericity: χ²(1,485)=19,867.99, P<0.01). Furthermore, this study provided evidence for the instrument's convergent validity and internal consistency (Cronbach's α=0.98), supporting its suitability for assessing student performance.
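The internal-consistency statistic reported above (Cronbach's α) can be computed directly from rater-by-item data. The sketch below uses only the Python standard library; the rating matrix is invented purely for illustration and is not taken from the study.

```python
# Minimal sketch of a Cronbach's alpha calculation, assuming a small
# hypothetical matrix of 5-point ratings (rows = raters, columns = items).
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item ratings."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ratings from four raters on three items
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.975
```

Values near 1 indicate that the items behave as a coherent scale; an α of 0.98 over 55 items, as reported, reflects very high (arguably redundant) inter-item consistency.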
Conclusion We found good evidence for the validity and reliability of the instrument. Our instrument can be used to make future evaluations of student performance in the clinical setting more structured, transparent, informative, and comparable.
Purpose Traditional approaches to blueprint creation may focus on fine-grained detail at the expense of important foundational concepts. The purpose of this study was to develop a method for constructing an assessment blueprint to guide the creation of a new post-test for a two-day prehospital emergency medical services training program.
Methods To create the blueprint, we first determined the proportions of total classroom and home-study minutes associated with the lower- and higher-order cognitive objectives of each chapter of the textbook and of the two-day classroom activities during training courses conducted from January to April 2015. These proportions were then applied to a 50-question test structure to calculate the number of desired questions by chapter and content type.
Results Our blueprint called for the test to contain an almost even split of lower- and higher-order cognitive questions. One-best-answer multiple-choice items and extended-matching items were written to assess lower- and higher-order cognitive content, respectively.
Conclusion We report the first known application of an assessment blueprint to a prehospital professional development education program. Our approach to blueprint creation is computationally straightforward and could be easily adopted by a group of instructors with a basic understanding of lower- and higher-order cognitive constructs. By blueprinting at the chapter level, as we have done, item-writers should be more inclined to construct questions that focus on important central themes or procedures.
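The proportional-allocation step described above (mapping each chapter's share of instructional minutes onto a 50-question test) can be sketched as follows. The chapter names and minute totals are hypothetical, and the largest-remainder rounding rule is one reasonable way to make the counts sum exactly to 50, not necessarily the authors' exact procedure.

```python
# Sketch of blueprint allocation: weight each chapter's share of total
# instructional time against a fixed test length. Chapter data are invented.
def allocate_questions(chapter_minutes, total_questions=50):
    total = sum(chapter_minutes.values())
    raw = {ch: total_questions * m / total for ch, m in chapter_minutes.items()}
    # Round down, then hand out remaining questions to the chapters
    # with the largest fractional remainders so counts sum exactly.
    counts = {ch: int(q) for ch, q in raw.items()}
    remainder = total_questions - sum(counts.values())
    for ch in sorted(raw, key=lambda c: raw[c] - counts[c], reverse=True)[:remainder]:
        counts[ch] += 1
    return counts

minutes = {"Airway": 120, "Trauma": 180, "Cardiology": 90, "Pharmacology": 60}
print(allocate_questions(minutes))
# → {'Airway': 13, 'Trauma': 20, 'Cardiology': 10, 'Pharmacology': 7}
```

The same proportions could be applied separately within each chapter to split its question count between lower- and higher-order cognitive items, yielding the chapter-by-content-type grid the blueprint calls for.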
Citations to this article as recorded by CrossRef
Dieck-Assad G, González Peña OI, Rodríguez-Delgado JM. Evaluation of Emergency First Response's Competency in Undergraduate College Students: Enhancing Sustainable Medical Education in the Community for Work Occupational Safety. International Journal of Environmental Research and Public Health. 2021;18(15):7814.
Eweda G, Bukhary ZA, Hamed O. Quality assurance of test blueprinting. Journal of Professional Nursing. 2020;36(3):166.