Benefits of focus group discussions beyond online surveys in course evaluations by medical students in the United States: a qualitative study
Abstract
In addition to online questionnaires, many medical schools use supplemental evaluation tools such as focus groups to evaluate their courses. Although some benefits of using focus groups in program evaluation have been described, it is unknown whether these in-person data collection methods provide sufficient additional information beyond online evaluations to justify them. In this study, we analyzed recommendations gathered from student evaluation team (SET) focus group meetings and determined whether these items were captured in the open-ended comments of the online evaluations. Our results indicate that online evaluations captured only 49% of the recommendations identified via SETs. Surveys of course directors showed that 74% of the recommendations identified exclusively via the SETs were implemented within their courses. Our results indicate that SET meetings provided information not easily captured in online evaluations and that these recommendations resulted in actual course changes.
The evaluation of medical school courses requires a range of methods to gain a sufficiently comprehensive view of the program [1,2]. Most medical schools use quantitative methods in the form of closed-ended rating scales with 1 or 2 opportunities for open-ended comments [3]. These quantitative methods are simple in design, easy to administer, and useful for obtaining information from a large number of students. However, the scope of online evaluations is limited, and several studies have reported that students fill out these evaluations mindlessly [4]. As a consequence, some medical schools have implemented qualitative data collection methods such as focus groups to supplement online course evaluations and to ‘tell the story’ behind closed-ended rating scales [5,6]. Focus groups provide space for clarifying questions and allow a face-to-face dialogue between students and faculty. In addition, focus groups can encourage student interactions that reveal issues not addressed in online evaluations and promote discussion of practical solutions. However, the process of organizing, conducting, and analyzing data from focus groups requires significant resources, and it is unclear whether these in-person qualitative methods provide sufficient additional information beyond online evaluations to warrant investment in them. Furthermore, any evaluation system must be judged on whether the results collected actually lead to curricular changes.
The University of California, San Diego (UCSD) School of Medicine has implemented student evaluation team (SET) focus group meetings, in addition to online questionnaires, for the evaluation of preclerkship courses [5]. In this study, we analyzed the recommendations gathered in SET meetings and compared them to the information captured in the open-ended comments of online evaluations. We then determined whether recommendations from SET meetings resulted in actual course changes (Fig. 1).
SET meetings were scheduled after each of the preclerkship core courses. The course director, academic deans, and approximately 16 randomly selected students who had recently completed the course participated. The Assistant Dean for Educational Development and Evaluation (Doctor of Philosophy in Psychology and Master in Health Profession Education), who was not involved in the coursework, facilitated these meetings. In the meetings, students considered the course as a whole and commented on “what worked well in this course and what didn’t” [5].
Notes from 9 SET meetings for second-year medical student courses (academic year 2015–2016), taken by 2 second-year medical students (S.V.R. and A.C.), were analyzed. SET meetings were scheduled on 9/25/15 for course 1, 10/12/15 for course 2, 10/23/15 for course 3, 11/2/15 for course 4, 11/30/15 for course 5, 1/4/16 for course 6, 2/19/16 for course 7, 3/7/16 for course 8, and 3/18/16 for course 9, and lasted 1 hour each. Feedback that included potential solutions was identified using a grounded theory-based approach and coded into the following 7 categories: issues related to specific teaching modalities used in courses, the overall course content, specific lectures (content and organization), sequencing of course events, administrative course components, exams, and study materials.
Open-ended comments from online questionnaires were analyzed for the same 9 preclerkship courses for second-year medical students. In these online questionnaires, a 20-item Likert-style survey was followed by a request for comments related to the course. The survey was administered after the end of each course and 714 deidentified responses from second-year medical students were collected. The overall response rate of the online questionnaires was 66%. A total of 293 comments from the online questionnaires of the 9 preclerkship courses were analyzed. Online comments corresponding to SET meeting comments were identified.
During the following year (2016–2017), surveys were sent to each course director (n = 9) as their course began. These surveys asked course directors whether they had implemented the suggested changes in their courses, with a response of “yes,” “somewhat,” or “no” for each recommendation. For the quantitative analysis of course directors’ responses, a response of “yes” for a specific recommendation was scored as 100% implemented, a response of “somewhat” as 50% implemented, and a response of “no” as 0% implemented. Surveys were completed on 9/12/16 for course 1, 10/9/16 for course 2, 3/13/17 for course 3, 11/7/16 for course 4, 11/18/16 for course 5, 12/6/16 for course 6, 1/27/17 for course 7, 2/27/17 for course 8, and 2/27/17 for course 9. Raw data are available from Supplement 1.
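As a rough illustration of this scoring scheme, the sketch below converts a list of course director responses into a mean implementation percentage. This is a minimal sketch in Python; the function and variable names are our own illustrative choices, not part of the study’s analysis pipeline, and the example responses are placeholders.

```python
# Minimal sketch of the scoring scheme described above: "yes" = 100%,
# "somewhat" = 50%, "no" = 0% implemented. Names are illustrative only.
IMPLEMENTATION_WEIGHT = {"yes": 1.0, "somewhat": 0.5, "no": 0.0}

def implementation_rate(responses):
    """Return the mean implementation level (0-100%) for a list of
    course director responses ("yes", "somewhat", or "no")."""
    weights = [IMPLEMENTATION_WEIGHT[r.strip().lower()] for r in responses]
    return 100.0 * sum(weights) / len(weights)

# Example: one "yes", one "somewhat", and one "no" average to 50%.
print(implementation_rate(["yes", "somewhat", "no"]))  # 50.0
```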
Ethics statement
The UCSD Institutional Review Board designated this study as an EBP/QI/QA (evidence-based practice, quality improvement, and quality assurance) project and therefore did not require full review (IRB approval no., 151319QI).
Analysis of the SET meeting notes yielded 69 suggested course improvements that included potential solutions, which were coded into the 7 categories listed earlier (Table 1). Of the 69 issues identified via SETs, online evaluations captured 34 (49%). Specifically, SETs were superior in capturing feedback regarding specific teaching modalities used in courses (only 18% of these items appeared in online evaluations), problems related to the overall course content (25%), and lecture content and organization (25%). In contrast, online evaluations captured most of the deficiencies in study materials (80%), administrative course components (67%), exam-related problems (63%), and sequencing of course events (58%).
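For readers who wish to reproduce this kind of comparison, the sketch below tallies, per category, the fraction of SET-identified items that also appeared in the online comments. It assumes each coded item has already been manually matched against the online comments; the sample pairs are fabricated placeholders, not the study’s data.

```python
# Illustrative per-category capture-rate tally. The sample pairs below
# are made-up placeholders, not data from the study.
from collections import defaultdict

def capture_rates(items):
    """items: iterable of (category, captured_online) pairs, where
    captured_online is True if the SET item also appeared in the
    online open-ended comments. Returns {category: percent captured}."""
    totals = defaultdict(int)
    captured = defaultdict(int)
    for category, found_online in items:
        totals[category] += 1
        captured[category] += int(found_online)
    return {c: 100.0 * captured[c] / totals[c] for c in totals}

sample = [
    ("study materials", True), ("study materials", True),
    ("teaching modalities", True), ("teaching modalities", False),
]
print(capture_rates(sample))
# {'study materials': 100.0, 'teaching modalities': 50.0}
```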
Survey data from the course directors indicated that 74% of the recommendations captured exclusively in SETs (and not in online evaluations) translated into course changes (26 of 35). Table 1 lists all suggested improvements and indicates whether each item was implemented by the course director and whether it was captured in the open-ended comments from the online evaluations.
Evaluation is an integral part of medical education, and many tools are available to comprehensively characterize a program. One major purpose of collecting evaluations is to guide instructional improvement. Our analysis revealed that 74% of the SET-identified actionable items translated into course changes, implying that the focus groups served as a catalyst for discrete course adjustments. Studies have suggested that written comments may provide useful information that goes beyond the numerical ratings generated by closed-ended Likert-style questionnaires [7]. However, 2 major problems are associated with open-ended comments. First, interpreting students’ comments is not an easy task, and there is no opportunity to ask clarifying questions. Second, open-ended comments often lack specificity and contextual factors [7]. Implementing a focus group as part of the evaluation process addresses both shortcomings. SET meetings facilitate negotiation, listening, and responding. Recommendations suggested by students in these meetings are discussed with faculty and deans in a collaborative dialogue. Students can explain proposed solutions and thereby avoid confusion or misjudgments on the part of faculty. In contrast to online open-ended comments, SET evaluations were rich in specific suggestions for improvement and often included contextual factors. Most importantly, our results indicate that suggestions identified in the SET meetings met the gold standard for evaluation comments: they actually led to course changes. The ‘give-and-take’ among multiple stakeholders in a course can best facilitate this process.
No single evaluation tool will capture all potentially useful feedback. The choice of an evaluation model should not be a treasure hunt for the one perfect instrument; it should be viewed as an ‘all of the above’ approach rather than a ‘best single answer’ choice. Our data indicate that open-ended focus groups can provide rich, solution-based feedback, making them a worthwhile addition to the evaluation toolbox.
Notes
Authors’ contributions
Conceptualization: KB, SVR, AC, JM. Data acquisition: SVR, AC. Data analysis: KB, SVR, AC. Project administration: KB, JM. Writing–original draft: KB, JM. Writing–review & editing: KB, SVR, AC, JM.
Conflict of interest
No potential conflict of interest relevant to this article was reported.
Funding
None.
Acknowledgements
None.
Supplementary materials
Supplement 1. Data files are available from https://doi.org/10.7910/DVN/ZFKN5R.
Supplement 2. Audio recording of the abstract.