JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
Research article
Developing a framework for evaluating the impact of Healthcare Improvement Science Education across Europe: a qualitative study
Manuel Lillo-Crespo1, M. Cristina Sierras-Davó1*, Rhoda MacRae2, Kevin Rooney3

DOI: https://doi.org/10.3352/jeehp.2017.14.28
Published online: November 29, 2017

1Nursing Department, Faculty of Health Sciences, University of Alicante, Alicante, Spain

2Institute for Healthcare Policy and Practice, School of Health Nursing and Midwifery, The University of the West of Scotland, Hamilton, UK

3Department of Anaesthesia and Intensive Care Medicine, Royal Alexandra Hospital, Paisley, UK

*Corresponding email: mcristinasierras@gmail.com

Editor: Sun Huh, Hallym University, Korea

• Received: July 27, 2017   • Accepted: November 29, 2017

© 2017, Korea Health Personnel Licensing Examination Institute

This is an open-access article distributed under the terms of the Creative Commons Attribution License <http://creativecommons.org/licenses/by/4.0/>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  • Purpose
    Frontline healthcare professionals are well positioned to improve the systems in which they work. Educational curricula, however, have not always equipped healthcare professionals with the skills or knowledge to implement and evaluate improvements. A robust and standardized framework is needed to evaluate the impact of such education in terms of improvement, both within and across European countries. The results of such evaluations will enhance the further development and delivery of healthcare improvement science (HIS) education. We aimed to describe the development and piloting of a framework for prospectively evaluating the impact of HIS education and learning.
  • Methods
    The evaluation framework was designed collaboratively and piloted in 7 European countries using a qualitative, mixed-methods design, with data gathered from students and educators. The framework took the Kirkpatrick model of evaluation as its theoretical reference.
  • Results
    The framework was found to be feasible and acceptable for use across differing European higher education contexts according to the pilot study and the participants’ consensus. It can be used effectively to evaluate and develop HIS education across European higher education institutions.
  • Conclusion
    We offer a new evaluation framework to capture the impact of HIS education. The implementation of this tool has the potential to facilitate the continuous development of HIS education.
Introduction

Healthcare improvement science (HIS) constitutes a body of knowledge, as well as a strategic dimension aligned with the eHealth Action Plan 2012–2020, the roadmap for achieving smart and sustainable healthcare systems across Europe [1]. The impact and effectiveness of HIS education must therefore be characterized. For this reason, a team led by the Faculty of Health Sciences at the University of Alicante (Spain), a partner in the Improvement Science Training for European Healthcare Workers (ISTEW) European Commission project, began developing an evaluation framework designed to be used by higher education institutions [2-4]. This framework is expected to capture the impact of the HIS educational modules delivered by the ISTEW project. The aim of this paper was to outline the development process, the resultant framework, and its piloting; the results will enable continuous evaluation within and across all partner countries.
Methods

A qualitative, mixed-methods design was used, divided into 2 steps: the first step was the development of the framework, and the second was its pilot study in different European contexts. The first step comprised 2 elements: the gathering of a minimum data set (MDS) containing the main variables or items corresponding to the selected educational module, and a set of questionnaires designed for the different participants at each stage of the learning process (Fig. 1), which were unified through consensus by the ISTEW teams from different countries and contexts. Kirkpatrick’s 4-level training evaluation model was the conceptual reference used to develop the specific methodology [5,6].

Once the HIS evaluation framework was agreed upon by the ISTEW partnership network, composed of 7 teams, a pilot validation (the second step) using the case study method was undertaken across 7 European educational contexts (Scotland, England, Romania, Slovenia, Italy, Poland, and Spain) and was conducted and coordinated by the Spanish team. The total pilot sample was made up of 10 cases. Each case corresponded to a training program involving HIS in one of the contexts; all the selected programs were HIS-related or contained elements of HIS. Participants within each case were selected by convenience sampling and contacted through face-to-face meetings or email by each partner team. The pilot sample came from the following areas: nursing (n=4), medicine (n=3), and psychology (n=3). Participants’ demographics and HIS background were identified through a short set of questions on the first page of the questionnaire (Fig. 1), developed in the first step using the MDS method (Table 1). Data were initially collected on paper in a classroom setting and later remotely using Google Forms. A short introduction explained the research goals and objectives, as well as the interviewers’ relationship to the project.
Ethical approval
Informed consent was provided by the subjects. This study was approved by the Institutional Review Board of the University of Alicante, Spain (IRB Number: 539194-LLP-1-2013-1).
Results

Based on consensus, the 7 partner teams made the following decisions for developing the HIS framework (first step) and conducting a pilot content validation (second step).
Selecting a conceptual framework: the Kirkpatrick model
During the construction of the HIS evaluation framework, the partners agreed to use the 4 levels described in the Kirkpatrick model and added a fifth level to evaluate the return on investment. Without level 5, the Kirkpatrick model ignores the potential differences in training and training outcomes that may exist among key stakeholder groups (e.g., trainees, managers, and trainers) in organizations. Moreover, level 5 helps link the learning intervention with outcomes in context, in terms of the cost-efficiency potentially achieved through the HIS training programs undertaken by students.
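To make the resulting structure explicit, the 5 levels can be written as a simple enumeration. This is a minimal illustrative sketch, not part of the ISTEW tooling; the level names follow Table 1.

```python
from enum import Enum

class HISEvaluationLevel(Enum):
    """Levels of the extended Kirkpatrick model used by the HIS framework.

    Levels 1-4 are Kirkpatrick's original levels; level 5 (return on
    investment) is the extension agreed upon by the ISTEW partners.
    """
    REACTION = 1              # Level 1: trainees' immediate reaction
    LEARNING = 2              # Level 2: knowledge and skills acquired
    BEHAVIOR_TRANSFER = 3     # Level 3: behavior/training transfer to practice
    RESULTS = 4               # Level 4: organizational results
    RETURN_ON_INVESTMENT = 5  # Level 5: added by the ISTEW partnership
```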
Developing a minimum data set
According to some authors [7], the original Kirkpatrick model presents an oversimplified view of training effectiveness, because it does not consider individual or contextual influences on the evaluation of training. Characteristics of the organization, the work environment, and the individual trainee are therefore crucial input factors. To fill this gap, an MDS was developed to capture a set of information with uniform categories concerning a specific dimension [8]. The first page of each questionnaire was designed to capture the characteristics of the organization, environment, and student, thereby incorporating contextual information in light of the above limitation of the Kirkpatrick model [9].
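As an illustration of what one MDS record might look like, the sketch below models the contextual inputs described above. All field names are hypothetical assumptions made for this sketch; the actual MDS items are shown in Fig. 1.

```python
from dataclasses import dataclass

@dataclass
class MinimumDataSetRecord:
    """Hypothetical sketch of one MDS record (field names are illustrative).

    Captures the contextual inputs that the original Kirkpatrick model
    omits: the organization, the work environment, and the trainee.
    """
    organization: str         # e.g., higher education institution or hospital
    country: str              # educational context (one of the 7 pilot countries)
    work_environment: str     # practice setting of the trainee
    profession: str           # e.g., nursing, medicine, psychology
    prior_his_training: bool  # whether the respondent has an HIS background
```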
Developing the questionnaires
The HIS evaluation framework was designed as a set of anonymous, self-completed questionnaires, each beginning with an informed consent page. Each questionnaire contained open and closed questions, as well as Likert scales. Different questionnaires were developed to capture each level of the learning process, leading to the design of 5 different questionnaires for each key stakeholder group. Each respondent also had to create his or her own code, to be used at all levels. Overall, the framework was designed to capture the impact of the different stages of the HIS learning process, from level 1 (reaction) to level 5 (return on investment).
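The respondent-created code is what allows an anonymous participant’s answers to be followed across all 5 levels. The sketch below shows this linking step under the assumption, made only for illustration, that each response is stored as a (code, level, answers) record.

```python
from collections import defaultdict

def link_responses_by_code(responses):
    """Group anonymous questionnaire responses by respondent-created code.

    The self-generated code lets evaluators follow one respondent from
    level 1 (reaction) through level 5 (return on investment) without
    identifying that respondent.
    """
    by_respondent = defaultdict(dict)
    for code, level, answers in responses:
        by_respondent[code][level] = answers
    return by_respondent

# Hypothetical usage: two anonymous respondents across two levels.
responses = [
    ("A7X2", 1, {"satisfaction": 4}),   # level 1 Likert-scale item
    ("A7X2", 2, {"knowledge_gain": 5}),
    ("Q9B4", 1, {"satisfaction": 3}),
]
print(link_responses_by_code(responses)["A7X2"])
# {1: {'satisfaction': 4}, 2: {'knowledge_gain': 5}}
```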
The framework prospectively captured the outcomes and impacts of HIS education on learners, educators, and healthcare professionals, such as mentors or managers of the learners, in practice settings. Fig. 1 illustrates how the questionnaires were matched to participants.
Developing the healthcare improvement science evaluation framework and piloting process
Using the selected conceptual framework as a reference and following the methodological process explained above, the ISTEW team reached a consensus on the most appropriate levels for evaluating HIS learning, the questionnaires designed and piloted to capture them, and what constituted the HIS framework itself. The partner teams completed a pilot content validation of the agreed-upon HIS evaluation framework. The raw data are available in Supplement 1. The piloting process tested the content, understanding, and usability of both the MDS and the various questionnaires. The piloting process was iterative: successive drafts were produced and refined over time, resulting in a version that was acceptable, feasible, and suitable for use in all 7 countries. After each version, all partners shared ideas for improvement, and students’ comments were taken into account. Some parts of the questionnaires developed during the framework construction are shown in Figs. 2 and 3. The piloting process revealed that the more succinct the questions and questionnaires were, the easier it was for participants from the 7 different countries and educational contexts to complete them. Over time, the questions became shorter and the questionnaires less complex, with more signposting and explanatory text to promote completion.
Discussion

The framework, composed of the HIS levels and the corresponding evaluation questionnaires for each level, provides a standardized design that overcomes some of the limitations discussed by other authors regarding the design and delivery of interventions, since it was piloted across a multicultural European program that took different educational contexts into account. A previous study suggested using a combination of qualitative and quantitative methods to determine how context-level factors might modify the effectiveness of an intervention [10]. Accordingly, the development of the HIS evaluation framework drew on participants’ qualitative and quantitative data, obtained through the MDS and the different questionnaires, as well as on the discussions and inter-organizational networks among the 7 ISTEW partner teams. This mixed-methods approach, combining the MDS with a modified Kirkpatrick evaluation, enabled us to measure the effectiveness of training in various contexts in light of current challenges.
Although there is no doubt that the Kirkpatrick model has made valuable contributions to the theory and practice of evaluating training, the members of the ISTEW partnership were aware of the limitations of the model, which have implications for the ability of training evaluators to deliver benefits and to further the interests of organizational clients. The limitations highlighted by some authors [9] include “the incompleteness of the model, the assumption of causality, and the assumption of increasing importance of information as the levels of outcomes are ascended.” The ISTEW partnership therefore aimed to adapt and extend the Kirkpatrick model so that it could overcome those limitations.
Some of the limitations that other authors have associated with the 4 stages of the Kirkpatrick model [11] were also discussed by the ISTEW partners during the piloting process. Consequently, the partners designed individual questionnaires for the different participants so that they could answer anonymously, with the aim of reducing reticence or concerns about participation. This also gave the evaluators the opportunity to provide additional support for learners who felt that their objectives had not been met.
Kirkpatrick and Kirkpatrick [9] questioned how evaluators can control for other factors that may affect the impact of a training intervention; in other words, how can we be sure that the module selected is precisely the training intervention needed? Following the recommendation of Øvretveit [12] to address this by using a tool that measures the abilities conferred by the modules both before and after the training, the HIS evaluation framework included both quantitative and qualitative methods.
An opportunity for future work would be to test the tool on the newly developed HIS modules, which could not be used here because of a delay in their integration into educational practice; instead, the pilot sample involved programs or modules already in use that contained elements of HIS. A further limitation is that the HIS evaluation framework and the questionnaires were developed in English, so the pilot sample relied on participants who could read and understand English. Moreover, the pilot sample came mostly from nursing (n=4), medicine (n=3), and psychology (n=3), and it would be useful to test the questionnaire among a wider range of professional groups. In addition, although the MDS on the initial page of the survey was intended to capture contextual and cultural data, a considerable amount of valuable qualitative information is likely still missing [13]. This issue is being considered for future versions of the tool.
Finally, it was not possible to pilot level 5 (return on the HIS education investment) due to the time limitations of the project and the fact that the pilot modules were not fully developed at the time of the pilot.
The framework was implemented in the First and Second Summer Programs on Healthcare Improvement Science held by the University of Alicante in collaboration with the University of the West of Scotland in July 2016 and July 2017. Since this course involved specific education in HIS, it was used as part of the piloting of the evaluation framework. Its results will be used prospectively to keep improving the framework itself once sufficient data have been collected, across several editions, from students from different fields and cultures.
The evaluation framework has the potential to effectively identify strengths, weaknesses, and gaps in HIS education across Europe, as well as return on investment.
Investing in a better-educated professional staff regarding the scope of HIS could improve the quality of patient care [14] by building bridges between theory and practice and contributing to the development of an improvement culture in healthcare contexts.

Authors’ contributions

Conceptualization: ML, KR, RM. Data curation: MC, ML. Methodology: ML, KR, RM. Project administration: ML. Visualization: MC. Writing–original draft: ML, MC. Writing–review & editing: ML, MC, RM, KR.

Conflict of interest

No potential conflict of interest relevant to this article was reported.

Funding

This work was supported by the European Union ERASMUS Lifelong Learning Project, ISTEW (Improvement Science Training for European Healthcare Workers) (Project No. 539194-LLP-1-2013-1-UK-ERASMUS-EQR).

Supplement 1. Final report: Improvement Science Training for European Healthcare Workers (ISTEW) Workpackage Nº 10 Development of ISTEW evaluation framework.
jeehp-14-28-suppl.pdf
Supplement 2. Audio recording of the abstract
jeehp-14-28-abstract-recording.avi
Fig. 1.
Example of the minimum data set that was developed.
jeehp-14-28f1.gif
Fig. 2.
Development of the HIS evaluation framework. HIS, healthcare improvement science; MDS, minimum data set.
jeehp-14-28f2.gif
Fig. 3.
Examples of the online healthcare improvement science learning evaluation framework, level 1 (A) and level 5 (B).
jeehp-14-28f3.gif
Table 1.
Healthcare improvement science evaluation framework levels according to participants’ roles

Level                                 Student   Educator   Manager/tutor   Manager/professional
Level 1. Reaction                     V         V
Level 2. Learning                     V         V
Level 3. Behavior/training transfer   V         V
Level 4. Results                      V         V          V               V
Level 5. Return on investment         V         V          V               V
  • 1. Zander B, Aiken LH, Busse R, Rafferty AM, Sermeus W, Bruynell L. The state of nursing in the European Union. EuroHealth 2016;22(1):4-8.
  • 2. MacRae R, Rooney KD, Taylor A, Ritters K, Sansoni J, Lillo Crespo M, Skela-Savic B, O’Donnell B. Making it easy to do the right thing in healthcare: advancing improvement science education through accredited pan European higher education modules. Nurse Educ Today 2016;42:41-46. https://doi.org/10.1016/j.nedt.2016.03.023
  • 3. Skela-Savic B, Macrae R, Lillo-Crespo M, Rooney KD. The development of a consensus definition for healthcare improvement science (HIS) in seven European countries: a consensus methods approach. Zdr Varst 2017;56:82-90. https://doi.org/10.1515/sjph-2017-0011
  • 4. University of the West of Scotland. Improvement Science Training for European Healthcare Workers (ISTEW) Project [Internet]. Paisley: University of the West of Scotland; [cited 2017 Nov 2]. Available from: https://www.uws.ac.uk/research/
  • 5. Swensen SJ, Shanafelt T. An organizational framework to reduce professional burnout and bring back joy in practice. Jt Comm J Qual Patient Saf 2017;43:308-313. https://doi.org/10.1016/j.jcjq.2017.01.007
  • 6. Reeves S, Clark E, Lawton S, Ream M, Ross F. Examining the nature of interprofessional interventions designed to promote patient safety: a narrative review. Int J Qual Health Care 2017;1-7. https://doi.org/10.1093/intqhc/mzx008
  • 7. Rouse DN. Employing Kirkpatrick’s evaluation framework to determine the effectiveness of health information management courses and programs. Perspect Health Inf Manag 2011;8:1c.
  • 8. Armstrong L, Lauder W, Shepherd A. An evaluation of methods used to teach quality improvement to undergraduate healthcare students to inform curriculum development within preregistration nurse education: a protocol for systematic review and narrative synthesis. Syst Rev 2015;4:8. https://doi.org/10.1186/2046-4053-4-8
  • 9. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: the four levels. San Francisco (CA): Berrett-Koehler; 2012.
  • 10. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M; Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655. https://doi.org/10.1136/bmj.a1655
  • 11. Larkins SL, Preston R, Matte MC, Lindemann IC, Samson R, Tandinco FD, Buso D, Ross SJ, Palsdottir B, Neusy AJ; Training for Health Equity Network (THEnet). Measuring social accountability in health professional education: development and international pilot testing of an evaluation framework. Med Teach 2013;35:32-45. https://doi.org/10.3109/0142159X.2012.731106
  • 12. Øvretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf 2011;20 Suppl 1:i18-i23. https://doi.org/10.1136/bmjqs.2010.045955
  • 13. Tackett S, Grant J, Mmari K. Designing an evaluation framework for WFME basic standards for medical education. Med Teach 2016;38:291-296. https://doi.org/10.3109/0142159X.2015.1031737
  • 14. White M, Wells J, Butterworth T. Leadership, a key element of quality improvement in healthcare: results from a literature review of “lean healthcare” and the productive ward: releasing time to care initiative. Int J Leadersh Public Serv 2013;9(3):90-108. https://doi.org/10.1108/IJLPS-08-2013-0021



