JEEHP : Journal of Educational Evaluation for Health Professions

29 "Evaluation"
Research articles
Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study  
Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh
J Educ Eval Health Prof. 2025;22:6.   Published online January 20, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.6
  • 1,172 View
  • 175 Download
Purpose
This study evaluated the Dr LEE Jong-wook Fellowship Program’s impact on Tanzania’s health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.
Methods
A mixed-methods research design was employed. Data were collected from 97 out of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman’s rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis was employed to analyze qualitative data using NVivo ver. 12.0.
Results
Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability had a lower score (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers, such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.
Conclusion
The Fellowship Program enhanced Tanzania’s health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.
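For readers who want to reproduce this kind of analysis, the sketch below shows how a Spearman rank correlation and the accompanying descriptive statistics might be computed in Python, the language named in the Methods. The column names and values are hypothetical illustrations, not the study's data.

```python
# Minimal sketch of the quantitative analysis described above, on made-up data.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-respondent scores (0-100) for two evaluation criteria
scores = pd.DataFrame({
    "effectiveness": [86, 92, 78, 88, 95, 70, 84],
    "impact":        [88, 90, 75, 91, 93, 72, 85],
})

rho, p_value = spearmanr(scores["effectiveness"], scores["impact"])
print(f"Spearman rho = {rho:.3f}, P = {p_value:.4f}")

# Descriptive statistics comparable to the means/SDs reported in the Results
print(scores.agg(["mean", "std"]).round(1))
```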
Reliability and construct validation of the Blended Learning Usability Evaluation–Questionnaire with interprofessional clinicians in Canada: a methodological study  
Anish Kumar Arora, Jeff Myers, Tavis Apramian, Kulamakan Kulasegaram, Daryl Bainbridge, Hsien Seow
J Educ Eval Health Prof. 2025;22:5.   Published online January 16, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.5
  • 796 View
  • 146 Download
Purpose
To generate Cronbach’s alpha and further mixed methods construct validity evidence for the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q).
Methods
Forty interprofessional clinicians completed the BLUE-Q after finishing a 3-month-long blended learning professional development program in Ontario, Canada. Reliability was assessed with Cronbach’s α for each of the 3 sections of the BLUE-Q and for all quantitative items together. Construct validity was evaluated through the Grand-Guillaume-Perrenoud et al. framework, which consists of 3 elements: congruence, convergence, and credibility. To compare quantitative and qualitative results, descriptive statistics, including means and standard deviations for each Likert-scale item of the BLUE-Q, were calculated.
Results
Cronbach’s α was 0.95 for the pedagogical usability section, 0.85 for the synchronous modality section, 0.93 for the asynchronous modality section, and 0.96 for all quantitative items together. Mean ratings (with standard deviations) were 4.77 (0.506) for pedagogy, 4.64 (0.654) for synchronous learning, and 4.75 (0.536) for asynchronous learning. Of the 239 qualitative comments received, 178 were identified as substantive, of which 88% were considered congruent and 79% were considered convergent with the high means. Among all congruent responses, 69% were considered confirming statements and 31% were considered clarifying statements, suggesting appropriate credibility. Analysis of the clarifying statements assisted in identifying 5 categories of suggestions for program improvement.
Conclusion
The BLUE-Q demonstrates high reliability and appropriate construct validity in the context of a blended learning program with interprofessional clinicians, making it a valuable tool for comprehensive program evaluation, quality improvement, and evaluative research in health professions education.
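For readers unfamiliar with the reliability statistic reported here, the following is a minimal Python sketch of Cronbach's alpha for one questionnaire section, computed with the standard variance-based formula on hypothetical Likert responses; it is not the authors' analysis code.

```python
# Cronbach's alpha for one questionnaire section, on hypothetical data.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: rows are respondents, columns are the Likert items of one section."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses from 6 clinicians to 4 items
section = pd.DataFrame({
    "item1": [5, 4, 5, 5, 4, 5],
    "item2": [5, 4, 4, 5, 4, 5],
    "item3": [4, 4, 5, 5, 3, 5],
    "item4": [5, 5, 5, 4, 4, 5],
})

print(f"Cronbach's alpha = {cronbach_alpha(section):.2f}")
print(f"section mean = {section.values.mean():.2f}, "
      f"SD = {section.values.std(ddof=1):.2f}")
```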
Validation of the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) through an innovative Bayesian questionnaire validation approach  
Anish Kumar Arora, Charo Rodriguez, Tamara Carver, Hao Zhang, Tibor Schuster
J Educ Eval Health Prof. 2024;21:31.   Published online November 7, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.31
  • 1,093 View
  • 194 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
The primary aim of this study is to validate the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) for use in the field of health professions education through a Bayesian approach. As Bayesian questionnaire validation remains elusive, a secondary aim of this article is to serve as a simplified tutorial for engaging in such validation practices in health professions education.
Methods
A total of 10 health education-based experts in blended learning were recruited to participate in a 30-minute interviewer-administered survey. On a 5-point Likert scale, experts rated how well they perceived each item of the BLUE-Q to reflect its underlying usability domain (i.e., effectiveness, efficiency, satisfaction, accessibility, organization, and learner experience). Ratings were descriptively analyzed and converted into beta prior distributions. Participants were also given the option to provide qualitative comments for each item.
Results
After the computed expert prior distributions were reviewed, 31 quantitative items were identified as having a probability of “low endorsement” and were thus removed from the questionnaire. Additionally, qualitative comments were used to revise the phrasing and order of items to ensure clarity and logical flow. The BLUE-Q’s final version comprises 23 Likert-scale items and 6 open-ended items.
Conclusion
Questionnaire validation can generally be a complex, time-consuming, and costly process, inhibiting many from engaging in proper validation practices. In this study, we demonstrate that a Bayesian questionnaire validation approach can be a simple, resource-efficient, yet rigorous solution to validating a tool for content and item-domain correlation through the elicitation of domain expert endorsement ratings.
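The abstract does not specify how ratings were converted into beta priors, so the sketch below is only one plausible illustration in Python: it dichotomizes hypothetical expert ratings into endorsements, forms a Beta prior from the counts, and evaluates the probability of a low endorsement rate. The dichotomization rule and the 0.7 cut-off are assumptions made for the example, not the authors' procedure.

```python
# Illustrative Beta-prior elicitation for one questionnaire item.
from scipy.stats import beta

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]   # hypothetical ratings from 10 experts

# Count ratings of 4 or 5 as endorsements and form a Beta(1 + e, 1 + n - e) prior
endorsed = sum(r >= 4 for r in ratings)
prior = beta(1 + endorsed, 1 + len(ratings) - endorsed)

# Probability that the underlying endorsement rate falls below an assumed 0.7 cut-off
p_low = prior.cdf(0.7)
print(f"P(endorsement rate < 0.7) = {p_low:.2f}")
# An item with a high probability of low endorsement would be flagged for removal.
```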

Citations

Citations to this article as recorded by  
  • Reliability and construct validation of the Blended Learning Usability Evaluation–Questionnaire with interprofessional clinicians in Canada : a methodological study
    Anish Kumar Arora, Jeff Myers, Tavis Apramian, Kulamakan Kulasegaram, Daryl Bainbridge, Hsien Seow
    Journal of Educational Evaluation for Health Professions.2025; 22: 5.     CrossRef
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study  
Minkyung Oh, Bo Young Yoon
J Educ Eval Health Prof. 2024;21:27.   Published online October 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.27
  • 1,112 View
  • 247 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, context/input/process/product evaluation model, Organization for Economic Cooperation and Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
Results
The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.
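The two quantitative steps mentioned above, the content validity ratio (CVR) from the Delphi rounds and the consistency ratio (CR) from the AHP, can be illustrated with a short Python sketch. The panel votes, the pairwise comparison matrix, and the random-index values are hypothetical or textbook figures, not the study's data.

```python
# CVR from a Delphi round and AHP consistency check, on hypothetical inputs.
import numpy as np

# --- CVR: 18 panelists, 15 of whom rate an indicator as "essential" ---
N, n_essential = 18, 15
cvr = (n_essential - N / 2) / (N / 2)
print(f"CVR = {cvr:.3f} (retained when above the critical value, here 0.444)")

# --- AHP consistency ratio for a 4x4 pairwise comparison matrix ---
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   2,   1/2],
    [1/5, 1/2, 1,   1/3],
    [1/2, 2,   3,   1  ],
])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
idx = eigvals.real.argmax()
lambda_max = eigvals.real[idx]          # principal eigenvalue
ci = (lambda_max - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
cr = ci / ri
print(f"AHP consistency ratio CR = {cr:.3f} (acceptable when below 0.1)")

# Criterion weights: normalized principal eigenvector
w = eigvecs[:, idx].real
print("weights:", np.round(w / w.sum(), 3))
```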

Citations

Citations to this article as recorded by  
  • Halted medical education and medical residents’ training in Korea, journal metrics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2025; 22: 1.     CrossRef
Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study  
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
J Educ Eval Health Prof. 2024;21:8.   Published online April 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.8
  • 2,172 View
  • 326 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
This study aimed to identify challenges and potential improvements in Korea's medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). A meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.
Methods
A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance, and the chi-square test were used to compare opinions on medical education accreditation among these categories of participants.
Results
Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. Administrative staff were most optimistic about implementing standards. However, the accreditation process encountered challenges, especially in duplicating content and preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.
Conclusion
This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.
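As a companion to the Methods above, the sketch below illustrates the kind of group comparisons described (t-test, one-way ANOVA, and chi-square test) using scipy on hypothetical stakeholder ratings; it is not the authors' analysis.

```python
# Comparing opinions across stakeholder groups, on made-up ratings and counts.
from scipy.stats import f_oneway, ttest_ind, chi2_contingency

# Hypothetical "necessity of accreditation" ratings by stakeholder group
self_assessment = [4, 5, 4, 3, 4, 5]
site_visit      = [5, 5, 4, 5, 5, 4]
admin_staff     = [4, 4, 5, 4, 3, 4]
professors      = [3, 4, 3, 4, 3, 4]

f_stat, p_anova = f_oneway(self_assessment, site_visit, admin_staff, professors)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.3f}")

t_stat, p_t = ttest_ind(site_visit, professors)
print(f"t-test (site visit vs professors): t = {t_stat:.2f}, P = {p_t:.3f}")

# Hypothetical agree/disagree counts per group for a yes/no survey item
table = [[20, 5],    # self-assessment committee
         [18, 2],    # site visit committee
         [15, 10],   # administrative staff
         [30, 25]]   # professors
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, P = {p_chi:.3f}")
```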

Citations

Citations to this article as recorded by  
  • The new placement of 2,000 entrants at Korean medical schools in 2025: is the government’s policy evidence-based?
    Sun Huh
    The Ewha Medical Journal.2024;[Epub]     CrossRef
Effect of an interprofessional simulation program on patient safety competencies of healthcare professionals in Switzerland: a before and after study  
Sylvain Boloré, Thomas Fassier, Nicolas Guirimand
J Educ Eval Health Prof. 2023;20:25.   Published online August 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.25
  • 3,194 View
  • 229 Download
  • 2 Web of Science
  • 2 Crossref
Purpose
This study aimed to identify the effects of a 12-week interprofessional simulation program, operated between February 2020 and January 2021, on the patient safety competencies of healthcare professionals in Switzerland.
Methods
The simulation training was based on 2 scenarios of hospitalized patients with septic shock and respiratory failure, and trainees were expected to demonstrate patient safety competencies. A single-group before-and-after study of the simulation program was conducted, using the Health Professional Education in Patient Safety Survey to measure the perceived competencies of physicians, nurses, and nursing assistants. Of the 57 participants, 37 answered the questionnaire at all 4 time points: 48 hours before the training and 24 hours, 6 weeks, and 12 weeks after the training. A linear mixed-effects model was applied for the analysis.
Results
Four of the 6 perceived patient safety competencies improved at 6 weeks but returned to near pre-training levels at 12 weeks. Improvements in “communicating effectively,” “managing safety risks,” “understanding human and environmental factors that influence patient safety,” and “recognizing and responding to remove immediate risks of harm” were statistically significant both overall and in the comparison between baseline and 6 weeks after the training.
Conclusion
Interprofessional simulation programs contributed to developing some areas of patient safety competencies of healthcare professionals, but only for a limited time. Interprofessional simulation programs should be repeated and combined with other forms of support, including case discussions and debriefings, to ensure lasting effects.
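The linear mixed-effects analysis mentioned in the Methods can be approximated with statsmodels, as in the sketch below: repeated survey scores at 4 time points with a random intercept per participant. The simulated data and effect sizes are purely illustrative, not the study's data.

```python
# Random-intercept mixed model for repeated survey scores, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
effect = {"pre": 0.0, "24h": 0.5, "6w": 0.6, "12w": 0.1}   # assumed time effects
rows = []
for pid in range(1, 38):                     # 37 hypothetical respondents
    base = rng.normal(3.0, 0.3)              # participant-level intercept
    for t, shift in effect.items():
        rows.append({"participant": pid, "time": t,
                     "score": base + shift + rng.normal(0.0, 0.2)})
data = pd.DataFrame(rows)

model = smf.mixedlm("score ~ C(time, Treatment(reference='pre'))",
                    data, groups=data["participant"])
print(model.fit().summary())
```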

Citations

Citations to this article as recorded by  
  • Midwifery Students’ and Obstetricians’ Perception of Training in Non-Technical Skills
    Coralie Fregonese, Paul Guerby, Gilles Vallade, Régis Fuzier
    Simulation & Gaming.2025;[Epub]     CrossRef
  • Interprofessional education interventions for healthcare professionals to improve patient safety: a scoping review
    Yan Jiang, Yan Cai, Xue Zhang, Cong Wang
    Medical Education Online.2024;[Epub]     CrossRef
Case report
Successful pilot application of multi-attribute utility analysis concepts in evaluating academic-clinical partnerships in the United States: a case report  
Sara Elizabeth North, Amanda Nicole Sharp
J Educ Eval Health Prof. 2022;19:18.   Published online August 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.18
  • 2,496 View
  • 150 Download
  • 2 Web of Science
  • 3 Crossref
Strong partnerships between academic health professions programs and clinical practice settings, termed academic-clinical partnerships, are essential in providing quality clinical training experiences. However, the literature does not operationalize a model by which an academic program may identify priority attributes and evaluate its partnerships. This study aimed to develop a values-based academic-clinical partnership evaluation approach, rooted in methodologies from the field of evaluation and implemented in the context of an academic Doctor of Physical Therapy clinical education program. The authors developed a semi-quantitative evaluation approach incorporating concepts from multi-attribute utility analysis (MAUA) that enabled consistent, values-based partnership evaluation. Data-informed actions led to improved overall partnership effectiveness. Pilot outcomes support the feasibility and desirability of moving toward MAUA as a potential methodological framework. Further research may lead to the development of a standardized process for any academic health profession program to perform a values-based evaluation of their academic-clinical partnerships to guide decision-making.

Citations

Citations to this article as recorded by  
  • Application of Multi-Attribute Utility Analysis as a Methodological Framework in Academic–Clinical Partnership Evaluation
    Sara E. North
    American Journal of Evaluation.2024; 45(4): 562.     CrossRef
  • Multi-attribute monitoring applications in biopharmaceutical analysis
    Anurag S. Rathore, Deepika Sarin, Sanghati Bhattacharya, Sunil Kumar
    Journal of Chromatography Open.2024; 6: 100166.     CrossRef
  • Advancing Value-Based Academic–Clinical Partnership Evaluation in Physical Therapy Education: Multiattribute Utility Analysis as a Contextual Methodological Approach
    Sara North
    Journal of Physical Therapy Education.2024;[Epub]     CrossRef
Reviews
Is accreditation in medical education in Korea an opportunity or a burden?  
Hanna Jung, Woo Taek Jeon, Shinki An
J Educ Eval Health Prof. 2020;17:31.   Published online October 21, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.31
  • 6,508 View
  • 155 Download
  • 8 Web of Science
  • 12 Crossref
The accreditation process is both an opportunity and a burden for medical schools in Korea. The line that separates the two is based on how medical schools recognize and utilize the accreditation process. In other words, accreditation is a burden for medical schools if they view the accreditation process as merely a formal procedure or a means to maintain accreditation status for medical education. However, if medical schools acknowledge the positive value of the accreditation process, accreditation can be both an opportunity and a tool for developing medical education. The accreditation process has educational value by catalyzing improvements in the quality, equity, and efficiency of medical education and by increasing the available options. For the accreditation process to contribute to medical education development, accrediting agencies and medical schools must first be recognized as partners of an educational alliance working together towards common goals. Secondly, clear guidelines on accreditation standards should be periodically reviewed and shared. Finally, a formative self-evaluation process must be introduced for institutions to utilize the accreditation process as an opportunity to develop medical education. This evaluation system could be developed through collaboration among medical schools, academic societies for medical education, and the accrediting authority.

Citations

Citations to this article as recorded by  
  • Equity in Basic Medical Education accreditation standards: a scoping review protocol
    Neelofar Shaheen, Usman Mahboob, Ahsan Sethi, Muhammad Irfan
    BMJ Open.2025; 15(1): e086661.     CrossRef
  • Making Medical Education Socially Accountable in Australia and Southeast Asia: A Systematic Review
    Jyotsna Rimal, Ashish Shrestha, Elizabeth Cardell, Stephen Billett, Alfred King-yin Lam
    Medical Science Educator.2025;[Epub]     CrossRef
  • Impacts of the Accreditation Process for Undergraduate Medical Schools: A Scoping Review
    Leticia C. Girotto, Karynne B. Machado, Roberta F. C. Moreira, Milton A. Martins, Patrícia Z. Tempski
    The Clinical Teacher.2025;[Epub]     CrossRef
  • To prove or improve? Examining how paradoxical tensions shape evaluation practices in accreditation contexts
    Betty Onyura, Abigail J. Fisher, Qian Wu, Shrutikaa Rajkumar, Sarick Chapagain, Judith Nassuna, David Rojas, Latika Nirula
    Medical Education.2024; 58(3): 354.     CrossRef
  • ASPIRE for excellence in curriculum development
    John Jenkins, Sharon Peters, Peter McCrorie
    Medical Teacher.2024; 46(5): 633.     CrossRef
  • Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
    Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
    Journal of Educational Evaluation for Health Professions.2024; 21: 8.     CrossRef
  • Accreditation standards items of post-2nd cycle related to the decision of accreditation of medical schools by the Korean Institute of Medical Education and Evaluation
    Kwi Hwa Park, Geon Ho Lee, Su Jin Chae, Seong Yong Kim
    Korean Journal of Medical Education.2023; 35(1): 1.     CrossRef
  • The Need for the Standards for Anatomy Labs in Medical School Evaluation and Accreditation
    Yu-Ran Heo, Jae-Ho Lee
    Anatomy & Biological Anthropology.2023; 36(3): 81.     CrossRef
  • Seal of Approval or Ticket to Triumph? The Impact of Accreditation on Medical Student Performance in Foreign Medical Council Examinations
    Saurabh RamBihariLal Shrivastava, Titi Savitri Prihatiningsih, Kresna Lintang Pratidina
    Indian Journal of Medical Specialities.2023; 14(4): 249.     CrossRef
  • Internal evaluation in the faculties affiliated to zanjan university of medical sciences: Quality assurance of medical science education based on institutional accreditation
    Alireza Abdanipour, Farhad Ramezani‐Badr, Ali Norouzi, Mehdi Ghaemi
    Journal of Medical Education Development.2022; 15(46): 61.     CrossRef
  • Development of Mission and Vision of College of Korean Medicine Using the Delphi Techniques and Big-Data Analysis
    Sanghee Yeo, Seong Hun Choi, Su Jin Chae
    Journal of Korean Medicine.2021; 42(4): 176.     CrossRef
  • Special reviews on the history and future of the Korean Institute of Medical Education and Evaluation to memorialize its collaboration with the Korea Health Personnel Licensing Examination Institute to designate JEEHP as a co-official journal
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 33.     CrossRef
How to execute Context, Input, Process, and Product evaluation model in medical health education  
So young Lee, Jwa-Seop Shin, Seung-Hee Lee
J Educ Eval Health Prof. 2019;16:40.   Published online December 28, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.40
  • 16,214 View
  • 703 Download
  • 14 Web of Science
  • 17 Crossref
Education must be continually improved to keep pace with today’s requirements. The Context, Input, Process, and Product (CIPP) evaluation model was designed to support decision-making for educational improvement, making it well suited to this purpose. However, applying the model in the actual context of medical health education is often considered difficult. This study therefore reviewed previous studies to set out an execution procedure showing how the CIPP model can be applied in practice. In this procedure, the criteria and indicators are first determined from the analysis results, a data collection method is set, and the material is collected. The collected material is then analyzed for each CIPP element, and finally the relationships among the CIPP elements are analyzed to inform the final improvement decision. Following these steps, the methods employed in previous studies were organized. In particular, determining the criteria and indicators was important and required considerable effort: a content analysis of the literature identified the 12 most widely used criteria. Careful criteria selection therefore deserves particular emphasis in the actual application of the CIPP model, and a diverse range of information can be obtained through qualitative as well as quantitative methods. Above all, because the results of a CIPP evaluation become the basis for further, improved evaluations, making the first attempt without hesitation is essential.
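One way to make the execution procedure concrete is to organize criteria and indicators under each CIPP element and summarize the collected ratings per element before the improvement decision. The short Python sketch below does this with placeholder criteria and ratings; the 12 criteria identified in the article are not reproduced here.

```python
# Organizing criteria/indicators per CIPP element and summarizing ratings.
from statistics import mean

cipp = {
    "Context": {"needs assessment": [4, 5, 4], "goal relevance": [3, 4, 4]},
    "Input":   {"curriculum plan": [4, 4, 3], "resources": [3, 3, 4]},
    "Process": {"implementation fidelity": [4, 5, 5]},
    "Product": {"learning outcomes": [4, 4, 5], "satisfaction": [5, 5, 4]},
}

for element, criteria in cipp.items():
    element_scores = [s for ratings in criteria.values() for s in ratings]
    print(f"{element:8s} mean rating = {mean(element_scores):.2f} "
          f"({len(criteria)} criteria)")
# The per-element summaries would then feed the final improvement decision.
```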

Citations

Citations to this article as recorded by  
  • The Correctional Nursing Workforce Crisis: An Innovative Solution to Meet the Challenge
    Deborah Shelton, Lori E. Roscoe, Theresa A. Kapetanovic, Sue Smith
    Journal of Correctional Health Care.2025;[Epub]     CrossRef
  • A Novel Implementation of the CIPP Model in Undergraduate Medical Education: The Karadeniz Technical University Faculty of Medicine Experience
    Selçuk Akturan
    Tıp Eğitimi Dünyası.2025; 24(72): 5.     CrossRef
  • Evaluation of the Maryland Next Gen Test Bank Project: Implications and Recommendations
    Desirée Hensel, Diane M. Billings, Rebecca Wiseman
    Nursing Education Perspectives.2024; 45(4): 225.     CrossRef
  • Development of a blended teaching quality evaluation scale (BTQES) for undergraduate nursing based on the Context, Input, Process and Product (CIPP) evaluation model: A cross-sectional survey
    Yue Zhao, Weijuan Li, Hong Jiang, Mohedesi Siyiti, Meng Zhao, Shuping You, Yinglan Li, Ping Yan
    Nurse Education in Practice.2024; 77: 103976.     CrossRef
  • Internal evaluation of medical programs is more than housework: A scoping review
    Sujani Kodagoda Gamage, Tanisha Jowsey, Jo Bishop, Melanie Forbes, Lucy-Jane Grant, Patricia Green, Helen Houghton, Matthew Links, Mark Morgan, Joan Roehl, Jessica Stokes-Parish, Rano Mal Piryani
    PLOS ONE.2024; 19(10): e0305996.     CrossRef
  • Effectiveness of Peer-Assisted Learning in health professional education: a scoping review of systematic reviews
    Hanbo Feng, Ziyi Luo, Zijing Wu, Xiaohan Li
    BMC Medical Education.2024;[Epub]     CrossRef
  • Pengembangan Budaya Integritas Melalui Pendekatan Sufistik pada Perguruan Tinggi Berbasis Pesantren
    Imaduddin Imaduddin
    Nidhomiyyah: Jurnal Manajemen Pendidikan Islam.2024; 5(1): 66.     CrossRef
  • Evaluating the Economics Education Framework Within the Merdeka Curriculum at the Ecd Level: Case Study at Kharisma Kindergarten
    Fariha Nuraini, Annisa Mar’atus Sholikhah, Wahjoedi, Farida Rahmawati
    Eduscape : Journal of Education Insight.2024; 2(3): 186.     CrossRef
  • Self-care educational guide for mothers with gestational diabetes mellitus: A systematic review on identifying self-care domains, approaches, and their effectiveness
    Zarina Haron, Rosnah Sutan, Roshaya Zakaria, Zaleha Abdullah Mahdy
    Belitung Nursing Journal.2023; 9(1): 6.     CrossRef
  • Evaluation of the Smart Indonesia Program as a Policy to Improve Equality in Education
    Patni Ninghardjanti, Wiedy Murtini, Aniek Hindrayani, Khresna B. Sangka
    Sustainability.2023; 15(6): 5114.     CrossRef
  • Exploring Perceptions of Competency-Based Medical Education in Undergraduate Medical Students and Faculty: A Program Evaluation
    Erica Ai Li, Claire A Wilson, Jacob Davidson, Aaron Kwong, Amrit Kirpalani, Peter Zhan Tao Wang
    Advances in Medical Education and Practice.2023; Volume 14: 381.     CrossRef
  • The Evaluation of China's Double Reduction Policy: A Case Study in Dongming County Mingde Primary School
    Danyang Li , Chaimongkhon Supromin, Supit Boonlab
    International Journal of Sociologies and Anthropologies Science Reviews.2023; 3(6): 437.     CrossRef
  • Exploring the Components of the Research Empowerment Program of the Faculty Members of Kermanshah University of Medical Sciences, Iran Based on the CIPP Model: A Qualitative Study
    Mostafa Jafari, Susan Laei, Elham Kavyani, Rostam Jalali
    Educational Research in Medical Sciences.2021;[Epub]     CrossRef
  • Adapting an Integrated Program Evaluation for Promoting Competency‐Based Medical Education
    Hyunjung Ju, Minkyung Oh, Jong-Tae Lee, Bo Young Yoon
    Korean Medical Education Review.2021; 23(1): 56.     CrossRef
  • Changes in the accreditation standards of medical schools by the Korean Institute of Medical Education and Evaluation from 2000 to 2019
Hyo Hyun Yoo, Mi Kyung Kim, Yoo Sang Yoon, Keun Mi Lee, Jong Hun Lee, Seung-Jae Hong, Jung-Sik Huh, Won Kyun Park
    Journal of Educational Evaluation for Health Professions.2020; 17: 2.     CrossRef
  • Human Resources Development via Higher Education Scholarships: A Case Study of a Ministry of Public Works and Housing Scholarship Program
    Abdullatif SETİABUDİ, Muchlis. R. LUDDIN, Yuli RAHMAWATI
    International e-Journal of Educational Studies.2020; 4(8): 209.     CrossRef
  • Exploring Components, Barriers, and Solutions for Faculty Members’ Research Empowerment Programs Based on the CIPP Model: A Qualitative Study
    Mostafa Jafari, Soosan Laei, Elham Kavyani, Rostam Jalali
    Journal of Occupational Health and Epidemiology.2020; 9(4): 213.     CrossRef
Brief report
Benefits of focus group discussions beyond online surveys in course evaluations by medical students in the United States: a qualitative study  
Katharina Brandl, Soniya V. Rabadia, Alexander Chang, Jess Mandel
J Educ Eval Health Prof. 2018;15:25.   Published online October 16, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.25
  • 23,498 View
  • 363 Download
  • 5 Web of Science
  • 7 Crossref
In addition to online questionnaires, many medical schools use supplemental evaluation tools such as focus groups to evaluate their courses. Although some benefits of using focus groups in program evaluation have been described, it is unknown whether these in-person data collection methods provide sufficient additional information beyond online evaluations to justify them. In this study, we analyzed recommendations gathered from student evaluation team (SET) focus group meetings and assessed whether these items were captured in open-ended comments within the online evaluations. Our results indicate that online evaluations captured only 49% of the recommendations identified via SETs. Surveys of course directors indicated that 74% of the recommendations exclusively identified via the SETs were implemented within their courses. Our results indicate that SET meetings provided information not easily captured in online evaluations and that these recommendations resulted in actual course changes.

Citations

Citations to this article as recorded by  
  • Effects of building resilience skills among undergraduate medical students in a multi-cultural, multi-ethnic setting in the United Arab Emirates: A convergent mixed methods study
    Farah Otaki, Samuel B. Ho, Bhavana Nair, Reem AlGurg, Adrian Stanley, Amar Hassan Khamis, Agnes Paulus, Laila Alsuwaidi, Ashraf Atta Mohamed Safein Salem
    PLOS ONE.2025; 20(2): e0308774.     CrossRef
  • Assessing the Utility of Oral and Maxillofacial Surgery Posters as Educational Aids in Dental Education for Undergraduate Students: Is it Useless or Helpful?
    Seyed Mohammad Ali Seyedi, Navid Kazemian, Omid Alizadeh, Zeinab Mohammadi, Maryam Jamali, Reza Shahakbari, Sahand Samieirad
    WORLD JOURNAL OF PLASTIC SURGERY.2024; 13(1): 57.     CrossRef
  • Grupos focais como ferramenta de pesquisa qualitativa na fisioterapia: implicações e expectativas
    Dartel Ferrari de Lima, Adelar Aparecido Sampaio
    Revista Pesquisa Qualitativa.2023; 11(27): 361.     CrossRef
  • Educational attainment for at-risk high school students: closing the gap
    Karen Miner-Romanoff
    SN Social Sciences.2023;[Epub]     CrossRef
  • Student evaluations of teaching and the development of a comprehensive measure of teaching effectiveness for medical schools
    Constantina Constantinou, Marjo Wijnen-Meijer
    BMC Medical Education.2022;[Epub]     CrossRef
  • National Security Law Education in Hong Kong: Qualitative Evaluation Based on the Perspective of the Students
    Daniel T. L. Shek, Xiaoqin Zhu, Diya Dou, Xiang Li
    International Journal of Environmental Research and Public Health.2022; 20(1): 553.     CrossRef
  • Mentoring as a transformative experience
    Wendy A. Hall, Sarah Liva
    Mentoring & Tutoring: Partnership in Learning.2021; 29(1): 6.     CrossRef
Research article
Evaluation of an undergraduate occupational health program in Iran based on alumni perceptions: a structural equation model  
Semira Mehralizadeh, Alireza Dehdashti, Masoud Motalebi Kashani
J Educ Eval Health Prof. 2017;14:16.   Published online July 26, 2017
DOI: https://doi.org/10.3352/jeehp.2017.14.16
  • 32,728 View
  • 317 Download
  • 3 Web of Science
  • 1 Crossref
Purpose
Evaluating educational programs can improve the quality of education. The present study evaluated the undergraduate occupational health program at the Semnan University of Medical Sciences in Semnan, Iran, with a focus on the associations between alumni perceptions of the learning environment and the outcomes of the occupational health program.
Methods
A cross-sectional questionnaire survey was conducted among alumni of the undergraduate occupational health program. We asked alumni to rate their perceptions of the items using a 4-point Likert scale. The associations between alumni perceptions of the educational program and curriculum, faculty, institutional resources, and learning outcomes were modeled and described using structural equation modeling procedures.
Results
A descriptive analysis of alumni perceptions indicated low evaluations for the administrative system, practical and research-based courses, and the number of faculty members. We found that a structural model of the evaluation variables of curriculum, faculty qualifications, and institutional resources significantly predicted undergraduate educational outcomes. The curriculum had direct and indirect effects on learning outcomes, mediated by faculty.
Conclusion
The findings of our study highlight the usefulness of the structural equation modeling approach for examining links between variables related to the learning process and learning outcomes. Surveys of alumni can provide data for reassessing the learning environment in the light of the professional competencies needed for occupational health graduates.
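The reported path structure (curriculum influencing outcomes directly and indirectly through faculty) can be illustrated, in simplified form, with two ordinary regressions rather than a full structural equation model. The Python sketch below uses simulated data and assumed coefficients and is not the authors' model.

```python
# Simplified path/mediation illustration of curriculum -> faculty -> outcomes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120                                                  # hypothetical alumni
curriculum = rng.normal(3.0, 0.5, n)
faculty = 0.6 * curriculum + rng.normal(0.0, 0.4, n)     # mediator
outcomes = 0.3 * curriculum + 0.5 * faculty + rng.normal(0.0, 0.4, n)
df = pd.DataFrame({"curriculum": curriculum, "faculty": faculty,
                   "outcomes": outcomes})

m1 = smf.ols("faculty ~ curriculum", df).fit()               # a-path
m2 = smf.ols("outcomes ~ curriculum + faculty", df).fit()    # b-path and direct path

a, b = m1.params["curriculum"], m2.params["faculty"]
print(f"direct effect of curriculum = {m2.params['curriculum']:.2f}")
print(f"indirect effect via faculty = {a * b:.2f}")
```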

Citations

Citations to this article as recorded by  
  • Integrated-Based Curriculum of Pharmaceutical Dosage Forms (ICPDF): What Factors Affect the Learning Outcome Attainment?
    Anis Yohana Chaerunisaa, Akhmad Habibi, Muhaimin Muhaimin, Mailizar Mailizar, Tommy Tanu Wijaya, Ahmad Samed Al-Adwan
    International Journal of Environmental Research and Public Health.2023; 20(5): 4272.     CrossRef
Research Articles
Evaluation of a continuing professional development training program for physicians and physician assistants in hospitals in Laos based on the Kirkpatrick model  
Hyun Bae Yoon, Jwa-Seop Shin, Ketsomsouk Bouphavanh, Yu Min Kang
J Educ Eval Health Prof. 2016;13:21.   Published online May 31, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.21
  • 32,269 View
  • 349 Download
  • 24 Web of Science
  • 20 Crossref
Purpose
Medical professionals from Korea and Laos have been working together to develop a continuing professional development training program covering the major clinical fields of primary care. This study aimed to evaluate the effectiveness of the program from 2013 to 2014 using the Kirkpatrick model.
Methods
A questionnaire was used to evaluate the reaction of the trainees, and the trainers assessed the level of trainees’ performance at the beginning and the end of each clinical section. The transfer (behavioral change) of the trainees was evaluated through the review of medical records written by the trainees before and after the training program.
Results
The trainees were satisfied with the training program, for which the average score was 4.48 out of 5.0. The average score of the trainees’ performance at the beginning was 2.39 out of 5.0, and rose to 3.88 at the end of each section. The average score of the medical records written before the training was 2.92 out of 5.0, and it rose to 3.34 after the training. The number of patient visits to the district hospitals increased.
Conclusion
The continuing professional development training program, which was planned and implemented with the full engagement and responsibility of Lao health professionals, proved to be effective.
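The before-and-after comparisons typical of a Kirkpatrick-style evaluation (performance at the beginning versus the end of a section, or record quality before versus after training) can be checked with a paired test, as in the Python sketch below; the ratings are hypothetical, not the study's data.

```python
# Paired comparison of pre/post performance ratings, on made-up scores.
from scipy.stats import wilcoxon

pre  = [2.0, 2.5, 2.0, 3.0, 2.5, 2.0, 3.0, 2.5, 2.0, 2.5]
post = [3.5, 4.0, 3.5, 4.5, 4.0, 3.5, 4.0, 4.0, 3.5, 4.0]

stat, p_value = wilcoxon(pre, post)
print(f"mean pre = {sum(pre)/len(pre):.2f}, mean post = {sum(post)/len(post):.2f}")
print(f"Wilcoxon signed-rank: statistic = {stat}, P = {p_value:.4f}")
```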

Citations

Citations to this article as recorded by  
  • Practicalities and dichotomies of education policy and practice of higher education in the Golden Triangle Area (Southeast Asia): Implications for international development
    Shine Wanna Aung, Than Than Aye
    Policy Futures in Education.2024; 22(7): 1421.     CrossRef
  • Evaluation of cost-effectiveness of single-credit traffic safety course based on Kirkpatrick model: a case study of Iran
    Mina Golestani, Homayoun Sadeghi-bazargani, Sepideh Harzand-Jadidi, Hamid Soori
    BMC Medical Education.2024;[Epub]     CrossRef
  • Effectiveness of E-learning on “Sexual Health” among students of Shahid Beheshti University of Medical Sciences based on the Kirkpatrick model
    Zohreh Sadat Mirmoghtadaie, Zahra Mahbadi, Zinat Mahbadi
    Journal of Education and Health Promotion.2024;[Epub]     CrossRef
  • The benefits and limitations of establishing the PA profession globally
    Arden R. Turkewitz, Jane P. Sallen, Rachel M. Smith, Kandi Pitchford, Kimberly Lay, Scott Smalley
    JAAPA.2024; 37(11): 1.     CrossRef
  • Transforming the “SEAD”: Evaluation of a Virtual Surgical Exploration and Discovery Program and its Effects on Career Decision-Making
    Kameela Miriam Alibhai, Patricia Burhunduli, Christopher Tarzi, Kush Patel, Christine Seabrook, Tim Brandys
    Journal of Surgical Education.2023; 80(2): 256.     CrossRef
  • Evaluation of the effectiveness of a training programme for nurses regarding augmentative and alternative communication with intubated patients using Kirkpatrick's model: A pilot study
    Marzieh Momennasab, Fatemeh Mohammadi, Fereshteh DehghanRad, Azita Jaberi
    Nursing Open.2023; 10(5): 2895.     CrossRef
  • Outcome Evaluation of a Transnational Postgraduate Capacity-Building Program Using the Objective Structured Clinical Examination
    Kye-Yeung Park, Hoon-Ki Park, Jwa-Seop Shin, Taejong Kim, Youngjoo Jung, Min Young Seo, Ketsomsouk Bouphavanh, Sourideth Sengchanh, Ketmany Inthachack
    Evaluation Review.2023; 47(4): 680.     CrossRef
  • Developing a capacity building training model for public health managers of low and middle income countries
    Kritika Upadhyay, Sonu Goel, Preethi John, Sara Rubinelli
    PLOS ONE.2023; 18(4): e0272793.     CrossRef
  • Portfolios with Evidence of Reflective Practice Required by Regulatory Bodies: An Integrative Review
    Marco Zaccagnini, Patricia A. Miller
    Physiotherapy Canada.2022; 74(4): 330.     CrossRef
  • Implementation and evaluation of crowdsourcing in global health education
    Huanle Cai, Huiqiong Zheng, Jinghua Li, Chun Hao, Jing Gu, Jing Liao, Yuantao Hao
    Global Health Research and Policy.2022;[Epub]     CrossRef
  • An Evaluation of the Surgical Foundations Curriculum: A National Study
    Ekaterina Kouzmina, Stephen Mann, Timothy Chaplin, Boris Zevin
    Journal of Surgical Education.2021; 78(3): 914.     CrossRef
  • Surgical data strengthening in Ethiopia: results of a Kirkpatrick framework evaluation of a data quality intervention
    Sehrish Bari, Joseph Incorvia, Katherine R. Iverson, Abebe Bekele, Kaya Garringer, Olivia Ahearn, Laura Drown, Amanu Aragaw Emiru, Daniel Burssa, Samson Workineh, Ephrem Daniel Sheferaw, John G. Meara, Andualem Beyene
    Global Health Action.2021;[Epub]     CrossRef
  • Evaluation of a Neonatal Resuscitation Training Programme for Healthcare Professionals in Zanzibar, Tanzania: A Pre-post Intervention Study
    Xiang Ding, Li Wang, Mwinyi I. Msellem, Yaojia Hu, Jun Qiu, Shiying Liu, Mi Zhang, Lihui Zhu, Jos M. Latour
    Frontiers in Pediatrics.2021;[Epub]     CrossRef
  • Evaluation of a training program on primary eye care for an Accredited Social Health Activist (ASHA) in an urban district
    Pallavi Shukla, Praveen Vashist, SurajSingh Senjam, Vivek Gupta
    Indian Journal of Ophthalmology.2020; 68(2): 356.     CrossRef
  • Micro-feedback skills workshop impacts perceptions and practices of doctoral faculty
    Najma Baseer, James Degnan, Mandy Moffat, Usman Mahboob
    BMC Medical Education.2020;[Epub]     CrossRef
  • Residents working with Médecins Sans Frontières: training and pilot evaluation
    Alba Ripoll-Gallardo, Luca Ragazzoni, Ettore Mazzanti, Grazia Meneghetti, Jeffrey Michael Franc, Alessandro Costa, Francesco della Corte
    Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine.2020;[Epub]     CrossRef
  • Medical education in Laos
    Timothy Alan Wittick, Ketsomsouk Bouphavanh, Vannyda Namvongsa, Amphay Khounthep, Amy Gray
    Medical Teacher.2019; 41(8): 877.     CrossRef
  • Evaluation of the effectiveness of a first aid health volunteers’ training programme using Kirkpatrick’s model: A pilot study
    Fatemeh Vizeshfar, Marzieh Momennasab, Shahrzad Yektatalab, Mohamad Taghi Iman
    Health Education Journal.2018; 77(2): 190.     CrossRef
  • Evaluation of a consulting training course for international development assistance for health
    Pan Gao, Hao Xiang, Suyang Liu, Yisi Liu, Shengjie Dong, Feifei Liu, Wenyuan Yu, Xiangyu Li, Li Guan, Yuanyuan Chu, Zongfu Mao, Shu Chen, Shenglan Tang
    BMC Medical Education.2018;[Epub]     CrossRef
  • Empowering the Filipino Physician through Continuing Professional Development in the Philippines: Gearing towards ASEAN Harmonization and Globalization
    Maria Minerva P Calimag
    Journal of Medicine, University of Santo Tomas.2018; 2(1): 121.     CrossRef
Smartphone-based evaluations of clinical placements—a useful complement to web-based evaluation tools  
Jesper Hessius, Jakob Johansson
J Educ Eval Health Prof. 2015;12:55.   Published online November 30, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.55
  • 28,042 View
  • 144 Download
  • 2 Web of Science
  • 5 Crossref
Purpose
Web-based questionnaires are currently the standard method for course evaluations. The high rate of smartphone adoption in Sweden makes possible a range of new uses, including course evaluation. This study examines the potential advantages and disadvantages of using a smartphone app as a complement to web-based course evaluation systems.
Methods
An iPhone app for course evaluations was developed and interfaced to an existing web-based tool. Evaluations submitted using the app were compared with those submitted using the web between August 2012 and June 2013, at the Faculty of Medicine at Uppsala University, Sweden.
Results
At the time of the study, 49% of the students were judged to own iPhones. Over the course of the study, 3,340 evaluations were submitted, of which 22.8% were submitted using the app. The median of mean scores in the submitted evaluations was 4.50 for the app (with an interquartile range of 3.70-5.20) and 4.60 (3.70-5.20) for the web (P=0.24). The proportion of evaluations that included a free-text comment was 50.5% for the app and 49.9% for the web (P=0.80).
Conclusion
An app introduced as a complement to a web-based course evaluation system met with rapid adoption. We found no difference in the frequency of free-text comments or in the evaluation scores. Apps appear to be promising tools for course evaluations.
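The two comparisons reported here, scores by submission channel and the proportion of evaluations with a free-text comment, could be run as in the Python sketch below, which uses a rank-based test and a chi-square test on hypothetical data rather than the Uppsala dataset.

```python
# App-vs-web comparison of evaluation scores and free-text comment rates.
from scipy.stats import mannwhitneyu, chi2_contingency

app_scores = [4.5, 4.7, 3.9, 4.2, 5.0, 4.4, 3.6, 4.8]   # hypothetical mean scores
web_scores = [4.6, 4.3, 4.0, 4.9, 4.7, 3.8, 4.5, 4.6]
u_stat, p_scores = mannwhitneyu(app_scores, web_scores)
print(f"Mann-Whitney U = {u_stat}, P = {p_scores:.3f}")

# Counts of evaluations with / without a free-text comment, by channel (made up)
table = [[385, 377],     # app
         [1287, 1291]]   # web
chi2, p_comments, dof, _ = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, df = {dof}, P = {p_comments:.3f}")
```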

Citations

Citations to this article as recorded by  
  • Practical tips for starting a successful national postgraduate course
    Magnus Sundbom
    MedEdPublish.2024; 13: 26.     CrossRef
  • Practical tips for starting a successful national postgraduate course
    Magnus Sundbom
    MedEdPublish.2023; 13: 26.     CrossRef
  • Enhancing emergency care in low-income countries using mobile technology-based training tools
    Hilary Edgcombe, Chris Paton, Mike English
    Archives of Disease in Childhood.2016; 101(12): 1149.     CrossRef
Small group effectiveness in a Caribbean medical school’s problem-based learning sessions  
P Ravi Shankar, Atanu Nandy, Ramanan Balasubramanium, Soumitra Chakravarty
J Educ Eval Health Prof. 2014;11:5.   Published online March 24, 2014
DOI: https://doi.org/10.3352/jeehp.2014.11.5
  • 65,608 View
  • 195 Download
  • 5 Web of Science
  • 5 Crossref
Purpose
The Tutorial Group Effectiveness Instrument was developed to provide objective information on the effectiveness of small groups. Student perceptions of small group effectiveness during the problem-based learning (PBL) process had not been previously studied at Xavier University School of Medicine; hence, the present study was carried out.
Methods
The study was conducted among second- and third-semester undergraduate medical students during the last week of September 2013 at Xavier University School of Medicine, Aruba, Kingdom of the Netherlands. Students were informed about the objectives of the study and invited to participate, and written informed consent was obtained. Demographic information such as gender, age, nationality, and whether the respondent had been exposed to PBL before joining the institution was noted. Student perceptions of small group effectiveness were studied by noting their degree of agreement with a set of 19 statements on a Likert-type scale.
Results
Thirty-four of the 37 (91.9%) second- and third-semester medical students participated in the study. The mean cognitive score was 3.76, while the mean motivational and demotivational scores were 3.65 and 2.51, respectively. The median cognitive category score was 27 (maximum score, 35), the median motivational score was 26 (maximum score, 35), and the median demotivational score was 12 (maximum score, 25). There was no significant difference in scores according to respondents’ demographic characteristics.
Conclusion
Student perceptions of small group effectiveness were positive. Since most medical schools around the world have already adopted or are introducing PBL as a learning modality, the Tutorial Group Effectiveness Instrument can provide valuable information about small group functioning during PBL sessions.

Citations

Citations to this article as recorded by  
  • Relationship of Prior Knowledge and Scenario Quality With the Effectiveness of Problem-based Learning Discussion among Medical Students of Universitas Malikussaleh, Aceh, Indonesia
    Mulyati Sri Rahayu, Sri Wahyuni, Yuziani Yuziani
    Malaysian Journal of Medicine and Health Sciences.2023; 19(4): 15.     CrossRef
  • Should the PBL tutor be present? A cross-sectional study of group effectiveness in synchronous and asynchronous settings
    Samuel Edelbring, Siw Alehagen, Evalotte Mörelius, AnnaKarin Johansson, Patrik Rytterström
    BMC Medical Education.2020;[Epub]     CrossRef
  • Initiating small group learning in a Caribbean medical school
    P. Ravi Shankar
    Journal of Educational Evaluation for Health Professions.2015; 12: 10.     CrossRef
  • Aprendizagem Baseada em Problemas na Graduação Médica – Uma Revisão da Literatura Atual
    Luciana Brosina de Leon, Fernanda de Quadros Onófrio
    Revista Brasileira de Educação Médica.2015; 39(4): 614.     CrossRef
  • Assessing the Effectiveness of Problem-Based Learning of Preventive Medicine Education in China
    Xiaojie Ding, Liping Zhao, Haiyan Chu, Na Tong, Chunhui Ni, Zhibin Hu, Zhengdong Zhang, Meilin Wang
    Scientific Reports.2014;[Epub]     CrossRef
Indian medical students’ perspectives on problem-based learning experiences in the undergraduate curriculum: One size does not fit all  
Nanda Bijli, Manjunatha Shankarappa
J Educ Eval Health Prof. 2013;10:11.   Published online December 31, 2012
DOI: https://doi.org/10.3352/jeehp.2013.10.11
  • 52,238 View
  • 217 Download
  • 13 Crossref

Citations

Citations to this article as recorded by  
  • Adoption of Problem-Based Learning in Medical Schools in Non-Western Countries: A Systematic Review
    See Chai Carol Chan, Anjali Rajendra Gondhalekar, George Choa, Mohammed Ahmed Rashid
    Teaching and Learning in Medicine.2024; 36(2): 111.     CrossRef
  • Barriers and Solutions to Successful Problem-Based Learning Delivery in Developing Countries – A Literature Review
    Jhiamluka Solano, Melba Zuniga Gutierrez, Esther Pinel-Guzmán, Génesis Henriquez
    Cureus.2023;[Epub]     CrossRef
  • PERCEPTION OF PHASE 1 MBBS STUDENTS ON E- LEARNING AND ONLINE ASSESSMENT DURING COVID-19 AT GOVT. MEDICAL COLLEGES, WEST BENGAL, INDIA
    Sanhita Mukherjee, Sagarika Sarkar, Hrishikesh Bagchi, Diptakanti Mukhopadhyay
    PARIPEX INDIAN JOURNAL OF RESEARCH.2022; : 42.     CrossRef
  • Ensuring Problem-Based Learning for Medical Students and Looking Forward to Developing Competencies
    Manish Taywade, Kumbha Gopi, Debkumar Pal, Bimal Kumar Sahoo
    Amrita Journal of Medicine.2022; 18(1): 1.     CrossRef
  • Practice and effectiveness of web-based problem-based learning approach in a large class-size system: A comparative study
    Yongxia Ding, Peili Zhang
    Nurse Education in Practice.2018; 31: 161.     CrossRef
  • LEARNING BIOLOGY THROUGH PROBLEM BASED LEARNING – PERCEPTION OF STUDENTS
    THAKUR PREETI, DUTT SUNIL, CHAUHAN ABHISHEK
    i-manager's Journal of Educational Technology.2018; 15(2): 44.     CrossRef
  • Development of an international comorbidity education framework
    C. Lawson, S. Pati, J. Green, G. Messina, A. Strömberg, N. Nante, D. Golinelli, A. Verzuri, S. White, T. Jaarsma, P. Walsh, P. Lonsdale, U.T. Kadam
    Nurse Education Today.2017; 55: 82.     CrossRef
  • A Rapid Review of the Factors Affecting Healthcare Students' Satisfaction with Small-Group, Active Learning Methods
    James M. Kilgour, Lisa Grundy, Lynn V. Monrouxe
    Teaching and Learning in Medicine.2016; 28(1): 15.     CrossRef
  • Students’ perceptions and satisfaction level of hybrid problem-based learning for 16 years in Kyungpook National University School of Medicine, Korea
    Sanghee Yeo, Bong Hyun Chang
    Korean Journal of Medical Education.2016; 28(1): 9.     CrossRef
  • Problem-based learning: Dental student's perception of their education environments at Qassim University
    ShahadS Alkhuwaiter, RoqayahI Aljuailan, SaeedM Banabilh
    Journal of International Society of Preventive and Community Dentistry.2016; 6(6): 575.     CrossRef
  • PERCEPTION OF PHARMACOLOGY TEACHING METHODS AMONG SECOND YEAR UNDERGRADUATE STUDENTS TOWARDS BETTER LEARNING
    Padmanabha Thiruganahalli Shivaraju, Manu Gangadhar, Chandrakantha Thippeswamy, Neha Krishnegowda
    Journal of Evidence Based Medicine and Healthcare.2016; 3(30): 1352.     CrossRef
  • Aprendizagem Baseada em Problemas na Graduação Médica – Uma Revisão da Literatura Atual
    Luciana Brosina de Leon, Fernanda de Quadros Onófrio
    Revista Brasileira de Educação Médica.2015; 39(4): 614.     CrossRef
  • EFFECT OF PROBLEM BASED LEARNING IN COMPARISION WITH LECTURE BASED LEARNING IN FORENSIC MEDICINE
    Padma kumar K
    Journal of Evidence Based Medicine and Healthcare.2015; 2(37): 5932.     CrossRef
