JEEHP : Journal of Educational Evaluation for Health Professions

10 "Statistical factor analysis"
Research articles
Development and psychometric evaluation of a 360-degree evaluation instrument to assess medical students’ performance in clinical settings at the emergency medicine department in Iran: a methodological study  
Golnaz Azami, Sanaz Aazami, Boshra Ebrahimy, Payam Emami
J Educ Eval Health Prof. 2024;21:7.   Published online April 1, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.7
  • 1,365 View
  • 257 Download
Abstract
Background
In the Iranian context, no 360-degree evaluation tool has been developed to assess the performance of prehospital medical emergency students in clinical settings. This article describes the development of a 360-degree evaluation tool and presents its first psychometric evaluation.
Methods
There were 2 steps in this study: step 1 involved developing the instrument (i.e., generating the items) and step 2 constituted the psychometric evaluation of the instrument. We performed exploratory and confirmatory factor analyses and also evaluated the instrument’s face, content, and convergent validity and reliability.
Results
The instrument contains 55 items across 6 domains, including leadership, management, and teamwork (19 items), consciousness and responsiveness (14 items), clinical and interpersonal communication skills (8 items), integrity (7 items), knowledge and accountability (4 items), and loyalty and transparency (3 items). The instrument was confirmed to be a valid measure, as the 6 domains had eigenvalues over Kaiser’s criterion of 1 and in combination explained 60.1% of the variance (Bartlett’s test of sphericity [1,485]=19,867.99, P<0.01). Furthermore, this study provided evidence for the instrument’s convergent validity and internal consistency (α=0.98), suggesting its suitability for assessing student performance.
Conclusion
We found good evidence for the validity and reliability of the instrument. Our instrument can be used to make future evaluations of student performance in the clinical setting more structured, transparent, informative, and comparable.
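As a concrete illustration of the retention and reliability checks reported in the abstract above (Kaiser's criterion, variance explained, Cronbach's α), the sketch below computes them on a synthetic response matrix; the data and dimensions are hypothetical, and a dedicated package (e.g., factor_analyzer in Python or psych in R) would be used for a full EFA in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical response matrix: 300 raters x 20 Likert items (the real instrument has 55).
items = rng.integers(1, 6, size=(300, 20)).astype(float)

# Kaiser's criterion: retain factors whose eigenvalues of the item correlation matrix exceed 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
retained = eigenvalues[eigenvalues > 1.0]
variance_explained = retained.sum() / eigenvalues.sum()
print(f"Factors retained: {len(retained)}, variance explained: {variance_explained:.1%}")

# Cronbach's alpha for internal consistency of the item set.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")
```

With real data, the factors retained under Kaiser's criterion would then be rotated and interpreted, as in the 6 domains described above.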
Development and validation of the student ratings in clinical teaching scale in Australia: a methodological study  
Pin-Hsiang Huang, Anthony John O’Sullivan, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:26.   Published online September 5, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.26
  • 1,709 View
  • 153 Download
Abstract
Purpose
This study aimed to devise a valid measurement for assessing clinical students’ perceptions of teaching practices.
Methods
A new tool was developed based on a meta-analysis of effective clinical teaching-learning factors. Seventy-nine items were generated using a frequency (never to always) scale. The tool was administered to year 2, 3, and 6 medical students at the University of New South Wales. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted to establish the tool’s construct validity and goodness of fit, and Cronbach’s α was used for reliability.
Results
In total, 352 students (44.2%) completed the questionnaire. The EFA identified 4 factors: student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). The CFA showed acceptable goodness of fit (chi-square P<0.01; comparative fit index=0.930; Tucker-Lewis index=0.917; root mean square error of approximation=0.069; standardized root mean square residual=0.06).
Conclusion
The established tool—Student Ratings in Clinical Teaching (STRICT)—is a valid and reliable tool that demonstrates how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate the biases of acquiescence and social desirability. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.
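For readers unfamiliar with the fit indices quoted above, the sketch below shows how two of them, RMSEA and CFI, are derived from the model and baseline chi-square statistics. The numeric inputs are illustrative placeholders, not the study's raw values.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from the model chi-square."""
    return math.sqrt(max((chi2 / df - 1.0) / (n - 1.0), 0.0))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative fit index relative to the baseline (null) model."""
    d = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    denom = max(d, d_null)
    return 1.0 - d / denom if denom > 0 else 1.0

# Illustrative inputs for a model fitted to n = 352 respondents (not the study's values).
print(round(rmsea(chi2=650.0, df=246, n=352), 3))                        # ~0.068
print(round(cfi(chi2=650.0, df=246, chi2_null=6000.0, df_null=276), 3))  # ~0.93
```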
What impacts students’ satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study  
Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:2.   Published online January 18, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.2
  • 2,269 View
  • 155 Download
  • 2 Web of Science
  • 1 Crossref
Abstract
Purpose
This study evaluated the validity of student feedback derived from Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction in the Medicine program.
Methods
Data from MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors with the greatest impact on students’ overall satisfaction with the program.
Results
A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion
MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. The key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
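The variance decomposition reported above (3.8% from demographics alone vs. 40% after adding the MedSEQ domains, i.e., a 36.2% increment) follows the usual hierarchical regression logic. The sketch below reproduces that logic on simulated data with a hypothetical subset of predictors, using statsmodels; all variable names and coefficients are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # hypothetical sample size

# Simulated data: 2 demographic predictors and 3 (of the 8) experience domains.
df = pd.DataFrame({
    "age": rng.normal(22, 2, n),
    "gender": rng.integers(0, 2, n).astype(float),
    "being_cared_for": rng.normal(0, 1, n),
    "teaching": rng.normal(0, 1, n),
    "assessment": rng.normal(0, 1, n),
})
df["overall_satisfaction"] = (
    0.33 * df["being_cared_for"] + 0.15 * df["teaching"] + 0.15 * df["assessment"]
    + rng.normal(0, 1, n)
)

demographics = ["age", "gender"]
domains = ["being_cared_for", "teaching", "assessment"]

step1 = sm.OLS(df["overall_satisfaction"], sm.add_constant(df[demographics])).fit()
step2 = sm.OLS(df["overall_satisfaction"], sm.add_constant(df[demographics + domains])).fit()

# The increment in R-squared is the variance attributable to the experience domains.
print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f}")
print(f"Delta R2:  {step2.rsquared - step1.rsquared:.3f}")
```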

Citations to this article as recorded by  
  • Mental health and quality of life across 6 years of medical training: A year-by-year analysis
    Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti
    International Journal of Social Psychiatry.2024; 70(2): 298.     CrossRef
Development and validation of a measurement scale to assess nursing students’ readiness for the flipped classroom in Sri Lanka  
Punithalingam Youhasan, Yan Chen, Mataroria Lyndon, Marcus Alexander Henning
J Educ Eval Health Prof. 2020;17:41.   Published online December 14, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.41
  • 7,066 View
  • 271 Download
  • 9 Web of Science
  • 8 Crossref
Abstract
Purpose
The aim of this study was to develop and validate a scale to measure nursing students’ readiness for the flipped classroom in Sri Lanka.
Methods
A literature review provided the theoretical framework for developing the Nursing Students’ Readiness for Flipped Classroom (NSR-FC) questionnaire. Five content experts evaluated the NSR-FC, and content validity indices (CVI) were calculated. Cross-sectional surveys among 355 undergraduate nursing students from 3 state universities in Sri Lanka were carried out to assess the psychometric properties of the NSR-FC. Principal component analysis (PCA, n=265), internal consistency (using the Cronbach α coefficient, n=265), and confirmatory factor analysis (CFA, n=90) were done to test construct validity and reliability.
Results
Thirty-seven items were included in the NSR-FC for content validation, resulting in an average scale CVI of 0.94. Two items received an item-level CVI of less than 0.78. The factor structure of the remaining 35 items was explored through PCA with orthogonal factor rotation, culminating in the identification of 5 factors, classified as technological readiness, environmental readiness, personal readiness, pedagogical readiness, and interpersonal readiness. The NSR-FC also showed an overall acceptable level of internal consistency (Cronbach α=0.9). CFA verified a 4-factor model (excluding the interpersonal readiness factor) with 20 items that achieved acceptable fit (standardized root mean square residual=0.08, root mean square error of approximation=0.08, comparative fit index=0.87, and χ2/degrees of freedom=1.57).
Conclusion
The NSR-FC, as a 4-factor model, is an acceptable measurement scale for assessing nursing students’ readiness for the flipped classroom in terms of its construct validity and reliability.
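A minimal sketch of the content validity index (CVI) screening described in the Methods above: item-level CVI is the proportion of experts rating an item as relevant (3 or 4 on a 4-point scale), and the scale-level value reported is the average of the item-level CVIs. The 5 × 6 ratings matrix below is hypothetical, not the NSR-FC expert data.

```python
import numpy as np

ratings = np.array([
    [4, 4, 3, 4, 2, 4],
    [3, 4, 4, 4, 2, 4],
    [4, 3, 4, 3, 3, 4],
    [4, 4, 4, 4, 2, 3],
    [3, 4, 3, 4, 3, 4],
])  # rows = experts, columns = items

item_cvi = (ratings >= 3).mean(axis=0)   # proportion of experts rating the item 3 or 4
scale_cvi_ave = item_cvi.mean()          # scale CVI, averaging method (S-CVI/Ave)

flagged = np.where(item_cvi < 0.78)[0]   # items below the commonly used 0.78 cutoff
print("Item-level CVI:", np.round(item_cvi, 2))
print("Scale CVI (average):", round(scale_cvi_ave, 2))
print("Items flagged for revision/removal:", flagged)
```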

Citations to this article as recorded by  
  • Design and validation of a preliminary instrument to contextualize interactions through information technologies of health professionals
    José Fidencio López Luna, Eddie Nahúm Armendáriz Mireles, Marco Aurelio Nuño Maganda, Hiram Herrera Rivas, Rubén Machucho Cadena, Jorge Arturo Hernández Almazán
    Health Informatics Journal.2024;[Epub]     CrossRef
  • AI readiness scale for teachers: Development and validation
    Mehmet Ramazanoglu, Tayfun Akın
    Education and Information Technologies.2024;[Epub]     CrossRef
  • Content validity of the Constructivist Learning in Higher Education Settings (CLHES) scale in the context of the flipped classroom in higher education
    Turki Mesfer Alqahtani, Farrah Dina Yusop, Siti Hajar Halili
    Humanities and Social Sciences Communications.2023;[Epub]     CrossRef
  • The intensivist's assessment of gastrointestinal function: A pilot study
    Varsha M. Asrani, Colin McArthur, Ian Bissett, John A. Windsor
    Australian Critical Care.2022; 35(6): 636.     CrossRef
  • Psychometric evidence of a perception scale about covid-19 vaccination process in Peruvian dentists: a preliminary validation
    César F. Cayo-Rojas, Nancy Córdova-Limaylla, Gissela Briceño-Vergel, Marysela Ladera-Castañeda, Hernán Cachay-Criado, Carlos López-Gurreonero, Alberto Cornejo-Pinto, Luis Cervantes-Ganoza
    BMC Health Services Research.2022;[Epub]     CrossRef
  • Implementation of a Web-Based Educational Intervention for Promoting Flipped Classroom Pedagogy: A Mixed-Methods Study
    Punithalingam Youhasan, Mataroria P. Lyndon, Yan Chen, Marcus A. Henning
    Medical Science Educator.2022; 33(1): 91.     CrossRef
  • Assess the feasibility of flipped classroom pedagogy in undergraduate nursing education in Sri Lanka: A mixed-methods study
    Punithalingam Youhasan, Yan Chen, Mataroria Lyndon, Marcus A. Henning, Gwo-Jen Hwang
    PLOS ONE.2021; 16(11): e0259003.     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
A novel tool for evaluating non-cognitive traits of doctor of physical therapy learners in the United States  
Marcus Roll, Lara Canham, Paul Salamh, Kyle Covington, Corey Simon, Chad Cook
J Educ Eval Health Prof. 2018;15:19.   Published online August 17, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.19
  • 29,243 View
  • 378 Download
  • 8 Web of Science
  • 9 Crossref
Abstract
Purpose
The primary aim of this study was to develop a survey addressing an individual’s non-cognitive traits, such as emotional intelligence, interpersonal skills, social intelligence, psychological flexibility, and grit. Such a tool would provide beneficial information for the continued development of admissions standards and would help better capture the full breadth of experience and capabilities of applicants applying to doctor of physical therapy (DPT) programs.
Methods
This was a cross-sectional survey study involving learners in DPT programs at 3 academic institutions in the United States. A survey was developed based on established non-proprietary, non-cognitive measures affiliated with success and resilience. The survey was assessed for face validity, and exploratory factor analysis (EFA) was used to identify subgroups of factors based on responses to the items.
Results
A total of 298 participants (90.3%) completed all elements of the survey. EFA yielded 39 items for dimensional assessment with regression coefficients < 0.4. Within the 39 items, 3 latent constructs were identified: adaptability (16 items), intuitiveness (12 items), and engagement (11 items).
Conclusion
This preliminary non-cognitive assessment survey will be able to play a valuable role in DPT admissions decisions following further examination and refinement.

Citations to this article as recorded by  
  • A Systematic Review of Variables Used in Physical Therapist Education Program Admissions Part 2: Noncognitive Variables
    Andrea N. Bowens
    Journal of Physical Therapy Education.2024; 38(3): 192.     CrossRef
  • An exploration of the relationship between grit, reflection-in-learning, and academic performance in entry-level doctor of physical therapy students
    Elizabeth M Ardolino, Hazel Anderson, Katherine F Wilford
    Physiotherapy Theory and Practice.2024; : 1.     CrossRef
  • Personal characteristic differences among Doctor of Physical Therapy students with unique sociodemographic factors
    Kelly Reynolds, Maggie Horn, Karen Huhn, Steven Z. George
    BMC Medical Education.2024;[Epub]     CrossRef
  • Predictors of Success on the National Physical Therapy Examination in 2 US Accelerated-Hybrid Doctor of Physical Therapy Programs
    Breanna Reynolds, Casey Unverzagt, Alex Koszalinski, Roberta Gatlin, Jill Seale, Kendra Gagnon, Kareaion Eaton, Shane L. Koppenhaver
    Journal of Physical Therapy Education.2022; 36(3): 225.     CrossRef
  • Grit, Resilience, Mindset, and Academic Success in Physical Therapist Students: A Cross-Sectional, Multicenter Study
    Marlena Calo, Belinda Judd, Lucy Chipchase, Felicity Blackstock, Casey L Peiris
    Physical Therapy.2022;[Epub]     CrossRef
  • Predicting graduate student performance – A case study
    Jinghua Nie, Ashrafee Hossain
    Journal of Further and Higher Education.2021; 45(4): 524.     CrossRef
  • Examining Demographic and Preadmission Factors Predictive of First Year and Overall Program Success in a Public Physical Therapist Education Program
    Katy Mitchell, Jennifer Ellison, Elke Schaumberg, Peggy Gleeson, Christina Bickley, Anna Naiki, Severin Travis
    Journal of Physical Therapy Education.2021; 35(3): 203.     CrossRef
  • Doctor of Physical Therapy Student Grit as a Predictor of Academic Success: A Pilot Study
    Rebecca Bliss, Erin Jacobson
    Health Professions Education.2020; 6(4): 522.     CrossRef
  • Personality-oriented job analysis to identify non-cognitive factors predictive of performance in a doctor of physical therapy program in the United States
    Maureen Conard, Kristin Schweizer
    Journal of Educational Evaluation for Health Professions.2018; 15: 34.     CrossRef
Cross-validation of the Student Perceptions of Team-Based Learning Scale in the United States  
Donald H. Lein, John D. Lowman, Christopher A. Eidson, Hon K. Yuen
J Educ Eval Health Prof. 2017;14:15.   Published online June 29, 2017
DOI: https://doi.org/10.3352/jeehp.2017.14.15
  • 35,755 View
  • 333 Download
  • 4 Web of Science
  • 4 Crossref
Abstract
Purpose
The purpose of this study was to cross-validate the factor structure of the previously developed Student Perceptions of Team-Based Learning (TBL) Scale among students in an entry-level doctor of physical therapy (DPT) program in the United States.
Methods
Toward the end of the semester in 2 patient/client management courses taught using TBL, 115 DPT students completed the Student Perceptions of TBL Scale, with a response rate of 87%. Principal component analysis (PCA) and confirmatory factor analysis (CFA) were conducted to replicate and confirm the underlying factor structure of the scale.
Results
Based on the PCA for the validation sample, the original 2-factor structure (preference for TBL and preference for teamwork) of the Student Perceptions of TBL Scale was replicated. The overall goodness-of-fit indices from the CFA suggested that the original 2-factor structure for the 15 items of the scale demonstrated a good model fit (comparative fit index, 0.95; non-normed fit index/Tucker-Lewis index, 0.93; root mean square error of approximation, 0.06; and standardized root mean square residual, 0.07). The 2 factors demonstrated high internal consistency (alpha= 0.83 and 0.88, respectively). DPT students taught using TBL viewed the factor of preference for teamwork more favorably than preference for TBL.
Conclusion
Our findings provide evidence supporting the replicability of the internal structure of the Student Perceptions of TBL Scale when assessing perceptions of TBL among DPT students in patient/client management courses.
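To make the replication step above concrete, the sketch below runs a 2-component principal component analysis on simulated responses generated from two independent latent traits; with this setup the loadings recover the block structure, analogous to the "preference for TBL" and "preference for teamwork" factors. All data, item counts, and noise levels are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n = 115  # matches the reported sample size; the responses themselves are simulated

# Two independent latent traits generating 6 items each, with different noise levels
# so the two components separate cleanly.
trait_a = rng.normal(size=(n, 1))
trait_b = rng.normal(size=(n, 1))
items = np.hstack([
    trait_a + 0.5 * rng.normal(size=(n, 6)),   # items 1-6
    trait_b + 1.0 * rng.normal(size=(n, 6)),   # items 7-12
])
items = (items - items.mean(axis=0)) / items.std(axis=0)

pca = PCA(n_components=2).fit(items)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# With this setup, items 1-6 load mainly on one component and items 7-12 on the other,
# mirroring a 2-factor structure.
print(np.round(loadings, 2))
```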

Citations to this article as recorded by  
  • Improving learning experience through implementing standardized team-based learning process in undergraduate medical education
    Rebecca Andrews-Dickert, Ranjini Nagaraj, Lilian Zhan, Laura Knittig, Yuan Zhao
    BMC Medical Education.2024;[Epub]     CrossRef
  • Escala de Aprendizaje Metarregulado (AMR) en estudiantes universitarios
    Marybel E. Mollo-Flores, Angel Deroncele-Acosta, Roger P. Norabuena-Figueroa, Klinge O. Villalba-Condori
    Campus Virtuales.2023; 12(2): 175.     CrossRef
  • Use of Team-Based Learning Pedagogy to Prepare for a Pharmacy School Accreditation Self-Study
    Ruth Vinall, Ashim Malhotra, Jose Puglisi
    Pharmacy.2021; 9(3): 148.     CrossRef
  • Student Perceptions of Team-Based Learning in the Criminal Justice Classroom
    Jessica M. Craig, Brooke Nodeland, Roxanne Long, Emily Spivey
    Journal of Criminal Justice Education.2020; 31(3): 372.     CrossRef
Research Articles
Construct validity test of evaluation tool for professional behaviors of entry-level occupational therapy students in the United States  
Hon K. Yuen, Andres Azuero, Kaitlin W. Lackey, Nicole S. Brown, Sangita Shrestha
J Educ Eval Health Prof. 2016;13:22.   Published online June 1, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.22
  • 32,593 View
  • 307 Download
  • 3 Web of Science
  • 4 Crossref
Abstract
Purpose
This study aimed to test the construct validity of an instrument to measure student professional behaviors in entry-level occupational therapy (OT) students in the academic setting.
Methods
A total of 718 students from 37 OT programs across the United States answered a self-assessment survey of professional behavior that we developed. The survey consisted of ranking 28 attributes, each on a 5-point Likert scale. A split-sample approach was used for exploratory and then confirmatory factor analysis.
Results
A three-factor solution with nine items was extracted using exploratory factor analysis (EFA; n=430, 60%). The factors were ‘Commitment to Learning’ (2 items), ‘Skills for Learning’ (4 items), and ‘Cultural Competence’ (3 items). Confirmatory factor analysis (CFA) on the validation split (n=288, 40%) indicated fair fit for this three-factor model (fit indices: CFI=0.96, RMSEA=0.06, and SRMR=0.05). Internal consistency reliability estimates of each factor and the full instrument ranged from 0.63 to 0.79.
Conclusion
Results of the CFA in a separate validation dataset provided robust measures of goodness-of-fit for the three-factor solution developed in the EFA and indicated that the three-factor model fitted the data adequately. We therefore conclude that this student professional behavior evaluation instrument is a structurally validated tool to measure professional behaviors reported by entry-level OT students. The internal consistency reliability of each individual factor and the whole instrument was considered adequate to good.
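A sketch of the split-sample workflow described above, assuming the third-party factor_analyzer package: EFA on a roughly 60% derivation split, with the remaining 40% held out for a confirmatory analysis. The 718 × 28 response matrix is simulated, and the loading threshold shown is only the conventional rule of thumb.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from factor_analyzer import FactorAnalyzer  # assumed third-party dependency

rng = np.random.default_rng(3)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(718, 28)).astype(float),
    columns=[f"item_{i + 1}" for i in range(28)],
)

# Derivation split for EFA (~60%), holdout split reserved for CFA (~40%).
efa_split, cfa_split = train_test_split(responses, train_size=0.6, random_state=0)

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(efa_split)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Items with a salient loading (e.g., |loading| >= 0.4) on a factor would be retained,
# and the resulting 3-factor model would then be tested on cfa_split with a CFA package.
print(loadings.round(2).head())
```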

Citations to this article as recorded by  
  • Mesleki Davranış Anketinin Türkçe Geçerlilik ve Güvenilirliği
    Sinem KARS, Gökçen AKYÜREK, Gonca BUMİN
    Ergoterapi ve Rehabilitasyon Dergisi.2021; 8(3): 191.     CrossRef
  • Professional practice behaviour: Identification and validation of key indicators
    Diane E MacKenzie, Brenda K Merritt, Rebecca Holstead, Gordon E Sarty
    British Journal of Occupational Therapy.2020; 83(7): 432.     CrossRef
  • Assessment of Employability Skills: A Systematic Review of the Availability and Usage of Professional Behavior Assessment Instruments
    Christine A. McCallum, Leigh Murray, Michele Tilstra, Alexia Lairson
    Journal of Physical Therapy Education.2020; 34(3): 252.     CrossRef
  • What is interesting in the issue 2016 of Journal of Educational Evaluation for Health Professions?
    Yera Hur
    Journal of Educational Evaluation for Health Professions.2016; 13: 46.     CrossRef
The validity and reliability of a problem-based learning implementation questionnaire  
Bhina Patria
J Educ Eval Health Prof. 2015;12:22.   Published online June 8, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.22
  • 51,799 View
  • 312 Download
  • 3 Web of Science
  • 1 Crossref
Abstract
Purpose
The aim of this paper is to provide evidence for the validity and reliability of a questionnaire for assessing the implementation of problem-based learning (PBL). This questionnaire was developed to assess the quality of PBL implementation from the perspective of medical school graduates.
Methods
A confirmatory factor analysis was conducted to assess the validity of the questionnaire. The analysis was based on a survey of 225 graduates of a problem-based medical school in Indonesia.
Results
The results showed that the confirmatory factor analysis model had a good fit to the data. Further, the values of the standardized loading estimates, the squared inter-construct correlations, the average variances extracted, and the composite reliabilities all provided evidence of construct validity.
Conclusion
The PBL implementation questionnaire was found to be valid and reliable, making it suitable for evaluation purposes.
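Two of the construct-validity quantities named above, average variance extracted (AVE) and composite reliability (CR), have simple closed forms given standardized loadings. The sketch below uses illustrative loadings rather than the questionnaire's estimates.

```python
import numpy as np

loadings = np.array([0.72, 0.68, 0.81, 0.75, 0.66])  # one factor's standardized loadings
error_var = 1 - loadings**2                           # item error variances

ave = np.mean(loadings**2)
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())

print(f"AVE = {ave:.2f}  (convergent validity is usually argued for AVE > 0.5)")
print(f"CR  = {cr:.2f}  (composite reliability, usually compared against 0.7)")
```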

Citations to this article as recorded by  
  • Changes in Learning Outcomes of Students Participating in Problem-Based Learning for the First Time: A Case Study of a Financial Management Course
    Yung-Chuan Lee
    The Asia-Pacific Education Researcher.2024;[Epub]     CrossRef
Technical Report
Best-fit model of exploratory and confirmatory factor analysis of the 2010 Medical Council of Canada Qualifying Examination Part I clinical decision-making cases  
André F. Champlain
J Educ Eval Health Prof. 2015;12:11.   Published online April 15, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.11
  • 33,925 View
  • 220 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada Qualifying Examination Part I (MCCQE1) clinical decision-making (CDM) cases. The outcomes of this study have important implications for a range of domains, including scoring and test development.
Methods
The examinees included all first-time Canadian medical graduates and international medical graduates who took the MCCQE1 in spring or fall 2010. The fit of one- to five-factor exploratory models was assessed for the item response matrix of the 2010 CDM cases. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling software program Mplus was used for all analyses.
Results
Of the five exploratory factor analytic models evaluated, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to pediatrics, and the third factor loaded on psychiatry cases. Among the five confirmatory factor analysis models examined in this study, the three- and four-factor lifespan period models and the five-factor discipline model provided the best fit.
Conclusion
The results suggest that knowledge of broad disciplinary domains best accounts for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using the criteria of discipline and age.

Citations to this article as recorded by  
  • Exploratory Factor Analysis of a Computerized Case-Based F-Type Testlet Variant
    Yavuz Selim Kıyak, Işıl İrem Budakoğlu, Dilara Bakan Kalaycıoğlu, Özlem Coşkun
    Medical Science Educator.2023; 33(5): 1191.     CrossRef
  • The key-features approach to assess clinical decisions: validity evidence to date
    G. Bordage, G. Page
    Advances in Health Sciences Education.2018; 23(5): 1005.     CrossRef
Research Article
Assessing the reliability and validity of the Revised Two Factor Study Process Questionnaire (R-SPQ2F) in Ghanaian medical students  
Victor Mogre, Anthony Amalba
J Educ Eval Health Prof. 2014;11:19.   Published online August 15, 2014
DOI: https://doi.org/10.3352/jeehp.2014.11.19
  • 27,176 View
  • 213 Download
  • 9 Web of Science
  • 10 Crossref
Abstract
Purpose
We investigated the validity and reliability of the Revised Two Factor Study Process Questionnaire (R-SPQ2F) in preclinical students in Ghana.
Methods
The R-SPQ2F was administered to 189 preclinical students of the University for Development Studies, School of Medicine and Health Sciences. Descriptive and inferential statistics, together with Cronbach’s alpha and factor analysis, were used.
Results
The mean age of the students was 22.69±0.18 years; 60.8% (n=115) were male and 42.3% (n=80) were in their second year of medical training. The students had a higher mean deep approach score (31.23±7.19) than surface approach score (22.62±6.48). The findings supported a two-factor solution (deep and surface approaches), accounting for 49.80% and 33.57% of the variance, respectively. The deep approach (Cronbach’s alpha, 0.80) and surface approach (Cronbach’s alpha, 0.76) scales and their subscales demonstrated good internal consistency. The factorial validity was comparable to that reported in other studies.
Conclusion
Our study confirms the construct validity and internal consistency of the R-SPQ2F for measuring approaches to learning in Ghanaian preclinical students. The deep approach was the dominant learning approach among the students. The questionnaire can be used to measure students’ approaches to learning in Ghana and in other African countries.
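For context, R-SPQ-2F subscale scores such as the deep- and surface-approach means above are obtained by summing each subscale's 10 items on a 1–5 scale (so each subscale ranges from 10 to 50). The sketch below uses a hypothetical item-to-subscale mapping and simulated responses; the published scoring key should be used in practice.

```python
import numpy as np

rng = np.random.default_rng(4)
responses = rng.integers(1, 6, size=(189, 20))   # 189 students x 20 items, simulated

deep_items = np.arange(0, 20, 2)      # hypothetical mapping: even-indexed items -> deep
surface_items = np.arange(1, 20, 2)   # hypothetical mapping: odd-indexed items -> surface

deep_scores = responses[:, deep_items].sum(axis=1)
surface_scores = responses[:, surface_items].sum(axis=1)

print(f"Deep approach: mean {deep_scores.mean():.2f} (SD {deep_scores.std(ddof=1):.2f})")
print(f"Surface approach: mean {surface_scores.mean():.2f} (SD {surface_scores.std(ddof=1):.2f})")
```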

Citations to this article as recorded by  
  • A comparison of two learning approach inventories and their utility in predicting examination performance and study habits
    Andrew R. Thompson
    Advances in Physiology Education.2024; 48(2): 164.     CrossRef
  • Relationship between learning approach, Bloom’s taxonomy, and student performance in an undergraduate Human Anatomy course
    Andrew R. Thompson, Logan P. O. Lake
    Advances in Health Sciences Education.2023; 28(4): 1115.     CrossRef
  • Evaluation of Construct Validity and Reliability of the Arabic and English Versions of Biggs Study Process Scale Among Saudi University Students
    Nadeem Shafique Butt, Muhammad Abid Bashir, Sami Hamdan Alzahrani, Zohair Jamil Gazzaz, Ahmad Azam Malik
    SAGE Open.2023;[Epub]     CrossRef
  • Development and Preliminary Validation of the Physical Education-Study Process Questionnaire : Insights for Physical Education University Students
    Amayra Tannoubi, Noomen Guelmami, Tore Bonsaksen, Nasr Chalghaf, Fairouz Azaiez, Nicola Luigi Bragazzi
    Frontiers in Public Health.2022;[Epub]     CrossRef
  • The Association Between Learning Styles, Time Management Skills and Pharmacology Academic Performance Among First Year Medical Students in Universiti Putra Malaysia (UPM) During the COVID-19 Pandemic
    Azmah Sa'at, Suryati Mohd. Thani, Safuraa Salihan, Nur Izah Ab. Razak, Siti Saleha Masrudin
    Malaysian Journal of Medicine and Health Sciences.2022; 18(s14): 94.     CrossRef
  • Outcomes of 2 Multimodal Human Anatomy Courses Among Doctor of Physical Therapy Students (Entry-Level): A Quasi-experimental Study
    Sara F. Maher, Deborah J. Doherty
    Journal of Physical Therapy Education.2021; 35(1): 38.     CrossRef
  • Study Approaches of Life Science Students Using the Revised Two-Factor Study Process Questionnaire (R-SPQ-2F)
    Miguel Leiva-Brondo, Jaime Cebolla-Cornejo, Rosa Peiró, Nuria Andrés-Colás, Cristina Esteras, María Ferriol, Hugo Merle, María José Díez, Ana Pérez-de-Castro
    Education Sciences.2020; 10(7): 173.     CrossRef
  • Assessing ‘approaches to learning’ in Botswana, Ghana and Kenya
    Caine Rolleston, Rebecca Schendel, Ana M Grijalva Espinosa
    Research in Comparative and International Education.2019; 14(1): 118.     CrossRef
  • Psychometric properties of the revised two-factor study process questionnaire r-spq-2f - spanish version
    Clara Vergara-Hernández, Miguel Simancas-Pallares, Zoila Carbonell-Muñoz
    Duazary.2019; 16(2): 205.     CrossRef
  • Enfoques de Aprendizaje según el R-SPQ-2F: Análisis de sus propiedades psicométricas en estudiantes universitarios de Buenos Aires
    Agustín Freiberg Hoffmann, María Mercedes Fernández Liporace
    Revista Colombiana de Psicología.2016;[Epub]     CrossRef
