1. Profiling medical school learning environments in Malaysia: a validation study of the Johns Hopkins Learning Environment Scale.
Sean TACKETT ; Hamidah Abu BAKAR ; Nicole A SHILKOFSKI ; Niamh COADY ; Krishna RAMPAL ; Scott WRIGHT
Journal of Educational Evaluation for Health Professions 2015;12(1):39-
PURPOSE: While a strong learning environment is critical to medical student education, the assessment of medical school learning environments has confounded researchers. Our goal was to assess the validity and utility of the Johns Hopkins Learning Environment Scale (JHLES) for preclinical students at three Malaysian medical schools with distinct educational and institutional models. Two schools were new international partnerships, and the third was a school-leaver program established without an international partnership. METHODS: First- and second-year students responded anonymously to surveys at the end of the academic year. The surveys included the JHLES, a 28-item survey with five-point Likert scale response options; the Dundee Ready Educational Environment Measure (DREEM), the most widely used instrument for assessing learning environments internationally; a personal growth scale; and single-item global learning environment assessment variables. RESULTS: The overall response rate was 369/429 (86%). After adjusting for the medical school year, gender, and ethnicity of the respondents, the JHLES detected differences across institutions in four of seven domains (57%), with each school having a unique domain profile. The DREEM detected differences in one of five categories (20%). The JHLES correlated more strongly than the DREEM with two-thirds of the single-item variables and with the personal growth scale. The JHLES showed high internal reliability for the total score (alpha=0.92) and the seven domains (alpha=0.56-0.85). CONCLUSION: The JHLES detected variation in learning environment domains across three educational settings, thereby creating unique learning environment profiles. Interpreting these profiles may allow schools to understand how they are currently supporting trainees and to identify areas needing attention.
Anonyms and Pseudonyms; Surveys and Questionnaires; Education; Educational Measurement; Humans; Learning*; Malaysia*; Schools, Medical*; Students, Medical
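The reliability coefficients in the abstract above (alpha=0.92 overall; alpha=0.56-0.85 by domain) are presumably Cronbach's alpha, the standard internal-consistency statistic; as a reference point (the formula below is the textbook definition, not something stated in the abstract itself), Cronbach's alpha for a k-item scale is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)
\]

where k is the number of items (28 for the full JHLES), \(\sigma_{Y_i}^{2}\) is the variance of responses to item i, and \(\sigma_{X}^{2}\) is the variance of the summed total score; values near 1 indicate that items vary together, i.e., high internal consistency.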
2. Feasibility of clinical performance assessment of medical students on a virtual sub-internship in the United States
John WOLLER ; Sean TACKETT ; Ariella APFEL ; Janet RECORD ; Danelle CAYEA ; Shannon WALKER ; Amit PAHWA
Journal of Educational Evaluation for Health Professions 2021;18(1):12-
We aimed to determine whether it was feasible to assess medical students as they completed a virtual sub-internship. Six students (out of 31 who completed an in-person sub-internship) participated in a 2-week virtual sub-internship, caring for patients remotely. Residents and attendings assessed those 6 students in 15 domains using the same assessment measures as the in-person sub-internship. Raters marked “unable to assess” in 75/390 responses (19%) for the virtual sub-internship versus 88/3,405 (2.6%) for the in-person sub-internship (P=0.01), most frequently for the virtual sub-internship in the domains of physical examination (21, 81%), rapport with patients (18, 69%), and compassion (11, 42%). Students received complete assessments in most areas. Scores were higher for the in-person than for the virtual sub-internship (4.67 vs. 4.45, P<0.01) among students who completed both. Students uniformly rated the virtual clerkship positively. Students can be assessed in many domains in the context of a virtual sub-internship.