1. Best fit model of exploratory and confirmatory factor analysis to the 2010 Medical Council of Canada's Qualifying Examination Part I clinical decision making cases.
Journal of Educational Evaluation for Health Professions 2015;12(1):11-
PURPOSE: This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada's Qualifying Examination Part I (MCCQE1) clinical decision making (CDM) cases. The outcomes of this study have important implications for a number of activities, including scoring and test development. METHODS: Candidates included all first-time Canadian medical graduates and international medical graduates who completed either the spring or fall 2010 test form of the MCCQE1. The fit of one- to five-factor exploratory models was assessed for the 2010 CDM case item response matrix. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling program Mplus® was used for all analyses. RESULTS: Of the five exploratory factor analytic models, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to a pediatrics factor, whereas the third factor loaded on psychiatry CDM cases. Among the five confirmatory factor analysis models examined in this study, the three- and four-factor lifespan period models and the five-factor discipline model provided the best fit. CONCLUSION: The above results suggest that broad discipline domains best account for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using these two constraints, i.e., discipline and age.
Decision Making*; Educational Measurement; Factor Analysis, Statistical*; Gynecology; Humans; Licensure, Medical; Obstetrics; Orthopedics; Pediatrics
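The exploratory step described in the abstract above — fitting one- to five-factor models to an item response matrix and comparing fit — can be sketched in a few lines with scikit-learn on simulated data. This is a minimal illustration only, not the Mplus® analysis from the study: the simulated continuous scores, the block loading structure, and the use of average log-likelihood as the fit index are all assumptions made for the example (the actual study fit models to categorical CDM case responses).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_examinees, n_items, n_factors = 600, 12, 3

# Hypothetical block loading matrix: items 0-3 load on factor 0,
# items 4-7 on factor 1, and items 8-11 on factor 2
loadings = np.zeros((n_items, n_factors))
for j in range(n_items):
    loadings[j, j // 4] = 0.8

# Simulated case scores = latent factors * loadings + noise
factors = rng.normal(size=(n_examinees, n_factors))
noise = rng.normal(scale=0.6, size=(n_examinees, n_items))
scores_matrix = factors @ loadings.T + noise

# Fit one- to five-factor exploratory models and record a fit index
# (mean per-examinee log-likelihood) for each
fits = {}
for k in range(1, 6):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(scores_matrix)
    fits[k] = fa.score(scores_matrix)

best_k = max(fits, key=fits.get)
```

With three true latent factors, the fit index improves sharply up to three factors and then levels off, mirroring the kind of comparison the abstract reports across its one- to five-factor models.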
2. Calibrating the Medical Council of Canada's Qualifying Examination Part I using an integrated item response theory framework: a comparison of models and designs.
Andre F DE CHAMPLAIN; Andre Philippe BOULAIS; Andrew DALLAS
Journal of Educational Evaluation for Health Professions 2016;13(1):6-
PURPOSE: The aim of this research was to compare different methods of calibrating the multiple-choice question (MCQ) and clinical decision making (CDM) components of the Medical Council of Canada's Qualifying Examination Part I (MCCQEI) based on item response theory (IRT). METHODS: Our data consisted of test results from 8,213 first-time applicants to the MCCQEI in the spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple-choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All three mixed-item-format (dichotomous MCQ responses and polytomous CDM case scores) calibrations were conducted using PARSCALE 4. RESULTS: The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499, or 0.02). In all three polytomous models, whether the MCQs were anchored or concurrently run with the CDM cases, the results suggest very poor fit. All IRT ability estimates from the dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods but also with respect to the actual decisions reported to candidates. The largest difference in pass rates was 4.78%, which occurred between the mixed-format concurrent 2-PL graded response model (pass rate = 80.43%) and the dichotomous anchored 1-PL calibration (pass rate = 85.21%). CONCLUSION: Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit of the item response matrix than the more complex, polytomous calibrations.
Calibration; Canada; Clinical Decision-Making; Dataset; Educational Measurement; Licensure
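The dichotomous and polytomous IRT models compared in the abstract above have simple closed forms. The sketch below implements the 2-PL item response function and Samejima's graded response model (the polytomous model named in the results) in NumPy; the parameter values in the usage lines are illustrative assumptions, not estimates from the MCCQEI calibrations.

```python
import numpy as np

def two_pl(theta, a, b):
    """2-PL IRT model: probability of a correct response for an examinee
    with ability theta on an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def graded_response(theta, a, thresholds):
    """Samejima's graded response model for a polytomous item (e.g., a CDM
    case score). `thresholds` are ordered category boundaries; returns the
    probability of each of len(thresholds) + 1 score categories."""
    # Cumulative probabilities P(score >= k), bracketed by 1 and 0, so that
    # adjacent differences give the individual category probabilities
    cumulative = two_pl(theta, a, np.asarray(thresholds, dtype=float))
    p_star = np.concatenate(([1.0], cumulative, [0.0]))
    return p_star[:-1] - p_star[1:]

# An average-ability examinee on an item of average difficulty with unit
# discrimination answers correctly half the time
p = two_pl(theta=0.0, a=1.0, b=0.0)  # -> 0.5

# Category probabilities for a hypothetical 4-category CDM-style case score
probs = graded_response(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
```

In a mixed-format calibration of the kind the abstract describes, the MCQ responses would be modeled with `two_pl` and the CDM case scores with `graded_response`, with all parameters estimated jointly on a common ability scale.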