Comparison of Patient-Physician Interaction Scores of Clinical Practice Examination between Checklists and Rating Scale.
10.4082/kjfm.2018.39.2.96
- Author:
Nam Eun KIM(1);
Hoon Ki PARK;
Kyong Min PARK;
Bong Kyung SEO;
Kye Yeung PARK;
Hwan Sik HWANG
- Author Information:
1. Department of Family Medicine, Hanyang University College of Medicine, Seoul, Korea. hoonkp@hanyang.ac.kr
- Publication Type:Original Article
- Keywords:
Physician-Patient Relations;
Medical Education;
Educational Measurement;
Behavior Rating Scale;
Checklist
- MeSH:
Behavior Rating Scale;
Checklist*;
Dyspepsia;
Education, Medical;
Educational Measurement;
Humans;
Licensure;
Methods;
Physician-Patient Relations;
Students, Medical;
Weights and Measures
- From:Korean Journal of Family Medicine
2018;39(2):96-100
- Country:Republic of Korea
- Language:English
- Abstract:
BACKGROUND: The clinical practice examination (CPX) was introduced into the Korean Medical Licensing Examination (KMLE) in 2010, and the Seoul-Gyeonggi CPX Consortium developed its patient-physician interaction (PPI) assessment tool in 2004. Both institutions use rating scales over defined PPI sections but differ in how they score key components. This study investigated the accuracy of standardized patients' rating-scale scores by comparing them with checklist-based scores and examined the concurrent validity of the two PPI rating tools.
METHODS: A dyspepsia case from an educational CPX module was administered to 116 fourth-year medical students at Hanyang University College of Medicine. One experienced standardized patient rated the examinations using the two PPI rating scales. She then scored checklists comprising 43 items derived from the two original scales while reviewing video recordings of the same students. Pearson's correlation coefficients were calculated between the resulting scores.
RESULTS: The correlations of the total PPI score between the checklist and rating-scale methods were 0.29 for the KMLE tool and 0.30 for the consortium tool. The correlations between the KMLE and consortium tools were 0.74 for the checklists and 0.83 for the rating scales. For section scores, the consortium tool showed statistically significant correlations between the two methods in only three of seven sections, and the KMLE tool in only two of five sections.
CONCLUSION: The rating-scale and checklist methods showed only a weak relationship in the PPI assessment, whereas the two assessment tools correlated highly when the same method was used. The current rating scale should be revised by reorganizing its key scoring components through factor analysis.
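The analysis described in the methods is a set of pairwise Pearson correlations between total PPI scores obtained with the two methods (checklist vs. rating scale) and the two tools (KMLE vs. consortium). The following Python snippet is a minimal illustrative sketch of how such correlations could be computed; it is not the authors' code, and the variable names and simulated scores are hypothetical placeholders.

```python
# Illustrative sketch only: pairwise Pearson correlations between
# checklist-based and rating-scale PPI scores for two assessment tools.
# All variable names and the simulated data below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 116  # number of examinees reported in the study

# Hypothetical total PPI scores per student for each tool and method.
kmle_checklist = rng.normal(70, 10, n_students)
kmle_rating = rng.normal(70, 10, n_students)
consortium_checklist = rng.normal(70, 10, n_students)
consortium_rating = rng.normal(70, 10, n_students)

pairs = {
    "KMLE: checklist vs. rating scale": (kmle_checklist, kmle_rating),
    "Consortium: checklist vs. rating scale": (consortium_checklist, consortium_rating),
    "Checklist: KMLE vs. consortium": (kmle_checklist, consortium_checklist),
    "Rating scale: KMLE vs. consortium": (kmle_rating, consortium_rating),
}

for label, (x, y) in pairs.items():
    r, p = pearsonr(x, y)  # Pearson's r and two-sided p-value
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```

With the study's real score data in place of the simulated arrays, each printed r would correspond to one of the correlations reported in the results (e.g., 0.29 and 0.30 between methods, 0.74 and 0.83 between tools).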