1.Emerging tasks of specialty certifying examination: educational measurement considerations.
Journal of the Korean Medical Association 2012;55(2):131-137
Medical specialty systems were launched in 1951 by the National Medical Services Law. The following year, the specialty certifying examination was implemented in the form of a portfolio evaluation. A paper-and-pencil examination was introduced in 1960, and the 55th examination was carried out in January 2012. Currently, 26 specialties are represented, and the overall pass rate is over 90%. The examination consists of a step 1 paper-and-pencil test and a step 2 skills test. In the step 1 test, the items are multiple-choice questions and short-answer questions. A clinical performance examination (CPX) is partially applied in the step 2 test. To cope with changes in the social situation and the growth of medical services, developmental changes are needed in the specialty certifying examination. Performance assessment is an alternative worth considering, and the CPX should become a major part of the skills test. A computer-based test should be introduced as soon as possible, and it could eventually be developed into an adaptive test.
Educational Measurement
;
Jurisprudence
2.Developing and Testing an Evaluation Tool to Measure Clinical Teaching.
Korean Journal of Medical Education 2011;23(1):49-59
PURPOSE: Teaching evaluation tools can be used to provide feedback to medical educators and help them improve their teaching skills. The purpose of this study was to develop a clinical teaching evaluation tool and test its efficacy. METHODS: The draft tool was developed based on the clinical education literature and was modified by the stakeholder groups. To examine the quality of the instrument, content and construct validity, as well as reliability and fitness, were analyzed. Content validity was tested by the stakeholder groups using a survey, and construct validity was verified by confirmatory factor analysis using LISREL 8.8. Internal consistency of the items was assessed through Cronbach alpha estimation. Rasch analysis using Winsteps 3.65 was performed to estimate the fitness of the tool. RESULTS: The resulting tool consisted of 4 large categories, 25 small categories, and 43 items. The average importance rating of the 43 items was 4.03 (range, 3.63 to 4.29). Cronbach alpha was 0.9689, and the correlation coefficients between the items were high. With regard to construct validity, 10 items needed minor modifications in the category setting. As the infit (0.76 to 1.23) and outfit (0.75 to 1.40) indices show, 42 items fit the item response theory model. CONCLUSION: The clinical teaching evaluation tool developed in this study is valid, reliable, and fits the item response theory model. It can be used as an evaluation method in a variety of clinical teaching settings.
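The internal-consistency figure reported above can be reproduced from an examinee-by-item score matrix as k/(k-1) times (1 minus the sum of item variances over the variance of total scores); a minimal pure-Python sketch (the function name and sample data are illustrative, not from the study):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-examinee item-score lists."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variance
    total_var = variance([sum(row) for row in scores])   # variance of totals
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```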
Educational Measurement
;
Program Evaluation
3.Equating Scores Using Bridging Stations on the Clinical Performance Examination.
Korean Journal of Medical Education 2013;25(2):131-137
PURPOSE: This study examined the use of the Tucker linear equating method in producing individual students' scores in 3 groups with bridging stations over 3 consecutive days of the clinical performance examination (CPX) and compared the differences in scoring patterns by bridging number. METHODS: Data were drawn from 88 examinees from 3 different CPX groups-DAY1, DAY2, and DAY3-each of which comprised 6 stations. Each group had 3 common stations, and each group had 2 or 3 stations that differed from the other groups. DAY1 and DAY3 were equated to DAY2. Equated mean scores and standard deviations were compared with the originals. DAY1 and DAY3 were equated again, and the differences in scores (equated score minus raw score) were compared between the 3 sets of equated scores. RESULTS: After equating to DAY2, the DAY1 mean score decreased from 58.188 to 56.549 while its standard deviation changed from 4.991 to 5.046, and the DAY3 mean score fell from 58.351 to 58.057 while its standard deviation changed from 5.546 to 5.856, which demonstrates that the scores of examinees in DAY1 and DAY3 were adjusted by the equating. The patterns in score differences between the sets equated to DAY1, DAY2, and DAY3 yielded information on the soundness of the equating results from individual and overall comparisons. CONCLUSION: To generate equated scores across 3 groups on 3 consecutive days of the CPX, we applied the Tucker linear equating method. We also present a method of equating the remaining days to the anchoring day by means of the bridging stations.
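In the Tucker method, the bridging stations play the role of an internal anchor: the regression of each form's total score on the anchor is used to estimate what each group's mean and variance would be in a synthetic population, and a linear transformation then maps scores from one form onto the other. A minimal sketch under the textbook Tucker formulation (the function and sample data below are illustrative, not the study's):

```python
from statistics import mean, variance

def _cov(a, b):
    """Sample covariance of two equal-length score lists."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def tucker_equate(x, vx, y, vy, score):
    """Map a form-X score onto the form-Y scale (Tucker linear equating).

    x, vx: group 1 total and anchor (bridging-station) scores;
    y, vy: group 2 total and anchor scores.
    Synthetic-population weights are proportional to group sizes.
    """
    n1, n2 = len(x), len(y)
    w1, w2 = n1 / (n1 + n2), n2 / (n1 + n2)
    g1 = _cov(x, vx) / variance(vx)        # regression slope of X on anchor
    g2 = _cov(y, vy) / variance(vy)        # regression slope of Y on anchor
    dmu = mean(vx) - mean(vy)              # anchor mean difference
    dvar = variance(vx) - variance(vy)     # anchor variance difference
    # Synthetic-population moments under the Tucker assumptions
    mu_x = mean(x) - w2 * g1 * dmu
    mu_y = mean(y) + w1 * g2 * dmu
    var_x = variance(x) - w2 * g1 ** 2 * dvar + w1 * w2 * g1 ** 2 * dmu ** 2
    var_y = variance(y) + w1 * g2 ** 2 * dvar + w1 * w2 * g2 ** 2 * dmu ** 2
    # Linear equating: match synthetic-population means and SDs
    return mu_y + (var_y / var_x) ** 0.5 * (score - mu_x)

# With identical anchor distributions, one group mean maps onto the other
print(tucker_equate([2, 4, 6, 8, 10], [1, 2, 3, 4, 5],
                    [11, 12, 13, 14, 15], [1, 2, 3, 4, 5], 6))  # → 13.0
```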
Clinical Competence
;
Educational Measurement
4.Introduction of Objective Structured Clinical Examination (OSCE) to Residency Examination.
Korean Journal of Medical Education 2008;20(4):343-349
PURPOSE: The purpose of this study is to analyze the development and implementation of the Objective Structured Clinical Examination (OSCE) for the residency applicant examination at Daegu-Catholic Hospital. METHODS: Fifty-four Daegu-Catholic residency applicants were evaluated by written examination, internship scores, and the OSCE. The correlations between written examination scores, internship scores, and OSCE scores were assessed. RESULTS: The correlation between OSCE and internship scores tended to be stronger than that between the written test and internship scores, but the difference between the OSCE-internship and written test-internship correlation coefficients was not statistically significant. The distribution of OSCE and internship scores on a graph corroborated this relationship between the two variables. CONCLUSION: The OSCE presents more objective criteria for residency application tests.
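The comparison above rests on ordinary Pearson correlation coefficients between pairs of score lists; a minimal pure-Python sketch (the scores shown are made up for illustration):

```python
from statistics import mean

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length score lists."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# A perfectly linear relationship gives r = 1.0
print(pearson_r([70, 80, 90], [140, 160, 180]))  # → 1.0
```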
Clinical Competence
;
Educational Measurement
;
Internship and Residency
6.Application and enlightenment of formative assessment in the innovation and development of higher education in basic medical science.
Li LI ; Xin CONG ; Li-Ling WU
Acta Physiologica Sinica 2020;72(6):743-750
Formative assessment aims at cultivating and improving students' comprehensive qualities and abilities. It captures and analyzes the dynamic changes in students' learning process through flexible and diverse assessment methods. The assessment covers multiple aspects, including knowledge, abilities, attitudes, and literacy, which helps teachers and students formulate practical and efficient improvement strategies and meets students' inherent need for comprehensive development. This review describes the connotation of formative assessment and summarizes its application in higher education in basic medical science in China. Existing shortcomings are identified, and reasonable advice is proposed. We also introduce our experience in applying formative assessment in pathophysiology education. This review provides reference and inspiration for the reform and innovation of the assessment system in higher education in basic medical science.
China
;
Education, Medical
;
Educational Measurement
;
Humans
7.Comparisons of item difficulty and passing scores by test equating in a basic medical education curriculum.
Jung Eun HWANG ; Na Jin KIM ; Su Young KIM
Korean Journal of Medical Education 2019;31(2):147-157
PURPOSE: Test equating studies in medical education have been conducted only for high-stakes exams or to compare two tests given in a single course. Based on item response theory, we equated computer-based test (CBT) results from the basic medical education curriculum at the College of Medicine, the Catholic University of Korea and evaluated the validity of using fixed passing scores. METHODS: We collected 232 CBTs (28,636 items) for 40 courses administered over a study period of 9 years. The final data used for test equating included 12 pairs of tests. After test equating, Wilcoxon rank-sum tests were used to identify changes in item difficulty between previous tests and subsequent tests. We then identified gaps between equated passing scores and actual passing scores in subsequent tests through an observed-score equating method. RESULTS: The Wilcoxon rank-sum tests indicated that there were no significant differences in item difficulty distribution by year for seven pairs. In the other five pairs, however, the items were significantly more difficult in subsequent years than in previous years. Concerning the gaps between equated passing scores and actual passing scores, the equated passing scores in 10 pairs were found to be lower than the actual passing scores. In the other two pairs, the equated passing scores were higher than the actual passing scores. CONCLUSION: Our results suggest that the item difficulty distributions of tests administered in the same course during successive terms can differ significantly. It may therefore be problematic to use fixed passing scores without taking this possibility into account.
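The Wilcoxon rank-sum comparison of two difficulty distributions can be sketched with the usual large-sample normal approximation (ties are not midranked here; the function names and data are illustrative, not the study's):

```python
import math

def rank_sum_z(x, y):
    """Wilcoxon rank-sum z statistic (normal approximation, ignoring ties).

    Ranks the pooled sample, sums the ranks belonging to x, and
    standardizes against the null mean and SD of that rank sum.
    """
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = {idx: r + 1 for r, (_, idx) in enumerate(pooled)}
    w = sum(ranks[i] for i in range(len(x)))         # rank sum of sample x
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2                      # null mean of W
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # null SD of W
    return (w - mu) / sigma

def two_sided_p(z):
    """Two-sided p-value from a standard-normal z score."""
    return math.erfc(abs(z) / math.sqrt(2))

# A clearly shifted pair of difficulty samples gives a large |z| and small p
z = rank_sum_z(list(range(1, 11)), list(range(11, 21)))
print(round(z, 3), two_sided_p(z) < 0.001)  # → -3.78 True
```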
Curriculum
;
Education, Medical
;
Educational Measurement
;
Korea
;
Methods
8.Standardized Patients' Accuracy in Recording Checklist Items during Clinical Performance Examinations.
Jwa Seop SHIN ; Seong A LEE ; Hoonki PARK
Korean Journal of Medical Education 2005;17(2):197-203
PURPOSE: Standardized patients (SPs) participate in clinical performance examinations not only to simulate case scenarios but also to evaluate the performance of students using a checklist. Accuracy in checking off checklist items is one of the most important factors determining the reliability of this examination. The purposes of this study were to determine the SPs' overall accuracy in recording checklist items and whether their accuracy was affected by certain characteristics of the checklist items. METHODS: Three professors, who had been fully involved in scenario development and SP training, reviewed videotapes of the examination and evaluated the performance of the students using the same checklist. The SPs' checklists were scored against this 'correct checklist'. The checklists and checklist guidelines of the items scored under 50 out of 100 were analyzed. RESULTS: The SPs' accuracy in recording checklist items was 86.9% and was affected by certain characteristics, such as the complexity or ambiguity of the checklists and checklist guidelines. CONCLUSION: In this study, the SPs' accuracy in recording checklist items was good to very good, and the results suggest that accuracy could be improved by the elaboration of the checklists and checklist guidelines.
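The accuracy figure above is item-level agreement between each SP's checklist and the professors' 'correct checklist'; a minimal sketch of that computation (the marks shown are made up for illustration):

```python
def checklist_accuracy(sp_marks, correct_marks):
    """Proportion of checklist items the SP marked the same as the gold standard.

    Marks are parallel lists of 0/1 entries, one per checklist item.
    """
    matches = sum(s == c for s, c in zip(sp_marks, correct_marks))
    return matches / len(correct_marks)

# The SP agrees with the professors on 7 of 8 items
print(checklist_accuracy([1, 1, 0, 1, 0, 0, 1, 1],
                         [1, 1, 0, 1, 0, 1, 1, 1]))  # → 0.875
```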
Checklist*
;
Educational Measurement
;
Humans
;
Observer Variation
;
Videotape Recording
9.Development of the implant surgical technique and assessment rating system.
Jung Chul PARK ; Ji Wan HWANG ; Jung Seok LEE ; Ui Won JUNG ; Seong Ho CHOI ; Kyoo Sung CHO ; Jung Kiu CHAI ; Chang Sung KIM
Journal of Periodontal & Implant Science 2012;42(1):25-29
PURPOSE: There has been no attempt to establish an objective implant surgical evaluation protocol to assess residents' surgical competence and improve their surgical outcomes. The present study presents a newly developed assessment and rating system and a simulation model that can assist teaching staff in evaluating residents' surgical events and surgical skills objectively. METHODS: Articles published in peer-reviewed English journals were selected using several scientific databases and subsequently reviewed with regard to surgical competence and assessment tools. In particular, medical journals reporting rating and evaluation protocols for various types of surgery were thoroughly analyzed. Based on these studies, an implant surgical technique assessment and rating system (iSTAR) was developed. A specialized dental typodont was also developed for the valid and reliable assessment of surgery. RESULTS: The iSTAR consists of two parts: surgical information and task-specific checklists. A specialized simulation model was subsequently produced and can be used in combination with the iSTAR. CONCLUSIONS: The assessment and rating system provided may serve as a reference guide for teaching staff to evaluate residents' implant surgical techniques.
Checklist
;
Dental Implantation
;
Educational Measurement
;
Mental Competency
;
Oral Surgical Procedures
10.The Current Educational Measurement of Family Practice Residents.
Min Jeong KIM ; Ho Cheol SHIN ; Cheol Hawn KIM ; Youn Seon CHOI ; Hang LEE ; Ae Kyung CHOI ; Yu Jin PAEK ; Eon Sook LEE ; Jin Ho PARK ; Yoon Jung CHANG
Journal of the Korean Academy of Family Medicine 2007;28(8):616-625
BACKGROUND: The educational assessment of residents is very important for checking their knowledge and improving their abilities. In the case of family practice, residents spend most of their training time in outside rotations with other specialties. Therefore, they are supposed to be evaluated by each specialist after each rotation. To obtain a more accurate picture of this assessment, we directly surveyed family practice residents and family practice faculty members. METHODS: A questionnaire was sent to family practice residents and family practice faculty members at 116 hospitals from May to June 2006. The response rates were 29% and 78%, respectively. RESULTS: In total, 70.2% of family practice residents thought an assessment was necessary, but only 51.7% had been evaluated. Overall, 90.4% of family practice faculty members knew about the assessment of residents made by other specialists, and only 56.7% of faculty members requested other specialists to assess their family practice residents. CONCLUSION: Most family practice residents and their faculty members recognized the need for assessment, but they complained that there was no objectivity and that a proper assessment tool was lacking. To solve this problem, it is urgent to develop a unified form and guidelines for assessment.
Educational Measurement*
;
Family Practice*
;
Humans
;
Specialization
;
Surveys and Questionnaires