1. Challenges and Strategies in Medical Education in the COVID-19 Pandemic
Korean Medical Education Review 2021;23(3):154-159
The coronavirus disease 2019 (COVID-19) pandemic has profoundly impacted all aspects of undergraduate, postgraduate, and continuing medical education. Only the core focus of medical education, care for patients and communities, has remained constant across these sectors. Learners and educators have faced several challenges as the education and training of future doctors have continued in the midst of this crisis, including the cancellation of face-to-face classes and training, reduced opportunities for patient encounters, fairness issues in online assessment, the disruption of patient interview-based examinations, reflections on the role of doctors in society, and mental health problems linked to isolation and concerns about infection. In response to these disruptions, educators and institutions have rapidly deployed educational innovations. Schools have adopted strategies to overcome these challenges by delivering education in novel online formats, providing clinical experience through simulation or telehealth, introducing online assessment tools for formative purposes, encouraging learners' involvement in nonclinical activities such as community service, and making resources and programs available to sustain learners' mental health and wellness. In short, educators and institutions worldwide have faced drastic changes in medical education during the COVID-19 pandemic. At the same time, the rapid expansion of online education has caused other problems, such as a lack of human collaboration. The long-term effects of the COVID-19 pandemic on medical education need to be studied further.
2. How Do Medical Students Prepare for Examinations: Pre-assessment Cognitive and Meta-cognitive Activities
So Jung YUNE ; Sang Yeoup LEE ; Sunju IM
Korean Medical Education Review 2019;21(1):51-58
Although 'assessment for learning' rather than 'assessment of learning' has been emphasized recently, how students actually learn before examinations remains unclear. The purpose of this study was to investigate pre-assessment learning activities (PALA) and to identify the mechanism factors (MF) that influence those activities. We also compared the PALA and MF of written exams with those of the clinical performance examination/objective structured clinical examination (CPX/OSCE) in third-year (N=121) and fourth-year (N=108) medical students. Through a literature review and discussion, questionnaires with a 5-point Likert scale were developed to measure PALA and MF. PALA comprised the constructs of cognitive and meta-cognitive activities, and MF had sub-components of personal, interpersonal, and environmental factors. Cronbach's α coefficient was used to assess survey reliability, while the Pearson correlation coefficient and multiple regression analysis were used to investigate the influence of MF on PALA. A paired t-test was applied to compare the PALA and MF of written exams with those of the CPX/OSCE in third- and fourth-year students. The Pearson correlation coefficients between PALA and MF were 0.479 for written exams and 0.508 for the CPX/OSCE. MF explained 24.1% of the variance in PALA for written exams and 25.9% for the CPX/OSCE. Both PALA and MF differed significantly between written exams and the CPX/OSCE in third-year students, whereas no differences were found in fourth-year students. Educators need to consider the MFs that influence PALA to encourage 'assessment for learning'.
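The reliability, correlation, and regression analyses described in this abstract follow a standard pattern. A minimal sketch in Python of how such figures could be produced is given below; the file name, the column prefixes (pala_, mf_personal, mf_interpersonal, mf_environmental), and the data layout are illustrative assumptions, not the authors' actual dataset.

```python
# Minimal sketch of the analyses described above; all file and column
# names are hypothetical placeholders, not the authors' dataset.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# One row per student; Likert items are columns prefixed "pala_" or "mf_".
df = pd.read_csv("survey.csv")
pala = df.filter(like="pala_").mean(axis=1)  # composite PALA score
mf = df.filter(like="mf_").mean(axis=1)      # composite MF score

print("alpha (PALA items):", cronbach_alpha(df.filter(like="pala_")))
print("Pearson r:", stats.pearsonr(mf, pala))  # paper reports 0.479 / 0.508

# Multiple regression: variance in PALA explained by the three MF sub-scores.
X = np.column_stack([
    np.ones(len(df)),
    df.filter(like="mf_personal").mean(axis=1),
    df.filter(like="mf_interpersonal").mean(axis=1),
    df.filter(like="mf_environmental").mean(axis=1),
])
beta, res, *_ = np.linalg.lstsq(X, pala.to_numpy(), rcond=None)
r_squared = 1 - res[0] / ((pala - pala.mean()) ** 2).sum()
print("R^2:", r_squared)  # paper reports 24.1% (written) and 25.9% (CPX/OSCE)
```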
Education, Medical, Undergraduate; Educational Measurement; Humans; Learning; Students, Medical
3. Educational Program Evaluation System in a Medical School
So-Jung YUNE ; Sang-Yeoup LEE ; Sunju IM
Korean Medical Education Review 2020;22(2):131-142
A systematic educational program evaluation system is essential for continuous quality improvement in undergraduate medical education. Monitoring and evaluation (M&E) are two distinct but complementary processes within an evaluation system that emphasizes a formative purpose. Monitoring involves regular data collection to track processes and results, while evaluation requires periodic judgments aimed at improvement. We recently finished implementing an educational evaluation based on the M&E concept at a medical school. The evaluation system consists of two loops, one at the lesson/course level and the other at the phase/graduation level. We conducted evaluation activities in four stages: planning, monitoring, evaluation, and improvement. In the planning stage, we clarified the purpose of the evaluation, formulated a plan to engage stakeholders, determined evaluation criteria and indicators, and developed an evaluation plan. Next, during the monitoring stage, we developed evaluation instruments and methods and then collected data. In the evaluation stage, we analyzed the results and judged them against the criteria of the two loops. Finally, we reviewed the evaluation results with stakeholders to make improvements. We have recognized several problems, including an excessive workload, a lack of expertise, insufficient consideration of stakeholders' evaluation questions, and inefficient data collection. We need to share the value of evaluation and build up the system gradually.
4. Cohort Establishment and Operation at Pusan National University School of Medicine
So-Jung YUNE ; Sang-Yeoup LEE ; Sunju IM
Korean Medical Education Review 2023;25(2):119-125
Pusan National University School of Medicine (PNUSOM) began analyzing the cohort of pre-medical students admitted in 2015 and has conducted purposeful analyses for the past 3 years. The aim of this paper is to introduce the process of cohort establishment, the cohort's composition, and how the cohort analysis results are utilized. PNUSOM did not initially form a cohort with a defined purpose or through a systematic process, but it was able to collect longitudinal data on students through the establishment of a Medical Education Information System and an organization that supports medical education. Cohort construction at our university is distinctive in its clear orientation toward research questions, its flexibility in cohort composition, and its subsequent supplementation of guidelines. We investigated the relevance of admission factors, performance improvement, satisfaction with the educational environment, and promotion and failure rates among undergraduate students, as well as performance levels and career paths among graduates. The results were presented to the Admissions Committee, Curriculum Committee, Learning Outcomes Committee, and Student Guidance Committee as a basis for innovations and improvements in education. Since cohort studies require long-term effort, it is necessary to ensure the efficiency of data collection for graduate cohorts, as well as the validity and ethics of the research.
5. Assessing clinical reasoning abilities of medical students using clinical performance examination.
Sunju IM ; Do Kyong KIM ; Hyun Hee KONG ; Hye Rin ROH ; Young Rim OH ; Ji Hyun SEO
Korean Journal of Medical Education 2016;28(1):35-47
PURPOSE: The purpose of this study was to investigate the reliability and validity of a new clinical performance examination (CPX) for assessing clinical reasoning skills and to evaluate students' clinical reasoning ability. METHODS: Third-year medical students (n=313) in the Busan-Gyeongnam consortium in 2014 were included in the study. One of the 12 stations was developed to assess clinical reasoning abilities. The scenario and checklists for the station were revised by six experts. The chief complaint of the case was rhinorrhea, accompanied by fever, headache, and vomiting. The checklists focused on identifying the main problem and approaching it systematically. Students interviewed the patient and recorded a subjective and objective findings, assessment, and plan (SOAP) note within 15 minutes. Two professors assessed each student simultaneously. We statistically analyzed the scores and survey responses. RESULTS: The Cronbach α of the station was 0.878, and the Cohen κ coefficient between graders was 0.785. Students agreed that the CPX was an adequate tool to evaluate their performance, but some graders argued that the CPX failed to secure its validity because they did not fully understand the case. One hundred eight students (34.5%) identified the essential problem early, and only 58 (18.5%) performed systematic history taking and physical examination. One hundred seventy-three (55.3%) communicated the correct diagnosis to the patient. Most students had trouble writing the SOAP note. CONCLUSION: To ensure reliability and validity, inter-rater agreement should be secured. Students' clinical reasoning skills were insufficient. Students need to be trained in problem identification, reasoning skills, and accurate record-keeping.
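As a side note on the agreement statistic: Cohen's κ between two graders can be computed as in the sketch below. The rating vectors and the 0/1/2 scale are hypothetical, and sklearn's cohen_kappa_score is one common implementation, not necessarily the one the authors used.

```python
# Sketch: Cohen's kappa between two graders' checklist ratings.
# The vectors are hypothetical marks (0 = not done, 1 = partial, 2 = done)
# for the same sequence of student-item pairs.
from sklearn.metrics import cohen_kappa_score

grader_a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 2]
grader_b = [2, 1, 0, 2, 1, 1, 0, 1, 2, 2]

kappa = cohen_kappa_score(grader_a, grader_b)
print(f"Cohen's kappa: {kappa:.3f}")  # the study reports 0.785
```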
Checklist; *Clinical Competence; Communication; Comprehension; *Education, Medical, Undergraduate; Educational Measurement/*standards; Humans; Medical History Taking; Medical Records; Observer Variation; Physical Examination; Physician-Patient Relations; *Problem-Based Learning; Reproducibility of Results; Republic of Korea; *Schools, Medical; *Students, Medical; Surveys and Questionnaires; *Thinking; Universities
6. Medical students' clinical performance of dealing with patients in the context of domestic violence.
Hyun Hee KONG ; Sunju IM ; Ji Hyun SEO ; Do Kyong KIM ; HyeRin ROH
Korean Journal of Medical Education 2018;30(1):31-40
PURPOSE: The aim of this study was to examine the clinical performance and determine the performance patterns of medical students in standardized patient (SP)-based examinations of domestic violence (DV). METHODS: The clinical performance scores in a DV station with an SP of third-year (n=111, in 2014) and fourth-year (n=143, in 2016) medical students at five universities in the Busan-Gyeongnam Clinical Skills Examination Consortium were analyzed in this study. The scenarios and checklists of the DV cases were developed by the consortium's case development committee. The students' performance was compared with their performance in the other SP stations. To determine students' performance patterns when investigating DV, the checklist items were categorized into six domains: disclosure strategy (D), DV-related history taking (H), checking the perpetrator's psychosocial state (P), checking the victim's condition (V), negotiating with and persuading the interviewee (N), and providing information about DV (I). RESULTS: Medical students performed more poorly in the DV station than in the other SP stations in the same examination. Most students confirmed the identity of the perpetrator and commented on confidentiality but ignored the perpetrator's state and the patient's physical and psychological condition. The students performed well in the D, H, and I domains but poorly in the P, V, and N domains. CONCLUSION: Medical students showed poor clinical performance in the DV station. They conducted an 'event-oriented interview' rather than 'patient-centered' communication. An integrated educational program on DV should be established to improve students' clinical performance.
Checklist; Child; Child Abuse; Clinical Competence; Confidentiality; Disclosure; Domestic Violence*; Education, Medical, Undergraduate; Humans; Negotiating; Students, Medical
7. Assessing the Validity of the Preclinical Objective Structured Clinical Examination Using Messick’s Validity Framework
Hye-Yoon LEE ; So-Jung YUNE ; Sang-Yeoup LEE ; Sunju IM
Korean Medical Education Review 2021;23(3):185-193
Students must be familiar with clinical skills before starting clinical practice, both to ensure patient safety and to enable efficient learning. However, performance is mainly tested in the third or fourth year of medical school, and studies using a validity framework have not been reported in Korea. We analyzed the validity of a performance test conducted among second-year students in terms of content, response process, internal structure, relationships with other variables, and consequences, according to Messick's framework. The analysis showed that content validity was secured by developing cases according to a predetermined blueprint. The quality of the response process was controlled by training and calibrating raters. Regarding the internal structure, (1) reliability estimated by generalizability theory was acceptable (coefficients of 0.724 and 0.786 for day 1 and day 2, respectively), and (2) the relevant domains were properly correlated, while the clinical performance examination (CPX) and objective structured clinical examination (OSCE) showed weaker relationships. OSCE/CPX scores were correlated with other variables, especially grade point average and oral structured exam scores. The consequences of this assessment were (1) leading students to learn clinical skills and study on their own, while causing students excessive stress due to a lack of motivation; (2) reminding educators of the need to apply practical teaching methods and give feedback on test results; and (3) providing an opportunity for the faculty to consider developing support programs. It is necessary to develop the blueprint more precisely according to students' level and to verify the validity of the response process with statistical methods.
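The generalizability (G) coefficients quoted above come from a variance-components analysis. A minimal single-facet sketch (persons crossed with stations, one observation per cell) is given below; the score matrix is randomly generated placeholder data, not the study's results, and the study's actual G-study design may have included more facets.

```python
# Sketch of a single-facet G study (persons x stations, fully crossed).
# The score matrix is random placeholder data, not the study's results.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(70, 8, size=(120, 12))  # rows: students, cols: stations

n_p, n_s = scores.shape
grand = scores.mean()
ss_p = n_s * ((scores.mean(axis=1) - grand) ** 2).sum()   # person effect
ss_s = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()   # station effect
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_s      # interaction/error

ms_p = ss_p / (n_p - 1)
ms_res = ss_res / ((n_p - 1) * (n_s - 1))

var_p = max((ms_p - ms_res) / n_s, 0.0)  # person (true-score) variance
var_res = ms_res                         # residual variance

# Relative G coefficient for a decision based on all n_s stations.
g_coef = var_p / (var_p + var_res / n_s)
print(f"G coefficient: {g_coef:.3f}")  # paper: 0.724 (day 1), 0.786 (day 2)
```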
9. Factors That Influence Educational Effectiveness and Learning Satisfaction in Biomedical Research Programs during Premedical School
So Jung YUNE ; Yong Sang PARK ; Jung Ho CHO ; Jong Min HAN ; Hee Min HWA ; Sang Yeoup LEE ; Sunju IM
Korean Medical Education Review 2018;20(1):32-43
Although student research programs have been implemented worldwide, research programs during premedical school have unique characteristics. The purpose of this study was to evaluate the factors that influence the effectiveness of premedical research programs. Eighty second-year premedical students at Pusan National University were included in the study. Effect elements and influential factors were extracted through literature reviews and in-depth individual interviews. A Likert-scale questionnaire was developed using the extracted elements and factors, and Cronbach's alpha coefficient was used to analyze the reliability of the survey. The mean and standard deviation for each question were calculated to evaluate educational effectiveness and learning satisfaction, and the influence of each factor was analyzed using correlation analysis. Students' research skills and knowledge improved in the short term; however, their interest in research or in a career as a researcher did not increase. Student interest, participation, and contribution were important factors. Among professor-related factors, passion, considerateness, and teaching method, including the level of the lesson, were influential. Curriculum implementation, support, and guidance were influential as well, whereas the evaluation system was not. To improve student research programs, these factors influencing educational effectiveness and learning satisfaction should be considered.
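The descriptive and correlational analyses summarized here are straightforward to script. The sketch below assumes a hypothetical CSV with one column per Likert item plus named factor and outcome scores; every file and column name is an illustrative assumption.

```python
# Sketch: item descriptives and factor-outcome correlations.
# File and column names are illustrative assumptions only.
import pandas as pd

df = pd.read_csv("premed_research_survey.csv")

# Mean and standard deviation for every Likert item.
print(df.describe().loc[["mean", "std"]].T)

# Correlate each influencing factor with the two outcome scores.
factors = ["student_interest", "professor_passion", "curriculum_support"]
outcomes = ["effectiveness", "satisfaction"]
print(df[factors + outcomes].corr().loc[factors, outcomes])
```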
Busan; Curriculum; Education; Education, Premedical; Humans; Learning; Program Evaluation; Students, Medical; Students, Premedical; Teaching
10. Authenticity, acceptability, and feasibility of a hybrid gynecology station for the Papanicolaou test as part of a clinical skills examination in Korea
Ji Hyun SEO ; Younglim OH ; Sunju IM ; Do Kyong KIM ; Hyun Hee KONG ; HyeRin ROH
Journal of Educational Evaluation for Health Professions 2018;15(1):4
PURPOSE: The objective of this study was to evaluate the authenticity, acceptability, and feasibility of a hybrid station that combined a standardized patient encounter and a simulated Papanicolaou test. METHODS: We introduced a hybrid station in the routine clinical skills examination (CSE) for 335 third-year medical students at 4 universities in Korea from December 1 to December 3, 2014. After the tests, we conducted an anonymous survey on the authenticity, acceptability, and feasibility of the hybrid station. RESULTS: A total of 334 medical students and 17 professors completed the survey. A majority of the students (71.6%) and professors (82.4%) agreed that the hybrid station was more authentic than the standard CSE. Over 60 percent of the students and professors responded that the station was acceptable for assessing the students' competence. Most of the students (75.2%) and professors (82.4%) assessed the required tasks as being feasible after reading the instructions. CONCLUSION: Our results showed that the hybrid CSE station was a highly authentic, acceptable, and feasible way to assess medical students' performance.
Anonyms and Pseudonyms; Clinical Competence; Gynecology; Humans; Korea; Mental Competency; Papanicolaou Test; Patient Simulation; Students, Medical