1. Prospects and applications of artificial intelligence and large language models in obstetrics and gynecology education: a narrative review
Journal of the Korean Medical Association 2025;68(3):161-168
This review examines how artificial intelligence (AI) and large language models (LLMs) can meet the diverse demands of obstetrics and gynecology education. Based on an exploration of their applications, benefits, and challenges, strategies are proposed for effectively integrating these emerging technologies into educational programs.
Current Concepts: Traditional obstetrics and gynecology education relies on lectures, hands-on training, and clinical exposure. However, these approaches often face limitations such as restricted practical opportunities and difficulties in remaining current with rapidly evolving medical knowledge. Recent AI advancements offer enhanced data analysis and problem-solving capabilities, while LLMs, through natural language processing, can supply timely, disease-specific information and facilitate simulation-based training. Despite these benefits, concerns persist regarding data bias, ethical considerations, privacy risks, and potential disparities in healthcare access.
Discussion and Conclusion: Although AI and LLMs hold promise for improving obstetrics and gynecology education by expanding access to current information and reinforcing clinical competencies, they also present drawbacks. Algorithmic transparency, data quality, and ethical use of patient information must be addressed to foster trust and effectiveness. Strengthening ethics education, developing Explainable AI, and establishing clear validation and regulatory frameworks are critical for minimizing risks such as over-diagnosis, bias, and inequitable resource distribution. When used responsibly, AI and LLMs can revolutionize obstetrics and gynecology education by enhancing teaching methods, promoting student engagement, and improving clinical preparedness.
4. Maternal serum placental growth factor and pregnancy-associated plasma protein A measured in the first trimester as parameters of subsequent pre-eclampsia and small-for-gestational-age infants: A prospective observational study.
Kyung Uk SUNG ; Jeong A ROH ; Kyung Jin EOH ; Eui Hyeok KIM
Obstetrics & Gynecology Science 2017;60(2):154-162
OBJECTIVE: To examine first-trimester maternal serum placental growth factor (PlGF) and pregnancy-associated plasma protein A (PAPP-A) levels in pregnancies associated with pre-eclampsia (PE) or small-for-gestational-age (SGA) infants, and to determine the predictive accuracy of PlGF and PAPP-A for either PE or SGA infants. METHODS: This prospective observational study enrolled 175 pregnant women; owing to participant withdrawal or loss to follow-up, delivery data were collected from the medical records of 155 women, including 4 with twin pregnancies. Maternal history was recorded, and PlGF and PAPP-A levels at 11 to 13 gestational weeks were measured. During the second trimester, the uterine artery systolic/diastolic ratio was measured. Multiples of the median (MoM) for PlGF and PAPP-A were calculated, and the associations of these values with risk factors for SGA and PE were evaluated. Logistic regression analysis was used to determine whether PlGF and PAPP-A are useful markers for predicting SGA infants. RESULTS: The PAPP-A MoM level was significantly lower in women with advanced maternal age, multiparous women, and women with gestational diabetes than in their counterparts. The PlGF and PAPP-A MoM levels were higher in women with twin pregnancies than in those with singleton pregnancies. There was a significant relationship between the maternal serum PAPP-A MoM level in the first trimester and the uterine artery systolic/diastolic ratio in the second trimester. Logistic regression analysis showed that low PlGF and PAPP-A MoM levels were predictors of SGA infants (odds ratio, 0.143; 95% confidence interval, 0.025 to 0.806 and odds ratio, 0.191; 95% confidence interval, 0.051 to 0.718, respectively). CONCLUSION: PlGF and PAPP-A are potentially useful first-trimester markers for SGA infants and some hypertensive disorders of pregnancy.
Diabetes, Gestational; Female; Follow-Up Studies; Humans; Infant*; Logistic Models; Maternal Age; Medical Records; Observational Study*; Odds Ratio; Plasma*; Pre-Eclampsia*; Pregnancy; Pregnancy Trimester, First*; Pregnancy Trimester, Second; Pregnancy, Twin; Pregnancy-Associated Plasma Protein-A; Pregnant Women; Prospective Studies*; Risk Factors; Staphylococcal Protein A*; Uterine Artery
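As a rough illustration of the modeling described in the abstract above, the sketch below converts first-trimester analyte levels to multiples of the median (MoM) and fits a logistic regression of SGA status on PlGF and PAPP-A MoM. All data, column names, and reference medians are hypothetical placeholders, not the study's dataset or code.

```python
# Minimal sketch (not the authors' code): logistic regression of SGA status on
# first-trimester PlGF and PAPP-A expressed as multiples of the median (MoM).
# All values below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 155  # number of analyzed deliveries in the abstract; data here are simulated
df = pd.DataFrame({
    "plgf": rng.lognormal(mean=3.0, sigma=0.4, size=n),   # pg/mL, simulated
    "pappa": rng.lognormal(mean=0.5, sigma=0.5, size=n),  # IU/L, simulated
    "sga": rng.integers(0, 2, size=n),                    # 1 = SGA infant, simulated
})

# Convert raw analyte levels to MoM by dividing by the cohort median
# (in practice the reference median is gestational-age-specific).
df["plgf_mom"] = df["plgf"] / df["plgf"].median()
df["pappa_mom"] = df["pappa"] / df["pappa"].median()

# Fit the logistic regression and report odds ratios with 95% confidence
# intervals, analogous in form to the ORs of 0.143 and 0.191 reported above.
X = sm.add_constant(df[["plgf_mom", "pappa_mom"]])
model = sm.Logit(df["sga"], X).fit(disp=False)
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```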
5. Fusion of Pedicular Cleft Using Pedicle Screw Fixation: A Case Report.
Jin Soo KIM ; Ki Tack KIM ; Kyung Soo SUK ; Jung Hee LEE ; Sang Hun LEE ; Jae Hyung EOH
Journal of Korean Society of Spine Surgery 2008;15(2):106-110
There is a broad range of lumbar pedicular and neural arch anomalies, with spina bifida occulta and spondylolysis being especially common. It is very rare for a pedicular cleft to be associated with contralateral spondylolysis in the same vertebral segment. We observed L4 spondylolisthesis, clefting of the left L4 hypertrophied pedicle associated with contralateral spondylolysis, and spinal stenosis on a radiographic study of the lumbar spine. Operative treatment was performed, with posterior decompression, partial removal of the hypertrophied pedicle, posterior lumbar interbody fusion using a cage, and posterior instrumentation. The patient's neurologic symptoms resolved after operative treatment. Fusion of the left pedicle cleft was observed on 6-month follow-up CT. We report one case of pedicular cleft fusion using pedicle screw fixation and present a review of the relevant literature.
Decompression; Follow-Up Studies; Neurologic Manifestations; Spina Bifida Occulta; Spinal Stenosis; Spine; Spondylolisthesis; Spondylolysis
6. Comparison of the Prognostic Outcome between High-Grade Ovarian Sertoli-Leydig Cell Tumors (SLCTs) and Low-Grade SLCTs
Kyung Jin EOH ; Junsik PARK ; Hye Min KIM ; Maria LEE ; Young Tae KIM
Yonsei Medical Journal 2021;62(4):366-369
The purpose of this study was to compare prognostic outcomes between patients with high-grade ovarian Sertoli-Leydig cell tumors (SLCTs) and those with low-grade SLCTs. We retrospectively reviewed the medical records of 24 patients pathologically diagnosed with SLCTs between 2006 and 2019 at two institutions. The patients were grouped according to pathological grade, with SLCTs classified as grade 1 (well differentiated), grade 2 (intermediately differentiated), or grade 3 (poorly differentiated) according to Meyer's classification. Statistical analysis was performed to compare survival outcomes by pathological grade. The median patient age was 42.5 years (range, 16–75). Eighteen patients (75%) had International Federation of Gynecology and Obstetrics stage I disease, and none were diagnosed at stage IV. Nine patients (37.5%) had grade 3 tumors, and 15 patients (62.5%) had grade 1–2 tumors. When comparing baseline clinical characteristics of the grade 1–2 and grade 3 groups, only serum CA125 level at diagnosis was significantly higher in the grade 3 group (38.34 vs. 382.29, p=0.002). Five patients with grade 3 disease experienced recurrence, whereas no recurrence was reported in grade 1–2 disease, and four of these five patients died. Consequently, grade 3 ovarian SLCT showed a significantly poorer prognosis than grade 1–2 disease (overall survival: hazard ratio, 14.25; 95% confidence interval, 1.881–108.0; log-rank p=0.010). Our findings are consistent with the concept that patients with stage I/grade 1–2 tumors have a good prognosis without adjuvant chemotherapy. Since grade 3 ovarian SLCT appears to be more lethal than grade 1–2 disease, patients with grade 3 SLCT might require more aggressive surgical intervention and post-treatment surveillance.
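For readers unfamiliar with the survival comparison summarized above, here is a minimal sketch, using the lifelines package and simulated follow-up data (not the study's patient records), of how a log-rank test and a Cox model yield the kind of p-value and hazard ratio reported for grade 3 versus grade 1–2 SLCT.

```python
# Minimal sketch with simulated data: compare overall survival between grade 3
# and grade 1-2 SLCT using a log-rank test and a Cox proportional-hazards model.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time in months, event = 1 if the patient died,
# high_grade = 1 for grade 3 tumors, 0 for grade 1-2. Values are arbitrary.
df = pd.DataFrame({
    "time":       [60, 72, 55, 80, 90, 24, 18, 30, 12, 40],
    "event":      [0,  0,  0,  1,  0,  1,  1,  1,  1,  0],
    "high_grade": [0,  0,  0,  0,  0,  1,  1,  1,  1,  1],
})

g3 = df[df["high_grade"] == 1]
g12 = df[df["high_grade"] == 0]

# Log-rank test between the two grade groups.
lr = logrank_test(g3["time"], g12["time"],
                  event_observed_A=g3["event"], event_observed_B=g12["event"])
print("log-rank p =", lr.p_value)

# Cox model; exp(coef) for high_grade is the hazard ratio, analogous in form
# to the HR of 14.25 reported in the abstract.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```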
8. Efficacy of large language models and their potential in Obstetrics and Gynecology education
Kyung Jin EOH ; Gu Yeun KWON ; Eun Jin LEE ; JoonHo LEE ; Inha LEE ; Young Tae KIM ; Eun Ji NAM
Obstetrics & Gynecology Science 2024;67(6):550-556
Objective:
The performance of large language models (LLMs) and their potential utility in obstetric and gynecological education are topics of ongoing debate. This study aimed to contribute to this discussion by examining the recent advancements in LLM technology and their transformative potential in artificial intelligence.
Methods:
This study assessed the performance of generative pre-trained transformer (GPT)-3.5 and -4 in understanding clinical information, as well as its potential implications for obstetric and gynecological education. Obstetrics and gynecology residents at three hospitals underwent an annual promotional examination, from which 116 of the 170 questions over 4 years (2020-2023) were analyzed, excluding 54 questions with images. The scores achieved by GPT-3.5, -4, and the 100 residents were compared.
Results:
The average scores across all 4 years for GPT-3.5 and GPT-4 were 38.79 (standard deviation [SD], 5.65) and 79.31 (SD, 3.67), respectively. For the first-, second-, and third-year resident groups, the cumulative annual average scores were 79.12 (SD, 9.00), 80.95 (SD, 5.86), and 83.60 (SD, 6.82), respectively. No statistically significant differences were observed between the scores of GPT-4 and those of the residents. When analyzing questions specific to obstetrics, the average scores for GPT-3.5 and GPT-4 were 33.44 (SD, 10.18) and 90.22 (SD, 7.68), respectively.
Conclusion:
GPT-4 demonstrated exceptional performance in obstetrics, different types of data interpretation, and problem solving, showcasing the potential utility of LLMs in these areas. However, acknowledging the constraints of LLMs is crucial, and their use should augment human expertise and discernment.
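As a back-of-the-envelope illustration of the comparison behind the statement that GPT-4's scores did not differ significantly from the residents', the sketch below runs a Welch's t-test from summary statistics. The group sizes are assumptions (4 yearly GPT-4 scores and a placeholder of 30 first-year residents), since the abstract does not report per-group sample sizes; this is not the study's actual analysis.

```python
# Minimal sketch (assumed sample sizes, not the study's analysis): Welch's
# two-sample t-test from the summary statistics quoted in the abstract,
# comparing GPT-4's yearly exam scores with first-year residents' scores.
from scipy.stats import ttest_ind_from_stats

# GPT-4: mean 79.31, SD 3.67 across the 4 exam years (n = 4 assumed).
# First-year residents: mean 79.12, SD 9.00; n = 30 is a placeholder, since
# the abstract does not state how many residents are in each year group.
result = ttest_ind_from_stats(
    mean1=79.31, std1=3.67, nobs1=4,
    mean2=79.12, std2=9.00, nobs2=30,
    equal_var=False,  # Welch's t-test; the group variances clearly differ
)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```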