1.High-Resolution Contrast-Enhanced 3D-Spoiled Gradient-Recalled Imaging for Evaluation of Intracranial Vertebral Artery and Posterior Inferior Cerebellar Artery in Lateral Medullary Infarction.
Youngno YOON ; Sung Jun AHN ; Sang Hyun SUH ; Ah Young PARK ; Tae Sub CHUNG
Journal of the Korean Society of Magnetic Resonance in Medicine 2014;18(1):17-24
PURPOSE: To determine whether high-resolution contrast-enhanced three-dimensional imaging with a spoiled gradient-recalled sequence (HR-CE 3D-SPGR) plays a meaningful role in the assessment of the intracranial vertebral artery (ICVA) and posterior inferior cerebellar artery (PICA) in lateral medullary infarction (LMI). MATERIALS AND METHODS: Twenty-five patients with confirmed LMI were retrospectively enrolled with approval from the institutional review board of our institute, and 3T MRI with HR-CE 3D-SPGR and contrast-enhanced magnetic resonance angiography (CE-MRA) was performed. Two radiologists who were blinded to clinical information and other brain MR images, including diffusion-weighted images, independently evaluated arterial lesions in the ICVA and PICA. Demographic characteristics, the area of LMI, and cerebellar involvement were analyzed and compared between patients with an arterial lesion in the ICVA only and patients with arterial lesions in both the ICVA and PICA on HR-CE 3D-SPGR. RESULTS: Twenty-two of twenty-five LMI patients had arterial lesions in the ICVA or PICA on HR-CE 3D-SPGR. However, twelve arterial lesions in the PICA were not visible on CE-MRA. Concurrent cerebellar involvement was more frequent in LMI patients with arterial lesions in both the ICVA and PICA than in those with an arterial lesion in the ICVA alone (p = 0.069). CONCLUSION: HR-CE 3D-SPGR can help evaluate arterial lesions in the ICVA and PICA in LMI patients.
Arteries*; Brain; Diffusion; Ethics Committees, Research; Humans; Infarction*; Magnetic Resonance Angiography; Magnetic Resonance Imaging; Pica; Retrospective Studies; Vertebral Artery*
2.Deep learning-based surgical phase recognition in laparoscopic cholecystectomy
Hye Yeon YANG ; Seung Soo HONG ; Jihun YOON ; Bokyung PARK ; Youngno YOON ; Dai Hoon HAN ; Gi Hong CHOI ; Min-Kook CHOI ; Sung Hyun KIM
Annals of Hepato-Biliary-Pancreatic Surgery 2024;28(4):466-473
Background/Aims:
In the era of minimally invasive surgery, artificial intelligence (AI) technology has been used to assess surgical quality, support education, and evaluate surgical performance from video recordings. Much attention has been paid to automating surgical workflow analysis from surgical videos for effective assessment and evaluation. This study aimed to design a deep learning model that automatically identifies surgical phases in laparoscopic cholecystectomy videos and to assess the accuracy of that phase recognition.
Methods:
One hundred and twenty cholecystectomy videos from a public dataset (Cholec80) and 40 laparoscopic cholecystectomy videos recorded between July 2022 and December 2022 at a single institution were collected. These datasets were split into training and test sets for the AI model at a 2:1 ratio. Test scenarios were constructed according to the structural characteristics of the trained model. No pre- or post-processing of the input data or inference output was performed, so that the effect of the labels on model training could be analyzed accurately.
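The abstract does not give implementation details, but a minimal sketch of a video-level 2:1 train/test split and simple frame extraction might look like the following (Python; the use of OpenCV, the sampling interval, and the file layout are assumptions, not details taken from the paper):

```python
# Minimal, hypothetical sketch of a video-level 2:1 train/test split and
# frame extraction; paths and the sampling interval are illustrative only.
import random
from pathlib import Path

import cv2  # OpenCV, assumed here for reading video frames


def split_videos(video_paths, train_fraction=2 / 3, seed=42):
    """Shuffle videos and split them at roughly a 2:1 ratio."""
    paths = list(video_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]


def extract_frames(video_path, out_dir, every_n_frames=25):
    """Save every n-th frame of a video as a JPEG image and return the count."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            cv2.imwrite(str(out_dir / f"frame_{index:07d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```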
Results:
A total of 98,234 frames were extracted from 40 cases as test data. The overall accuracy of the model was 91.2%. The most accurate phase was Calot’s triangle dissection (F1 score: 0.9421), whereas the least accurate phase was clipping and cutting (F1 score: 0.7761).
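As a rough illustration of how the reported metrics can be computed, the sketch below derives overall frame-level accuracy and per-phase F1 scores with scikit-learn; the phase names follow the public Cholec80 convention and the function is illustrative rather than the authors' actual evaluation code:

```python
# Illustrative frame-level evaluation: overall accuracy plus per-phase F1 scores.
# Phase names follow the Cholec80 convention; they are an assumption here.
from sklearn.metrics import accuracy_score, f1_score

PHASES = [
    "preparation",
    "calot_triangle_dissection",
    "clipping_and_cutting",
    "gallbladder_dissection",
    "gallbladder_packaging",
    "cleaning_and_coagulation",
    "gallbladder_retraction",
]


def evaluate(y_true, y_pred):
    """y_true and y_pred are per-frame phase labels (one entry per test frame)."""
    overall_acc = accuracy_score(y_true, y_pred)
    per_phase_f1 = f1_score(y_true, y_pred, labels=PHASES, average=None)
    return overall_acc, dict(zip(PHASES, per_phase_f1))
```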
Conclusions:
Our AI model identified the phases of laparoscopic cholecystectomy with high accuracy.
3.Development and Multicenter, Multiprotocol Validation of Neural Network for Aberrant Right Subclavian Artery Detection
So Yeon WON ; Ilah SHIN ; Eung Yeop KIM ; Seung-Koo LEE ; Youngno YOON ; Beomseok SOHN
Yonsei Medical Journal 2024;65(9):527-533
Purpose:
This study aimed to develop and validate a convolutional neural network (CNN) that automatically detects an aberrant right subclavian artery (ARSA) on preoperative computed tomography (CT) for thyroid cancer evaluation.
Materials and Methods:
A total of 556 CT scans with ARSA and 312 CT scans with a normal aortic arch from one institution were used as the training set for model development. A deep learning model that classifies patch images for ARSA was developed using a two-dimensional CNN based on EfficientNet. The diagnostic performance of the model was evaluated using external test sets (112 and 126 CT scans) from two institutions. The performance of the model was compared with that of radiologists for detecting ARSA using an independent dataset of 1,683 consecutive neck CT scans.
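The abstract names EfficientNet as the two-dimensional CNN backbone but gives no further implementation detail; a minimal, hypothetical sketch of such a binary patch classifier in PyTorch/torchvision (the specific EfficientNet variant, input size, and classifier head below are assumptions) might look like this:

```python
# Hypothetical sketch of a binary (ARSA vs. normal arch) patch classifier on an
# EfficientNet backbone; the variant and input size are assumptions, not study details.
import torch
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the weights API


def build_arsa_classifier(num_classes: int = 2) -> nn.Module:
    model = models.efficientnet_b0(weights=None)          # 2D CNN backbone
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)  # ARSA vs. normal
    return model


# Example forward pass on a batch of CT patch images (3 x 224 x 224).
model = build_arsa_classifier()
logits = model(torch.randn(4, 3, 224, 224))
probs = torch.softmax(logits, dim=1)  # per-patch class probabilities
```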
Results:
In the two external datasets, the model achieved areas under the curve of 0.97 and 0.99 and accuracies of 97% and 99%, respectively. In the temporal validation set, which included 20 patients with ARSA and 1,663 patients without ARSA, radiologists overlooked 13 ARSA cases. In contrast, the CNN model detected all 20 patients with ARSA.
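For illustration only, the external-test metrics (area under the curve and accuracy) could be computed from per-case ARSA probabilities as in the following sketch; the decision threshold and inputs are assumptions, not values from the study:

```python
# Illustrative computation of AUC and accuracy from per-case ARSA probabilities;
# the 0.5 threshold is an assumption, not a value reported by the authors.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score


def external_test_metrics(y_true, y_prob, threshold=0.5):
    """y_true: 1 for ARSA, 0 for normal arch; y_prob: predicted ARSA probability."""
    auc = roc_auc_score(y_true, y_prob)
    acc = accuracy_score(y_true, (np.asarray(y_prob) >= threshold).astype(int))
    return auc, acc
```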
Conclusion:
We developed a CNN-based deep learning model that detects ARSA on CT. Our model showed high performance in multicenter validation.