1. Prospection of electromagnetic navigation for hepatic duct exploration by choledochoscope based on a three-dimensional printing model
Rui TANG ; Longfei MA ; Jianping ZENG ; Wenping ZHAO ; Xuedong WANG ; Liang WANG ; Hongen LIAO ; Jiahong DONG
Chinese Journal of Digestive Surgery 2017;16(4):433-436
Objective: To explore the feasibility of electromagnetic navigation for hepatic duct exploration by choledochoscope based on a three-dimensional (3D) printing model. Methods: A retrospective descriptive study was conducted. The clinical data of 1 patient with obstructive jaundice and secondary biliary tract dilation, admitted to Beijing Tsinghua Changgung Hospital in April 2016, were collected. Digital Imaging and Communications in Medicine (DICOM) data from enhanced computed tomography (CT) scans were collected, 3D reconstruction of the liver and bile ducts was performed, and a hollow bile duct model was then 3D printed. A choledochoscope carrying an internal electromagnetic probe was inserted into the printed hepatic ducts. Four anatomical landmarks in the left hepatic duct, right hepatic duct, and common hepatic duct were chosen as fiducial markers for calibration and registration. Results: After registration, the scope could be tracked in the main segmental hepatic bile ducts, and the actual locations of the choledochoscope matched the navigation results precisely. Conclusion: Electromagnetic navigation may provide accurate tracking for choledochoscopic examination.
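The four-fiducial registration step described above is, in essence, a paired-point rigid registration. Below is a minimal sketch of that computation using the Kabsch/Horn SVD method; the fiducial coordinates, and their mapping to the left duct, right duct, confluence, and common hepatic duct, are hypothetical placeholders rather than values from the study.

```python
# Minimal sketch: paired-point rigid registration (Kabsch/Horn, via SVD).
# All coordinates are hypothetical; the study's real fiducials were
# anatomical landmarks in the 3D-printed bile-duct model.
import numpy as np

def register_rigid(model_pts, tracker_pts):
    """Least-squares rotation R and translation t with model ~ R @ tracker + t."""
    cm, ct = model_pts.mean(axis=0), tracker_pts.mean(axis=0)
    H = (tracker_pts - ct).T @ (model_pts - cm)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    return R, cm - R @ ct

# Hypothetical fiducials (mm): left duct, right duct, confluence, common hepatic duct.
model = np.array([[0., 0., 0.], [25., 4., 2.], [12., 18., 5.], [10., -22., 3.]])
a = np.deg2rad(30)                                     # simulate a tracker pose
R_true = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0., 0., 1.]])
tracked = model @ R_true.T + np.array([5., 2., 1.])    # fiducials as the tracker sees them
R, t = register_rigid(model, tracked)
fre = np.linalg.norm(tracked @ R.T + t - model, axis=1).mean()
print(f"mean fiducial registration error: {fre:.2e} mm")  # ~0 on noise-free data
```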
2. Gait analysis of knee osteoarthritis based on a depth camera
Fang CHEN ; Zhe ZHAO ; Xiwen CUI ; Yanting XIE ; Licheng ZHANG ; Hongen LIAO ; Peifu TANG
Chinese Journal of Orthopaedics 2021;41(22):1631-1639
Objective: In this study, a gait acquisition and analysis system was developed to provide a cheap, easy-to-use solution for quantitative recording and analysis of patients' gaits. Methods: From April 2017 to October 2018, gait data were collected from 19 patients with knee osteoarthritis and 19 healthy volunteers in the orthopaedic outpatient department. Among the 19 patients there were 9 males and 10 females, aged 50.1±9.4 years; among the 19 healthy volunteers there were 8 males and 11 females, aged 50.7±10.3 years. From the collected gait data, static gait features such as gait speed, step length and stride, together with dynamic gait features, were automatically calculated, and statistical difference analysis was performed to determine the correlation between these quantitative gait features and knee osteoarthritis. Results: First, the gait data collected by the depth camera were compared with data from a multi-infrared-camera motion analysis system (the gold standard). The average error of the collected knee joint angle was 0.98°, which verified the correctness of the gait data recorded by the depth camera. Statistical difference analysis of gait characteristics between the patient group and the healthy group showed that the characteristics with P<0.05 were gait speed (r=-0.922, P<0.001), step length (r=-0.897, P=0.004), stride (r=-0.914, P<0.001) and the dynamic characteristics of the knee joint angle (r=0.775, P=0.001). Conclusion: The gait acquisition and analysis system based on the depth camera can accurately record and store the gait data of patients with knee osteoarthritis. Moreover, the extracted quantitative gait features differ statistically between the patient and healthy groups, which is helpful for gait analysis of the knee joint.
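As an illustration of the feature-extraction step, the sketch below computes a knee flexion angle and an average gait speed from per-frame 3D joint positions of the kind a depth camera's skeleton tracker outputs. The joint layout, the 30 fps frame rate, and the pelvis-trajectory definition of gait speed are assumptions for illustration, not the study's exact pipeline.

```python
# Minimal sketch: knee angle and gait speed from depth-camera skeleton data.
# Assumes per-frame 3-D positions (metres) for hip, knee, ankle and pelvis;
# joint naming and the 30 fps rate are illustrative assumptions.
import numpy as np

def knee_angle_deg(hip, knee, ankle):
    """Angle at the knee between the thigh (knee->hip) and shank (knee->ankle)."""
    thigh, shank = hip - knee, ankle - knee
    cos_a = thigh @ shank / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def mean_gait_speed(pelvis_track, fps=30):
    """Net pelvis displacement divided by recording duration (m/s)."""
    return np.linalg.norm(pelvis_track[-1] - pelvis_track[0]) / (len(pelvis_track) / fps)

# Toy frame: a slightly flexed knee gives an angle just under 180 degrees.
print(knee_angle_deg(np.array([0., 1.0, 0.]),
                     np.array([0., 0.5, 0.05]),
                     np.array([0., 0., 0.])))
```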
3. Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study.
Hideyuki SUENAGA ; Huy Hoang TRAN ; Hongen LIAO ; Ken MASAMUNE ; Takeyoshi DOHI ; Kazuto HOSHI ; Yoshiyuki MORI ; Tsuyoshi TAKATO
International Journal of Oral Science 2013;5(2):98-102
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye.
Keywords: Calibration; Data Display; Feasibility Studies; Humans; Image Processing, Computer-Assisted/instrumentation/methods; Imaging, Three-Dimensional/methods; Mandible/anatomy & histology; Maxilla/anatomy & histology; Models, Anatomic; Optical Devices; Oral Surgical Procedures/instrumentation/methods; Pilot Projects; Stereotaxic Techniques/instrumentation; Surgery, Computer-Assisted/instrumentation/methods; Tomography, X-Ray Computed/methods; Tooth/anatomy & histology; User-Computer Interface; Video Recording/instrumentation/methods
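A simple way to reproduce the accuracy verification reported in abstract 3 above is to compare, point by point, the 3D coordinates measured on the solid model with those reported by the augmented reality overlay. The sketch below does this for three hypothetical measuring points; the coordinates are illustrative, not the study's data.

```python
# Minimal sketch: per-point overlay error check. Coordinates (mm) are hypothetical.
import numpy as np

solid_model = np.array([[10.2, 5.1, 3.0], [22.8, 7.9, 4.1], [15.5, 12.3, 6.0]])
ar_overlay  = np.array([[10.9, 5.4, 3.2], [23.2, 8.3, 4.5], [15.1, 12.8, 6.4]])

errors = np.linalg.norm(solid_model - ar_overlay, axis=1)   # Euclidean error per point
print(f"per-point errors: {np.round(errors, 2)} mm; max: {errors.max():.2f} mm")
# The study reports that every such difference was below 1 mm.
```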
4. CT-based integrated deep learning model for qualitative and quantitative research of the hepatic portal vein
Zhuofan XU ; Qi'ao JIN ; Kaiyu WANG ; Xinjing ZHANG ; Liutong ZHANG ; Ranran ZHANG ; Hongen LIAO ; Canhong XIANG ; Jiahong DONG
Chinese Journal of Digestive Surgery 2024;23(7):976-983
Objective: To investigate a computed tomography (CT)-based integrated deep learning model for qualitative and quantitative classification of the hepatic portal vein. Methods: A retrospective study was conducted. The CT imaging data of 291 patients undergoing upper-abdominal enhanced CT examination in the Beijing Tsinghua Changgung Hospital of Tsinghua University from October 2017 to January 2019 were collected. There were 195 males and 96 females, aged (51±12) years. The hepatic portal vein was reconstructed using a three-dimensional reconstruction system. The three-dimensional point cloud was input to an encoder model to obtain a vectorized representation of the three-dimensional reconstruction, which was used for qualitative classification and quantitative representation classification. Normally distributed measurement data were represented as Mean±SD, and comparison between groups was conducted using the paired t test. Count data were represented as percentages or absolute numbers, and comparison between groups was analyzed using the paired chi-square test. Results: (1) Three-dimensional reconstruction of the portal vein and anatomical classification: the three-dimensional structure was reconstructed in all 291 patients. Classification of the main hepatic portal vein showed 211 cases of Akgul type A, 29 cases of Akgul type B, 16 cases of Akgul type C, 10 cases of Akgul type D, and 25 unclassifiable cases. (2) Prediction of qualitative classification of the main hepatic portal vein: of the 291 patient samples, 25 unclassifiable or poor-quality samples were excluded, and the remaining 266 samples were used for automated qualitative classification of the main portal vein by the machine model. There were 211 cases of Akgul type A, 29 cases of Akgul type B, and 26 cases of Akgul type C&D. The macro-F1 over the 266 patients was 61.93%±40.50% and the accuracy was 84.99%, versus 32.38%±19.81% and 61.65% for a random classifier, showing significant differences between them (t=7.85, χ²=62.89, P<0.05). (3) Quantitative representation of portal vein classification: the probabilities that retrieved similar samples shared the query's Akgul qualitative classification were P@1 73%±45%, P@3 70%±37%, P@5 69%±35%, P@10 67%±32%, and mean reciprocal rank (MRR) 80%±34%, versus 57%±50%, 58%±35%, 58%±32%, 58%±30%, and 70%±37% for the baseline model, showing significant differences between the two methods (t=5.22, 5.11, 5.00, 4.99, 3.47, P<0.05). Conclusion: An automated classification model for hepatic portal vein structure was constructed using CT-based three-dimensional reconstruction and deep learning, which can achieve automatic qualitative classification and quantitatively describe the hepatic portal vein structure.
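For readers unfamiliar with the retrieval metrics quoted above, the sketch below computes P@k and MRR for a leave-one-out nearest-neighbour search in an embedding space, which is the general form of the evaluation described. The toy embeddings and three-class labels are hypothetical, and the encoder itself (a point-cloud network in the study) is not reproduced here.

```python
# Minimal sketch: P@k and MRR for leave-one-out nearest-neighbour retrieval.
# Embeddings and labels are toy data; the study used encoder outputs of
# portal-vein point clouds and Akgul-type labels.
import numpy as np

def retrieval_metrics(emb, labels, ks=(1, 3, 5, 10)):
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                      # a query never retrieves itself
    order = np.argsort(d, axis=1)                    # neighbours, nearest first
    hit = labels[order] == labels[:, None]           # does rank j share the query label?
    p_at_k = {k: hit[:, :k].mean() for k in ks}      # P@k averaged over queries
    first = hit.argmax(axis=1)                       # rank index of the first correct hit
    rr = np.where(hit.any(axis=1), 1.0 / (first + 1), 0.0)
    return p_at_k, rr.mean()                         # (P@k dict, MRR)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=60)                 # hypothetical 3-class labels
emb = labels[:, None] + 0.3 * rng.standard_normal((60, 8))  # clusterable toy embeddings
p_at_k, mrr = retrieval_metrics(emb, labels)
print(p_at_k, f"MRR={mrr:.2f}")
```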
5. Application value of augmented reality technology in pancreatoduodenectomy
Rui TANG ; Xinjing ZHANG ; Guochen NING ; Ang LI ; Lihan YU ; Hongen LIAO ; Jiahong DONG ; Qian LU
Chinese Journal of Digestive Surgery 2019;18(10):986-991
Objective:
To investigate the application value of augmented reality (AR) technology in pancreatoduodenectomy (PD).
Methods:
A retrospective descriptive study was conducted. The clinicopathological data of 3 male patients who underwent PD in Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital from June 2018 to February 2019 were collected. The 3 patients were aged from 52 to 63 years, with a median age of 57 years. Digital Imaging and Communications in Medicine (DICOM) format data of enhanced computed tomography (CT) scans were collected, and three-dimensional (3D) images of the abdominal arteries, portal vein, descending duodenum, pancreatic body and tail, pancreatic head, common bile duct, and pancreatic duct were reconstructed. The results were imported into the AR software. Optical tracking based on two-dimensional codes and manual interactive rigid registration were adopted for intraoperative navigation. Observation indicators: (1) surgical and postoperative conditions; (2) postoperative pathological examination; (3) follow-up. Follow-up using outpatient examination or telephone interview was conducted to detect survival of patients and tumor metastasis and recurrence up to June 2019. The measurement data with skewed distribution were expressed as