1. A Narrative Review on the Double Pulley-Triple Row Technique for Large to Massive Rotator Cuff Repair
Vivek Kumar MORYA ; Jun LANG ; Yong-beom LEE ; Jung Woo KIM ; Kang Uk LEE ; Kyu-Cheol NOH
Clinics in Orthopedic Surgery 2025;17(3):359-371
Rotator cuff tears are common shoulder injuries that often necessitate surgical intervention, particularly when nonoperative treatments fail. Arthroscopic rotator cuff repair is the current gold standard; however, challenges persist, notably high retear rates in large tears. Traditional techniques, such as single-row and double-row repairs, have limitations in fully restoring the anatomical footprint and ensuring optimal healing. This review examines the novel double pulley-triple row technique, which aims to overcome these limitations by enhancing the footprint contact area, load distribution, and tendon healing. By evaluating the double pulley-triple row method against established techniques, this review explores the potential advantages, limitations, and future directions of rotator cuff repair.
5. Advances in the Application of Near-infrared Imaging to the Assessment of Speech Processing Function in Cochlear Implant Recipients
Xiao LANG ; Zhen FU ; Chuanxin NIU ; Qing XIE ; HaKyung KIM
Journal of Audiology and Speech Pathology 2024;32(3):279-283
To date, measuring the neural activity of speech processing in adults and children after cochlear implantation has been a great challenge because conventional neuroimaging techniques are limited by the magnetic properties of cochlear implants. Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging technique that is compatible with implanted devices and unaffected by electrical artifacts. fNIRS is suitable for measuring the neural activity of speech processing in adult and pediatric cochlear implant recipients and can better identify and interpret cortical responses. The purpose of this paper is to summarize the use of fNIRS as a tool to measure the neural activation level of speech processing in cochlear implant recipients under different types of speech stimuli, and thus to provide a clinical and objective basis for the assessment of auditory speech rehabilitation and the development of rehabilitation programs for implant recipients.
6. A Study of Mandarin Vowel Space Area and a Rational Administration Method and Corpus in Patients with Voice Disorders
Bin YI ; Xiao LANG ; Xiaoxi KANG ; Wensheng ZHAO ; Shaohua GAO ; HaKyung KIM
Journal of Audiology and Speech Pathology 2024;32(4):322-325
Objective: To investigate the vowel acoustic characteristics of patients with voice disorders and a rational way of administering vowel space area (VSA) measurement and language materials in the Mandarin system. Methods: A total of 40 subjects (20 males and 20 females) with voice disorders, along with normal healthy controls, were recruited. The differences in VSA between the disorder and healthy control groups were analyzed under different corpora and different numbers of vowel vertices. Results: The differences in VSA between the voice disorder and control groups were highly significant in both the vowel and long-sentence corpora. The differences in VSA among 3-, 4-, and 5-vowel configurations were not significant, and 5 vowels were more suitable for VSA measurement in Mandarin. Conclusion: The vowel articulation of patients with voice disorders is less clear than that of normal speakers. Under the Mandarin system, it is more accurate to use 5 vowels to measure VSA, which is also better for assessing oral motility. Both single vowels and continuous speech are suitable for the measurement of VSA.
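The VSA discussed above is conventionally computed as the area of the polygon whose vertices are the (F1, F2) formant pairs of the corner vowels. A minimal sketch using the shoelace formula follows; the formant values are illustrative placeholders, not measurements from this study, and the vertices must be listed in perimeter order:

```python
def vowel_space_area(vertices):
    """Polygon area via the shoelace formula.

    `vertices` is an ordered list of (F1, F2) pairs, in Hz, traced
    around the polygon's perimeter (no self-intersection).
    """
    n = len(vertices)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0

# Illustrative (not measured) corner-vowel formants for /i, e, a, o, u/:
vowels = [(290, 2300), (500, 1900), (850, 1220), (450, 880), (320, 800)]
print(f"VSA: {vowel_space_area(vowels):.0f} Hz^2")
```

A larger area generally indicates more distinct vowel articulation, which is why the abstract reports smaller VSA in the voice-disorder group.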
7. Analyzing Large Language Models’ Responses to Common Lumbar Spine Fusion Surgery Questions: A Comparison Between ChatGPT and Bard
Siegmund Philipp LANG ; Ezra Tilahun YOSEPH ; Aneysis D. GONZALEZ-SUAREZ ; Robert KIM ; Parastou FATEMI ; Katherine WAGNER ; Nicolai MALDANER ; Martin N. STIENEN ; Corinna Clio ZYGOURAKIS
Neurospine 2024;21(2):633-641
Objective:
In the digital age, patients turn to online sources for lumbar spine fusion information, necessitating a careful study of large language models (LLMs) like chat generative pre-trained transformer (ChatGPT) for patient education.
Methods:
Our study aims to assess the response quality of OpenAI’s ChatGPT 3.5 and Google’s Bard to patient questions on lumbar spine fusion surgery. We identified 10 critical questions from 158 frequently asked ones found via Google search, which were then presented to both chatbots. Five blinded spine surgeons rated the responses on a 4-point scale from ‘unsatisfactory’ to ‘excellent.’ The clarity and professionalism of the answers were also evaluated on a 5-point Likert scale.
Results:
In our evaluation of 10 questions across ChatGPT 3.5 and Bard, 97% of responses were rated as excellent or satisfactory. Specifically, ChatGPT had 62% excellent and 32% minimally clarifying responses, with only 6% needing moderate or substantial clarification. Bard’s responses were 66% excellent and 24% minimally clarifying, with 10% requiring more clarification. No significant difference was found in the overall rating distribution between the 2 models. Both struggled with 3 specific questions regarding surgical risks, success rates, and selection of surgical approaches (Q3, Q4, and Q5). Interrater reliability was low for both models (ChatGPT: κ = 0.041, p = 0.622; Bard: κ = -0.040, p = 0.601). While both scored well on understanding and empathy, Bard received marginally lower ratings in empathy and professionalism.
Conclusion
ChatGPT 3.5 and Bard effectively answered lumbar spine fusion FAQs, but further training and research are needed to solidify LLMs’ role in medical education and healthcare communication.
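The interrater-reliability statistic reported above (kappa) measures agreement between raters after discounting agreement expected by chance; values near zero, as in this study, indicate little agreement beyond chance. A minimal sketch of the two-rater (Cohen's) form follows; the rating labels are hypothetical, not the study's data, and a five-rater design like this one would typically use the multi-rater Fleiss generalization instead:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    assert n == len(rater2) and n > 0
    # Observed agreement: fraction of items both raters labeled identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_chance = sum(
        (counts1[c] / n) * (counts2[c] / n) for c in set(counts1) | set(counts2)
    )
    if p_chance == 1.0:  # both raters used a single identical label
        return 1.0
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical 4-point ratings (E=excellent, S=satisfactory,
# M=moderate clarification, U=unsatisfactory):
r1 = ["E", "E", "S", "M", "E", "S", "U", "E", "S", "E"]
r2 = ["E", "S", "S", "E", "E", "M", "U", "S", "S", "E"]
print(f"kappa = {cohens_kappa(r1, r2):.3f}")
```

Kappa equals 1 for perfect agreement and 0 when raters agree no more often than their marginal label frequencies would predict by chance.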