1. TGF-β Signalling is Suppressed under Pro-Hypertrophic Conditions in MSC Chondrogenesis Due to TGF-β Receptor Downregulation
Christian G PFEIFER ; Alexandra KARL ; Maximilian KERSCHBAUM ; Arne BERNER ; Siegmund LANG ; Rupert SCHUPFNER ; Matthias KOCH ; Peter ANGELE ; Michael NERLICH ; Michael B MUELLER
International Journal of Stem Cells 2019;12(1):139-150
BACKGROUND AND OBJECTIVES: Despite chondrogenic differentiation, mesenchymal stem cells (MSCs) become hypertrophic in long-term culture, following the pathway of growth plate chondrocytes. This terminal differentiation leads to phenotypically unstable cartilage and can be mirrored in vitro by the addition of hypertrophy-inducing medium. We investigated how intrinsic TGF-β signaling is altered under pro-hypertrophic conditions. METHODS AND RESULTS: Human bone marrow-derived MSCs were chondrogenically differentiated in 3D culture. At day 14, medium conditions were changed to (1) pro-hypertrophic by addition of T3 and withdrawal of TGF-β and dexamethasone, (2) pro-hypertrophic by addition of BMP-4 and withdrawal of TGF-β and dexamethasone, or (3) kept pro-chondrogenic. All groups were treated with and without the TGF-β type 1 receptor inhibitor SB431542 from day 14 onward. Aggregates were harvested for histological and immunohistological analysis on days 14 and 28, for gene expression analysis (RT-PCR) on days 1, 3, 7, 14, 17, 21 and 28, and for Western blot analysis on days 21 and 28. Hypertrophy was induced in the pro-hypertrophic groups, while expression of the TGF-β type 1 and type 2 receptors and SOX9 was significantly downregulated compared with pro-chondrogenic conditions. Western blotting showed reduced phosphorylation of Smad2 and Smad3, reduced TGF-β type 1 receptor protein and reduced SOX9 in hypertrophic samples. Addition of SB431542 did not initiate hypertrophy under pro-chondrogenic conditions but enhanced hypertrophy when applied simultaneously with BMP-4. CONCLUSIONS: Our results suggest that the enhancement of hypertrophy in this model results from both activation of pro-hypertrophic BMP signaling and reduction of anti-hypertrophic TGF-β signaling.
Keywords: Blotting, Western; Bone Marrow; Cartilage; Chondrocytes; Chondrogenesis; Dexamethasone; Down-Regulation; Gene Expression; Growth Plate; Humans; Hypertrophy; In Vitro Techniques; Mesenchymal Stromal Cells; Phosphorylation
2. Analyzing Large Language Models’ Responses to Common Lumbar Spine Fusion Surgery Questions: A Comparison Between ChatGPT and Bard
Siegmund Philipp LANG ; Ezra Tilahun YOSEPH ; Aneysis D. GONZALEZ-SUAREZ ; Robert KIM ; Parastou FATEMI ; Katherine WAGNER ; Nicolai MALDANER ; Martin N. STIENEN ; Corinna Clio ZYGOURAKIS
Neurospine 2024;21(2):633-641
Objective:
In the digital age, patients turn to online sources for information on lumbar spine fusion, necessitating careful study of large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT) for patient education.
Methods:
Our study aims to assess the quality of responses from OpenAI’s ChatGPT 3.5 and Google’s Bard to patient questions on lumbar spine fusion surgery. We identified 10 critical questions from 158 frequently asked ones found via Google search and presented them to both chatbots. Five blinded spine surgeons rated the responses on a 4-point scale from ‘unsatisfactory’ to ‘excellent.’ The clarity and professionalism of the answers were also evaluated using a 5-point Likert scale.
Results:
In our evaluation of 10 questions across ChatGPT 3.5 and Bard, 97% of responses were rated as excellent or satisfactory. Specifically, ChatGPT had 62% excellent and 32% minimally clarifying responses, with only 6% needing moderate or substantial clarification. Bard’s responses were 66% excellent and 24% minimally clarifying, with 10% requiring more clarification. No significant difference was found in the overall rating distribution between the 2 models. Both struggled with 3 specific questions regarding surgical risks, success rates, and selection of surgical approaches (Q3, Q4, and Q5). Interrater reliability was low for both models (ChatGPT: k = 0.041, p = 0.622; Bard: k = -0.040, p = 0.601). While both scored well on understanding and empathy, Bard received marginally lower ratings in empathy and professionalism.
Conclusion:
ChatGPT 3.5 and Bard effectively answered lumbar spine fusion FAQs, but further training and research are needed to solidify LLMs’ role in medical education and healthcare communication.
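The interrater reliability reported above (kappa near zero for both chatbots) indicates roughly chance-level agreement among the five raters. As a minimal illustrative sketch of how agreement on such ratings can be computed, the snippet below applies Fleiss' kappa, a common choice when there are more than two raters; the abstract does not name the exact statistic used, and the data here are synthetic, not the study's.

```python
# Minimal sketch: interrater agreement for 5 raters on a 4-point scale.
# Synthetic ratings only -- NOT the study's data. Fleiss' kappa is an
# assumption; the abstract does not specify the statistic used.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(42)

# 10 questions (rows) rated by 5 blinded surgeons (columns),
# 0 = unsatisfactory ... 3 = excellent.
ratings = rng.integers(0, 4, size=(10, 5))

# Convert per-rater labels into an items-by-categories count table.
table, _categories = aggregate_raters(ratings)

kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")  # values near 0 = chance-level agreement
```

With random ratings such as these, kappa lands near zero, consistent with the low agreement the study reports for both models.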
3. Bone marrow-derived mesenchymal stem cells migrate to healthy and damaged salivary glands following stem cell infusion.
Silke SCHWARZ ; Ralf HUSS ; Michaela SCHULZ-SIEGMUND ; Breda VOGEL ; Sven BRANDAU ; Stephan LANG ; Nicole ROTTER
International Journal of Oral Science 2014;6(3):154-161
Xerostomia is a severe side effect of radiation therapy in head and neck cancer patients. To date, no satisfactory treatment option has been established. Because mesenchymal stem cells (MSCs) have been identified as a potential treatment modality, we aimed to evaluate stem cell distribution following intravenous and intraglandular injections using a surgical model of salivary gland damage and to analyse the effects of MSC injections on the recruitment of immune cells. The submandibular gland ducts of rats were surgically ligated. Syngeneic adult MSCs were isolated, immortalised by simian virus 40 (SV40) large T antigen and characterized by flow cytometry. MSCs were injected intravenously and intraglandularly. After 1, 3 and 7 days, the organs of interest were analysed for stem cell recruitment. Inflammation was analysed by immunohistochemical staining. We were able to demonstrate that, after intravenous injection, MSCs were recruited to normal and damaged submandibular glands on days 1, 3 and 7. Unexpectedly, stem cells were recruited to ligated and non-ligated glands in a comparable manner. After intraglandular injection of MSCs into ligated glands, the presence of MSCs, leucocytes and macrophages was enhanced, compared to intravenous injection of stem cells. Our data suggest that injected MSCs were retained within the inflamed glands, could become activated and subsequently recruited leucocytes to the sites of tissue damage.
Keywords: Animals; Antigens, Polyomavirus Transforming / immunology; Cell Culture Techniques; Cell Movement / physiology; Cell Transformation, Viral; Clone Cells / physiology; Flow Cytometry; Immunohistochemistry; Injections, Intralesional; Injections, Intravenous; Leukocytes / pathology; Macrophages / pathology; Mesenchymal Stem Cell Transplantation / methods; Mesenchymal Stromal Cells / pathology / physiology; Necrosis; Rats, Wistar; Salivary Ducts / pathology; Sialadenitis / pathology / therapy; Simian virus 40 / immunology; Submandibular Gland / pathology; Submandibular Gland Diseases / pathology / therapy; Time Factors