1.Metabolomics: A Novel Approach to Early and Noninvasive Prostate Cancer Detection.
Matthew J ROBERTS ; Horst J SCHIRRA ; Martin F LAVIN ; Robert A GARDINER
Korean Journal of Urology 2011;52(2):79-89
Prostate cancer (PCa) is the most commonly diagnosed visceral cancer in men and is responsible for the second highest cancer-related male mortality rate in Western countries, with increasing rates being reported in Korea, Japan, and China. Considering the low sensitivity of prostate-specific antigen (PSA) testing, it is widely agreed that reliable, age-independent markers of the presence, nature, and progression of PCa are required to facilitate diagnosis and timely treatment. Metabolomics or metabonomics has recently emerged as a novel method of PCa detection owing to its ability to monitor changes in the metabolic signature, within biofluids or tissue, that reflect changes in phenotype and function. This review outlines the physiology of prostate tissue and prostatic fluid in health and in malignancy in relation to metabolomics as well as the principles underlying the methods of metabolomic quantification. Promising metabolites, metabolic profiles, and their correlation with the presence and stage of PCa are summarized. Application of metabolomics to biofluids and in vivo quantification as well as the direction of current research in supplementing and improving current methods of detection are discussed. The current debate in the urology literature on sarcosine as a potential biomarker for PCa is reviewed and discussed. Metabolomics promises to be a valuable tool in the early detection of PCa that may enable earlier treatment and improved clinical outcomes.
Biomarkers
;
China
;
Humans
;
Japan
;
Korea
;
Male
;
Metabolome
;
Metabolomics
;
Phenotype
;
Prostate
;
Prostate-Specific Antigen
;
Prostatic Neoplasms
;
Sarcosine
;
Urology
2.Optimal technique and response of doxorubicin beads in hepatocellular cancer: bead size and dose.
Robert MARTIN ; Javier IRURZUN ; Jordi MUNCHART ; Igor TROFIMOV ; Alexander SCUPCHENKO ; Cliff TATUM ; Govindarajan NARAYANAN
The Korean Journal of Hepatology 2011;17(1):51-60
BACKGROUND/AIMS: Drug-eluting beads loaded with doxorubicin (DEBDOX) have been shown to be effective for the treatment of hepatocellular carcinoma (HCC). However, the optimal safety and efficacy across bead sizes, doxorubicin doses, and degrees of stasis remain to be established. The aim of this study was to determine the optimal safety and efficacy of DEBDOX in the treatment of HCC. METHODS: Analysis of a 503-patient prospective, multicenter, multinational Bead Registry Database from 2007 to 2010 identified 206 patients who had been treated for HCC with DEBDOX. The primary endpoints were to compare safety, tolerance, response rates, and overall survival by bead size (100-300, 300-500, 500-700, and 700-900 μm), number of vials, doxorubicin dose, and degree of stasis. RESULTS: In total, 206 patients underwent 343 treatments. The use of all four bead sizes was similar across Child-Pugh classes and Okuda stages, with significantly higher use (50%) of 100-300 μm beads in patients with portal vein thrombosis (P=0.05). Significant differences were seen in the median number of treatments, median doxorubicin dose, lobar infusion, and degree of complete stasis. The rate of adverse events was higher for larger beads than for smaller beads (28% vs. 16%; P=0.02). CONCLUSIONS: Bead size and dose may vary according to disease distribution. Smaller beads offer the opportunity for repeated treatments, a larger cumulative dose delivery, a lesser degree of complete stasis, and fewer adverse events.
Adult
;
Aged
;
Aged, 80 and over
;
Antibiotics, Antineoplastic/*administration & dosage/adverse effects
;
Carcinoma, Hepatocellular/*drug therapy/mortality
;
Dose-Response Relationship, Drug
;
Doxorubicin/*administration & dosage/adverse effects
;
Drug Carriers/*chemistry
;
Female
;
Humans
;
Liver Neoplasms/*drug therapy/mortality
;
Male
;
Middle Aged
;
Particle Size
;
Prospective Studies
;
Severity of Illness Index
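The adverse-event comparison in the abstract above (28% vs. 16%; P=0.02) is a two-proportion test on a 2x2 contingency table. The per-group counts are not reported in the abstract, so the following is only a minimal sketch: the table is a fabricated split of the 343 reported treatments chosen to approximate the quoted rates, and the computed P-value will not exactly match the published one.

```python
# Hedged illustration only: per-group adverse-event counts are not given in
# the abstract, so these 2x2 counts are fabricated to approximate the quoted
# 28% vs. 16% rates across the 343 reported treatments.
from scipy.stats import chi2_contingency

#              adverse event | no adverse event
table = [[48, 123],   # larger beads  (48/171 ~ 28%)
         [28, 144]]   # smaller beads (28/172 ~ 16%)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```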
3.Development and Testing of Thrombolytics in Stroke
Dmitri NIKITIN ; Seungbum CHOI ; Jan MICAN ; Martin TOUL ; Wi-Sun RYU ; Jiri DAMBORSKY ; Robert MIKULIK ; Dong-Eog KIM
Journal of Stroke 2021;23(1):12-36
Despite recent advances in recanalization therapy, mechanical thrombectomy will never be a treatment for every ischemic stroke because access to mechanical thrombectomy is still limited in many countries. Moreover, many ischemic strokes are caused by occlusion of cerebral arteries that cannot be reached by intra-arterial catheters. Reperfusion using thrombolytic agents will therefore remain an important therapy for hyperacute ischemic stroke. However, thrombolytic drugs have shown limited efficacy and notable hemorrhagic complication rates, leaving room for improvement. A comprehensive understanding of basic and clinical research pipelines as well as the current status of thrombolytic therapy will help facilitate the development of new thrombolytics. Compared with alteplase, an ideal thrombolytic agent is expected to provide faster reperfusion in more patients; prevent re-occlusions; have higher fibrin specificity for selective activation of clot-bound plasminogen to decrease bleeding complications; be retained in the blood for a longer time to minimize dosage and allow administration as a single bolus; be more resistant to inhibitors; and be less antigenic for repetitive usage. Here, we review the currently available thrombolytics, strategies for the development of new clot-dissolving substances, and the assessment of thrombolytic efficacies in vitro and in vivo.
4.Analyzing Large Language Models’ Responses to Common Lumbar Spine Fusion Surgery Questions: A Comparison Between ChatGPT and Bard
Siegmund Philipp LANG ; Ezra Tilahun YOSEPH ; Aneysis D. GONZALEZ-SUAREZ ; Robert KIM ; Parastou FATEMI ; Katherine WAGNER ; Nicolai MALDANER ; Martin N. STIENEN ; Corinna Clio ZYGOURAKIS
Neurospine 2024;21(2):633-641
Objective:
In the digital age, patients turn to online sources for lumbar spine fusion information, necessitating a careful study of large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT) for patient education.
Methods:
This study aimed to assess the response quality of OpenAI’s ChatGPT 3.5 and Google’s Bard in answering patient questions on lumbar spine fusion surgery. We identified 10 critical questions from 158 frequently asked questions found via Google search and presented them to both chatbots. Five blinded spine surgeons rated the responses on a 4-point scale from ‘unsatisfactory’ to ‘excellent.’ The clarity and professionalism of the answers were also evaluated on a 5-point Likert scale.
Results:
In our evaluation of 10 questions across ChatGPT 3.5 and Bard, 97% of responses were rated as excellent or satisfactory. Specifically, ChatGPT had 62% excellent and 32% minimally clarifying responses, with only 6% needing moderate or substantial clarification. Bard’s responses were 66% excellent and 24% minimally clarifying, with 10% requiring more clarification. No significant difference was found in the overall rating distribution between the 2 models. Both struggled with 3 specific questions regarding surgical risks, success rates, and selection of surgical approaches (Q3, Q4, and Q5). Interrater reliability was low for both models (ChatGPT: κ = 0.041, p = 0.622; Bard: κ = -0.040, p = 0.601). While both scored well on understanding and empathy, Bard received marginally lower ratings in empathy and professionalism.
Conclusion:
ChatGPT 3.5 and Bard effectively answered lumbar spine fusion FAQs, but further training and research are needed to solidify LLMs’ role in medical education and healthcare communication.
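The interrater reliability values reported in the entry above are kappa statistics computed over five raters; the abstract does not state which multi-rater variant was used. Below is a minimal sketch assuming Fleiss’ kappa, where each row counts how many of the five raters assigned each of the four rating categories to one question; the ratings matrix and the fleiss_kappa helper are fabricated for illustration only.

```python
# Hypothetical illustration: Fleiss' kappa for five raters scoring chatbot
# answers to 10 questions on a 4-point scale. The ratings are made up.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning category j to item i."""
    n_items = counts.shape[0]
    n_raters = counts[0].sum()  # raters per item (assumed constant)
    p_j = counts.sum(axis=0) / (n_items * n_raters)        # category shares
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()          # observed vs. chance
    return (P_bar - P_e) / (1 - P_e)

# 10 questions x 4 rating categories, 5 raters each (fabricated counts)
ratings = np.array([[0, 0, 2, 3], [0, 1, 1, 3], [1, 2, 1, 1], [0, 3, 1, 1],
                    [2, 1, 1, 1], [0, 0, 1, 4], [0, 1, 2, 2], [0, 0, 3, 2],
                    [1, 1, 2, 1], [0, 2, 2, 1]])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```

Kappa values near zero, like the 0.041 and -0.040 reported above, indicate agreement no better than chance.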
5.Finding acute coronary syndrome with serial troponin testing for rapid assessment of cardiac ischemic symptoms (FAST-TRAC): a study protocol
W. Frank PEACOCK ; Alan S. MAISEL ; Christian MUELLER ; Stefan D. ANKER ; Fred S. APPLE ; Robert H. CHRISTENSON ; Paul COLLINSON ; Lori B. DANIELS ; Deborah B. DIERCKS ; Salvatore Di SOMMA ; Gerasimos FILIPPATOS ; Gary HEADDEN ; Brian HIESTAND ; Judd E. HOLLANDER ; Juan C. KASKI ; Joshua M. KOSOWSKY ; John T. NAGURNEY ; Richard M. NOWAK ; Donald SCHREIBER ; Gary M. VILKE ; Marvin A. WAYNE ; Martin THAN
Clinical and Experimental Emergency Medicine 2022;9(2):140-145
Objective:
To determine the utility of a high-sensitivity troponin assay when used in the emergency department.
Methods:
The FAST-TRAC study prospectively enrolled >1,500 emergency department patients with suspected acute coronary syndrome within 6 hours of symptom onset and 2 hours of emergency department presentation. It has several unique features that are not found in the majority of studies evaluating troponin. These include a very early-presenting population in whom risk-score parameters and the physician’s clinical impression of the probability of acute coronary syndrome were collected prospectively before any troponin data were available. Furthermore, two gold-standard diagnostic definitions were determined by a pair of cardiologists reviewing two separate data sets: one that included all local troponin testing results and a second that excluded troponin testing so that diagnosis was based solely on clinical grounds. By this method, a statistically valid head-to-head comparison of contemporary and high-sensitivity troponin testing is obtainable. Finally, because of a significant delay in sample processing, the study offers a unique opportunity to define the molecular stability of various troponin assays.
Trial registration: ClinicalTrials.gov identifier NCT00880802.
6.Comparing computer-aided therapy with conventional physiotherapy in Parkinson’s disease: An equivalence study
Martin Unterreiner ; Carolin Biedermann ; Robert el-Fahem ; Michael John ; Stefan Klose ; Christian T Haas ; Tobias Wächter
Neurology Asia 2019;24(4):309-315
Objective: The present study investigated whether computer-aided therapy in patients with Parkinson’s disease is equivalent/non-inferior to conventional Lee Silverman Voice Treatment (LSVT)-BIG therapy with respect to motor outcome, as measured by the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS-III), and quality of life, as measured by the Parkinson’s Disease Questionnaire (PDQ-39). Methods: In this controlled, rater-blinded study, 34 patients were included and 24 patients were randomized to train seven standard exercises of the BIG therapy either with a computer (BeBIG group) or with a certified LSVT-BIG therapist (ThBIG group) over four weeks. Equivalence was assessed by comparing the confidence interval of the BeBIG group with the equivalence margin of the ThBIG group. Results: There were no significant group differences with respect to age, disease duration, L-dopa equivalent daily dose, or clinical stage of the disease. Both groups benefited significantly from the therapy, as demonstrated by an improvement in the MDS-UPDRS-III of 9.17 points in the BeBIG group and of 8.92 points in the ThBIG group. There was a non-significant decrease in the PDQ-39 of 9.23 points in the BeBIG group and 4.23 points in the ThBIG group. However, equivalence could not be demonstrated, as the improvement of the BeBIG group exceeded the confidence interval of the ThBIG group. Conclusion: Physical training guided by a computer, as well as by a therapist, improves motor symptoms and quality of life in Parkinson’s disease. The two therapies are not equivalent; however, superiority of the computerized training cannot be concluded, as the study was only designed to test for non-inferiority. Therefore, computerized training can be considered an add-on therapy.
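The equivalence analysis in this entry hinges on comparing a confidence interval for the between-group difference against a pre-specified margin. Below is a minimal sketch of that logic, assuming a 90% confidence interval, 12 patients per group, and a hypothetical margin of 3 MDS-UPDRS-III points; the individual improvement scores and the study’s actual margin are not reported in the abstract, so all data here are simulated.

```python
# Hedged sketch of a CI-based equivalence check. Group sizes, score spread,
# and the equivalence margin (delta) are assumptions; only the group mean
# improvements (9.17 and 8.92 points) come from the abstract above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bebig = rng.normal(9.17, 6.0, 12)   # simulated computer-aided improvements
thbig = rng.normal(8.92, 6.0, 12)   # simulated therapist-led improvements
delta = 3.0                          # assumed equivalence margin (points)

diff = bebig.mean() - thbig.mean()
se = np.sqrt(bebig.var(ddof=1) / len(bebig) + thbig.var(ddof=1) / len(thbig))
df = len(bebig) + len(thbig) - 2
half = stats.t.ppf(0.95, df) * se    # half-width of the 90% CI
lo, hi = diff - half, diff + half

# Equivalence is declared only if the whole CI lies inside (-delta, +delta).
print(f"90% CI for difference: ({lo:.2f}, {hi:.2f})")
print("equivalent" if -delta < lo and hi < delta else "equivalence not shown")
```

An interval that crosses the margin, as in the study above, leaves equivalence undemonstrated without implying superiority of either arm.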