1. Impact of prolonged exposure to video display terminals on macular pigment optical density in young adult Filipinos
Ronald Y. Tiu ; Eleonore B. Iguban
Philippine Journal of Ophthalmology 2025;50(1):43-49
OBJECTIVE
This study assessed the correlation of macular pigment optical density (MPOD) and varying levels of exposure to video display terminals (VDT) among young adult Filipinos.
METHODS
This cross-sectional, analytical, single-center study compared the MPOD, measured using the Zeiss VISUCAM 500, between two groups of individuals aged 20 to 35 years old with differing VDT exposure. The more exposed group consisted of individuals who spent at least 8 hours per day on VDTs for the past 1 year, while the less exposed group spent less than 6 hours per day. Student’s t-test and chi-square test were used to compare the two groups, while Pearson's r coefficient was used to determine the relationship between MPOD and VDT exposure.
RESULTS
A total of 80 individuals (40 in each group) were included in the study. Both groups had similar profiles, except for refractive errors, which were significantly higher in the prolonged VDT exposure group (p = 0.02). The prolonged exposure group averaged 10 hours of VDT use per day, compared to 3 hours in the low VDT exposure group. The MPOD level, particularly the maximum optical density (Max OD), was significantly lower in the prolonged VDT exposure group (0.2034 DU) than in the low VDT exposure group (0.2467 DU) (p = 0.0051). A weak negative correlation was observed between VDT exposure hours and MPOD levels (Max OD r = –0.387, p = 0.0005).
CONCLUSION
This study found a weak but significant negative correlation between prolonged VDT exposure and lower MPOD levels, suggesting that extended screen time may contribute to reduced macular pigment density. While the correlation was weak (r = –0.387), these findings underscore the potential risk of diminished macular health with increased VDT use. The results highlight the importance of promoting protective strategies, such as reducing screen time and encouraging dietary or lifestyle changes that support eye health, especially among individuals with high VDT exposure.
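The correlation statistic reported above (Pearson's r between VDT hours and Max OD) can be computed directly from paired observations; below is a minimal stdlib-only sketch. The sample values are purely illustrative, not data from this study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative values only: daily VDT hours vs. Max OD (density units)
hours = [2, 3, 4, 8, 10, 12]
max_od = [0.26, 0.25, 0.24, 0.22, 0.20, 0.19]
print(round(pearson_r(hours, max_od), 3))
```

A negative r, as in the study, indicates that higher exposure hours go with lower density values.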
Human ; Video Display Terminals (VDT) ; Computer Terminals
2. A machine learning approach for the diagnosis of obstructive sleep apnoea using oximetry, demographic and anthropometric data.
Zhou Hao LEONG ; Shaun Ray Han LOH ; Leong Chai LEOW ; Thun How ONG ; Song Tar TOH
Singapore medical journal 2025;66(4):195-201
INTRODUCTION:
Obstructive sleep apnoea (OSA) is a serious but underdiagnosed condition. Demand for the gold standard diagnostic polysomnogram (PSG) far exceeds its availability. More efficient diagnostic methods are needed, even in tertiary settings. Machine learning (ML) models have strengths in disease prediction and early diagnosis. We explored the use of ML with oximetry, demographic and anthropometric data to diagnose OSA.
METHODS:
A total of 2,996 patients were included for modelling and divided into test and training sets. Seven commonly used supervised learning algorithms were trained with the data. Sensitivity (recall), specificity, positive predictive value (PPV) (precision), negative predictive value, area under the receiver operating characteristic curve (AUC) and F1 measure were reported for each model.
RESULTS:
In the best performing four-class model (neural network model predicting no, mild, moderate or severe OSA), a prediction of moderate and/or severe disease had a combined PPV of 94%; one out of 335 patients had no OSA and 19 had mild OSA. In the best performing two-class model (logistic regression model predicting no-mild vs. moderate-severe OSA), the PPV for moderate-severe OSA was 92%; two out of 350 patients had no OSA and 26 had mild OSA.
CONCLUSION
Our study showed that the prediction of moderate-severe OSA in a tertiary setting with an ML approach is a viable option to facilitate early identification of OSA. Prospective studies with home-based oximeters and analysis of other oximetry variables are the next steps towards formal implementation.
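The evaluation metrics reported for each model above all derive from confusion-matrix counts; below is a minimal sketch of those formulas. The counts are hypothetical, not the study's:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity/recall, specificity, PPV/precision, NPV and F1 from counts."""
    sens = tp / (tp + fn)      # sensitivity (recall)
    spec = tn / (tn + fp)      # specificity
    ppv = tp / (tp + fp)       # positive predictive value (precision)
    npv = tn / (tn + fn)       # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "f1": f1}

# Hypothetical counts for a moderate-severe vs. no-mild OSA classifier
m = binary_metrics(tp=322, fp=28, tn=150, fn=40)
print({k: round(v, 3) for k, v in m.items()})
```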
Humans ; Oximetry/methods* ; Sleep Apnea, Obstructive/diagnosis* ; Male ; Female ; Middle Aged ; Machine Learning ; Polysomnography ; Adult ; Anthropometry ; ROC Curve ; Aged ; Algorithms ; Predictive Value of Tests ; Sensitivity and Specificity ; Neural Networks, Computer ; Demography
3. Use of deep learning model for paediatric elbow radiograph binomial classification: initial experience, performance and lessons learnt.
Mark Bangwei TAN ; Yuezhi Russ CHUA ; Qiao FAN ; Marielle Valerie FORTIER ; Peiqi Pearlly CHANG
Singapore medical journal 2025;66(4):208-214
INTRODUCTION:
In this study, we aimed to compare the performance of a convolutional neural network (CNN)-based deep learning model that was trained on a dataset of normal and abnormal paediatric elbow radiographs with that of paediatric emergency department (ED) physicians on a binomial classification task.
METHODS:
A total of 1,314 paediatric elbow lateral radiographs (patient mean age 8.2 years) were retrospectively retrieved and classified based on annotation as normal or abnormal (with pathology). They were then randomly partitioned to a development set (993 images); first and second tuning (validation) sets (109 and 100 images, respectively); and a test set (112 images). An artificial intelligence (AI) model was trained on the development set using the EfficientNet B1 network architecture. Its performance on the test set was compared to that of five physicians (inter-rater agreement: fair). Performance of the AI model and the physician group was tested using McNemar test.
RESULTS:
The accuracy of the AI model on the test set was 80.4% (95% confidence interval [CI] 71.8%-87.3%), and the area under the receiver operating characteristic curve (AUROC) was 0.872 (95% CI 0.831-0.947). The performance of the AI model vs. the physician group on the test set was: sensitivity 79.0% (95% CI: 68.4%-89.5%) vs. 64.9% (95% CI: 52.5%-77.3%; P = 0.088); and specificity 81.8% (95% CI: 71.6%-92.0%) vs. 87.3% (95% CI: 78.5%-96.1%; P = 0.439).
CONCLUSION
The AI model showed good AUROC values and higher sensitivity, with the P-value at nominal significance when compared to the clinician group.
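The McNemar comparison used above operates only on the discordant pairs, i.e. cases where exactly one of the two raters (AI model or physician group) was correct. Below is a self-contained sketch with the continuity-corrected statistic and a chi-square (1 df) p-value; the counts are illustrative, not the study's:

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar test on discordant counts.

    b: cases only rater A got right; c: cases only rater B got right.
    Returns (chi2, p) where p is the chi-square(1 df) upper tail,
    which for 1 df equals erfc(sqrt(x / 2)).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Illustrative discordant counts: AI correct/physician wrong vs. the reverse
chi2, p = mcnemar(b=14, c=5)
print(round(chi2, 3), round(p, 4))
```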
Humans ; Deep Learning ; Child ; Retrospective Studies ; Male ; Female ; Radiography/methods* ; ROC Curve ; Elbow/diagnostic imaging* ; Neural Networks, Computer ; Child, Preschool ; Elbow Joint/diagnostic imaging* ; Emergency Service, Hospital ; Adolescent ; Infant ; Artificial Intelligence
4. Development of an abdominal acupoint localization system based on AI deep learning.
Mo ZHANG ; Yuming LI ; Zongming SHI
Chinese Acupuncture & Moxibustion 2025;45(3):391-396
This study aims to develop an abdominal acupoint localization system based on computer vision and convolutional neural networks (CNNs). To address the challenge of abdominal acupoint localization, a multi-task CNN architecture was constructed and trained to locate Shenque (CV8) and the human body boundary. From the identified Shenque (CV8), the system then derives four further acupoints: Shangwan (CV13), Qugu (CV2), and bilateral Daheng (SP15). An affine transformation matrix maps image coordinates into an acupoint template space, achieving precise localization of abdominal acupoints. Testing verified that the system can accurately identify and locate abdominal acupoints in images. This localization system provides technical support for TCM remote education, diagnostic assistance, and advanced TCM equipment such as intelligent acupuncture robots, facilitating the standardization and intelligent advancement of acupuncture.
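The affine mapping step described above applies a 2×3 matrix to image coordinates in homogeneous form. Below is a minimal sketch; the matrix values are made up for illustration and are not the system's calibration:

```python
def apply_affine(matrix, point):
    """Map an image coordinate (x, y) through a 2x3 affine matrix
    [[a, b, tx], [c, d, ty]] into template space."""
    (a, b, tx), (c, d, ty) = matrix
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

# Illustrative transform: scale by 0.5 and translate by (10, 20)
M = [[0.5, 0.0, 10.0], [0.0, 0.5, 20.0]]
print(apply_affine(M, (100, 60)))  # image pixel -> template coordinate
```

In practice such a matrix would be estimated from detected landmarks (e.g. Shenque and the body boundary) rather than fixed by hand.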
Acupuncture Points ; Humans ; Deep Learning ; Abdomen/diagnostic imaging* ; Neural Networks, Computer ; Acupuncture Therapy ; Image Processing, Computer-Assisted
5. Construction of an artificial intelligence-assisted system for auxiliary detection of auricular point features based on the YOLO neural network.
Ganhong WANG ; Zihao ZHANG ; Kaijian XIA ; Yanting ZHOU ; Meijuan XI ; Jian CHEN
Chinese Acupuncture & Moxibustion 2025;45(4):413-420
OBJECTIVE:
To develop an artificial intelligence-assisted system for the automatic detection of 21 common auricular points based on the YOLOv8 neural network.
METHODS:
A total of 660 human auricular images from three research centers were collected from June 2019 to February 2024. Bounding boxes and point features were annotated on the images using the LabelMe 5.3.1 tool and converted into a format compatible with the YOLO model. Using these data, transfer learning and fine-tuning were performed on pretrained YOLO neural network models of different scales. Model performance was evaluated on the validation and test sets, including mean average precision (mAP) at various thresholds, recall, frames per second (FPS) and confusion matrices. Finally, the model was deployed on a local computer, and real-time detection of human auricular images was performed with a camera.
RESULTS:
Five different versions of the YOLOv8 key-point detection model were developed: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x. On the validation set, YOLOv8n showed the best performance in terms of speed (225.736 frames per second) and precision (0.998). On the external test set, YOLOv8n achieved an accuracy of 0.991, a sensitivity of 1.0, and an F1 score of 0.995. For the localization of auricular point features, the mean average precision at a 50% intersection-over-union threshold (mAP50) was 0.990, with a precision of 0.995 and a recall of 0.997.
CONCLUSION
The YOLOv8n-based key-point detection model for the 21 common auricular points exhibits excellent predictive performance and can rapidly and automatically locate and classify auricular points.
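The mAP50 figure above counts a detection as correct when its box overlaps the annotation with at least 50% intersection over union (IoU). Below is a minimal IoU sketch for axis-aligned (x1, y1, x2, y2) boxes; the coordinates are illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if no overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted auricular-point box vs. its annotation
print(round(iou((10, 10, 50, 50), (20, 20, 60, 60)), 3))
```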
Humans ; Neural Networks, Computer ; Artificial Intelligence ; Acupuncture Points
6. Research status of automatic localization of acupoint based on deep learning.
Yuge DONG ; Chengbin WANG ; Weigang MA ; Weifang GAO ; Yuzi TANG ; Yonglong ZHANG ; Jiwen QIU ; Haiyan REN ; Zhongzheng LI ; Tianyi ZHAO ; Zhongxi LV ; Xingfang PAN
Chinese Acupuncture & Moxibustion 2025;45(5):586-592
This paper reviews recent publications on the application of deep learning to automatic acupoint localization and summarizes them across three key links: dataset construction, neural network model design, and evaluation of localization accuracy. Significant progress has been made in deep learning for acupoint localization, but the scale of acupoint detection needs to be expanded, and the precision, generalization ability, and real-time performance of the models need to be improved. Future research should focus on standardized datasets and on integrating 3D modeling and multimodal data fusion, so as to increase the accuracy and personalization of acupoint localization.
Deep Learning ; Acupuncture Points ; Humans ; Neural Networks, Computer
7. Key technologies and challenges in online adaptive radiotherapy for lung cancer.
Baiqiang DONG ; Shuohan ZHENG ; Kelly CHEN ; Xuan ZHU ; Sijuan HUANG ; Xiaobo JIANG ; Wenchao DIAO ; Hua LI ; Lecheng JIA ; Feng CHI ; Xiaoyan HUANG ; Qiwen LI ; Ming CHEN
Chinese Medical Journal 2025;138(13):1559-1567
Definitive treatment of lung cancer with radiotherapy is challenging, as respiratory motion and anatomical changes can increase the risk of severe off-target effects during radiotherapy. Online adaptive radiotherapy (ART) is an evolving approach that enables timely modification of a treatment plan during the interfraction of radiotherapy, in response to physiologic or anatomic variations, aiming to improve the dose distribution for precise targeting and delivery in lung cancer patients. The effectiveness of online ART depends on the seamless integration of multiple components: sufficient quality of linear accelerator-integrated imaging guidance, deformable image registration, automatic recontouring, and efficient quality assurance and workflow. This review summarizes the present status of online ART for lung cancer, including key technologies, as well as the challenges and areas of active research in this field.
Humans ; Lung Neoplasms/radiotherapy* ; Radiotherapy Planning, Computer-Assisted/methods*
8. Large models in medical imaging: Advances and prospects.
Mengjie FANG ; Zipei WANG ; Sitian PAN ; Xin FENG ; Yunpeng ZHAO ; Dongzhi HOU ; Ling WU ; Xuebin XIE ; Xu-Yao ZHANG ; Jie TIAN ; Di DONG
Chinese Medical Journal 2025;138(14):1647-1664
Recent advances in large models demonstrate significant prospects for transforming the field of medical imaging. These models, including large language models, large visual models, and multimodal large models, offer unprecedented capabilities in processing and interpreting complex medical data across various imaging modalities. By leveraging self-supervised pretraining on vast unlabeled datasets, cross-modal representation learning, and domain-specific medical knowledge adaptation through fine-tuning, large models can achieve higher diagnostic accuracy and more efficient workflows for key clinical tasks. This review summarizes the concepts, methods, and progress of large models in medical imaging, highlighting their potential in precision medicine. The article first outlines the integration of multimodal data under large model technologies, approaches for training large models with medical datasets, and the need for robust evaluation metrics. It then explores how large models can revolutionize applications in critical tasks such as image segmentation, disease diagnosis, personalized treatment strategies, and real-time interactive systems, thus pushing the boundaries of traditional imaging analysis. Despite their potential, the practical implementation of large models in medical imaging faces notable challenges, including the scarcity of high-quality medical data, the need for optimized perception of imaging phenotypes, safety considerations, and seamless integration with existing clinical workflows and equipment. As research progresses, the development of more efficient, interpretable, and generalizable models will be critical to ensuring their reliable deployment across diverse clinical environments. This review aims to provide insights into the current state of the field and provide directions for future research to facilitate the broader adoption of large models in clinical practice.
Humans ; Diagnostic Imaging/methods* ; Precision Medicine/methods* ; Image Processing, Computer-Assisted/methods*
9. Role of artificial intelligence in medical image analysis.
Lu WANG ; Shimin ZHANG ; Nan XU ; Qianqian HE ; Yuming ZHU ; Zhihui CHANG ; Yanan WU ; Huihan WANG ; Shouliang QI ; Lina ZHANG ; Yu SHI ; Xiujuan QU ; Xin ZHOU ; Jiangdian SONG
Chinese Medical Journal 2025;138(22):2879-2894
With the emergence of deep learning techniques based on convolutional neural networks, artificial intelligence (AI) has driven transformative developments in the field of medical image analysis. Recently, large language models (LLMs) such as ChatGPT have also started to achieve distinction in this domain. Increasing research shows the undeniable role of AI in reshaping various aspects of medical image analysis, including processes such as image enhancement, segmentation, detection in image preprocessing, and postprocessing related to medical diagnosis and prognosis in clinical settings. However, despite the significant progress in AI research, studies investigating the recent advances in AI technology in the aforementioned aspects, the changes in research hotspot trajectories, and the performance of studies in addressing key clinical challenges in this field are limited. This article provides an overview of recent advances in AI for medical image analysis and discusses the methodological profiles, advantages, disadvantages, and future trends of AI technologies.
Artificial Intelligence ; Humans ; Image Processing, Computer-Assisted/methods* ; Neural Networks, Computer ; Deep Learning ; Diagnostic Imaging/methods*
10. Optimization of extraction process for Shenxiong Huanglian Jiedu Granules based on AHP-CRITIC hybrid weighting method, grey correlation analysis, and BP-ANN.
Zi-An LI ; De-Wen LIU ; Xin-Jian LI ; Bing-Yu WU ; Qun LAN ; Meng-Jia GUO ; Jia-Hui SUN ; Nan-Yang LIU ; Hui PEI ; Hao LI ; Hong YI ; Jin-Yu WANG ; Liang-Mian CHEN
China Journal of Chinese Materia Medica 2025;50(10):2674-2683
The weight coefficients of the evaluation indicators were determined using the analytic hierarchy process (AHP), the CRITIC method (a weighting method based on indicator correlations), and an AHP-CRITIC hybrid weighting method, followed by comparison of comprehensive scores. Grey correlation analysis was then applied to the results obtained with the hybrid weighting method. Subsequently, a backpropagation artificial neural network (BP-ANN) model was constructed to predict the extraction process parameters and optimize the extraction process for Shenxiong Huanglian Jiedu Granules (SHJG). An L9(3^4) orthogonal experiment was designed to optimize three factors at three levels: number of extractions, amount of added water, and extraction time. The evaluation indicators were geniposide, berberine, ginsenoside Rg1 + Re, ginsenoside Rb1, ferulic acid, and extract yield. Finally, the optimal extraction conditions obtained by the orthogonal experiment, grey correlation analysis, and BP-ANN were compared, and validation experiments were conducted. The optimal process was two rounds of aqueous extraction of one hour each, with ten times the amount of added water for the first extraction and eight times for the second. In the validation experiments, the average content of each indicator component was higher than that obtained in the orthogonal experiment, with a higher comprehensive score. The optimized extraction parameters were reliable and stable and are suitable for subsequent preparation process research.
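The grey correlation analysis used above ranks candidate parameter settings by their closeness to a reference (ideal) series. Below is a minimal sketch of one common formulation of the grey relational grade; the scores and the resolution coefficient rho = 0.5 are illustrative assumptions, not the study's data:

```python
def grey_relational_grades(reference, series, rho=0.5):
    """Grey relational grade of each candidate series against a reference.

    One common formulation: coeff_k = (d_min + rho*d_max) / (d_k + rho*d_max),
    where d_k = |x0_k - xi_k|; the grade is the mean coefficient.
    Inputs are assumed already normalized to a comparable scale.
    """
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Illustrative normalized indicator scores: ideal reference vs. three runs
ref = [1.0, 1.0, 1.0]
runs = [[0.9, 0.8, 1.0], [0.6, 0.7, 0.5], [1.0, 0.95, 0.9]]
print([round(g, 3) for g in grey_relational_grades(ref, runs)])
```

A higher grade means a run's indicator profile tracks the ideal more closely, which is how candidate extraction conditions are ranked.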
Drugs, Chinese Herbal/analysis* ; Neural Networks, Computer

