The application of DeepSeek-assisted teaching in the cultivation of clinical thinking skills for medical laboratory technology students
10.3760/cma.j.cn114452-20250703-00394
- VernacularTitle:DeepSeek辅助教学在医学检验技术专业临床思维能力培养中的应用
- Author:
Yufan RUAN 1 ; Dan JIN; Juan XI; Jiancheng TU; Chunzi LIANG
Author Information
1. School of Laboratory Medicine, Hubei University of Chinese Medicine, Wuhan 430065, China
- Publication Type:Journal Article
- Keywords:
Artificial intelligence;
Clinical laboratory techniques/education;
Clinical reasoning;
DeepSeek large language model
- From:
Chinese Journal of Laboratory Medicine
2025;48(12):1552-1557
- Country:China
- Language:Chinese
- Abstract:
Objective:To explore the application effectiveness of the large language model DeepSeek in cultivating clinical thinking skills among medical laboratory technology students.
Methods:A non-randomized controlled study was conducted. In the 2024-2025 academic year, two classes of second-year medical laboratory technology students from Hubei University of Chinese Medicine were selected and divided into a DeepSeek-assisted teaching group (Class A, n=53) and a traditional teaching control group (Class B, n=53), totaling 106 students. Both groups followed a 20-week problem-based learning (PBL) framework with identical teaching content, instructors, and class hours. Class A used DeepSeek via the "Learning Pass AI" platform for case diagnosis reasoning, prompt construction training, test plan formulation, and result analysis, while Class B received traditional PBL instruction. Paired t-tests were used to compare pre- and post-teaching scores in clinical thinking skills, AI interaction literacy, and prompt construction in Class A. Independent-samples t-tests and chi-square (χ2) tests were used to evaluate between-group differences in case reasoning scores, etiology analysis accuracy, and teaching satisfaction. Structured questionnaires supplemented the evaluation of model-assisted teaching processes.
Results:In Class A, post-teaching scores improved significantly over pre-teaching scores in clinical thinking skills [(4.02±0.45) points vs. (3.09±0.50) points, t=2.23] and AI interaction literacy [(4.62±0.41) points vs. (3.27±0.54) points, t=2.18]. Compared with Class B, Class A performed better in case reasoning scores [(81.1±3.8) points vs. (74.3±4.2) points, t=8.97], etiology analysis accuracy [94.3% (50/53) vs. 81.1% (43/53), χ2=4.29], and teaching satisfaction [(95.6±3.2) points vs. (82.6±4.8) points, t=11.86] (all P<0.05). Questionnaire results indicated that during model application, prompt construction improved in logic [(2.85±0.58) points to (4.25±0.50) points, t=14.23, P<0.01] and innovation [(2.60±0.53) points to (4.05±0.46) points, t=11.57, P<0.05], but question clarity (77.4%, 41/53) and medical terminology accuracy (43.4%, 23/53) remained the primary shortcomings.
Conclusion:Integrating large language models into AI-teacher collaborative learning pathways can effectively promote students' autonomous inquiry and clinical reasoning, thereby enhancing the clinical thinking skills of medical laboratory technology students.