Chinese Journal of Nursing Education, 2025, Vol. 22, Issue 4: 398-402. doi: 10.3761/j.issn.1672-9234.2025.04.003

• Special Topic: Artificial Intelligence Empowering Nursing Education •

Application of ChatGPT in self-directed answers to nursing problems and measures for surgical patients

LI Peng, ZHANG Yuanhui, TANG Long, LIU Shan, ZHENG Xiaoni, Jianchun Ji, LIU Jian, LONG Teng

  1. School of Nursing, Yiyang Medical College, Yiyang 413000, Hunan Province (LI Peng, ZHENG Xiaoni); Department of Emergency Medicine (ZHANG Yuanhui) and Ward 1, Department of Critical Care Medicine (TANG Long), Affiliated Hospital of Guilin Medical University, Guilin 541001; Physical Examination Center, Yiyang Central Hospital, Yiyang 413000, Hunan Province (LIU Shan); Alliance Nursing, Melbourne, Australia (Jianchun Ji); Department of Radiology, Affiliated Hospital of Yiyang Medical College, Yiyang 413000, Hunan Province (LIU Jian, LONG Teng)
  • Received: 2025-01-02   Online: 2025-04-15   Published: 2025-04-16
  • Corresponding author: ZHANG Yuanhui, master's degree, associate chief nurse, E-mail: 170153896@qq.com
  • About the author: LI Peng, male, PhD, associate professor, E-mail: 451792921@qq.com
  • Funding:
    Teaching Reform Project of the Department of Education of Hunan Province (ZJGB2024324); Project of the Hunan Provincial Social Science Achievement Review Committee (XSP24YBC210)

Abstract:

Objective To evaluate the accuracy of ChatGPT-4 in autonomously answering nursing problems and nursing measures for surgical patients, and to provide a reference for the intelligent reform of nursing education. Methods Thirty surgical inpatient cases from a tertiary Grade A hospital in 2024 were selected as the analysis material. Nursing problems and nursing measures were generated in three steps: pre-training input, writing instruction prompts, and entering the instructions. Five nursing experts rated the accuracy of the output against a unified scoring standard, and consistency was tested with Fleiss' kappa and Cronbach's α coefficient. Results For the 30 cases, the proportions of accuracy ratings of "very good" for the nursing problems and the nursing measures were 85.3% and 80.0%, respectively. Fleiss' kappa values for pairwise comparisons of the expert scores for each case ranged from 0.433 to 0.763, and the overall Cronbach's α coefficients for the nursing problems and the nursing measures across all cases were 0.908 and 0.943, respectively. Conclusion The overall accuracy of the nursing problems and nursing measures generated autonomously by ChatGPT-4 is acceptable, reflecting the potential of large language models to serve as teaching aids; however, the generated information still requires further human verification. These findings can serve as a reference for the further application and development of artificial intelligence in nursing education.
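The three-step generation workflow described in the Methods (pre-training input, writing instruction prompts, entering the instructions) can be sketched in Python. The snippet below is a minimal illustration using the OpenAI Python client; the model identifier, prompt wording, and the generate_care_plan helper are illustrative assumptions, not the authors' actual protocol or prompts.

# Minimal sketch of the three-step workflow from the Methods:
# (1) pre-training input as a system message, (2) an instruction prompt,
# (3) submitting a de-identified case description. Model ID, prompt text,
# and helper names are illustrative assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRETRAINING_INPUT = (
    "You are a nursing educator. For the surgical inpatient case provided, "
    "list the nursing problems (nursing diagnoses) and, for each problem, "
    "the corresponding nursing measures."
)

def generate_care_plan(case_text: str) -> str:
    """Return the model's nursing problems and measures for one case."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study used ChatGPT-4; the exact model ID is assumed
        messages=[
            {"role": "system", "content": PRETRAINING_INPUT},                # step 1
            {"role": "user", "content": "Case description:\n" + case_text},  # steps 2-3
        ],
        temperature=0.2,  # lower randomness for more reproducible answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_care_plan("65-year-old man, day 1 after laparoscopic cholecystectomy ..."))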

Key words: ChatGPT-4, Artificial intelligence, Nursing problems, Nursing measures, Accuracy
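For the consistency results reported above (Fleiss' kappa for agreement among the five experts and Cronbach's α across the 30 cases), a rough computational sketch in Python follows. It assumes the expert scores are arranged as a cases-by-raters matrix of integer ratings and computes a single overall Fleiss' kappa, whereas the study reports per-case pairwise values; the sample data, variable names, and scoring scale are placeholders, not the study's data.

# Sketch of the consistency statistics named in the Results: Fleiss' kappa for
# agreement among the five expert raters and Cronbach's alpha across cases.
# The rating matrix is a placeholder; real input would be the 30 x 5 scores.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per case, one column per expert; integer accuracy scores (assumed scale).
ratings = np.array([
    [4, 4, 3, 4, 4],
    [3, 4, 4, 3, 4],
    [4, 4, 4, 4, 3],
])

# Fleiss' kappa: convert the subject-by-rater scores into subject-by-category counts.
counts, _categories = aggregate_raters(ratings)
kappa = fleiss_kappa(counts, method="fleiss")

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha, treating each expert (column) as an 'item'."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(f"Fleiss' kappa: {kappa:.3f}")
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")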