Posted on 2016-9-9 13:33:15

Crowd-Sourced Assessment of Technical Skills for Validation of Basic Laparoscopic Urologic Skills Tasks

Abstract
PURPOSE:
The BLUS (Basic Laparoscopic Urologic Skills) consortium sought to address the construct validity of the BLUS tasks, and the wider problem of accurate, scalable and affordable skill evaluation, by investigating the concordance of 2 novel candidate methods, automated motion metrics and crowdsourcing, with faculty panel scores.

MATERIALS AND METHODS:
A faculty panel of 5 surgeons and anonymous crowdworkers blindly reviewed, in randomized sequence, a representative sample of 24 videos (12 pegboard and 12 suturing) extracted from the BLUS validation study (n=454), using the GOALS (Global Objective Assessment of Laparoscopic Skills) survey tool with appended pass-fail anchors, via the same web based user interface. Pre-recorded motion metrics (tool path length, jerk cost, etc.) were available for each video. Cronbach's alpha, Pearson's r and ROC curves with AUC statistics were used to evaluate concordance between continuous scores, and between pass-fail criteria, among the 3 groups: faculty, crowds and motion metrics.
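For illustration, the three concordance statistics named above can be computed from first principles. The sketch below uses synthetic placeholder ratings, not data from the study; the panel layout (raters by videos) and the GOALS-style score totals are assumptions for the example only.

```python
import statistics

def cronbach_alpha(ratings):
    """Inter-rater reliability; ratings[i][j] = rater i's score for video j."""
    k = len(ratings)
    rater_var = sum(statistics.pvariance(r) for r in ratings)
    totals = [sum(video) for video in zip(*ratings)]
    return k / (k - 1) * (1 - rater_var / statistics.pvariance(totals))

def pearson_r(x, y):
    """Pearson correlation between two lists of continuous scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def auc(scores, passed):
    """ROC AUC via the Mann-Whitney interpretation: the probability that a
    randomly chosen passing video outscores a randomly chosen failing one."""
    pos = [s for s, p in zip(scores, passed) if p]
    neg = [s for s, p in zip(scores, passed) if not p]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 3 raters scoring 4 videos (hypothetical GOALS totals).
panel = [[20, 14, 8, 22], [19, 15, 9, 21], [21, 13, 10, 23]]
alpha = cronbach_alpha(panel)
```

A crowd-vs-faculty comparison would then pass each group's mean per-video scores to `pearson_r`, and the crowd scores with the faculty pass-fail labels to `auc`.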

RESULTS:
Crowdworkers provided 1,840 ratings in approximately 48 hours, 60 times faster than the faculty panel. The inter-rater reliability between mean expert and crowd ratings was good (α=0.826). Pass-fail decisions derived from crowd scores achieved an AUC of 96.9% (95% CI 90.3-100; positive predictive value 100%, negative predictive value 89%). Motion metrics and crowd scores provided similar or nearly identical concordance with faculty panel ratings and pass-fail decisions.
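The positive and negative predictive values reported above follow from a simple confusion-matrix count at a chosen score threshold. A minimal sketch, assuming made-up crowd scores and faculty pass-fail labels (not the study's data):

```python
def ppv_npv(scores, passed, threshold):
    """Treat score >= threshold as a predicted pass; compare against the
    reference pass-fail labels and return (PPV, NPV)."""
    tp = sum(1 for s, p in zip(scores, passed) if s >= threshold and p)
    fp = sum(1 for s, p in zip(scores, passed) if s >= threshold and not p)
    tn = sum(1 for s, p in zip(scores, passed) if s < threshold and not p)
    fn = sum(1 for s, p in zip(scores, passed) if s < threshold and p)
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical crowd scores and faculty pass-fail reference for 6 videos.
scores = [22, 20, 18, 12, 10, 8]
passed = [True, True, False, True, False, False]
ppv, npv = ppv_npv(scores, passed, threshold=15)
```

Sweeping the threshold over all observed scores and recording the resulting sensitivity and specificity at each step is what traces out the ROC curve whose area the abstract reports.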

CONCLUSIONS:
The concordance of crowdsourcing with faculty panels, together with its speed of review, is sufficiently high to merit further investigation alongside automated motion metrics. The overall agreement among faculty, motion metrics and crowdworkers provides evidence supporting the construct validity of 2 of the 4 BLUS tasks.

Original article:
Crowd-Sourced Assessment of Technical Skills for Validation of Basic Laparoscopi.pdf (698.09 KB; downloads: 0; price: 99 香叶)