As simulation training steadily gains ground in surgical curricula worldwide, much attention is being paid to fine-tuning its delivery and integration, and to its capacity to objectively assess student proficiency and readiness to progress safely through training.
Indeed, the global call for surgical simulation-based training to be delivered in competence-based, rather than time-based, curricula raises the question: how exactly does one define and assess competency?
Many simulation trainers offer a range of metrics that can effectively judge whether a student has reached certain benchmarks indicating mastery, and thus their readiness to safely progress to more advanced training or to performing certain procedures in the OR.
Metric-based formative feedback, in which performance is quantified and assessed throughout training, allows for deliberate rather than merely repeated practice. Before the large-scale introduction of simulation training, surgical skills were acquired in clinical settings, and the speed at which they were mastered depended on how frequently a learning scenario presented itself, whether it offered a viable opportunity for skill practice, and the willingness of supervisors to turn the scenario into a teachable moment.
Reviewing the literature confirms that clinical evaluation and surgical skill assessments are still mostly subjective, despite many studies demonstrating that motion analysis of surgical tools can quantitatively measure and assess student skill levels. These metrics are available across the range of simulation technologies, from simple box models to advanced VR simulators, and would further allow for better standardization of training outcomes.
The performance feedback required for deliberate practice hinges on procedural experts identifying and defining what optimal and sub-optimal performance look like. In practice, this could be as simple as using the procedural performance of expert surgeons as benchmarks for novice learners, bypassing potentially difficult research requirements while still vastly improving current training standards.
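To illustrate the expert-benchmark idea, here is a minimal sketch of one commonly cited motion metric, tool-tip path length (economy of motion), compared against an expert reference. All names, numbers, and the tolerance factor are hypothetical; real simulators track many more parameters (time, smoothness, instrument collisions) and derive benchmarks from validated expert datasets.

```python
import math

def path_length(trajectory):
    """Total distance travelled by the tool tip over a list of (x, y, z) samples."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))

def meets_benchmark(trainee_length, expert_length, tolerance=1.25):
    """A trainee 'passes' if their path length is within a tolerance factor of
    the expert benchmark (shorter paths indicate better economy of motion).
    The 1.25 factor is an illustrative placeholder, not a validated threshold."""
    return trainee_length <= expert_length * tolerance

# Hypothetical tool-tip trajectories, sampled in cm
expert = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
novice = [(0, 0, 0), (1, 1, 0), (1, 0, 0), (2, 1, 0)]

print(meets_benchmark(path_length(novice), path_length(expert)))
```

The same pattern extends to any scalar metric: record the expert distribution, then flag trainee attempts that fall outside an agreed tolerance band.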
The delivery of metrics affords students the chance to practice core skills in simulation labs with more autonomy, and to gain proficiency deliberately and systematically. Learning experiences are greatly enhanced, delivered more efficiently, and with greater efficacy.
To explore how metrics can add greater value to your simulation learning experience, please visit: https://laparosimulators.com/, and check out the latest hybrid Advance model at: https://laparosimulators.com/a-comprehensive-laparoscopic-simulator-meet-laparo-apex/.