COVID-19: Essentials of Simulation Evaluation

By: Sabrina Beroz

The COVID-19 pandemic has profoundly altered nursing education, necessitating a move to online delivery across all levels of programs. Didactic and clinical experiences have shifted to virtual platforms. Synchronous and asynchronous simulation experiences have taken the place of clinical hours when students cannot attend clinical rotations at hospital sites. Many state boards of nursing have waived regulatory limits on simulation-based education.

These unprecedented times call on educators to evaluate all aspects of simulation. Accreditors and regulators will be looking for evidence of how simulation-based education meets program outcomes. This post proposes aligning three essentials for evaluating simulation: theory, standards, and valid and reliable instruments.

The NLN Jeffries Simulation Theory describes several concepts, from the overall context, background, and design to the simulation experience, facilitator and participant interaction, and outcomes (Jeffries, 2016). Simulation educators can evaluate the development, implementation, and outcomes of an experience across these aspects of the theory. The International Nursing Association for Clinical Simulation and Learning (INACSL) Standards of Best Practice: SimulationSM offer nine standards, based on the most recent research, to aid in the integration of simulation into nursing programs (INACSL Standards Committee, 2016). The standards provide simulation educators and researchers with a roadmap of best practices for designing, implementing, and evaluating simulation. Fortunately, instruments are available to measure many aspects of simulation-based experiences. The table below offers an example of alignment between the NLN Jeffries Simulation Theory, the INACSL Standards of Best Practice: Simulation, and corresponding valid and reliable instruments to evaluate simulation.

| NLN Jeffries Simulation Theory | INACSL Standard of Best Practice: Simulation | Valid and Reliable Instrument |
|---|---|---|
| Simulation Design | Simulation Design | Simulation Design Scale (NLN, 2004) |
| Facilitator and Participant Interaction | Outcomes and Objectives; Facilitation; Professional Integrity; Debriefing | Simulation Effectiveness Tool – Modified (Leighton et al., 2015); Facilitator Competency Rubric (Leighton et al., 2018); Debriefing Assessment for Simulation in Healthcare (Center for Medical Simulation, 2011) |
| Participant Outcomes | Participant Evaluation | Creighton Competency Evaluation Instrument (Creighton University, 2014); Lasater Clinical Judgment Rubric (Lasater, 2007); Seattle University Simulation Evaluation Tool (Mikasa et al., 2013) |

In summary, nursing education has a duty to evaluate the strength of simulation programs: what is going well and what needs improvement. The pandemic presents an opportunity to validate simulation, and evaluation is essential to that effort!


References

Center for Medical Simulation. (2011). Debriefing assessment for simulation in healthcare: Student and instructor versions. https://harvardmedsim.org/debriefing-assessment-for-simulation-in-healthcare-dash/

Creighton University. (2014). Competency Evaluation Instrument. https://nursing.creighton.edu/academics/competency-evaluation-instrument

INACSL Standards Committee. (2016, December). INACSL standards of best practice: SimulationSM. Clinical Simulation in Nursing, 12(S), S1-S47. https://doi.org/10.1016/j.ecns.2016.09.005

Jeffries, P. (2016). The NLN Jeffries simulation theory. National League for Nursing.

Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496-503. https://doi.org/10.3928/01484834-20071101-04

Leighton, K., Mudra, V., & Gilbert, G. (2018). Development and psychometric evaluation of the Facilitator Competency Rubric. Nursing Education Perspectives, 39(6), E3-E9. https://doi.org/10.1097/01.NEP.0000000000000409

Leighton, K., Ravert, P., Mudra, V., & Macintosh, C. (2015). Updating the Simulation Effectiveness Tool: Item modification and reevaluation of psychometric properties. Nursing Education Perspectives, 36(5), 317–323. https://doi.org/10.5480/15-1671

Mikasa, A., Cicero, T., & Adamson, K. (2013). Outcome-based evaluation tool to evaluate student performance in high-fidelity simulation. Clinical Simulation in Nursing, 9(9), e361-e367. https://doi.org/10.1016/j.ecns.2012.06.001

National League for Nursing. (2004). Tools and instruments: Use of NLN surveys and research. http://www.nln.org/professional-development-programs/research/tools-and-instruments/descriptions-ofavailable-instruments
