By: Sabrina Beroz
Simulation-based education (SBE) has advanced as a key strategy in the education of nurses and other health care providers. The National Council of State Boards of Nursing (NCSBN) National Simulation Study provided evidence supporting the substitution of up to 50 percent of traditional clinical hours with simulation (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). Schools of nursing are now taking steps to integrate simulation across curricula, leading to the question: “If schools of nursing are substituting clinical hours with simulation, how are we evaluating participant performance?” Just as clinical experiences that take place off campus are evaluated, simulation, when viewed as on-campus clinical, requires evaluation for the achievement of learner outcomes.
According to the International Nursing Association for Clinical Simulation and Learning (INACSL) Standards of Best Practice: SimulationSM Participant Evaluation, “all simulation-based experiences require participant evaluation” (INACSL, 2016, p. S26). Formative evaluation measures progression toward achieving objectives and outcomes, whereas summative evaluation measures actual achievement at a discrete moment in time. High-stakes evaluation attaches major implications or consequences to performance in a simulation-based experience (INACSL, 2016).
How, then, do we provide valid and reliable evaluation of performance? Using the NLN Jeffries Simulation Theory (Jeffries, 2015) as a guide, one must first validate the scenario and then select a valid and reliable tool for evaluating performance in the SBE. The essential steps follow.
Validation: Content experts are the requisite resource for validating scenarios, and they likely reside outside your institution. Face validity determines whether physical and conceptual fidelity will enhance realism (psychological fidelity). Content validation of a scenario, in which educators engage content experts who rate each element’s relevance to the construct, provides support for objective/outcome assessment at all three levels of evaluation (Rutherford-Hemming, 2015). The entire scenario, as well as the prebriefing and debriefing, must be reviewed. A content validity index (CVI) using the Lynn or Lawshe method is an integral part of the process. (See Rutherford-Hemming, 2015, for an example of a validated scenario.)
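To make the arithmetic behind these indices concrete, here is a minimal sketch of the item-level and scale-level CVI (Lynn method) and Lawshe’s content validity ratio. The panel size, scenario elements, and ratings are hypothetical, chosen only for illustration.

```python
# Sketch: content validity indices for a simulation scenario.
# All ratings below are hypothetical.

def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating the item relevant (3 or 4 on a 4-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across all scenario elements."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

def lawshe_cvr(n_essential, n_experts):
    """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e experts rate the item 'essential'."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical panel of 6 content experts rating 3 scenario elements
# (prebriefing, scenario body, debriefing) on a 4-point relevance scale.
ratings = {
    "prebriefing":   [4, 4, 3, 4, 3, 4],
    "scenario body": [4, 3, 4, 4, 4, 3],
    "debriefing":    [3, 4, 4, 2, 4, 4],
}

for name, r in ratings.items():
    # Lynn's criterion: with 6+ experts, an I-CVI of at least 0.78 is acceptable.
    print(f"I-CVI ({name}): {item_cvi(r):.2f}")
print(f"S-CVI/Ave: {scale_cvi_ave(list(ratings.values())):.2f}")
print(f"CVR (5 of 6 rate 'essential'): {lawshe_cvr(5, 6):.2f}")
```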
Once the scenario has been validated, a performance evaluation tool must be selected. Valid and reliable tools are available for measuring participant performance, including the Creighton Competency Evaluation Instrument and the Lasater Clinical Judgment Rubric (Adamson, Kardong-Edgren, & Wilhaus, 2013). Use caution when selecting a tool: strong psychometrics do not necessarily make it the correct tool for your purpose. The tool must be appropriate for the population and the activity.
Reliability: Crucial to evaluation is interrater reliability, the consistency among raters who are evaluating participant performance. Educators work together to determine the behaviors that meet the objectives/outcomes of the scenario. Agreeing on the expected behaviors for performance evaluation, while at times arduous, is key to establishing a tool that aligns with the scenario objectives.
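To show what checking interrater agreement can look like in practice, the sketch below computes Cohen’s kappa, one common interrater reliability statistic, for two raters scoring the same performance. The met/not-met scoring scheme and the scores themselves are assumptions for illustration, not part of any particular instrument.

```python
# Sketch: Cohen's kappa for two raters scoring the same set of expected
# behaviors as met (1) or not met (0). Scores below are hypothetical.

def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    # Expected chance agreement from each rater's marginal proportions
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

# Two faculty raters scoring 10 expected behaviors for one participant
rater_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.74 here: substantial agreement
```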
Educators often struggle to reach consensus on what constitutes achievement; even agreement on simple skills, such as handwashing or taking vital signs, may take more time than anticipated. However, it is essential to recognize that rater variability hinders the quality of simulation and leads to inconsistent or ineffective evaluation outcomes. The NLN explored the use of simulation for high-stakes evaluation and concluded that well-designed (validated) scenarios, facilitated by knowledgeable educators, can provide a valid and reliable means of evaluating performance (Rizzolo, Kardong-Edgren, Oermann, & Jeffries, 2015).
As more schools of nursing integrate simulation across the curriculum, careful planning to establish valid and reliable competency evaluation is essential.
References
Adamson, K., Kardong-Edgren, S., & Wilhaus, J. (2013). An updated review of published simulation evaluation instruments. Clinical Simulation in Nursing, 9, e393-e400.
Hayden, J., Smiley, R., Alexander, M., Kardong-Edgren, S., & Jeffries, P. (2014, July). NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2), S1-S64. Retrieved from https://www.ncsbn.org/JNR_Simulation_Supplement.pdf
INACSL Standards Committee. (2016, December). INACSL standards of best practice: SimulationSM participant evaluation. Clinical Simulation in Nursing, 12, S26-S29.
Jeffries, P. (2015). NLN Jeffries Simulation Theory: Brief narrative description. Nursing Education Perspectives, 36(4), 292-293.
Rizzolo, M., Kardong-Edgren, S., Oermann, M., & Jeffries, P. (2015). The National League for Nursing project to explore the use of simulation for high-stakes assessment: Process, outcomes and recommendations. Nursing Education Perspectives, 36(4), 299-303.
Rutherford-Hemming, T. (2015). Determining content validity and reporting a content validity index for simulation scenarios. Nursing Education Perspectives, 36(6), 389-393.