The COVID-19 pandemic has forced us, some gladly and others under protest, to learn to use simulation as a teaching tool in education and clinical practice. Laurendine and colleagues (2020) surveyed providers and staff clinicians in their Neonatal Intensive Care Unit (NICU) in an effort to improve team performance during codes. Because research and guidelines for nursing education and other fields are only now becoming more accessible, Laurendine and colleagues (2020) developed their own survey and had it peer-reviewed before implementation. Their results were interesting and likely typical of most agencies: staff supported using clinical simulation and debriefing with multidisciplinary teams.
Kiernan and Olsen (2020) reported a nursing education study with 27 first-semester juniors and 35 first-semester seniors. They administered the Clinical Competence Questionnaire (CCQ) (Liou & Cheng, 2014) before and after extensive simulation training, and they described the strategies employed, including faculty and task trainer efforts, individual and group debriefing, and open access to the simulation lab.
Unfortunately, few clinicians have had this kind of training. In fact, the National Council of State Boards of Nursing's National Simulation Study was only completed in 2015 (Kardong-Edgren, 2015). State boards have struggled to evaluate and decide whether to allow simulation to replace all or part of clinical practice requirements for nursing students (Virginia Board of Nursing, 2017). Taibi and Kardong-Edgren (2014) found that only 23% of regional health care educator respondents had used simulation in teaching. Thus, an initial approach might be to improve training, or to provide periodic practice after training, because 'new nurses' were able to locate items on the crash cart sooner than others could (Laurendine et al., 2020).
These deficits make studies like the one conducted by Laurendine and colleagues (2020) all the more important and necessary. Developing tools to measure clinical competence, such as the CCQ, requires a theoretical framework and evidence-based research, and multidisciplinary teams are best suited for this work. Replication of previous studies is also valuable. The safety and efficacy of emergency codes is vital in any age group and remains a topic ripe for investigation because people forget and make mistakes. Debriefing, honest self-examination, and peer review can lead to new methods and guidelines that are more effective.
A replication of the study by Laurendine and colleagues (2020) might begin with a review of recent research literature, both for other instruments that measure clinical competence and for published clinical simulation curricula, training schedules, and methods. One modification might be to provide evidence-based, validated guidelines for aspects of the code protocol that could be consulted for review or quick reference. Although open-ended questions almost always surface unexpected results, a guideline for analyzing those results would also be needed. Of course, a larger participant pool that allowed comparison of groups from other units or agencies would be helpful.
Pediatr Nurs. 2020;46(5):254 © 2020 Jannetti Publications, Inc.