Whether you are implementing a new EMR or rolling out the latest upgrade, you may want to take another look at your project timeline. Having managed training for multiple upgrades and implementations, I have seen the same timeline mistake over and over again. Training starts before testing is complete, so there’s a last-minute need for retraining or a rush to deliver new information at go-live. As a result, training often looks "bad" or "incomplete" despite the endless hours of prep work done by the training team.
This was recently driven home by a physician user on LinkedIn. Someone shared an article about how Epic outspends other major software developers on R&D, and when the comments started pouring in, this physician piped up and said, "Why not spend that much on training? When we went live, our training reflected only 50% of the actual system we saw at go-live." As the kids say these days, "The struggle is real."
A typical EMR vendor will recommend running training deliverables in parallel with the build team's deliverables. In theory, this is a good idea. In reality, it leads to a great deal of rework and last-minute scrambling to deliver meaningful training. For example, if I write lesson plans according to a build plan and that build gets adjusted due to failure points found in testing, my materials become out of date mid-training. The same can be said of the training environment build, which commonly happens before even the first round of testing.
Most often, it is the Principal Trainers (Instructional Designers) who carry the burden of re-writing lesson plans and workbooks. They’re often adjusting training environment builds late into the night to prepare for class the next day. This occurs over and over again throughout training.
Classroom trainers often find things that "don't work" according to the lesson plan. This happens because the build was adjusted after the materials were written, and the training team either was never made aware of the change or had no chance to update the materials and relay the change to the trainers who were actively teaching when it was pushed out. Along the same lines, our classroom trainers often identify workflows that don't work as designed and require rebuild by an analyst.
And what about the "Super User" training program? Most often this group is scheduled to be the very first group trained, and it ends up being the largest group needing retraining prior to go-live. How does this help make them "Super"?
This puts Training Managers in a frustrating situation. How do we adjust timelines to optimize training and minimize project team downtime between build and go-live? Is adjusting the timelines alone enough? What about the scope of our Principal Trainers and Credentialed Trainers? Can we re-evaluate how we focus the training team to assist with testing, making testing more meaningful and training more robust?
Is there room in the project budget and timeline to allow for 100% completion of testing before the start of training? I believe there is. I believe, with a few slight tweaks to timeline and scope, we can increase the efficiency of our training dramatically.
So where do you start? The easiest place is the timeline. Instead of letting testing and training overlap, why not schedule training to start after your last round of integrated testing? The other easy timeline change is to ensure that workflow documentation is 100% complete, reflecting not just validation but the most up-to-date workflows and the results of integrated testing, all before training begins. This will ensure that training is as complete as possible, and your go-live build will be much better suited to your end users.
But that creates another problem: what do you do with your training team if you are moving some of the other work up? That's easy: put them to work, just in a slightly different way that will greatly benefit the build team. At the beginning of most projects you bring in the Principal Trainers (Instructional Designers), but the majority of their early work is undone by build changes and other decisions made throughout the build phases of the project. This is an opportunity to rethink how we use our training team. If the training team spends that time working with their analyst partners on decision tracking and testing, instead of on items that will need to be reworked later, the work becomes much more meaningful for everyone involved.
Often the training team brings an element of “end user” to the build, and helps builders and informaticists to think about build and decisions from the perspective of the end user. Additionally, the training folks can help the analysts have more meaningful discussions and relationships with their stakeholders.
With projects as complex as EMR implementations, timelines will never be perfect and things will sometimes go wrong. In my experience, however, there are ways to reduce wasted time and improve efficiency by thinking through and carefully planning your project. If you're interested in chatting with me about how, send me an email.