This may or may not surprise you, but there is a disconnect between the training that business managers request and the training that learning professionals actually deliver.
Part of this disconnect lies in the learning function’s uphill battle to actually measure training and demonstrate learning’s ROI and impact on the business. Marketing is a good example of a business function transformed by our digital age: it has gone from very little measurement (think of trying to calculate an accurate ROI or revenue increase from billboard ads, taxi-cab ads, or print ads) to a heavily data-driven function where evaluating engagement, ROI, cost-per-lead, and many other metrics is fairly simple and common. A very similar transformation is happening in learning & development, except that instead of trying to calculate business impact from something like billboard ads, it is trying to calculate business impact from face-to-face training or eLearning delivered without any meaningful analytics.
Transitioning From Old to New
When learning and development professionals discuss measurement and analytics, they are usually talking about either the Kirkpatrick or Phillips model, or about how to better track learning activities.
We want to discuss a more modern approach: designing your training specifically so that its impact can be measured.
Kirkpatrick’s is an older model, but it can be helpful for understanding where training impact lies through surveys. Surveys can be valuable for comparing how learners feel about the training, but they should not be your main measuring tool. Surveys are essentially guesswork: if you ask people whether they are good drivers, most will come back with a resounding “yes,” but that says little about whether they actually know the rules of the road. By the same token, the number of courses taught, the number of course completions, and scores on “smiley sheets” say nothing about how the training impacted the business. Most organizations never get past levels 1 and 2 of the Kirkpatrick model, and that is where strategic training design comes in.
Better tracking of learning activities is great, so long as what you track is meaningful. Learning activities in and of themselves are not strong evidence that learning has made a difference to the learners or to the organization. If you see that Joe paused a video at 2 minutes and 10 seconds, that’s interesting, but it won’t help you tell your boss whether your training is helping meet the yearly goal of increasing revenue by 20%. Tracking these activities is great for improving the learning itself, though. With a year’s worth of eLearning behavioral data, you can see where learners are struggling in the training (whether in a module or a quiz area), figure out why, and improve the experience for your learners.
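To make this concrete, here is a minimal sketch of what mining behavioral data for “struggle” signals might look like. The event records, field names, and verbs below are purely illustrative assumptions, not the format of any particular LMS or analytics product:

```python
from collections import Counter

# Hypothetical eLearning event records (e.g., exported from an LMS).
# Learner names, modules, and verbs are invented for illustration.
events = [
    {"learner": "joe", "module": "safety-basics", "verb": "failed-quiz"},
    {"learner": "ana", "module": "safety-basics", "verb": "failed-quiz"},
    {"learner": "joe", "module": "safety-basics", "verb": "replayed-video"},
    {"learner": "ana", "module": "reporting", "verb": "completed"},
    {"learner": "joe", "module": "reporting", "verb": "completed"},
]

# Count "struggle" signals (failed quizzes, video replays) per module
# to spot where learners are getting stuck.
STRUGGLE_VERBS = {"failed-quiz", "replayed-video"}
struggles = Counter(
    e["module"] for e in events if e["verb"] in STRUGGLE_VERBS
)

# The module with the most struggle signals is the first candidate
# for content improvement.
print(struggles.most_common(1))  # → [('safety-basics', 3)]
```

The point is not the code itself but the habit it represents: aggregating behavior by module surfaces *where* to improve the experience, even though it still says nothing about business impact.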
Training Designed for Measurement
The first step in designing training that is optimized to be measured is to understand the goals of your organization. Meet with business stakeholders and ask: “What are the organizational goals for the year?” This will give you a few measurable key performance indicators (KPIs) that you can strategically incorporate into your training course.
In a CLO Show interview, Dr. Laura Paramoure, President and Founder of eParamus, put it this way: “It’s not measurement for measurement’s sake. It’s not even to deliver analytics. It’s really measurement so we can have quality assurance to know cause and effect.” In other words, you need to design and measure your training so that you can clearly see the cause (training) and the effect (impact on a business need). Data and analytics can’t help anyone if you don’t know what you are tracking and why you are tracking it.
When you design a course around a business need, such as reducing manufacturing defects by 25%, you start from that need and work backwards. From there, you can tailor content, and the questions linked to that content, around competencies. It’s important to understand that you can’t change learner personalities, but you can influence and build learner knowledge.
How to Begin
Once you are thinking “backwards” from a target goal, look for the “standard” of performance: the behaviors that, if changed, would help you reach the target. Once you have a standard that makes sense, repurpose it into specific objectives, and tie those objectives directly to your instructional strategies and evaluations. Having that standard laid out is what makes your results repeatable.
Laura Paramoure calls these three things (objectives, instructional methods, evaluations) a “learning object.” If you link your objective to the job, and carry the conditions and criteria of the job through the instructional methods and evaluations, then you will be clear on where you’re going, how you’re going to get there, and how you’ll know when you’ve arrived.
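One way to picture a “learning object” is as a simple record that keeps the objective, its instructional method, and its evaluation linked together, along with the business standard they serve. The structure and field names below are our own illustration of the idea, not a formal specification from Dr. Paramoure:

```python
from dataclasses import dataclass

# An illustrative "learning object": one job-linked objective bundled with
# how it is taught and how it is evaluated, tied back to a business KPI.
@dataclass
class LearningObject:
    objective: str            # observable, job-linked objective
    instructional_method: str # how the behavior is taught and practiced
    evaluation: str           # how mastery of the objective is checked
    linked_kpi: str           # the business standard the objective supports

# Hypothetical example built around the defect-reduction need above.
lo = LearningObject(
    objective="Correctly complete a weld-inspection checklist",
    instructional_method="Guided practice on sample inspection reports",
    evaluation="Scenario quiz scored against the checklist criteria",
    linked_kpi="Reduce manufacturing defects by 25%",
)

print(lo.linked_kpi)
```

Keeping the three pieces in one record makes the cause-and-effect chain explicit: every evaluation result can be traced through its objective back to the KPI it was designed to move.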
At Riptide, we are helping talent development teams connect learning to performance and prove learning’s impact to the business. We would love to connect to see how we can help your organization. Contact us here to learn more.