Can you forecast training and development’s impact on performance? You sure can!
There’s data available that predicts the amount of change in performance you can expect because of training.
This data comes from the best source there is when it comes to predicting performance change. It comes from the people who’ve participated in the training.
When training and development fulfills its highest purpose, it impacts performance and empowers people to reach goals. Performance-focused learning and development aligns organizational goals with performance requirements. Measurable data is the evidence of what we can expect from employee performance because of training and development.
The highest goal for training and development is impact on performance. With surveys administered immediately after a training program, you can measure participants’ estimates of learning and development’s impact on their day-to-day work. The data predicts how much of what they learned they will apply and how often. You get predictive insight into change in performance and data-based evidence for aligning training with performance expectations and outcomes.
Post-program surveys, particularly web-based ones, provide actionable insight and valuable data about perceptions of training. We gain insight into the extent to which people believe they can act on what they learned. The following examples use surveys to capture predictive data.
Predicting Training’s Impact on Change in Performance
Data that measures beliefs about training’s impact on performance predicts the likelihood of change. The statement, “Training will improve my performance on the job,” is a direct way of collecting opinions about training’s anticipated impact on behavior. Use a Likert scale for levels of agreement: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree.
The results show the extent to which you can expect a change in performance. If you’re seeing high numbers for “Agree” and “Strongly Agree”, great! Higher numbers for “Neutral”, “Disagree” and “Strongly Disagree” are an opportunity to examine why participants view the training as having less impact on their performance. Is it not relevant to their real-world, day-to-day work? Are there barriers that prevent them from applying what they learned? Asking these questions and finding the answers is what makes the data actionable.
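As an illustration, the agreement results could be tallied with a few lines of code. This is a minimal sketch; the list of responses here is hypothetical sample data, not from any real survey.

```python
from collections import Counter

# Hypothetical post-program responses to the statement
# "Training will improve my performance on the job."
responses = [
    "Strongly Agree", "Agree", "Agree", "Neutral", "Agree",
    "Strongly Agree", "Disagree", "Agree", "Neutral", "Strongly Agree",
]

counts = Counter(responses)

# Share of respondents who expect the training to improve performance.
favorable = counts["Agree"] + counts["Strongly Agree"]
favorable_share = favorable / len(responses)

print(f"Favorable: {favorable_share:.0%}")  # prints "Favorable: 70%"
```

The same tally of the “Neutral”, “Disagree” and “Strongly Disagree” counts gives you the group worth following up with.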
Predicting Application of Learning
Data that predicts how much of their learning people expect to apply produces valuable insight into learning effectiveness. The question, “How much of what you learned will you apply?” forecasts both performance change and the effectiveness of training. Use a Likert scale for amounts: Very Little (less than 20%), Some (30% to 40%), Moderate Amount (50% to 60%), Large Amount (70% to 80%), Substantial Amount (90% or more).
The results show the estimated volume of learning transferred to performance. Participants estimating that large to substantial amounts of learning will be applied to their work is a strong indication of high-impact learning. Predictions of lower volumes of applied learning are an opportunity to examine the proportion of learning that is actionable. For example, how much of what is being taught is theoretical and aspirational versus practical and realistic? This is another example of data being actionable.
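One way to turn the banded answers into a single estimate is to map each band to a midpoint percentage and average across respondents. The bands follow the scale above; the midpoint values and the response data are illustrative assumptions, not part of the survey itself.

```python
# Midpoints (as fractions) for each band in the application-amount scale.
# These midpoint values are an illustrative assumption.
MIDPOINTS = {
    "Very Little": 0.10,         # less than 20%
    "Some": 0.35,                # 30% to 40%
    "Moderate Amount": 0.55,     # 50% to 60%
    "Large Amount": 0.75,        # 70% to 80%
    "Substantial Amount": 0.95,  # 90% or more
}

# Hypothetical responses to "How much of what you learned will you apply?"
responses = ["Large Amount", "Moderate Amount", "Substantial Amount",
             "Large Amount", "Some"]

# Average predicted share of learning that will be applied.
expected_application = sum(MIDPOINTS[r] for r in responses) / len(responses)
print(f"Expected share of learning applied: {expected_application:.0%}")
```

A falling average across program cohorts is the kind of signal that prompts the “theoretical versus practical” examination described above.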
Predicting Frequency of Applied Learning
Data that predicts how often people believe they’ll apply what they learned forecasts impact on performance through regularity of behavior. The more often people apply what they learned, the greater the likelihood of performance impact. The question, “How often do you believe you’ll apply what you learned?” collects the data. Use a Likert scale for frequency: Rarely (less than 20% of the time), Sometimes (30% to 40% of the time), Frequently (50% to 60% of the time), Often (70% to 80% of the time), Very Often (90% or more of the time).
The results show the rate at which people expect to use what they learned. Higher predicted frequencies of applying what was learned are strong indicators for achieving goals. Lower predicted frequencies indicate a barrier to achieving goals and are an opportunity to look at course design and the dynamics of real-world work. What are the barriers to applying what was learned as often as is necessary to meet performance expectations? How often does what was learned need to be applied to achieve goals? Is the course designed and communicated in a way that clearly identifies how often the behaviors need to be performed to meet performance expectations?
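The same midpoint idea works for the frequency scale, and flagging the share of respondents who predict low frequency makes the “lower rates” signal concrete. The midpoints, the responses, and the choice to treat “Rarely” and “Sometimes” as low are all illustrative assumptions.

```python
# Frequency bands mapped to illustrative midpoint rates (an assumption).
FREQUENCY_MIDPOINTS = {
    "Rarely": 0.10,      # less than 20% of the time
    "Sometimes": 0.35,   # 30% to 40% of the time
    "Frequently": 0.55,  # 50% to 60% of the time
    "Often": 0.75,       # 70% to 80% of the time
    "Very Often": 0.95,  # 90% or more of the time
}

# Hypothetical responses to
# "How often do you believe you'll apply what you learned?"
responses = ["Often", "Frequently", "Rarely", "Very Often", "Sometimes", "Often"]

# Average predicted rate of performing the new behavior.
avg_rate = sum(FREQUENCY_MIDPOINTS[r] for r in responses) / len(responses)

# Share predicting low frequency -- a prompt to look for barriers.
low_share = sum(r in ("Rarely", "Sometimes") for r in responses) / len(responses)

print(f"Average predicted frequency: {avg_rate:.0%}")
print(f"Low-frequency share: {low_share:.0%}")
```

A sizable low-frequency share points directly at the barrier and course-design questions listed above.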
When training and development is aligned with performance expectations for reaching goals, it fulfills its highest purpose. Data that predicts impact on performance is the evidence that shows what change we can expect because of training. The team at Riptide Software or Kevin M. Yates would be happy to help your L&D program become measurable, no matter where you are in your journey. Want actionable survey examples? Kevin Yates created a free resource packed with ideas to get the most out of your surveys: Check it out here.
Get Started on your data-driven journey by signing up for “The L&D Measurement Journey” webinar:
About the Authors:
Kevin M. Yates is a data detective for training, learning and development answering the question, “Did training work?” with facts. He is also creator of The COURAGE Model©. Connect with Kevin on his website, LinkedIn, Facebook, YouTube and Twitter.
Elements makes eLearning easier through enabling, behavior-focused learning technology that provides insightful analytics (Storepoints Learning Record Store), walk-through software training (Waypoints), and adaptive eLearning courseware (Learnpoints).