“Our training program was a success!” Was it though? How do you know?

Most of my clients ask participants how much they liked the program. If the participants say they liked it, many learning professionals claim training success.

Sure, we want participants to enjoy the learning experience, but enjoyment isn’t our goal. Saying you liked the course is imprecise: maybe you liked the facilitator’s jokes, or the lunch served by catering. And satisfaction surveys can’t measure achievement of learning objectives, which are about behavior. Changed performance is a better indicator of success.

More often than not, learning professionals assume it’s too difficult to measure performance change. It really isn’t. Here are five metrics used by the business leaders I’ve worked with.

  • Satisfaction: This is the “did you like it?” measurement. It gives us information on the participants’ impression of the program. Typically we use end-of-session surveys about the quality of materials, program delivery, and the overall experience. In many cases, this is where evaluation ends. However, to truly define success, you have to go further.
  • Learning: This gauges the extent to which participants believe the program achieved its objectives, and how well reaching those objectives met their development needs. Often we ask participants to report what they learned, but sometimes we can use knowledge checks or an end-of-session test.
  • Application: This measures how well participants apply what they learned to their jobs. In most cases, I recommend asking participants, at the end of the program, what they will apply on the job. Then, follow up 60 to 90 days later and assess what they actually applied. I also recommend that supervisors rate participants’ application.
  • Performance: This is an assessment of changes in job performance. The evaluation typically targets key business indicators such as quality of work, customer satisfaction, speed-to-market, and sales. Ideally, we would measure the extent to which a participant’s new skills affect business results. Again, I recommend an end-of-course survey asking participants to predict how their new skills will change their performance. Then follow up 60 to 90 days later and measure how performance actually changed and how those changes affected business metrics. Supervisor ratings are valuable here as well.
  • Recommendation: This is related to satisfaction. This evaluation asks participants whether they would recommend the program to someone else, making it a good measure of the program’s perceived value.

Collect this information consistently across all of your programs. Consistency lets you compare the performance of individual programs or courses, and it’s also helpful to measure the same program over time. These broad views of your curricula help you pinpoint problems and focus improvement efforts.

Gathering evaluation data doesn’t have to be difficult. The metrics are straightforward and easy for business leaders to understand. Focusing on these five measures will help you build and maintain strong learning programs that deliver business value. And it will help you demonstrate training success to your stakeholders.