Most of us devote significant effort to collecting feedback on and assessments of our training. We seek to improve our delivery, our materials and the end-user experience, while also measuring the business impact to justify future training resources. For now, let’s think about the latter.
Of course, any business wants a clear picture of the effect of all of its initiatives, and training is no exception. But this raises a question: what do managers actually want us to tell them?
What is really interesting is that the very things executives say their organisations measure now are the things they are least interested in seeing reported (see Figure 1: What CEOs Really Think). Conversely, their top three priorities are measures their organisations do not currently report.
| Measure | We Currently Measure This | We Should Measure This in the Future | My Ranking of the Importance of This Measure (1 = most important) |
|---|---|---|---|
| Inputs: last year, 78,000 employees received formal learning | 94% | 85% | 6 |
| Efficiency: formal training costs $2.15 per hour of learning consumed | 78% | 82% | 7 |
| Reaction: employees rated our training very highly, averaging 4.2 out of 5 | 53% | 22% | 8 |
| Learning: 92% of participants increased knowledge and skills | 32% | 28% | 5 |
| Application: at least 78% of employees are using the skills on the job | 11% | 61% | 4 |
| Impact: our programs are driving our top five business measures in the organisation | 8% | 96% | 1 |
| ROI: five ROI studies were conducted on major programs, yielding an average ROI of 68% | 4% | 74% | 2 |
| Awards: our learning and development program won an award from the ATD | 40% | 44% | 3 |
Figure 1: What CEOs Really Think. Source: Jack J. Phillips, Chairman, ROI Institute.
It’s no secret that effective, holistic measurement can be tricky. As trainers we’re all familiar with the Kirkpatrick model of evaluation, and we’re acutely aware of just how difficult it is to get beyond Level 1 (reaction) or Level 2 (learning). As a result, even well-known approaches such as this can fail to achieve real outcomes in practice.
Sometimes measurement is just too difficult, or we’re worried about what the evaluation might uncover about our design. Perhaps the process of measuring is simply too expensive; few of us have the resources to measure what we would like, the way we’d like to measure it.
The moment we select our metrics, the compromises begin. We may want to conduct behavioural change analyses – but most often the only practical option is a post-training feedback survey. Yet if we accept that evaluation is a fundamental part of good learning design, it must be a priority.
We have to pick the right metrics to measure. Easier said than done, I know. After all, few of us are data analysts. We might know what we want to see, but working out which data will provide that information is not so simple. And even after selecting metrics, will our choices support our arguments if our superiors would have opted for a different selection?
So, what to do?
My advice – think like an Instructional Designer and do your action mapping!
We begin with the end in mind. Ask questions such as:
- What does success look like?
- How will things look if the learning has been a total success?
- What would constitute an outstanding ROI, or which KPI do we want to see to establish success?
Isolate the impact of the learning and map backwards from there: what will success look like? How will you measure it? Which behaviour changes lead to that success? From this you can establish a baseline against which to measure growth, and a way to quantify change relative to it.
If we work back from our end goals to determine our metrics, our choices become clearer. For example:
- If a course is intended to reduce the time taken to perform a task, then rather than measure quiz scores, why not ask participants to come to class with a note of how long that task took them, on average, in the preceding week, and ask again a week after the course? (A rough sketch of this comparison follows this list.)
- If the course is focussed on improving quality, can we collect data from the complaints desk, the support team or the returns-processing department before and after training?
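Here is a minimal sketch of that before-and-after comparison in Python. The figures and variable names are invented for illustration; in practice the times would come from your participants’ own records:

```python
# Illustrative before/after comparison for a course meant to reduce task time.
# All numbers are invented: each value is one participant's self-reported
# average time (in minutes) to complete the task.

before = [42, 38, 55, 47, 50]  # the week before the course
after = [31, 29, 44, 35, 38]   # the same participants, one week after

baseline = sum(before) / len(before)
post_course = sum(after) / len(after)
reduction = (baseline - post_course) / baseline * 100

print(f"Baseline: {baseline:.1f} min; after training: {post_course:.1f} min")
print(f"Average time reduction: {reduction:.1f}%")
```

The same pattern covers the quality example: swap task times for weekly complaint or return counts and compare against the pre-training baseline.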
What’s interesting is that these kinds of data may well already be collected in your organisation. A little lateral thinking might give you access to metrics that let you report: “This course cost us $1,500 to deliver and returned $15,000 to the business the following month.” Imagine what that does for a training budget application!
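The arithmetic behind a claim like that is the standard ROI formula: net program benefits divided by program costs, multiplied by 100. A quick sketch using the illustrative figures above (attributing the full $15,000 benefit to the course is, of course, an assumption you would need to defend):

```python
# Standard ROI calculation: ROI (%) = (benefits - costs) / costs * 100.
# The figures are the illustrative ones from the example above.

cost = 1_500      # cost to deliver the course
benefit = 15_000  # return measured in the following month

roi = (benefit - cost) / cost * 100
print(f"ROI: {roi:.0f}%")  # ROI: 900%
```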
Most business leaders want success stories, and to know the value of those successes. A quiz on completion asking whether trainees had fun is more straightforward, but the measurement is of little value. Learning is a means to an end, and training is inevitably prioritised according to the results you collect on the metrics you select.
Approaching the task like this allows us not only to build a stronger case for resources but also to implement and improve successful training programs. Every part of your organisation is tasked with measuring its success and improving its contribution over time. Learning and development is no exception – quality evaluation aligned to clear goals is a core part of our job.