Beyond Learning ROI
Written by Sherwin Chen
For learning professionals, return on investment (ROI) is often considered the pinnacle of learning measurement. But is there more we should be looking at beyond ROI to better assess the impact of learning?
In general, ROI has two parts – the investment, which is how much was spent, and the return, which is how much was made. In simple terms, the “return” is calculated by taking the selling price and multiplying it by the units sold. Put another way, it is how much one makes as a result of a given investment.
Measuring ROI in learning is more or less the same. For a given investment in a training initiative, what did the organization make back? Now calculating the return can be tricky, but the principle remains the same.
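The arithmetic described above can be sketched in a few lines of code. The figures below are entirely hypothetical, chosen only to illustrate the calculation:

```python
# A minimal sketch of the ROI arithmetic described above.
# All figures are hypothetical, purely for illustration.

def simple_return(price_per_unit: float, units_sold: float) -> float:
    """The 'return' in the simple sense: selling price times units sold."""
    return price_per_unit * units_sold

def roi(gain: float, investment: float) -> float:
    """Classic ROI: net gain relative to what was spent."""
    return (gain - investment) / investment

# A hypothetical training program: $50,000 spent on the initiative,
# credited with $65,000 in added revenue or savings.
gain = 65_000
investment = 50_000
print(f"ROI: {roi(gain, investment):.0%}")  # prints "ROI: 30%"
```

Note that this captures only the monetary side of the equation – which is exactly the limitation the rest of this article explores.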
Is What Something Costs the Same as What It Is Worth?
So what’s missing? In business, a marketing professional will tell you that to consumers, a product or service is generally worth more than the price paid. Think about it – who would buy anything if it were worth less to her than what she paid?
Take, for example, a new tablet computer. For the company making the tablet, the return on a single tablet is the few hundred dollars it sold for. The value of that tablet to the buyer, however, is more than that – and by how much, only he can say. How much is being able to manage his email from an airport worth? Or perhaps it is the ability to watch a movie anywhere in the house that he values. Or perhaps just having the latest technology is what he’s after.
Does This Apply to Training?
By measuring only ROI, a large part of the actual value of a product is being missed. Economists know this and have models for measuring value beyond just price times units. The same is true for learning. Anyone attending a training program or taking an e-learning course pays for it somehow – in time, if not in money. And in principle, the value she receives from the program should be greater than the time and cost of attending (that may not always be the case, and that’s not necessarily a bad thing – but that’s a topic for another day).
Most organizations can measure a learner’s satisfaction with a training program and others attempt to quantify the ROI. But by stopping there, they are missing an opportunity to capture and demonstrate the true value of the program – even if they can show a positive ROI.
A product’s specifications define its features, but only the consumer can tell you what they’re worth. Likewise, learning objectives tell us what a program is designed to teach, but we need to ask the learner what specific value, if any, he or she received from it.
Did a program that was designed to improve the skills of the learner actually do that? That we know how to measure. However, even if it did, were the benefits to the learner enough to justify the money and time spent? That’s a different question. What would we learn if we asked our learners and organizations to define and perhaps even quantify the value that they received from the training they take?
In previous thoughts I presented the idea that measuring the monetary return on training, i.e., Learning ROI, does not capture the full value of a training program. Just as a tablet computer is worth more to its buyer than the price paid, learning teams should seek to understand whether the value received from a training program exceeded what the learner paid, in time and money, to attend.
Difference Between Features and Benefits
Salespeople are typically taught to talk not just about features, but about benefits as well. Anti-lock brakes modulate a car’s braking in slippery conditions, true. But the benefit is that the driver can maintain control of the car in the winter, which in turn may save his life. What’s that worth?
It is easy for us to build training programs to a set of specifications – we call them learning objectives . . . “After this program, the learner will be able to . . .“ In fact using tools such as pre- and post-course assessments, we can reliably measure whether those objectives were met. But if we’ve done our jobs right, the value of the training to the learner should be more than just whether they can demonstrate a new skill.
How Do We Know What the Learner Values?
So while we know what learning objectives were achieved, do we know what the program was worth to the learner? For example, many organizations have training for new supervisors. There are already many versions of new manager programs, so it would be easy for a training team to look at a sample of them and design a program that has all their best features.
But that would be like a company loading a product with a feature just because it is in a competitor’s product. So instead, many companies conduct market research (focus groups, surveys, etc.) to determine what features their customers actually want and often what they are willing to pay for them.
The same should apply to learning. Before designing a new supervisor program, one could conduct interviews and focus groups with current managers and ask questions such as:
- What were the biggest challenges they faced as new managers?
- What do they know now that they wished they had known when they first became managers?
- What advice or information would they pass on to a new manager?
In other words, ask your customers what they would find valuable and then use that as a basis to design your program.
But We Already Do That . . .
Fair enough. Hopefully for you, that is a common practice. But that’s only half the equation. After you’ve built and delivered the program, you need to ask learners whether or not those benefits were realized. So in addition to measuring skills and knowledge, your focus should also be on outcomes and value.
To get at this, ask learners what they are doing differently as a result of the training and, more importantly, how that has benefited them. If you are able, take this a step further and see whether they can quantify that benefit and tell you what it is worth. This will give you a more complete picture of the actual impact the training program has had.
One caution: just as you can’t judge a car buyer’s long-term satisfaction as he drives off the lot, you can’t survey learners about impact as they are walking out the door. This can only be done after they’ve been back on the job for a while. As designers, however, we can use this to our advantage – but more on that in the future.