Converting Surveys Into Strategies

These Seven-Plus One Best Practices Help Add Value to Learning Management Initiatives.

An important question often asked of organizations creating and revising processes for learning measurement is, “What are the best practices?”

In an increasingly competitive corporate landscape, all parts of the organization, not just training, are being asked to justify their existence. The better prepared training organizations are to respond to such inquiries, the better their chances of creating and maintaining a value-added role within the company.


Plan the metrics before developing survey questions – Never ask a question on a data-collection instrument unless it ties to a metric you will use. Organizations often fail to create questions that collect meaningful data for stakeholders. Once you have finalized your set of metrics, the survey questions follow as an easy byproduct.
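This metrics-first discipline can be enforced mechanically. A minimal sketch, with illustrative metric names and question text (none of these come from the article):

```python
# Define the metrics first; every survey question must reference one.
# All names and wording below are illustrative placeholders.
metrics = {
    "instructor_effectiveness": "Average instructor rating per course",
    "job_impact": "Self-reported applicability of training to the job",
}

questions = [
    {"metric": "instructor_effectiveness",
     "text": "How effective was the instructor? (1-5)"},
    {"metric": "job_impact",
     "text": "How much of this training will you apply on the job? (1-5)"},
]

# Guard: reject any question that does not feed a defined metric.
orphans = [q["text"] for q in questions if q["metric"] not in metrics]
assert not orphans, f"Questions without a metric: {orphans}"
```

The guard at the end is the point: a question with no metric consumer never makes it onto the survey.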

Create a measurement process that is replicable and scalable –
It is essential to create a measurement process that can be replicated and scaled across all learning events without spending more on measurement than you do on training. To do this, you must accept that not everything needs in-depth, precise measures. The key is to use reasonable assumptions to predict and estimate learning effectiveness. Doing so provides a baseline for managing by measurement and lets you extract relevant data points that clearly demonstrate to management the value of the learning investments. This does not rule out having the flexibility to drill deep into a program the 5 to 10 percent of the time such an exercise is warranted. You simply want to make sure that what you do the other 90 to 95 percent of the time is replicable and scalable.

Ensure measurements are internally and externally comparable –
Related to best practice #2 is the concept of comparability. A one-off measurement exercise is significantly less powerful when you have no baseline for comparison. If you spend several months calculating a 300 percent ROI on your latest program, how do you know whether that is good or bad? Surely a 300 percent ROI is a positive return, but what if the average ROI on training programs is 1,000 percent?
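The arithmetic behind that comparison is simple, and putting it in code makes the point concrete. A minimal sketch, with entirely illustrative dollar figures and benchmark:

```python
# A program's ROI only becomes meaningful next to a benchmark.
# All figures are illustrative, not drawn from any real program.

def roi_percent(net_benefit, cost):
    """Standard ROI formula: (net benefit / cost) * 100."""
    return net_benefit / cost * 100

program_roi = roi_percent(net_benefit=30_000, cost=10_000)  # 300 percent
benchmark_roi = 1000  # hypothetical average ROI across comparable programs

verdict = "above" if program_roi > benchmark_roi else "below"
print(f"Program ROI: {program_roi:.0f}%, {verdict} the benchmark")
```

A 300 percent return reads very differently once the benchmark line exists, which is exactly why comparability must be designed in from the start.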

Ensuring your measurement process is comparable both internally and externally is critical. Comparing learning effectiveness for each course, comparing investment value by each client grouping, or comparing job impact by key program are just a few examples of how internal and external comparisons can provide a more accurate portrayal of how your training is really measuring up.

Use industry-accepted measurement approaches –
Management looks to the training group to lead the way in training measurement. It is the job of the training group to convince management that its approach to measurement is reasonable. This is not unlike a finance department that must convince management of the way it values assets. In both cases, the group must ensure the approach is based on industry-accepted principles that have proof of concept externally and merit internally. Examples emphasized in the learning industry include Donald Kirkpatrick’s “Four Levels of Learning” and Jack J. Phillips’s “ROI Process.” Whichever approach you use, keep in mind that you need to “adapt” it to your organization, not “adopt” it in your organization. There is a big difference: adapting means taking an approach and tweaking it to fit your needs.

Define value in the eyes of your stakeholders –
If you ask people what they mean by “return on investment,” you are likely to get more than one answer. Return on investment is in the eyes of the beholder. Understand how value is being defined. Ensure that your measurement process and your resulting metrics yield business intelligence that is of value to each stakeholder.

Leverage automation and technology –
Your measurement process must leverage technology and automation to do the heavy lifting in areas such as data collection, data storage, data processing and data reporting.

In today’s world of automation and technology, any company, large or small, can cost-effectively leverage technologies such as the Internet to collect data. Even when no computer is in the classroom, surveys can be e-mailed to participants after the training. Software exists to create standardized reports from the collected data. The end result is that you spend fewer resources collecting, processing, and reporting results and more time analyzing the data for improvement purposes or for showcasing the value of the training to management.
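Once responses land in a structured store, a standardized report is a few lines of code rather than manual tabulation. A minimal sketch, using an in-memory list to stand in for a survey database or CSV export (course names and field names are illustrative):

```python
# Aggregate collected survey responses into a standard per-course report.
from collections import defaultdict
from statistics import mean

responses = [  # stand-in for rows exported from a survey tool
    {"course": "Negotiation 101", "q_effectiveness": 4},
    {"course": "Negotiation 101", "q_effectiveness": 5},
    {"course": "Excel Basics",    "q_effectiveness": 3},
]

by_course = defaultdict(list)
for r in responses:
    by_course[r["course"]].append(r["q_effectiveness"])

for course, scores in sorted(by_course.items()):
    print(f"{course}: avg effectiveness {mean(scores):.2f} (n={len(scores)})")
```

The same script runs unchanged every reporting cycle, which is what makes the process replicable and scalable.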

Crawl, walk, run –
When designing a learning measurement strategy, it is nice to have a long-term vision, but don’t attempt to put the entire vision in place right out of the blocks. The best approach is to start with the low-hanging fruit that can be delivered in a reasonable time frame to prove the concept, demonstrate a “win,” and build a jumping-off point to the next level. You can learn a lot from a pilot or test run of your process, and you build the quick wins you need to sustain momentum and move the process forward in the organization.

Ensure your metrics have flexibility –
The last thing you want to do is roll out a measurement process that is inflexible. People will likely want to view the same data in many different ways, and your database must be architected to accommodate this, thereby creating measurement flexibility.

Most commonly, you should “tag” every data element with the following: instructor name (if applicable), learning delivery mode, location where the training was held, learning provider (internal or external), date of training, course name, curriculum (the group the course belongs to), and program (the group the curriculum belongs to). To make matters easier, technologies such as OLAP cubes can slice this data in a nearly unlimited number of ways, satisfying all of your data requests.

Finally, flexibility also lies in your ability to “roll up” the data, and this too must be thought through before data collection. Companies often ask different questions for each course. That is good tactical detail but not good strategic intelligence, because disparate data is harder to aggregate into higher levels. Senior management is far more likely to want to view aggregate data than class- or course-specific data.
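Tagging and roll-up work together: once every record carries course, curriculum, and program tags, the same responses can be aggregated at whichever level a stakeholder asks for. A minimal sketch, with illustrative tags and scores:

```python
# Tagged records roll up to course, curriculum, or program level on demand.
# All tag values and scores are illustrative placeholders.
from collections import defaultdict
from statistics import mean

records = [
    {"course": "Excel Basics",  "curriculum": "Desktop Skills",
     "program": "Professional Development", "score": 4},
    {"course": "Word Basics",   "curriculum": "Desktop Skills",
     "program": "Professional Development", "score": 5},
    {"course": "Leading Teams", "curriculum": "Management",
     "program": "Professional Development", "score": 3},
]

def roll_up(records, level):
    """Average scores at any tag level: 'course', 'curriculum', or 'program'."""
    groups = defaultdict(list)
    for r in records:
        groups[r[level]].append(r["score"])
    return {key: mean(scores) for key, scores in groups.items()}

print(roll_up(records, "curriculum"))  # averages per curriculum
print(roll_up(records, "program"))     # one program-level average
```

Because the roll-up level is just a parameter, the tactical course-level detail and the strategic program-level view come from one dataset, which is the flexibility the tagging scheme buys you.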

