Measuring Learning 2.0 for Impact

Great moments in science, business and exploration have always come with the advent of new technology. The steam engine reduced travel time across a continent. The jet engine made transcontinental travel feasible. Rockets opened the final frontier. The printing press made books available to the masses. In modern times, the Internet has made knowledge abundant. Moreover, many of the tools on the Internet support the latest knowledge paradigm, Learning 2.0.

WHAT IS LEARNING 2.0?

In an article published on an Infosys blog, Sontakey (2009) described Learning 2.0:
“In the traditional methods of learning like classroom training and e-learning, [the] learner is just fed with information and [the] teacher dictates the learning. Learning 2.0 is a fundamental shift in learning methodology that places [the] learner in the center of learning and allows the learner to control the learning. Learning 2.0 enables an employee to control and track individual
learning using a set of Web 2.0 tools and systems that enable collaboration. Learning 2.0 doesn’t replace the conventional means of learning; it augments the conventional methods of learning with a set of Web 2.0 tools and systems.”
The tools that facilitate Learning 2.0 are numerous. They include, but are not limited to, blogs, wikis, communities of practice, tagging, RSS feeds, collaborative workspaces, podcasting, virtual worlds and mobile devices. Melinda Sample is the Senior Director of Learning Technology and Integration at Pharmaceutical Product Development, Inc., a leading global contract research organization providing drug discovery, development and lifecycle management services. She characterizes her business environment as a place where Learning 2.0 is necessary:
“Our culture is very focused on utilization targets. Our ability to reduce the cycle time for a new product or service allows our pharmaceutical partners to go to market faster and start to recoup a return on their investment. Therefore, it is nearly impossible to pull people away for training. Knowledge and performance support needs to be available to our employees 24×7 and at their fingertips just in time. Informal and social learning are becoming increasingly important in our workplace and [are] a part of the overall learning strategy moving forward, so we can bring the benefits to both our internal and external customers.”
THE 70:20:10 LEARNING APPROACH AND INFORMAL LEARNING

Michael Lombardo and Robert Eichinger describe the 70:20:10 approach to learning in their book, The Career Architect Development Planner. On-the-job experiences, tasks, and problem solving comprise 70 percent of learning. Coaching, feedback and examples (both good and bad) contribute another 20 percent. Formal and traditional training, such as courses and reading, contributes only 10 percent. With only 10 percent classified as formal and controlled by Learning and Development (L&D) professionals, 90 percent of learning is by default informal.

Charles Jennings (2011), Managing Director of Duntroon Associates and a member of the Internet Time Alliance, indicates that cost and timeliness are driving organizations to adopt the 70:20:10 learning model. Not only are the costs of formal training high, but learning has to occur at the speed required by business: knowledge must be delivered just in time for the learner to perform a task on the job. In 2009, KnowledgeAdvisors conducted research on the types of informal learning that organizations pursue. Figure 1 shows the responses for four main categories of informal learning: mentoring and coaching, communities of practice, virtual knowledge sharing, and performance support systems.

Jennings asserts that informal learning cannot be managed; each informal learner fundamentally self-manages. However, managers can encourage it by providing experiences and offering coaching that corrects errors and reinforces the right performance. Informal learning is also facilitated by context: if a manager provides training in the moment with support (e.g., job aids and coaching), learning is faster, more effective and more efficient.

Alan Bellinger of the Learning and Performance Institute in Coventry, England argues that "blended learning" should refer to a genuine blend of formal and informal learning, not merely to adding an e-learning element to a formal learning event. Having seen the 70:20:10 model applied across many organizations, Bellinger observes the power of re-designing learning interventions and suggests that informal learning should no longer be left to serendipity. Whether the learning is traditional or informal, the link from learning to performance is key to assessing its benefits.

MEASURING THE BENEFITS OF LEARNING 2.0

Learning 2.0 organizations emerge when learners pull information from sources to solve problems rather than having information pushed to them. An organization benefits from this pull rather than push nature of Learning 2.0. Learning 2.0 development is:

>>relevant, because it is self-guided;
>>efficient, because learners gather only what they need to do their jobs; and
>>timely, because learners do not have to wait for the next available classroom course.

An organization optimizes these benefits by using the right technology tools (e.g., an LMS, a knowledge portal, an e-learning system) to make learning more efficient, practical and repeatable. But the vastness and complexity of Learning 2.0 content makes it difficult to measure its impact. To measure the impact of Learning 2.0 effectively, L&D practitioners need to concentrate on three critical aspects:

>>Defining the scope of the learning components: Knowledge sources proliferate, from Google and SkillSoft’s Books24x7 to just-in-time e-learning modules and communities of practice. Other informal learning elements include coaching and mentoring, social media, knowledge portals, electronic performance support systems, job aids, and on-the-job experience. In order to evaluate the impact of learning, it is essential to constrain, or at least define, the scope to a reasonable and measurable set of content.

>>Measurement instruments: Measurement instruments vary substantially and include surveys, checklists, focus groups, key-person interviews, and web analytics. Which instruments are best for gathering information about each of the various components? It depends entirely on the components of learning that are consumed.

>>Timing: Learning 2.0 doesn’t occur on a schedule. Rather, it happens at the point of need throughout the work week. Evaluators must determine when to intervene with measurement instruments. What is the best time in the learning cycle to reach out to the learner to gather insights? Should the learner who reads a book on Books24x7 receive a survey every time she reads a summary? When should a learner be surveyed for using a web portal or a community of practice?

When measuring the impact of Learning 2.0, the instruments must match the scope and timing of the learning that occurs. For example, if a knowledge portal is a key component of the learning, then web analytics such as the amount of time spent reading certain topics are worth gathering as formative data. Other tools, such as pop-up surveys and micro-polls, can also be added to gather information about quality, usefulness, expected performance improvement, and business impact. With these tools, brevity is essential so that the evaluation process is only minimally invasive.

Figure 2 represents a simple framework for measuring the impact of Learning 2.0. To measure impact effectively, the learning component must be identified, and the appropriate instrument must be deployed at the right time. When multiple sets of content and multiple instruments are used, a system capable of organizing, storing, analyzing and reporting results is also required. Figure 3 shows an example of how the framework in Figure 2 can be used to measure the impact of a series of Learning 2.0 components. More information about measuring informal learning is available via KnowledgeAdvisors, including how to use evaluation systems to collect, analyze, and report results from disparate Learning 2.0 sources.
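The framework does not prescribe any particular implementation, but a minimal sketch can make the three aspects concrete. The Python snippet below is illustrative only: the component names, instruments, sampling rates and thresholds are hypothetical assumptions, not values drawn from the article or from any KnowledgeAdvisors product. It simply pairs each in-scope learning component with a measurement instrument and a simple timing rule, so that, for example, a Books24x7 reader is not polled after every summary.

```python
import random
from dataclasses import dataclass

@dataclass
class MeasurementPlan:
    """One row of the framework: a learning component, the instrument
    used to measure it, and a timing rule for deploying that instrument."""
    component: str      # in-scope Learning 2.0 component
    instrument: str     # survey, micro-poll, web analytics, interview, ...
    sample_rate: float  # fraction of usage events that trigger the instrument
    min_events: int     # wait until the learner has used the component this often

def should_deploy(plan: MeasurementPlan, learner_event_count: int) -> bool:
    """Deploy the instrument only after repeated use, and only for a sample
    of events, so that evaluation stays minimally invasive."""
    if learner_event_count < plan.min_events:
        return False
    return random.random() < plan.sample_rate

# Hypothetical scope definition: components, instruments and timing in one place.
plans = [
    MeasurementPlan("Books24x7 summaries", "micro-poll", sample_rate=0.10, min_events=5),
    MeasurementPlan("knowledge portal", "pop-up survey + web analytics", 0.05, 10),
    MeasurementPlan("community of practice", "quarterly survey", 1.00, 1),
]

for plan in plans:
    print(plan.component, "->", plan.instrument,
          "| deploy now?", should_deploy(plan, learner_event_count=12))
```

Records like these, multiplied across many components and instruments, are exactly what the organizing, storing, analyzing and reporting system described above would need to manage.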
MOVING BEYOND SATISFACTION METRICS

Over the years, learning organizations have attempted to measure the impact of traditional learning using surveys and polls, but most settle for reaction or satisfaction metrics. The same thing is likely to happen with informal learning. However, it is not too difficult to measure impact. It begins with asking the right questions in your poll or survey—questions that relate to application, performance improvement, and business outcomes. While it is important to ask learners whether they like the user interface on a knowledge portal, it is more important to ask, “Will you use the information you just learned?” and “Will it improve your performance?” Quantifying the responses to these questions begins the journey of measuring for impact.

For more than 50 years, Kirkpatrick’s Four Levels of Evaluation has provided a framework for measuring impact (Kirkpatrick, 1998). Similarly, the Phillips ROI Methodology (Phillips, 1997) and Bersin’s Learning Impact model (Bersin, 2008) provide perspectives on the impact of training. Questions common to all of these approaches include the following:

>>Did learning transfer and skill acquisition occur, and if so, how much was learned?
>>Will the learner apply what was learned? How quickly and how effectively?
>>Will the application of learning improve individual and organizational performance?

Answers to these questions provide some insight into skill acquisition and performance. Follow-up evaluations can solidify the impact of learning on performance.
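As a concrete illustration of what quantifying the responses can look like, here is a minimal sketch. The question names, the 5-point agreement scale and the sample answers are hypothetical assumptions, not a prescribed instrument; the point is simply that impact-oriented answers can be rolled up into a comparable score.

```python
# Hypothetical responses on a 5-point agreement scale (5 = strongly agree)
# to two impact-oriented questions from a pop-up survey or micro-poll.
responses = [
    {"will_apply": 5, "will_improve_performance": 4},
    {"will_apply": 4, "will_improve_performance": 4},
    {"will_apply": 2, "will_improve_performance": 3},
    {"will_apply": 5, "will_improve_performance": 5},
]

FAVORABLE = 4  # treat answers of 4 or 5 as favorable

def percent_favorable(responses, question):
    """Share of learners answering 4 or 5 to a given question."""
    favorable = sum(1 for r in responses if r[question] >= FAVORABLE)
    return 100.0 * favorable / len(responses)

for question in ("will_apply", "will_improve_performance"):
    print(f"{question}: {percent_favorable(responses, question):.0f}% favorable")
```

Scores like these can be compared across Learning 2.0 components, and a follow-up evaluation can revisit the same questions later to confirm whether the expected application actually occurred.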
Unfortunately, it is difficult to know when to send a follow-up evaluation because the learning process is likely ongoing. Another approach is to monitor the performance management process within the organization: talk to managers about the performance improvement of the learners who are using the Learning 2.0 tools.

IMPACT ON BUSINESS OUTCOMES

It is also important to consider measures of business outcomes. That is, does Learning 2.0 contribute to critical business measures such as increased customer satisfaction, reduced risk, reduced cycle time, increased sales and increased revenue? These measures may be challenging to gather, but it is even more difficult to isolate the impact of the Learning 2.0 components that contribute to these outcomes. Experimental designs offer the most rigorous approach for isolating impact, but they are often impractical because of time and resource constraints; the greatest hurdle is finding an appropriate comparison group that has not participated in the Learning 2.0 components. Phillips advocates asking learners to estimate how much their job performance will improve, having them identify which components helped, and adjusting the estimates for impact. This estimate, isolate, and adjust approach is a useful and valid technique that is worth exploring.
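To show how such learner estimates might be turned into a number, here is a minimal sketch in the spirit of the estimate, isolate, and adjust approach. The field names and figures are hypothetical, and the confidence discount is offered as one common way to make self-reported estimates conservative, not as the Phillips methodology itself.

```python
# Hypothetical learner self-estimates gathered by a follow-up survey.
# Each record: estimated performance improvement, the share of that improvement
# the learner attributes to a Learning 2.0 component (isolate), and the
# learner's confidence in the estimate (adjust).
estimates = [
    {"improvement_pct": 20, "attributed_to_component": 0.50, "confidence": 0.80},
    {"improvement_pct": 10, "attributed_to_component": 0.75, "confidence": 0.60},
    {"improvement_pct": 30, "attributed_to_component": 0.40, "confidence": 0.90},
]

def adjusted_impact(record):
    """Estimate, isolate, adjust: discount the raw estimate by attribution
    and by the learner's own confidence to keep the figure conservative."""
    return (record["improvement_pct"]
            * record["attributed_to_component"]
            * record["confidence"])

per_learner = [adjusted_impact(r) for r in estimates]
print("Adjusted improvement per learner (%):", [round(x, 1) for x in per_learner])
print("Average adjusted improvement (%):", round(sum(per_learner) / len(per_learner), 1))
```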

CHANGING THE WORLD WITH LEARNING 2.0 AND MEASUREMENT

The question that often accompanies Learning 2.0 interventions is the same question that still applies to formal learning—does it have an impact on the organization? The likely answer is yes, it does. The challenge for L&D professionals is quantifying that impact. A fairly simple measurement approach can yield answers. By concentrating on the components, the measurement instruments, and the timing, a capable learning organization should be able to assess the impact of Learning 2.0 almost as easily as it assesses the impact of traditional learning. It will certainly be a challenge, but such are the struggles of science, business and exploration. Bon voyage!

John R. Mattox, II is the Director of Research at KnowledgeAdvisors, a human capital metrics company that offers Metrics That Matter® software and helps improve learning effectiveness within organizations.

References

Bersin, J. (2008). The training measurement book: Best practices, proven methodologies and practical approaches. San Francisco, CA: John Wiley & Sons.

Hanley, M. (2008). Introduction to nonformal learning. E-Learning Curve Blog. Retrieved November 19, 2011, from http://michaelhanley.ie/elearningcurve/introduction-to-non-formal-learning-2/2008/01/28/

Jennings, C. (2011, October). 70:20:10. Retrieved March 26, 2012, from https://www.youtube.com/watch?v=t6WX11iqmg0&feature=player_embedded#

Kirkpatrick, D. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco, CA: Berrett-Koehler Publishers, Inc.

KnowledgeAdvisors (2009). Informal learning. Retrieved March 26, 2012, from http://www.knowledgeadvisors.com/wp-content/uploads/2009/09/Informal_Learning.pdf

KnowledgeAdvisors (2010). Informal learning measurement. Retrieved April 2, 2012, from http://www.knowledgeadvisors.com/wp-content/uploads/2010/06/Whitepaper_Informal_Learning_Measurement.pdf

Lombardo, M. M., & Eichinger, R. W. (1996). The career architect development planner. Lominger Limited, Inc. ISBN 0965571211.

Phillips, J. J. (1997). Return on investment in training and performance improvement programs. Houston, TX: Gulf Publishing Company.

Sontakey, A. (2009, May 21). What is Learning 2.0? Infosys® Building Tomorrow’s Enterprise blog. Retrieved March 26, 2012, from http://www.infosysblogs.com/learningservices/2009/05/what_is_learning_20.html
