Measuring Intangibles: Where to Begin


By Christo Lute, Director of Advanced Analytics,
Analytics Guild

At some point in our careers, it’s likely we’ll receive some type of coaching, training, or skill-building, whether one-on-one, in a workshop, or among team members. The goal of training is often to improve some aspect of our skills, technical or social. But how do we know when we’ve achieved that goal? Simply asking afterward whether coaching was effective is a poor measure; you cannot know whether you’ve improved if you don’t know where you started.

Coaching, training, and teaching all work toward abstract goals. We do not merely work with coaches to get higher scores or more income (easy to measure), but to improve our careers and work-life cohesion, and to strengthen pre-existing skills (hard to measure). So how do we develop strong methods and metrics to measure the effectiveness of training aimed at abstract or nebulous goals?

1. Pre- and Post-Event Surveys

In order to determine whether you have improved from a training exercise, you must know where you began. Identify the target goals of a training workshop, have participants rate themselves against each goal on a scale of 1 to 5 prior to training, and then have them report their scores again following the event. Ideally, participants would report their scores at one further interval as well. For example, a mindfulness coach would have a participant rank their ability to manage stress on a 1-5 scale prior to a workshop, have the coachee rank their ability again after the workshop, and then once more 1 month after the workshop. This would identify the initial ability of the coachee, the short-term boost of the workshop, and the long-term impact of the workshop.
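The three measurement points above can be summarized with a few lines of code. This is a minimal sketch in Python; the participant scores and the field names (`pre`, `post`, `month_later`) are hypothetical, standing in for whatever your survey tool exports.

```python
from statistics import mean

# Hypothetical self-ratings for "ability to manage stress" on the 1-5 scale,
# collected before the workshop, right after, and one month later.
responses = [
    {"pre": 2, "post": 4, "month_later": 3},
    {"pre": 3, "post": 5, "month_later": 4},
    {"pre": 2, "post": 3, "month_later": 3},
]

baseline = mean(r["pre"] for r in responses)           # initial ability
short_term = mean(r["post"] for r in responses)        # boost right after
long_term = mean(r["month_later"] for r in responses)  # lasting impact

print(f"baseline: {baseline:.2f}")
print(f"short-term lift: {short_term - baseline:+.2f}")
print(f"long-term lift: {long_term - baseline:+.2f}")
```

Comparing the two lifts against the baseline separates a temporary workshop glow from a durable change in ability.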

2. Ask for Feedback

It may seem obvious, but putting participants to work on providing evidence of training effectiveness is an underappreciated gold mine of information. One strategy has training participants identify tangible business benefits that resulted from the training, estimate the monetary value of those benefits, and state their level of confidence in that estimate. They are also asked to estimate the percentage of the benefit they attribute to the training, along with their confidence in that percentage. This practice gives participants a chance to provide feedback and articulate for themselves the real benefit of the change.
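One way to use those four numbers is to discount the claimed benefit by both confidence figures, yielding a deliberately conservative estimate. A minimal sketch, assuming one participant's (entirely hypothetical) answers:

```python
def adjusted_benefit(estimate, estimate_confidence, attribution, attribution_confidence):
    """Discount a participant's claimed benefit by their confidence in the
    dollar estimate and by the share (and confidence) attributed to training."""
    return estimate * estimate_confidence * attribution * attribution_confidence

# A participant claims $10,000 of benefit, is 80% confident in that figure,
# attributes 50% of it to the training, and is 70% confident in that split.
value = adjusted_benefit(10_000, 0.80, 0.50, 0.70)
print(f"conservative benefit estimate: ${value:,.0f}")
```

Multiplying through the confidence levels means overstated or weakly attributed benefits shrink toward zero, which keeps the aggregate number defensible.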

3. Get Meta

After you’ve collected surveys and received feedback, ask for feedback and analysis on the feedback. Do the participants believe that the feedback that has been gathered is accurate? Is the influence greater or lesser than the individual feedback suggests? Eliciting feedback about the feedback is especially useful for helping workshop groups reflect collectively on an outcome, though there may be opportunities to “get meta” with individual participants as well.

4. Define Success

Instead of letting surveys do all the heavy lifting for success, instructors and participants can define the success criteria for themselves as part of the exercise. Utilizing SMART goals or the BSQ framework for goal setting, define the intentions of the session or workshop ahead of time and ask participants to define their own goals. Have participants write their goals down and follow up on them. For example, if a participant defines their goal for a session as, “Learn how to deal with conflict better,” ask them to define how they’ll know if the instruction helped. A well-defined goal will be easy to measure in terms of success or failure, and can even serve as a metric in its own right. How many participants felt they had succeeded at the goals they set before the class began? That’s a metric!
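That final metric is simple to compute once each goal has a yes-or-no outcome attached. A short sketch, with hypothetical goal text and outcomes:

```python
# Each participant's self-defined goal and whether they felt they met it.
goals = [
    {"goal": "Handle conflict without escalating", "met": True},
    {"goal": "Run a retrospective on my own", "met": True},
    {"goal": "Delegate one project end to end", "met": False},
]

# Fraction of participants who judged their own goal achieved.
success_rate = sum(g["met"] for g in goals) / len(goals)
print(f"{success_rate:.0%} of participants met their pre-class goals")
```

The binary "met / not met" judgment is what makes a well-defined goal measurable; a vague goal can't be scored this way at all.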

These four strategies for measuring the intangible benefits of instruction are methods for developing metrics around a service-oriented practice. They apply to coaching, teaching, training, and mentoring. Whenever a qualitative activity happens, these strategies provide a means to turn it into quantitative measurements.