Customer Satisfaction

Quality Series: Score Output Metrics and Use Sub-Attributes to Capture Reasons for Errors

March 25, 2016

This is the second post in a special blog series called, “Five Changes to Your Quality Program That Can Dramatically Improve Customer Satisfaction.”

With each post in the series, we will examine one of five fundamental changes we recommend you make to your quality program. These are proven approaches to ensuring your quality program is truly customer focused. Implementing these changes can drive significant improvements in the performance of your customer experience operations.

Here are the five changes we recommend:

#1 Redesign the quality form to align with key customer drivers
#2 Score only output metrics and use sub-attributes to capture reasons for error
#3 Measure quality using three metrics instead of one overall score
#4 Evaluate transactions from the customer’s perspective, focusing on systemic issues impacting performance
#5 Expand the quality process to include the capture of business intelligence

Today’s installment:

#2 Score only output metrics and use sub-attributes to capture reasons for error

Ultimately, your quality program should be designed to improve the customer experience. In our last article, we discussed the importance of focusing on the key drivers of the customer experience when designing your quality form.

That concept leads us to the topic of this article. Many times, we find that the attributes being scored do not reflect what the customer ultimately cares about. For example, as we discussed in the last post, the customer cares about things like “was my issue resolved?” or “did I get accurate information?” or “was my issue handled in an acceptable timeframe?” We consider these output metrics, and they are what should be scored.

What we typically see scored instead are attributes such as “did the agent use the appropriate tools?” or “did the agent use proper transfer procedures?” or “did the agent follow the proper procedures?” While these are all important, they are not output metrics, nor are they what the customer cares about. They are actually the REASONS a failure occurred, or “causal factors,” and they should be captured as sub-attributes: the reasons for error.

Figure 1 shows the relationship between output metrics and causal factors. For example, if a customer’s issue was NOT resolved because the agent failed to use their tools correctly, the output score would be a fail on “issue resolution,” with “did not use tools correctly” captured as the sub-attribute. There could be more than one causal factor, and all should be captured. You may also find you need multiple levels of sub-attributes. In this example, the next level would capture the specific tool the agent did not use correctly.
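The structure described above can be sketched as a simple data model: only output metrics carry a pass/fail score, while causal factors nest beneath them as sub-attributes, to as many levels as needed. A minimal sketch in Python, assuming illustrative metric and reason names (the class names and the specific tool name are hypothetical, not from the article):

```python
from dataclasses import dataclass, field

@dataclass
class CausalFactor:
    """A reason for error; deeper levels capture more specific causes."""
    reason: str                                                   # e.g. "did not use tools correctly"
    details: list["CausalFactor"] = field(default_factory=list)   # next level, e.g. the specific tool

@dataclass
class OutputMetric:
    """An attribute the customer actually cares about; the only thing scored."""
    name: str                                                     # e.g. "issue resolution"
    passed: bool
    causal_factors: list[CausalFactor] = field(default_factory=list)  # populated only on a fail

# The example from the article: the issue was NOT resolved because the
# agent failed to use a tool correctly ("knowledge base" is illustrative).
evaluation = [
    OutputMetric(
        name="issue resolution",
        passed=False,
        causal_factors=[
            CausalFactor(
                reason="did not use tools correctly",
                details=[CausalFactor(reason="knowledge base")],
            )
        ],
    ),
    OutputMetric(name="accurate information", passed=True),
]

# Aggregating reasons for error across failed metrics is what lets you
# target improvement efforts at the underlying causes.
fail_reasons = [
    cf.reason
    for metric in evaluation if not metric.passed
    for cf in metric.causal_factors
]
print(fail_reasons)  # ['did not use tools correctly']
```

Because the causal factors hang off the output metric rather than being scored themselves, reporting on “what failed” and “why it failed” stays cleanly separated.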

Figure 1. The relationship between output metrics and causal factors.

By taking this approach of focusing on output metrics and using sub-attributes to capture causal factors, you will have reliable data about the issues that matter most to customers. The results of your output metrics will also correlate better with your actual customer satisfaction data.

At the same time, you will have a more accurate understanding of the underlying cause of failure for each metric. You will be able to target improvement efforts more effectively, ensuring any actions you take will have a direct impact on the customer experience.

Please check back soon for the next post in our series, “Five Changes to Your Quality Program That Can Dramatically Improve Customer Satisfaction,” when we will examine specific quality measurements. It’s the topic of our #3 recommended change: Measure quality using three metrics instead of one overall score.

Want to know more? Read our post about change #1, Redesign the quality form to align with key customer drivers.