August 16, 2022
- Redesign your quality form to focus on key drivers
- Measure three quality metrics vs. one overall score
- Evaluate interactions from the customer’s perspective
A quality assurance program offers one of the best windows into the customer experience. Designed effectively, quality can predict CSAT, providing actionable data that leads to significant and sustained improvements.
Yet, quality scores and CSAT results often tell different stories. The misalignment occurs when organizations don't fully understand or measure the true drivers of satisfaction through their quality program.
Below are a few straightforward approaches to realigning quality assurance with customer satisfaction.
An effective quality program starts with the form. To accurately assess the customer's experience, the quality form should reflect what matters most to the customer. Too often, quality forms are designed from good intentions and varied internal opinions, and over time they accumulate attributes that are not critical to the customer.
Organizations sometimes fail to consider the voice of the customer while designing quality forms. Avoid this common pitfall by conducting a key driver survey to understand what customers care about most. This survey asks customers detailed questions about their experience.
A quality form that captures what matters most to your customers brings you closer to aligning QA scores and CSAT results.
Using an overall score to measure quality often produces inflated results and doesn't provide an accurate view of performance. A single score can also mislead leaders into inaction because the organization appears to be performing well.
Measuring quality from the customer, compliance, and business perspectives gives you a more accurate view of performance. Combining dissimilar attributes, such as resolving the customer's issue, logging calls correctly, and following privacy regulations, into one score doesn't make sense because it dilutes the result.
For precise performance insight, measure these three components of quality:
- Customer Critical Accuracy measures quality from the customer's perspective. Issue resolution and clear communication are two examples of customer-critical attributes that significantly impact the customer experience.
- Compliance Critical Accuracy measures how often regulatory and legal requirements are met. Errors in this area could expose the company to liability. Disclosing sensitive information without proper identification is an example of a compliance error.
- Business Critical Accuracy measures how well the agent followed critical business processes. These are attributes that customers may not care about but that could result in unnecessary costs or lost revenue. Accurately logging calls or attempting to close a sale are two examples.
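Each of the three metrics is transaction-level: a transaction passes a metric only if it contains no errors of that type. As a minimal sketch, assuming each evaluated transaction records which error types occurred (the field names and data here are illustrative, not from any specific QA platform):

```python
def critical_accuracy(transactions, error_type):
    """Percent of transactions with NO errors of the given type."""
    clean = sum(1 for t in transactions if error_type not in t["errors"])
    return 100 * clean / len(transactions)

# Hypothetical evaluation records; "errors" lists the error types found.
evaluations = [
    {"id": 1, "errors": []},                        # clean transaction
    {"id": 2, "errors": ["customer"]},              # issue unresolved
    {"id": 3, "errors": ["business"]},              # call logged incorrectly
    {"id": 4, "errors": ["customer", "business"]},  # both error types
    {"id": 5, "errors": []},                        # clean transaction
]

for kind in ("customer", "compliance", "business"):
    print(f"{kind} critical accuracy: {critical_accuracy(evaluations, kind):.0f}%")
```

Because a single customer-critical error fails the whole transaction for that metric, a minor slip cannot be offset by passing many other attributes on the form.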
*Out of respect for client confidentiality, we’ll refer to the organization as Centera.
Our client Centera initially reported an 86% overall quality score, appearing to perform at a high level. But when we segmented the overall score into the three metrics above, the results told a different story:
- Customer Critical Accuracy: 60% of transactions had no customer-critical errors.
- Business Critical Accuracy: 70% of transactions had no business-critical errors.
- Compliance Critical Accuracy: 100% of transactions had no legal or compliance errors.
Averaging these distinct aspects of QA into a single, simplistic number masked Centera's actual performance. In this case, the Customer Critical Accuracy result of 60%, not the 86% overall score, more accurately reflected the customer's experience.
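The inflation mechanism is easy to reproduce with made-up numbers. In this hypothetical sketch, an "overall" score averages pass/fail results across every form attribute equally, so a transaction can fail its customer-critical attribute yet still contribute many passing minor attributes (the figures are illustrative, not Centera's actual data):

```python
# Each entry: (attributes_passed, attributes_scored, customer_critical_ok)
transactions = [
    (9, 10, True),
    (8, 10, False),   # issue unresolved, but most minor attributes pass
    (9, 10, False),   # same: one critical miss buried among passes
    (10, 10, True),
    (7, 10, True),
]

# Overall score: share of all scored attributes passed, across transactions.
overall = 100 * sum(p for p, _, _ in transactions) / sum(t for _, t, _ in transactions)

# Customer Critical Accuracy: share of transactions with no customer-critical error.
customer_critical = 100 * sum(ok for _, _, ok in transactions) / len(transactions)

print(f"overall score: {overall:.0f}%")                        # looks healthy
print(f"customer critical accuracy: {customer_critical:.0f}%") # the real story
```

With these numbers the overall score comes out at 86% while customer critical accuracy is only 60%, which is exactly the kind of gap the segmented metrics expose.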
A primary reason for QA and CSAT misalignment is that evaluators often assess from the perspective of an organization or agent instead of the customer. For instance, assessors often consider the transaction a pass when an agent does everything right but can’t provide a solution due to policy. In contrast, the customer would likely think the transaction failed since the agent couldn’t resolve their issue. If asked about their experience in a survey, the customer will probably indicate their dissatisfaction.
Correctly scored from the customer’s perspective, the transaction would fail. Most importantly, the quality process would capture the specific reason for not resolving the issue. In this case, the failure is because of the policy preventing resolution.
This scenario demonstrates how a disconnect between what an organization and its customers consider “passing” can lead to misalignment of quality and CSAT results.
These are just three methods of ensuring quality and CSAT correlation. Sampling approaches, calibration, and overall program design also play critical roles. We’ll discuss those aspects of QA in subsequent content. Stay tuned!
For better outcomes, inform planning and development strategies with industry-leading data. Additional resources around timely issues affecting contact centers and customer experience are available in our Global Benchmarking Series 2022.