The Impact of AI in Contact Center Quality

Learn about the impact of AI in contact center quality and how measuring the right things with your quality program will drive higher CSAT and profitability.
View The Webinar

Transcript

Christian Erickson (00:00):

Well, welcome everyone to today’s webinar with Nexcom and COPC Inc. We’ll be talking about the impact of AI in contact center quality. I’m Christian Erickson, head of marketing for COPC, and obviously very excited about this webinar and what we’ll be presenting today. Today’s presenters are Rick Zayas and Iain Ironside. Rick is the vice president of CX strategy and performance improvement at COPC. Rick brings years of experience working with contact centers globally, with a focus on performance improvement and quality. Iain Ironside is the chief technology officer for Nexcom, and Iain has been instrumental in developing the Nexcom approach to exploiting AI, based on his experience over a quarter of a century in contact centers. Without further ado, gentlemen, please take it over.

Rick Zayas (00:57):

Thank you, Christian. And thank you everyone for joining today; both Iain and I are excited to have this opportunity to speak with you on a very relevant topic. I’ll jump right in at this point. Quality assurance in contact centers and in similar CX, or customer experience, operations has traditionally been a cost center over the years, and many organizations use it primarily to evaluate agent performance and provide the information they need to conduct coaching sessions and such. The staff that carry out these activities and the tools that they use all contribute to operating costs. But in most organizations, if we’re honest with ourselves, we’ve had a difficult time proving how much they actually drive higher revenues, better business performance measures and other financial metrics, or directly drive customer satisfaction and loyalty. That has called into question the value of having these functions exist within the organization.

Rick Zayas (02:03):

There are concerns with some of the practices that they carry out. There are concerns with how much they’re able to evaluate, with thousands of transactions going on on a daily or hourly basis, depending on your organization and services. Unless you have a small army of quality evaluators, they’re only getting to a small number of these transactions, which limits how much you really know and understand about the customer experience and your ability to achieve your most significant performance measures. With the sampling approaches that are used, many times we see practices with good intentions, where we focus in on certain types of transactions, or maybe transactions of a certain duration, but by doing so we begin to introduce bias into the data itself. It is sample data, and once we introduce bias, we begin to affect the usefulness of that data for making broader decisions.

A time, cost and quality pyramid.

Rick Zayas (03:03):

Calibration challenges: again, these are humans carrying out these activities, and we all have different ways of doing things. Having the proper controls in place to try to ensure consistent performance of these evaluations is a significant challenge as well, and when that’s not as controlled as we need it to be, it again affects the integrity of the data. And when we use that data, whether it is to coach and manage the performance of an agent or to make broader decisions, having data that we can’t trust and rely on really begins to impact how comfortable we are with using it going forward. And then lastly, a key point here is the lack of correlation we’ve seen between these functions, these activities, and what they’re really doing to drive real business results. As a result, what COPC has found in many organizations is that some operations have pulled back altogether on the quality assurance function.

Rick Zayas (04:05):

They’ve just abandoned it, because they’ve had trouble connecting the value of the expense and the effort put forward here with what it’s really doing to drive real business results. Others have decided to keep the function but, in reality, have under-resourced it; they’re not quite leveraging the full potential of what we believe at COPC is a core function for a well-operating contact center or customer experience operation. We do support the importance of this, but it’s got to be designed in a way that it’s really driving business value. There are high-performance organizations that have figured out how to do this, and it really comes down to a couple of key things. One is having the knowledge of the best practices that are necessary to incorporate in the design of a quality assurance function, and the other is having the right tools that can significantly boost our ability to address many of these gaps, many of these concerns that you see presented here.

Rick Zayas (05:13):

What I’m going to do today is talk about two of the many factors that have to be considered when designing a best-practice quality assurance function, and then my colleague, Iain Ironside, is going to take us down the technology path and help us understand how artificial intelligence can be used in a powerful way to address, again, many of these concerns that we have. Moving forward, one of the first things that we need to understand and consider when we are designing our quality assurance function is how to connect it properly with things that matter for our business. We need information from the financial department, from the CFO’s office, on what it is that we’re trying to achieve with our most important financial metrics within the organization, and what targets we’re trying to reach with the delivery of services.

Rick Zayas (06:08):

We need to gather and analyze customer sentiment, customer expectation and customer preference data so we can understand what’s important to our customers, to the consumers of our products and services, that would motivate them to continue to purchase our services, expand their purchasing of our services and products, and grow their loyalty with us. This information is necessary to then build a quality function that is directly designed to influence those most important key performance indicators. If, in your organization, high levels of customer satisfaction and loyalty are important, as they are in most, then your quality assurance function needs to know exactly what the attributes, the elements are for your consumers that drive their satisfaction, their loyalty, their desire to keep purchasing from you and to expand their wallet in terms of how much they spend on your products and services.

Rick Zayas (07:09):

Once you know that, those things should be incorporated within every aspect of your quality assurance program, beginning with your form design. The attributes, the elements that are measured, the behaviors that you’re seeking that drive and support satisfaction and loyalty need to be reflected within those forms as well, whether you’re evaluating humans delivering this customer support function or evaluating digital forms of delivery. Whether it’s a bot or some sort of self-service technology, we still need to know which behaviors, human or digital, we’re seeking that are going to drive higher levels of satisfaction and loyalty. What we commonly see at COPC are things such as issue resolution, the speed of service, the accuracy of the resolution that you’ve provided to consumers, the amount of effort they’ve had to go through to navigate your support model as they try to get their inquiry handled and resolved, and the empathy you display for what they’ve had to go through with your product, service or support experience just to get the answers they need.

Rick Zayas (08:21):

Align QA with Key Business Drivers

All of these things we’ve seen are common attributes that need to be looked at, but sometimes we fail to capture them properly within our quality program and find the exact behaviors that link directly to increased satisfaction, increased loyalty and better financial performance results. When we don’t hit on these things, when we fail to perform the behaviors that are necessary to support these drivers of satisfaction and loyalty, that’s what we call at COPC a critical error. And moving forward to a second point today, what we have found is that high-performance organizations spend the bulk of their time and energy trying to reduce critical errors. And they do it in a way where, even though it is important to gather this information and understand the individual agent’s performance, they’re trying to gather it in a way where they can evaluate the entire service journey, the design of the support model, the design of their processes, both those that are internal and affect the support experience and those that are very visible to the customer as they interact with your brand.

Rick Zayas (09:37):

They’re looking for the causes of these critical errors across the board, and trying to find ways to bring forward systemic solutions that drive broader impact and more sustained performance improvement. Again, focus on these critical errors first. Use the data in a way that can help us understand process challenges and service design challenges, so that we can improve the service experience, the journey that the customer takes as they interact with our brands. Many organizations are challenged to have data they can rely on, data that they trust, data that is presented in such a way that helps them get closer to pinpointing where these process and service design opportunities are. And that’s where it really becomes important, again, that we understand these best practices for designing a good quality program, but also that we have the proper tools with the power and the capability to aid us in closing these gaps and identifying where these issues are. With that, I’m going to turn it over to Iain, and he’s going to share some interesting insights on how we can leverage AI to support our quality assurance programs.

Program Level vs Agent Level Slide

Iain Ironside (10:54):

Okay. Thanks very much, Rick. And I think the first thing I would say is that AI, as Rick is saying, is a tool. It doesn’t remove the need for the best practices that Rick’s talked about. If you have the wrong form with the wrong design, AI is just going to evaluate the wrong things more quickly, and you will still go in the wrong direction. AI needs the foundation of those best practices: the right form and clarity of thinking about where you are trying to reduce individual agents’ errors and where you’re trying to fix the process. Let’s just think through now how AI will actually enhance and boost your ability to work within those best practices and deliver better value. First of all, AI offers the promise of cost reductions: having a transaction with a customer evaluated by the machine is typically cheaper than having it evaluated by a human. But don’t worry, in this new AI world, there is a huge role for the human.

Iain Ironside (12:11):

I’m not saying that the machines will take over just yet. Next, questioning sampling approaches. Rick highlighted that we work in a world at the moment in quality where we are restricted. We can’t have a huge team of quality people because we are a cost center, so we’re forced into sampling approaches. AI opens up the possibility of evaluating a much larger number of transactions. Instead of looking at a sample, we can start to look at the whole population, and some of the strange things that Rick was referring to about biased samples, where people are looking only at average-length transactions or short transactions, can become a thing of the past, because we can start to think about looking at the population and not worry about sampling. The efficiency is improved. The models we’ve developed will typically evaluate a transaction in a second or less, so we can realistically look at large volumes of data.

Iain Ironside (13:27):

Calibration. The problem changes there as well, and notice the way I’m saying it: the problem changes. Instead of having to make sure a group of humans all see the world the same way, you have to make sure the machine sees it the same way as the gauge, as the experts. The problem simplifies, because trying to persuade a group of human beings, each with their own point of view, to see the world in the same way can be challenging. There’s still a job to be done to make sure that the machine sees things the right way, but you know what, once you’ve trained it a particular way, it’s going to be consistent.

Iain Ironside (14:11):

This point of accurate and representative data starts to come through. The promise of AI is very attractive. It reduces costs, it can boost what we’re doing, and it takes away some of the problems in delivering quality that we’ve all struggled with for many years. Let’s just imagine ourselves in this new AI world. Imagine you’re in a center where you are trying to deliver sales through service. It’s quite a common thing that centers try to achieve: you solve the customer’s problem, and that earns you the right to then go on and talk about add-on products or an upsell with them. It’s a common model.

Iain Ironside (15:05):

Imagine if, when you had a new group of agents coming into the center and in nesting, you could actually evaluate every one of their calls for the sales skills they were using and whether they made any critical errors from a sales perspective or not. Imagine it’s got to the end of the first month and you’ve run the model that evaluates the sales skills for every agent, for every call. You’ve got a huge amount of new data that you didn’t have available before. You can see, based upon every call an agent has taken, how many errors they’ve made from a sales perspective and how many calls were good from a sales perspective. That gives you data to start putting in place remedial action, whether that be retraining, whether that be coaching, whatever, and then imagine that you can run the same models a month later.

Iain Ironside (16:08):

You can see the difference that the remedial action has made. We’re coming from a world where it was impossible to have such large samples to see where the problems lie, and how cool is it that you can actually look at the performance after the training or the coaching and see if the intervention worked or not. You can start to run these evaluations on a regular basis and do much more measurement, which allows you to manage and see the progress. So you find the problems, and you can see if your remedial action has worked.
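As a rough sketch of the kind of whole-population roll-up Iain describes, assuming each AI evaluation is stored as a simple record with an agent ID, a month and a critical-sales-error flag (illustrative field names, not any particular tool’s schema), the before-and-after comparison might look like this:

```python
# Minimal sketch: per-agent error rates across every evaluated call,
# compared month over month to see whether the remedial action worked.
from collections import defaultdict

def error_rates_by_agent(evaluations, month):
    """Return {agent_id: error_rate} over every evaluated call in a month."""
    totals, errors = defaultdict(int), defaultdict(int)
    for e in evaluations:
        if e["month"] != month:
            continue
        totals[e["agent_id"]] += 1
        if e["sales_error"]:
            errors[e["agent_id"]] += 1
    return {a: errors[a] / totals[a] for a in totals}

# Compare the nesting month with the month after coaching or retraining.
evaluations = [
    {"agent_id": "A1", "month": "2021-05", "sales_error": True},
    {"agent_id": "A1", "month": "2021-05", "sales_error": False},
    {"agent_id": "A1", "month": "2021-06", "sales_error": False},
]
before = error_rates_by_agent(evaluations, "2021-05")
after = error_rates_by_agent(evaluations, "2021-06")
for agent in before:
    print(agent, before[agent], "->", after.get(agent))
```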

Iain Ironside (16:51):

Rick was talking about issue resolution as typically being one of the things that drives customer satisfaction. Well, we’ve developed models around issue resolution, and we’ve developed models around reasons for calling. If you start combining those together, so you are looking at every call or every email or every chat session and what the reason for calling was, you’re really starting to get good data about where you’ve got good resolution, which reasons for calling have good resolution, and which agents are making more errors in resolving the customer’s issue. It’s no longer based upon three or four samples per agent, which you cannot say is representative. Imagine how your coaching conversations might change. If you’ve got a conversation with an agent where you say, we looked at three calls this month and on one of them you made the mistake, it’s very easy for that to be brushed off as, hey, you know what, I was unlucky.

Iain Ironside (18:03):

If you actually then go and deep dive and evaluate all of the calls for the last week or two weeks, it’s a very different conversation. The conversation with the agent might be: we looked at three calls, we found you made this error, we’ve done a deep dive on that error and, you know what, it was an unusual case, or, we’ve found by looking at all of your calls that it’s a regular issue. How much more ownership are you going to get from the agent if you can say we’ve looked at all of your calls compared to a small sample? The world of AI is going to give you an ability at the process level that Rick talked about and at the agent coaching level. However, AI is not going to replace the human touch. It will change it, and one of the ways we’re seeing it change is in the skills that are going to be needed in the quality team.

Iain Ironside (19:11):

Historically, the role of the vast majority of a quality team is rating the transactions: listening to the calls, reading the emails, looking at the chat sessions. It’s natural to take on board people who maybe did the job before; they understand the product, they understand how to deal with customers. The world I’m describing is one where there’ll be a new skill needed in the quality team. It’s about how you can use AI models to investigate, to identify issues and then to measure whether you’ve succeeded in fixing them. Will the skill of a human being able to rate a transaction go away? No, it won’t, because there’s still a need for a gauge, an expert who’s going to make sure the machine is evaluating correctly. One of the ways that we’re implementing that is in our tool, Reveal CX: we have a dispute mechanism. If anybody disagrees with an evaluation made by the AI, you can dispute it, and we’ve been building automatic retraining so that the model will learn from the disputes, but that still relies upon a human who’s going to monitor and calibrate.
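A minimal sketch of what a dispute-driven retraining loop can look like in general terms. This is an illustration only, not how Reveal CX implements it; the threshold and function names are assumptions.

```python
# Minimal sketch: a human gauge disputes AI evaluations, the corrected
# examples are queued, and the model is periodically retrained on them.
RETRAIN_THRESHOLD = 50  # illustrative: retrain once enough corrections exist

dispute_queue = []

def record_dispute(transaction_text, ai_label, human_label):
    """A human gauge overrules the AI; keep the corrected example."""
    if ai_label != human_label:
        dispute_queue.append({"text": transaction_text, "label": human_label})

def maybe_retrain(train_fn, existing_examples):
    """Retrain on the original data plus corrections once the queue is full."""
    if len(dispute_queue) >= RETRAIN_THRESHOLD:
        model = train_fn(existing_examples + dispute_queue)
        dispute_queue.clear()
        return model
    return None
```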

AI is not here to replace the human touch but it will change it slide

Iain Ironside (20:42):

But the nature of the work has changed. Having said that, I haven’t come across an AI model yet that can spot that the agent’s got a hundred post-it notes at their workstation and is using that as an informal knowledge base rather than going the proper way. I’m not saying that the human involvement in quality is over, far from it. But I am saying we’ve got to think about how the team is going to evolve. The next thing is that AI is fed by data. It needs data to be able to evaluate and to be trained, and the more data we can feed it, the more granular the evaluation can be. On a typical form, there’ll be a main attribute, and underneath that there may be some reasons as to why that particular attribute, that particular error, was called as a mistake. And you may have a second or a third layer.

Iain Ironside (21:52):

And what we want the AI to do is to drill down into that. Well, there are some challenges with that. We need more data to become more granular. From our experience, one of the factors you have to consider in creating the data is how you are going to match either the transcribed text of a call or the text of an email with a monitoring, with an evaluation, so that we have an example to say in that call the error was present, or in that call the error was not present. The way we do that is we have, within Reveal CX, the resource of all the evaluations that a QA team has been creating, and we mix that with the text coming from the transaction; as I say, that could be transcribed from a call recording, it could be an email or whatever.

Iain Ironside (22:59):

And the limiting factor in how quickly that data accumulates is how many evaluations you are creating. If you are a small team with only a couple of QAs, it’s going to take more time to establish a database of evaluations that can be used to train the machine and provide it with examples. From our experience so far, we believe that once we have around 1,500 evaluations, we can start to create models that reach an acceptable level of accuracy. It’s going to be a different time scale depending on the size of your operation. If you are a large operation taking thousands of transactions and you have a large QA team, you’re going to build the database very quickly.
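A minimal sketch of the data-assembly step Iain describes, assuming evaluations and transcripts can be matched on a shared transaction ID; the field names are illustrative and the 1,500 figure is the rough threshold he mentions.

```python
# Minimal sketch: pair each human evaluation with the transcribed text of
# the same transaction, giving labelled examples for one quality attribute.
def build_training_set(evaluations, transcripts, attribute):
    """Return (text, error_present) pairs for one quality attribute."""
    examples = []
    for ev in evaluations:
        text = transcripts.get(ev["transaction_id"])
        if text is None:
            continue  # no transcript available for this evaluation
        examples.append((text, ev["errors"].get(attribute, False)))
    return examples

evaluations = [{"transaction_id": "t1", "errors": {"issue_resolution": True}}]
transcripts = {"t1": "…transcribed call text…"}
pairs = build_training_set(evaluations, transcripts, "issue_resolution")
if len(pairs) < 1500:
    print(f"Only {len(pairs)} labelled examples so far; keep evaluating.")
```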

Iain Ironside (24:00):

One thing that we’ve come up against in Europe is the GDPR regulations, or how organizations actually look at data retention. I think in Europe it’s a particular thing that we need to consider: how much history can you retain because of GDPR? The other thing is what I would call the balance within the data. Suppose you have 1,500 evaluations in your database that we can use to train a machine learning model, but you are very accurate, so that 99% of the evaluations are all okay and only 1% show a problem. One percent of 1,500 evaluations gives a very small number of examples of a mistake, and it will become much more challenging to train the model.

Iain Ironside (25:04):

Now, if you have attributes with higher error rates in them, there are more examples of bad transactions and therefore you can train the model more quickly. Getting to the place where you have a model that you are happy with, with a good accuracy rate, is determined by a number of different factors: how quickly you are creating the evaluations and what the balance within the data is. If you’ve got an attribute with lots of errors in it, you can train a model more quickly, and that’s maybe an area where you want to work first anyway, rather than the attribute that’s very accurate. Maybe the machine learning model isn’t as necessary for that one, so in some ways the system kind of works.
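To make the balance point concrete, here is a small sketch of the arithmetic, with inverse-frequency class weighting shown as one common mitigation; the weighting is an assumption for illustration, not necessarily the Nexcom approach.

```python
# Minimal sketch: with 1,500 evaluations and a 1% error rate there are only
# about 15 error examples to learn from.
n_evaluations, error_rate = 1500, 0.01
n_errors = int(n_evaluations * error_rate)
n_ok = n_evaluations - n_errors
print(n_errors, "error examples vs", n_ok, "clean examples")

# Inverse-frequency class weights: the rarer class counts for more, so the
# model is not rewarded for simply predicting "no error" every time.
weights = {"error": n_evaluations / (2 * n_errors), "ok": n_evaluations / (2 * n_ok)}
print(weights)  # roughly {'error': 50.0, 'ok': 0.505}
```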

The more data, the more granularity! Slide

Rick Zayas (26:00):

Iain, there’s a question that came through that has to do with concepts you’ve covered on the last two slides, so I want to toss it out there now for your consideration. There’s a question asking if you can go a little bit deeper on how the machine itself, how the AI, is trained. And there’s a component of it that asks whether the machine can ever become the gauge. Can you speak to both of those concepts, please?

Iain Ironside (26:31):

The training cycle is what I was trying to allude to a little bit. What we’re doing is taking an evaluation: a human has produced an evaluation of a transaction, and in there they will have marked all the attributes that were good and, hopefully, the few that are bad. We are matching that with the text of the transaction, and the machine models that we’ve got are ones that are looking at not just the words within the transaction, but also some of the syntax and the context of how those words are being used. At the simplest level, it is taking those as examples of good and bad, and from that learning how to predict from new text. When we have a trained model, we give it the text and… sorry, the second part of the question was?

Rick Zayas (27:37):

Whether the machine actually can become the gauge at some point.

Iain Ironside (27:41):

At the moment, I would say no. I think we are still going to have to be the arbiter of what is right and what is wrong. And the retraining cycle is going to be very important, because one other issue we’ve got is that as you do more analysis on the voice of the customer, you may start to find that you are going to change your definition. In the same way that we have to retrain a human monitor, there is a need to retrain a machine learning model. That’s why we’ve been building the loops in, but that’s a decision to say the voice of the customer is now better understood and our definition of the attribute has changed.

Iain Ironside (28:37):

I think we’re now into the realms of doing data analysis on customer satisfaction surveys, on focus groups, and I think that’s another area of study. In the near future, I think there will still be the need for the human expert to understand the voice of the customer and translate that into classifying what is good and what is bad for the machine to learn from. Thanks, Rick. If there are any more, just jump in.

Rick Zayas (29:15):

Okay. Sounds good.
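As a rough illustration of the training cycle Iain has just described, here is a minimal sketch using off-the-shelf components: labelled transaction text in, a predictor of error present/absent out. The word n-grams stand in very loosely for “syntax and context”; this is not Nexcom’s actual model.

```python
# Minimal sketch: train a text classifier on human-labelled transactions,
# then ask it to predict on the text of a new, unseen transaction.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["…transcribed call where the issue was resolved…",
         "…transcribed call where the customer was left in a loop…"]
labels = [0, 1]  # 0 = attribute OK, 1 = error present (from human evaluations)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # single words plus two-word context
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(texts, labels)

# A trained model is then given the text of a new transaction to evaluate.
print(model.predict(["…new transcribed call text…"]))
```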

Iain Ironside (29:19):

Rick earlier was talking about working at the process level or at the individual agent level, and working at the process level is very powerful, because 75% of issues are typically process-related. The agent has followed the instructions in the knowledge base perfectly and referred the customer back to the website, but the website, for whatever reason, doesn’t have the answer, and the customer’s caught in a loop. It’s a business process issue; coaching agents won’t solve it. Process-level issues are particularly powerful things to go after, and for AI, that’s one area where you can move away from sampling and move into getting whole-population data. Let’s go back to the example I started with, of a group of new agents who are in nesting, where you’ve started not just to evaluate whether they had issues with selling, but you are also evaluating the causes of that, the sub-attributes.

Iain Ironside (30:36):

I was talking about remedial action before. Typical remedial action in that case might be not just to coach the agents, but to go back to the training and ask what was wrong in the training that meant they didn’t handle objections very well, for example. Those are process-level actions that you can find out about through looking at the population rather than through sampling. And as I said before, you now have the opportunity of doing that regularly and repeating it on that group of people, or the next group of people who come along for nesting. Larger data is a great help when looking at process-level issues, and that’s what AI can give you.

Iain Ironside (31:29):

The other side of the coin was about managing individual agents, and what we would say is that coaching can be improved by looking at a large sample for an agent. The example I gave you before was: imagine the coaching conversation with the agent, instead of it being an argument that you only listened to three calls, I was very unlucky, you found that I’ve only done it once this month. It turns into a much bigger discussion: we’ve looked at this in depth and it is a regular thing. One thing I would warn you about is the accuracy of the AI models. Giving coaching to an individual agent on an individual call using AI starts to put a lot more pressure on the accuracy of the model. Now, within Reveal CX, we have a lot of calibration data that we’ve been able to look at, including how often the human QAs are accurate when they’re going through a calibration session.

Managing individual agents

Iain Ironside (32:51):

And typically the best QAs are over 90% accurate. You need that level of accuracy, because if individual agents start to query individual transaction ratings too often, and the ratings are found to be wrong too often, the system, whether it’s human QA or a machine learning model, will start to come into disrepute. You’re going to need very much higher levels of accuracy in the machine learning models to be able to discuss individual transactions with an agent, and I would steer you away from that. Where the power is going to come from is taking larger samples; hopefully you can tell that through the examples I’ve been giving you, and that still gives you a very powerful tool. What we’re saying is: use AI to boost your quality monitoring output. Identify a specific topic; in my example, it’s about boosting the sales skills of the group of new agents in nesting. Use the AI to investigate where you are.
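A minimal sketch of the kind of gauge-agreement check implied by the 90% figure, assuming the model and the human expert have rated the same calibration set; the data and the decision rule here are purely illustrative.

```python
# Minimal sketch: compare model ratings against the human gauge on a
# calibration set before trusting the model for agent-level conversations.
from sklearn.metrics import accuracy_score

gauge_labels = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]   # expert ratings
model_labels = [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]   # model ratings, same transactions

agreement = accuracy_score(gauge_labels, model_labels)
if agreement >= 0.90:
    print(f"{agreement:.0%} agreement: usable for agent-level feedback")
else:
    print(f"{agreement:.0%} agreement: keep it at the aggregate, process level")
```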

Iain Ironside (34:12):

That’s where I’ve been saying: run the model or models to look at a variety of errors, to identify where the problem is and what the sub-attributes, the causes of those errors, are. Use that to create your own improvement program, whether that be training, whether that be coaching, whatever, and use it to go back to the process level and change your training. Then, when you measure the next group out of training using the model, you can see the difference: the new training program for the next group of new recruits produced a better result. And use it to confirm the improvement; keep using the model regularly.

Iain Ironside (35:02):

We’re starting to think about things in a different way, a way that was blocked off before, because trying to employ quality analysts to do specific analysis or specific projects like this, time and time again, would be inconceivable when you have ongoing monitoring to do to make sure you have enough information for coaching agents, and monitoring to do to make sure that you can get an overall KPI at the end of the month. The AI is being used to boost rather than to replace, and that’s why I was giving you the message before that the humans haven’t gone. The humans are still in the system, but the skills they need and the thinking they’re using are going to change, to employ the tool and to work with the best practices that Rick talked about: exploiting the right form to drive the right behaviors to get the right CSAT.

AI is used to boost your quality monitoring output

Rick Zayas (36:07):

Iain, a few more questions have popped through that I think are relevant, and I want to bring some of these to your attention. Some of these, well, they happen in a world where AI’s not in use, where humans are doing the evaluations, but the question is obviously pointed towards, well, what do you do with the machine when that happens? So when the definition of what is appropriate behavior, or of what counts as an error, changes, how do we incorporate that when it comes to the use of AI, especially when you’ve been discussing the need for large amounts of data?

Iain Ironside (36:48):

Yeah. Well, I’d say it’s going to depend on the nature of the change. If it’s a smaller change, we would deal with that through the learning loop that we’ve got. We would get a number of existing evaluations reevaluated and put through. Now, I would hope that something like solving the customer’s query, an issue resolution or something like politeness or empathy, if you are using it for soft skills, wouldn’t change radically. But if they do change radically, if you did change your definition of politeness radically, then there would be a hiatus while the machine had sufficient examples to relearn. I think we would need to look at individual cases to see how radical the change was.

Rick Zayas (37:49):

Okay. You might be leveraging your human evaluators for a bit while you’re building up the data needed to train the system.

Iain Ironside (37:58):

Yeah. And that comes back to the point I was making before about the scale of your organization would make a difference. If you’ve got a larger QA team, you’re going to generate that kind of data very quickly. If you’ve got a very small team, it’s going to take longer to create the data and the shape of the problem changes.

Rick Zayas (38:21):

Okay. What about organizations whose customer experience operations handle many languages in terms of the support they provide? Can you talk to what that means for enabling AI and having it trained and calibrated to perform well going forward?

Iain Ironside (38:42):

Yeah. It’s a discussion we often have, and I’ll maybe use empathy as the example. Often in different languages, there are different standards and different expectations around some of the soft skills. Certainly, I can speak from my experience in Europe: the way that a German would expect to be handled would be different from the way somebody from England or somebody from France would expect to be handled. The language difference I could almost mix into a cultural difference as well, and so it would need training for the different expectations in the different cultural areas. We’d be trying to build a data set for each language.

Rick Zayas (39:37):

Each language, each culture.

Iain Ironside (39:40):

Yeah. I’m kind of conflating language and culture together because it’s dangerous to think that you can apply the same model to the different languages. That wouldn’t work. You would need to build it for your language, for your customer base.

Rick Zayas (40:00):

Okay. Another one had to do with channels. If we’ve got voice-related services and we’ve got non-voice, say through social media messaging platforms, chats, and such, are there any differences you want to speak to on the use of AI and its effectiveness depending on the channel?

Iain Ironside (40:22):

The AI we’ve built takes a string of text as its input. If it’s emails we’re working with, there would still be some pre-processing to deal with standard things like email addresses in headers and so on. But email text or chat has one advantage built into it: there is syntax and sentence structure in there, and with the models we’ve got, that gives us some extra leverage because we have more context. If it’s a call recording, we have a partner we can use to transcribe it.

Iain Ironside (41:14):

But one of the challenges in transcription is that we have to develop more models that don’t rely upon sentence structure. If we have a two-channel recording, so the customer is in one channel and the agent is in another channel, we can actually start to put some of the sentence structure back in and use the models that exploit sentence structure. But if it’s a single-channel recording with the customer’s and the agent’s voices mixed together, then we have to use other kinds of models. So although the input to all our models is text, depending on the sentence structure that we get from the particular channel, we can use different models and be more powerful and get more accuracy from them. Our ambition is always to get sentence structure.
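A minimal sketch of the channel-specific pre-processing Iain mentions: stripping header-like lines from emails, and rebuilding speaker turns from a two-channel call so the structure is preserved. The formats and function names here are assumptions, not Nexcom’s pipeline.

```python
# Minimal sketch: channel-dependent pre-processing before text reaches a model.
import re

def clean_email(raw):
    """Drop header-like lines (From:, To:, Subject:) and keep the body text."""
    body = [ln for ln in raw.splitlines()
            if not re.match(r"^(from|to|subject):", ln, re.I)]
    return "\n".join(body).strip()

def interleave_call(agent_turns, customer_turns):
    """Rebuild a turn-by-turn transcript from separate agent/customer channels.

    Each turn is (start_time_seconds, text); a single mixed channel would lose
    this structure, which is why dual-channel recording helps.
    """
    turns = [(t, "AGENT", txt) for t, txt in agent_turns]
    turns += [(t, "CUSTOMER", txt) for t, txt in customer_turns]
    return "\n".join(f"{who}: {txt}" for _, who, txt in sorted(turns))

print(clean_email("From: a@b.com\nSubject: refund\nHi, my order never arrived."))
print(interleave_call([(5.0, "How can I help?")], [(1.0, "Hi, I need a refund.")]))
```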

Rick Zayas (42:14):

Okay, great. When you talked about using the data to evaluate individual agent performance, you talked about getting the tool up to a 90% accuracy or confidence level and such. In your experience in working with organizations, there’s a question here that asks, how long should they expect that to take? How long does it take to get the machine to that level of performance? That level of confidence.

Iain Ironside (42:40):

And the very trite answer is: it’s a piece of string.

Rick Zayas (42:44):

That’s right.

Iain Ironside (42:45):

Which is why, on the slide that I used earlier with the graph, I was trying to go through the different factors that tell me how long that period is. If I go to one extreme and it’s a large organization, you’ve maybe got 10 or 20 QAs; let’s go with 10 QAs just so I can do the math more easily in my head. If they can each evaluate maybe 10 transactions a day, I’ve got 100 evaluations a day, so after 15 days I’ve got 1,500.

Iain Ironside (43:31):

If I’ve got one QA and very, very long calls, and they can only do five calls a day because of the complexity of the task, and I’ve only got that one QA working for me, you can tell it’s going to take a lot longer to get to that kind of threshold. That’s why I’m saying it’s a piece of string. It is just the way machine learning and AI works: the more data you feed it, the more accurate it gets, and the quicker you can feed it, the quicker it will get there. I’m not going to say there’s some instant solution where everybody can train a model within a week. It’s going to depend on the size of your operation and how much data we have to feed it with.
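Iain’s arithmetic generalizes into a one-line estimate; a minimal sketch, where the QA headcount and daily throughput are whatever applies in your operation and the 1,500 threshold is the rough figure quoted earlier.

```python
# Minimal sketch: working days needed to accumulate ~1,500 human evaluations.
import math

def days_to_threshold(num_qas, evals_per_qa_per_day, threshold=1500):
    return math.ceil(threshold / (num_qas * evals_per_qa_per_day))

print(days_to_threshold(10, 10))  # large team:  15 working days
print(days_to_threshold(1, 5))    # small team: 300 working days
```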

Rick Zayas (44:21):

Thank you for that transparency and very real answer that I think our audience would appreciate. Christian, I think we’re getting ready here to turn this one over.

Christian Erickson (44:33):

Great. Well, thank you very much, guys. We do have a few more questions that have come up. Unfortunately, I think there might be a few too many, so we may have to answer some of these directly, or in the link that people get with the transcription and the video recording we’ll have some more answers to the questions. There was one that came up that I thought was interesting: I understand how this can be used for back office processes. Can you go deeper into how to use AI in the evaluation process for calls? Is it more focused on process compliance than agent behavior?

Iain Ironside (45:15):

Okay. For calls, we have that extra step that I was talking about, which is transcription. Once we’ve been through the transcription mechanically, the process for us becomes the same, because it’s text input. In contrasting back office, text-based work with front office call work, and I’m trying to interpret the question a little here, a back office form might be more about adherence to procedure and following procedure correctly. If it’s a front office call, the form to evaluate that transaction might have more in it about empathy, soft skills, acknowledging the customer and doing those things.

Iain Ironside (46:12):

Now, we can debate the relative importance of soft skills to the customer experience compared to solving the problem. But the way our experience with AI models is shaping up is that those soft skills that are more present on a call, around empathy et cetera, are actually very trainable, because the information is largely held within the text of the transcribed call itself. For front office, I think it works well. The idea of whether we’re using it to coach an individual agent or to find large process-level problems is, I think, the same for front or back office. Either way, we’re going to be using it to create large data sets, whether that be for an individual agent, to say, hey, we’ve looked at every call, or whether that be center-wide, to say we’ve looked at everything that’s happened in the center on sales in the last month and these are the kinds of things we’ve been finding.

Christian Erickson (47:36):

Great. This is one on sample size. Mohamed asks, for a human quality agent, what is the best practice sample size of evaluated transactions? And is it not better to evaluate a small sample size when you’re using a detailed evaluation?

Iain Ironside (47:57):

Well, the sample size is going to be determined by what you’re going to use the data for. If you are going to be using the evaluations to reflect with the agent on their performance, get them to own it and coach them, and I’ve got three examples, I can do coaching with three examples, because they are just things to learn from. They are events, they happened, let’s learn from them. What you wouldn’t use a small sample for is to actually say, what’s your error rate? If you wanted to evaluate the agent’s error rate on three, you just can’t. Do you want to add to that, Rick?

Rick Zayas (48:48):

Yeah. That was great, thank you, Iain. The only other thing I’d say, again back to the use of that sample: if you are planning on using that sample for broader analysis of how your service is performing, then at that point there’s a different approach to making sure that your sample size is appropriate; it’s not a simple percentage. But we also have to make sure that our approach to gathering the sample is unbiased as well, that word I used earlier today. We don’t have time to talk through everything you need to consider there, but it is both the size and the approach to ensuring we have a valid sample that come into play when you’re trying to use the data for broader purposes, to improve the overall design of a process or service.

Christian Erickson (49:35):

Great. We have time for two more questions. Carlos, I will ask your question, and then Victor, we’ll get to yours after that. For the other people that we didn’t get to, we will look to answer those directly or in the transcript. And if a question comes up during these two, please ask it. In the meantime, Iain, if you want to put up your and Rick’s contact information; we have quite a few people that have stayed on this long, but I know we’re coming up to the top of the hour. Carlos asks, it is great that it can identify the use of soft skills, but does the machine take into account voice inflection, tone of voice or volume? Most of the time these elements enhance the interaction over the phone.

Iain Ironside (50:22):

Yeah. The models we’re using at the moment are taking the text. It is plain text; it is not the fact that the customer is shouting in anger at this point or anything like that. It’s not the inflection, but usually there are fingerprints within the text that let you see the context and the tone of the conversation.

Christian Erickson (50:50):

Great. And Victor asks, is there an existing, well-developed tool in place doing that right now? And whether yes or no, would this be easier for voice or non-voice transactions?

Iain Ironside (51:08):

Could you just repeat, is there an existing tool? I didn’t understand the context.

Christian Erickson (51:13):

Is there an existing well-developed tool in place doing that right now?

Rick Zayas (51:18):

That would be the use of artificial intelligence for the things you pointed out when you were presenting, Iain. And is it easier to deploy this on voice or non-voice transactions?

Iain Ironside (51:33):

I think, because there is an extra step for voice transactions, the implementation will inevitably be bigger, as there’s a transcription stage. After the transcription stage it’s not much different, but for voice there’s going to be more work, because you need to put the transcription in place to turn it into text for the machine.

Christian Erickson (51:59):

Okay. Well, everyone, thank you very much for joining us today. This was very insightful. Again, you have Rick and Iain’s information below, so feel free to reach out to them if you have direct questions. There were a few questions we just weren’t able to get to today. Look for the email with the recording and the transcription; we’ll have those questions answered in there as well. Thank you very much. Have a good evening, have a good day.