Get the Most Out of Your Contact Center Reporting – Part 2


Welcome guest contributor Jeremy Markey! Jeremy brings a contact center background spanning a few decades and shares from his experience working with agents at Hunter Douglas as the leader of CS Ops and Workforce Experience. Hunter Douglas is the leading manufacturer and marketer of custom window treatments in North America.

This article is the second in a 4-part series on the right way to do customer experience reporting.


Create a Compelling Scorecard for Agents and Supervisors: People and Teams Act Differently When They’re Keeping Score

Like most newly married couples, my wife and I struggled early on. Our biggest fight? You guessed it, money. My wife loved a Dr. Pepper when her day was a bit stressful. Some days she’d even start with one for breakfast. If we didn’t have one in the house, she’d walk to the Arby’s down the street and get one. No breakfast, just a Dr. Pepper. Multiple times over several months it was a $34 Dr. Pepper instead of the normal $1: we were young and broke, and we didn’t have the money in the account for the pop, so we’d also be charged a $33 insufficient funds fee. Each time it happened, I’d show her how much we had and walk her through the entire register and budget. I got angry. And yet we kept buying $34 Dr. Peppers. Luckily, we broke the cycle. Getting upset wasn’t working, and neither was expecting her to know how much money we had or showing her all the details in our budget and register. What broke the cycle? Each week when I updated our register, I would text her how much we had in each bucket of our budget. Immediately we stopped buying $34 Dr. Peppers. In the last 18 years, not a single $34 Dr. Pepper has been purchased!

That weekly text is exactly what an effective scorecard should be for our agents: simple, straightforward, just the facts. Just enough to make a good decision and know how close to the line they are. When we do that, behaviors change.

Below, we’ll look at what a compelling scorecard for agents and supervisors looks like. You’ll notice that, as with my wife and me, keeping it simple for the agents is imperative to their success. Too many metrics cripple their ability to react, so we’ve narrowed the list down to the critical few and shared them below.

Agents

The agent scorecard is made up of individual-facing metrics. In other words, these are the metrics that an agent has direct control over. These metrics are paired with team and department results; we can display them as a rank, for example. I’m a big fan of showing glide paths over time in a line graph alongside the team average and top performers, say a week-by-week trend line across a few months. Deliver agent scorecards at least daily; near real-time, optimally. Most folks will alter their behavior on their own if we simply show them what we expect of them, let them gauge their own performance against those expectations and the whole team, and let them see it frequently enough. It’s just like driving. If they know they have to stay between the lines, they’ll stay between them.

So, what goes on an agent scorecard?


Metrics that matter:


Leading Indicators

First, our agents must track leading indicators. What are those? Leading indicators are specific behaviors that will lead to future results. In a previous article, I gave the example of a shoe sales associate with a better rate of sale. What did they do differently? Maybe they’re showing every customer 3 pairs of shoes and assuming the close on a store credit card. Those are leading indicators. I highly recommend going back and reading the article for more on the methodology this principle is derived from, called 4DX. If we’re not tracking the leading indicator, we’re missing something huge—the simple commitment from each agent to do the specific things that will impact the measures that matter most. Agents should ask themselves: What is a simple thing I can do to make things better and am I doing it? This is something they will have to track and report themselves, and it goes on the scorecard.

Out-of-Center Shrink

Next, we’ll need to measure out-of-center shrink, or agent attendance. A lot of companies draw their own line between what counts as an attendance issue and what counts as PTO. For me, typically it’s this: was that out-of-center shrink an employee benefit or not? If it was an employee benefit, then it doesn’t count against the agent’s out-of-center shrink number.

Why is this metric so important? When it comes to the impact the agent has on the other agents and the customers calling in, first and foremost, they must be here. It’s just that simple. Everyone’s job is harder when people aren’t there. Average Speed of Answer (ASA) will be worse. Customer Experience (CX) will be worse. Voice of the Customer (VOC) can be worse, too, if the hold times are high. Everything else on the scorecard is predicated on somebody being here at a regular rate.

Quality Management

Another item on your scorecard is some form of quality measurement. That could be VOC. It could be a customer loyalty metric like Net Promoter Score (NPS). It could be monitoring calls. It could be some sort of error rate. I would argue we need to keep it as simple as possible. The agent must be able to understand it and contribute to it. No matter what metric we choose, it needs to answer the question: did the agent provide a good experience? Every interaction is a product. And like in manufacturing, we must ensure that the product that goes out meets expectations.

Productivity or Efficiency

The last piece is some measurement of productivity or efficiency. We want to keep it as simple as possible. My personal favorite is, “How much time were you here expecting to be productive?” and “How many tasks, interactions, or widgets did you complete?” Then you just divide to find interactions (or whatever) per hour. This isn’t a calculation that you need a math degree to understand. I’ve seen a ton of really complex models out there. We had one here at Hunter Douglas called time allocation. And there wasn’t a single supervisor who could completely and properly explain what the metric meant. So, if supervisors couldn’t explain it, how could any of our agents? The short answer is they didn’t. It was hocus-pocus. Instead, make it simple and easy to understand, and convey it in a tangible format. For example, to agents, a percentage isn’t a tangible metric. So, give them hours. People understand that—they can count it. Keep the cookies on the lower shelf so everybody can have them.
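To show how little math this really takes, here’s a minimal sketch in Python; the variable names and numbers are hypothetical, not pulled from any particular WFM system.

```python
# Interactions per productive hour: the "simple divide" described above.
# The inputs are illustrative only.

productive_hours = 6.5         # time the agent was scheduled and expected to be productive
interactions_completed = 52    # tasks, interactions, or "widgets" finished in that time

interactions_per_hour = interactions_completed / productive_hours
print(f"Interactions per productive hour: {interactions_per_hour:.1f}")  # -> 8.0
```

Hours and a count, then one division. That’s the whole model, and it’s one an agent can check for themselves.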

And that’s all you should need on the agent scorecard. Notice there’s nothing like agent utilization or adherence on it. We shouldn’t need those. That might sound like heresy. Go back and look at my flex article and you’ll see some of the reasons why. And I’ll talk a little more about adherence under the supervisor scorecard piece (below).

As for publishing, we want to get to the point where it’s like the dashboard of a car. It’s just always in front of our agents. They can always see what they’re doing and what the impact is, year-to-date, month-to-date, week-to-date, even today. Do that and most people will alter their behaviors on their own. Instead of supervisors having to chase people down, you’ll find that it’s far less “forcing somebody to do something” and much more of a collaborative conversation.

Supervisors

Before we get to the supervisor scorecard, we need to talk about the most important thing your supervisors need to be doing: observing. Supervisors should be observing agents for two to three hours a day, minimum. We give feedback to those agents every time, even if the agent didn’t do anything wrong, because feedback isn’t about correcting mistakes. It’s about reinforcing what you want to see continue and discouraging what you don’t. It’s been said that anywhere between 5 and 10 positive feedback sessions must occur for every negative one for the person to feel that our feedback is balanced and credible. So, we’ve got to play a lot of “caught you doing good.” Legendary coach John Wooden said the trick to becoming the winningest basketball coach of all time was just watching what his players did and asking them to either do it again or not do it again. Super simple.

Alright. Let’s look at the Supervisor scorecard.


Metrics that matter:


Aggregated Agent Metrics

Our supervisor scorecard should start with all the agent metrics, aggregated. Supervisors need to see trending agent metrics and outliers: essentially, highlights of everything on the agent scorecard, at both an agent and team level. The goal is to point out what they need to take care of. Oftentimes, trying to figure out what a supervisor needs to go act on is like trying to find a needle in a stack of needles. It’s just hard. Make it easy for the supervisor. The rule that we use here at Hunter Douglas is 5 seconds and go. When somebody’s looking at that dashboard, if within 5 seconds they don’t know exactly what to do, the dashboard’s wrong. And we take the 5 seconds literally.

Leading Indicators

This is an aggregation of agent leading indicators: the behaviors committed to, the results the agents recorded, and the trailing metric each was meant to impact. If we see high completion rates and strong trailing metric results, perfect. If we see high completion rates with problematic trailing metrics, or low completion rates, it’s time for coaching. Read our recent article for more on coaching using the 4DX model.

Agent Utilization or Billed-to-Paid

Agent utilization is the amount of time that agents spend on productive activities, like handling customer interactions, versus on break or other non-productive activities. If we’re in a cost center, we’re going to want agent utilization on our supervisor scorecard. If we’re in a profit center, we’ll probably want to use billed-to-paid instead. Billed-to-paid is the time, interactions, or sales we can bill for versus the equivalent we paid the agent for; we’ll want it in any sales or revenue-generating environment. Typically, agent utilization needs to be between 80 and 84%, depending on meetings, one-on-ones, and so forth. First-party centers tend to be closer to 80%, BPOs at 83% or 84%. Simple environments, where people have a very simple task in front of them, tend to target 84%. Complex organizations, where agents have a variety of things to tackle, are closer to 80%.

Depending on how we bill, there are many ways to measure billed-to-paid. Let’s use a BPO example. First, calculate what our occupancy (time handling customer inquiries) is supposed to be. Then calculate what our in-center shrink is supposed to be and take the inverse—essentially agent utilization. Multiply the two… and there’s our goal. So simply, if we have 85% occupancy and 20% in-center shrink (80% agent utilization), our billed-to-paid goal is 68%. This assumes a handle-minute environment. In a production-minutes environment (per interaction), our billed-to-paid goal and our 80% agent utilization are identical.
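Here’s that arithmetic as a minimal sketch; the figures simply restate the example above and aren’t targets for any particular operation.

```python
# Billed-to-paid goal in a handle-minute environment, as described above.
# Inputs mirror the worked example in the text and are illustrative only.

occupancy = 0.85          # planned share of logged-in time spent handling customer inquiries
in_center_shrink = 0.20   # planned share of paid time lost to meetings, coaching, and so on

agent_utilization = 1 - in_center_shrink           # 0.80
billed_to_paid_goal = occupancy * agent_utilization

print(f"Agent utilization goal: {agent_utilization:.0%}")    # 80%
print(f"Billed-to-paid goal:    {billed_to_paid_goal:.0%}")  # 68%
```

In a production-minutes environment, occupancy drops out and the billed-to-paid goal simply equals the agent utilization goal.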

This is one of the most important things on the supervisor scorecard because runaway shrink is one of the biggest things we find when it comes to optimizing a contact center. We’ve got all the excuses in the world. “We’re special.” “We’re unique.” I’ve been doing this for 25 years and not even one has held true when we start peeling back the onion layers. And just like on Shark Tank, if we don’t know our numbers, we get thrown out. This is one of those numbers we’ve got to know because it’s the precursor to profitability and we’ve got to control our costs.

Training Completion

What trainings need to be completed? Do we use Computer-Based Trainings (CBTs)? Are they up to date, and did the agent earn a passing score?

Channel Adherence

Was our agent in the channel that we need them to be in? In true omnichannel, channel adherence doesn’t matter, because everybody gets what they get when it comes to an interaction. But if we’re multi-channel, where agents are voice for a period of time, then chat for a period of the day, and then e-mail for a period of the day, we must measure channel adherence.

External-Facing Metrics

If it isn’t already covered by the quality metric on the agent scorecard, we need VOC (Voice of the Customer), NPS (Net Promoter Score), C-SAT (Customer Satisfaction), or a similar external metric to see if the experience meets expectations. Metrics like profitability, EBITA, and so on may also be applicable in the right environment, though they’re not common in CS/CX.

Internal-Facing Metrics

Again, any internal-facing metrics that I haven’t already listed belong here. A few examples would be retention rate (the inverse of transfer rate), dead air/silence ratio for voice interactions, average handle time, average wrap time, average speed of response (for digital interactions), and so on.
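To make a couple of those definitions concrete, here’s a minimal sketch that computes retention rate and average handle time from a hypothetical batch of interactions; the field names and figures are made up for illustration.

```python
# Two internal-facing metrics computed from a hypothetical list of interactions.
# Field names and values are illustrative only.

interactions = [
    {"handle_seconds": 310, "transferred": False},
    {"handle_seconds": 545, "transferred": True},
    {"handle_seconds": 420, "transferred": False},
    {"handle_seconds": 260, "transferred": False},
]

transfer_rate = sum(i["transferred"] for i in interactions) / len(interactions)
retention_rate = 1 - transfer_rate                 # the inverse of transfer rate
average_handle_time = sum(i["handle_seconds"] for i in interactions) / len(interactions)

print(f"Retention rate:      {retention_rate:.0%}")          # 75%
print(f"Average handle time: {average_handle_time:.0f} sec") # 384 sec
```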

Agent Inspection Results

Supervisors should be spending two to three hours a day, every day, just observing their agents, taking notes, and scoring the interactions. Back when we weren’t remote and were all in-center, we would pull up next to an agent, put a headset on, and buddy-jack. For most of us that are either fully remote or hybrid, that’s not a thing anymore. But we can still watch their computer. So, don’t just listen to calls—use the screen capture software that exists within your organization. Typically, if we’ve got some sort of quality recording software, we can also watch an agent’s desktop live, listen to whatever interactions they have, see what chats they’ve got going, and so on.

Service Metrics

The last one is some sort of servicing metric. These metrics evaluate the quality and effectiveness of your overall service, tracked over time, to identify trends. The service metrics you’ll want to watch depend on the nature of your business and the specific goals of your customer service operation. Typical metrics include Average Speed of Answer (ASA), service level, and so on.

Conclusion

At the office we’re not saving a marriage; we’re maintaining a relationship. Our relationships with our agents and our supervisors are critical to our success. An effective scoreboard is central to that. Like the scoreboard on a ball field (or that weekly budget text that ended the $34 Dr. Peppers), it shows only the basic information; the coach has everything else and shares it as needed.

In the next article, we’ll look at the scorecard from the perspective of managers and executives.

Need a hand building the right reporting for your business?

Vertical can help! That’s because we don’t just sell products; we work in partnership with every customer to find the best technologies and implement the right processes that help them see real customer experience improvement and ultimately increased revenue.

  • Our expert design team can work with you to determine exactly which reporting features you need… and which ones you don’t.
  • Vertical’s white-glove installation process ensures that your team has the training to get the most out of your reports.
  • And if something goes wrong or you need to make some tweaks? Our award-winning service team is always here to help.

From design, to install, to support, we’re always by your side—that’s the Vertical difference.