Years ago, I was a newly promoted contact center operations manager. Around that time, our contact center began using workforce & performance management systems to track call volumes, identify trends, and build forecasts for scheduling. While this was a culture shift, the new visibility into agent performance metrics yielded some interesting results.
My previous supervisor (with over a decade of experience) suddenly changed how she managed her team. Before these reports existed, she would observe calls, run side-by-side coaching sessions, and gather customer feedback to measure performance. Once we started distributing performance reports, she set aside the “other tools in her toolbox” and focused only on a few standard operational metrics. When I sat down to discuss it with her, she told me that for the first time, managing her agents’ performance was not subjective but fact-based and unbiased. With “hard data” in hand, she could determine who on her team was doing a good job and who was not.
I admit to being a little biased – having come from her team before my promotion, I thought she had managed me fairly well. We talked through the reports and discussed a couple of examples. One agent on her team had the best metrics in the entire call center: low average handle times, extremely low post-call processing time, and more calls taken than anyone else in the contact center (without hanging up on customers).
However, as we dug deeper and started using some additional tools, we discovered repeat calls from customers who had spoken to this agent. To complete calls quickly, she rushed customers off the phone without ensuring that all of their questions were answered. There were also occasional quality issues: work items were not completed as thoroughly as we would have liked, even though she finished them while the customer was still on the line (typing & talking at the same time).
What happened? In our analysis, we found that measuring agents on averages alone was not providing the full picture and was even driving bad behavior. Our agents heard us saying that their “average” call should be a certain length, and they began managing themselves to that average. This included watching call timers and rushing customers off of the phone when they hit that time on individual calls. This led to a decreased focus on the service we provided our customers, and as a result, created a poor customer experience.
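The problem with averages is easy to see with numbers. Here is an illustrative sketch (the handle times and agent names are made up, not from our actual reports): two agents can post an identical average handle time while serving customers very differently, and only a distribution view – like a high percentile – exposes the gap.

```python
from statistics import mean, quantiles

# Hypothetical handle times in minutes. Agent A works every call in a
# consistent window; Agent B rushes some customers off the phone and lets
# other calls run long. Both hit the same "target" average.
agent_a = [5, 6, 5, 6, 5, 6, 5, 6]
agent_b = [2, 9, 2, 10, 2, 9, 2, 8]

avg_a = mean(agent_a)  # 5.5
avg_b = mean(agent_b)  # 5.5 -- identical; the report can't tell them apart

# A 90th-percentile view shows Agent B's long-call tail (and the rushed
# short calls imply the repeat-call problem on the other end).
p90_a = quantiles(agent_a, n=10)[-1]
p90_b = quantiles(agent_b, n=10)[-1]
```

The same idea is why supervisors need more than one number: a single summary statistic compresses away exactly the behavior you are trying to manage.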
The lesson learned was that performance management is valuable for measuring and continuously improving service, but only when it looks beyond one interaction at a time. Managing to averages alone rarely provides a full view of performance. Instead, these reports provide indicators that should be used in combination with other “tools” – repeat-call rates, quality scoring, and customer satisfaction scores – to help supervisors identify areas to coach on.