Ground Floor, Adamson House, Towers Business Park, Wilmslow Rd, Manchester M20 2YY

Understanding the causes of service level failure

Philip Stubbs suggests a graphical report that helps pinpoint the causes of service level failure

Staff within inbound contact centres have many crucial measures on which to focus, yet none of these indicators has the visceral urgency of how quickly inbound contacts are answered.

In inbound sales and retention centres, there is a direct revenue hit if calls are abandoned, as fewer sales and retentions are made. In service centres, customer satisfaction can decrease significantly if callers struggle to get through for assistance, particularly if they have a burning issue that needs speedy resolution. This can lead to increased customer churn, fewer customer recommendations and, ultimately, reduced revenue.

A failure to answer customer calls swiftly can lead to further problems. Management time spent on firefighting reduces the focus on improving performance and on strategic planning. Employee engagement can also suffer when there are regular sizeable customer queues, since advisors spend long stretches answering call after call. In some operations, regular essential meetings such as one-to-ones, team meetings and coaching sessions get cancelled, which in turn can lead to a deterioration in performance.

“Service Level” is commonly used to measure how quickly calls are answered, defined as the percentage of offered calls that are answered within a target time. Also frequently found in inbound contact centres are Percentage of Calls Answered (or its opposite: Abandoned Rate) and Average Speed of Answer. Often these measures are reviewed regularly at board level, placing significant pressure on the Customer Services Director to achieve the daily target, and to provide meaningful answers when the Service Level target is not met.
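These three measures can be computed per interval from basic call counts. Here is a minimal sketch in Python; the field names and figures are assumptions for illustration, not a standard ACD export format:

```python
from dataclasses import dataclass

# Hypothetical per-interval figures from an ACD report; the field names
# are invented for illustration.
@dataclass
class Interval:
    offered: int             # calls offered in the interval
    answered: int            # calls answered (the rest abandoned)
    answered_in_target: int  # answered within the target time (e.g. 20 s)
    total_wait_seconds: int  # summed queue time of the answered calls

def service_level(iv: Interval) -> float:
    """Percentage of offered calls answered within the target time."""
    return 100.0 * iv.answered_in_target / iv.offered if iv.offered else 100.0

def percent_answered(iv: Interval) -> float:
    """Percentage of Calls Answered (the opposite of Abandoned Rate)."""
    return 100.0 * iv.answered / iv.offered if iv.offered else 100.0

def average_speed_of_answer(iv: Interval) -> float:
    """Average Speed of Answer, in seconds, across answered calls."""
    return iv.total_wait_seconds / iv.answered if iv.answered else 0.0

iv = Interval(offered=200, answered=190, answered_in_target=150,
              total_wait_seconds=3800)
print(service_level(iv))            # 75.0 -> below an 80% target
print(percent_answered(iv))         # 95.0
print(average_speed_of_answer(iv))  # 20.0 seconds
```

Running the same three functions over every interval of the day yields the data behind the charts discussed below.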

In many operations, forecast error is regularly blamed. This is because forecast error is often placed next to service level in the morning reports. If service level fails, and there were more calls than forecast, then the forecast is deemed to be the culprit. Often this is unfair on the forecaster, simply because other, potentially more significant, factors are not reported.

The reality of contact centre dynamics is much more complex than this. Here, I suggest how to identify – unambiguously – the reason or reasons why service level was not achieved on a particular day. This aids understanding of the problems, and such knowledge can swiftly lead to more reliable service level achievement.

Let us start by understanding the service level failure itself. Very often such failure is not constant across the day, but occurs only at certain periods. We must therefore study every interval of the day, producing reports of the important service level measures for each interval. An example with three measures is below.

We see that the target 80% Service Level was achieved or nearly achieved for most of the day. The significant exceptions were 10:30 to 11:30 and also 16:00 to 17:30. The Percentage of Calls Answered shows a similar shape – this is because as Service Level decreases, the number of callers who abandon increases. With Average Speed of Answer, we see high average wait times at the same times, which again is to be expected. Now, our task is to understand what caused the Service Level problems across those intervals, examining the four components that together determine whether the Service Level target is met.

1. Call Volume
The number of inbound contacts affects the service level. For example, there could be a network fault, or the response to a mailing could be different than that anticipated. With this chart, we can ask the question, for each interval: What was our forecast accuracy?

In our example, the number of calls throughout the day was similar to the forecast, with small spikes above and below it, but nothing that would on its own cause a major Service Level problem. Crucially, the call volume was not higher than forecast in the problematic intervals.
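The forecast-accuracy question can be answered per interval with a simple signed-error calculation. A sketch, with invented figures:

```python
# Per-interval forecast accuracy check; the sample volumes are invented
# purely to illustrate the calculation.
forecast = [120, 150, 180, 170, 160]
actual   = [118, 155, 176, 172, 161]

def interval_error_pct(f: int, a: int) -> float:
    """Signed forecast error as a percentage of forecast volume."""
    return 100.0 * (a - f) / f

errors = [interval_error_pct(f, a) for f, a in zip(forecast, actual)]

# Flag intervals where actual volume ran well above forecast, since those
# are the ones that could explain a Service Level failure on their own.
# The 5% threshold is an arbitrary example, not a recommendation.
flagged = [i for i, e in enumerate(errors) if e > 5.0]
print([round(e, 1) for e in errors])
print(flagged)  # here: no interval exceeds the threshold
```

Keeping the error signed, rather than taking absolute values, matters for this diagnosis: only under-forecast volume pushes Service Level down.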

2. Average Handling Time
The Average Handling Time of an inbound call, together with related administrative or outbound work, influences Service Level just as strongly as the number of calls. It should be charted, for each interval, alongside the planned Average Handling Time. With this chart we can ask, for each interval: was the contact duration greater or less than planned?

In the example, between 9:30 and 17:30 the Average Handling Time was close to the planned values, and slightly below target overall. In the first three intervals, AHT was significantly below target. From 18:00 to 20:00, the Average Handling Time was above target, but this did not cause a significant Service Level problem. A takeaway from this graph is that, if this AHT pattern recurs, the planned AHT should be profiled across the intervals so that it takes account of the natural profile of the calls.
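Profiling a flat planned AHT across intervals can be sketched as follows. The historical figures are invented; the idea is to scale a historical intraday shape so that it averages back to the overall planned AHT:

```python
# Sketch: spread a flat planned AHT across intervals using a historical
# intraday shape. Figures are invented for illustration.
historical_aht = [300, 310, 350, 360, 365, 380, 400]  # seconds per interval
overall_target = 350.0                                 # flat planned AHT

mean_hist = sum(historical_aht) / len(historical_aht)

# Scale each interval's historical AHT so the profile keeps the historical
# shape but averages to the overall planned value.
profiled = [overall_target * h / mean_hist for h in historical_aht]

print([round(p) for p in profiled])
print(round(sum(profiled) / len(profiled), 1))  # 350.0 -> average preserved
```

In a real plan the weighting would normally use call volumes per interval rather than a simple mean, but the principle, shape from history, level from the plan, is the same.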

3. Net Schedule Match
In this chart we examine how many staff were scheduled, compared to the requirement. The requirement is based on forecast volume, target AHT and the desired Service Level. The chart is most useful if both figures are expressed as net values, that is, after shrinkage such as breaks, meetings and training. With this chart we are able to answer the question, for each interval: did we schedule enough staff to meet the forecast at the assumed average handling time?

Looking at the chart, we see that scheduled staff exceeded the requirement from 11:00 onwards, except for a shortfall between 16:00 and 17:30. This means that, before the day even started, there was a known shortfall of advisors in those intervals. Since call volume and AHT were on track, calls began to queue in those intervals, reducing Service Level. The reason for the 90-minute shortfall starting at 16:00 should therefore be investigated, if it is not already known. Perhaps a small tweak to the scheduling rules would solve the problem.
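The article does not prescribe how the requirement is derived, but the common approach is the Erlang C queuing model, which converts forecast volume, target AHT and the Service Level goal into an agent count. A sketch under that assumption:

```python
import math

def erlang_c(agents: int, traffic: float) -> float:
    """Probability an arriving call must queue (Erlang C formula)."""
    if agents <= traffic:
        return 1.0  # unstable queue: every call waits
    # Build 1/ErlangB iteratively via the standard recursion.
    inv_b = 1.0  # 1 / ErlangB(0, traffic)
    for n in range(1, agents + 1):
        inv_b = 1.0 + inv_b * n / traffic
    erlang_b = 1.0 / inv_b
    rho = traffic / agents
    return erlang_b / (1.0 - rho * (1.0 - erlang_b))

def required_agents(calls: float, aht: float, interval_seconds: float,
                    target_sl: float, target_time: float) -> int:
    """Smallest agent count predicted to meet the Service Level target."""
    traffic = calls * aht / interval_seconds  # offered load in Erlangs
    n = max(1, math.ceil(traffic))
    while True:
        p_wait = erlang_c(n, traffic)
        # Predicted fraction of calls answered within target_time seconds.
        sl = 1.0 - p_wait * math.exp(-(n - traffic) * target_time / aht)
        if sl >= target_sl:
            return n
        n += 1

# Example: 100 calls forecast in a half-hour interval, planned AHT 360 s,
# target of 80% answered within 20 s. The figures are invented.
print(required_agents(100, 360, 1800, 0.80, 20))
```

The result is a gross requirement per interval; dividing by (1 − shrinkage) converts it to the number of scheduled heads needed, which is what the Net Schedule Match chart compares against.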

4. Net Schedule Adherence
Of course, just because advisors were scheduled, it doesn’t follow that they were all in place and available to handle calls. So this chart compares the net number of staff scheduled with the actual number of available advisors. With it we are able to answer the question, for each interval: did the actual number of available net staff match the number we anticipated before the day began?

In our example, there was only small variation throughout the day, but far fewer advisors than scheduled between 10:30 and 11:30. This caused the Service Level problems encountered at that time, and should be investigated. Perhaps there was an unplanned meeting, or an IT issue forced some advisors to log out.
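The adherence comparison follows the same per-interval pattern as the earlier checks; a sketch with invented figures:

```python
# Net schedule adherence per interval: available staff as a percentage of
# net scheduled staff. Sample figures are invented for illustration.
scheduled = [22, 24, 25, 25, 24]
available = [21, 24, 18, 19, 24]

adherence = [100.0 * a / s for a, s in zip(available, scheduled)]

# Flag intervals where availability fell well short of schedule; the 90%
# threshold is an arbitrary example, not a recommendation.
shortfalls = [i for i, pct in enumerate(adherence) if pct < 90.0]
print([round(p, 1) for p in adherence])
print(shortfalls)  # the intervals worth investigating
```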

* * *

With these seven charts complete, we have identified which intervals had service level problems, and which of the four components explains each problem. In this example, it was Schedule Adherence between 10:30 and 11:30, and Schedule Match between 16:00 and 17:30.

Here is the report represented in its entirety:

When I produced this report for the first time – over 20 years ago – I called it the “Seven Blocker”, and the name stuck. The Seven Blocker has become one of my most useful reports. This is because in an instant you can diagnose and fix service level problems, with very little interpretation required by the user.

It is particularly useful if it is created throughout the day, so that, as each interval completes, the actual values are updated. This gives the contact centre the opportunity to fix issues as they happen, rather than waiting until the next morning to review the day and take action.

Once you know how each of those four components impacted the Service Level, you are better equipped to explain the Service Level failure, and also to go about reducing the likelihood of it happening again!

The benefit is that your operation will be able to achieve a much higher service level, through identifying the causes of service level problems, and taking the focussed actions required to fix the problems that arise.

* * *

At Atlantic Insight, our mission is to help customer-facing organisations achieve sustained improvements in operational effectiveness and customer engagement. We would be delighted to partner with you to improve performance. If you think we can help you, please email us at, or call us on +44 (0) 161 438 2009

Philip Stubbs is a partner of Atlantic Insight, and has over 25 years’ experience of improving performance within operational areas within a wide range of industries.
