How to measure Help Center efficiency?

The scenario is this: Your company offers customer support through traditional channels such as email, chats, messengers, social media, and maybe even by phone.

Inevitably, you'll come up with the idea of setting up a knowledge base or help center.
So there is a manager (or two) who creates and edits articles, analyzes knowledge gaps, etc.

What are the odds that you'll need to measure outcomes?

I'd say 100%.

You'll definitely want to know what content works best, whether customers value the knowledge base, and, ultimately, what your business gets out of it.

This article will share 6 ways to measure the efficiency of your knowledge base.

Pick one or a few that suit you best!
Customer satisfaction
The first and most common idea would be to gather customers' feedback on knowledge base articles. I bet you've seen "Was this article helpful?" pop-ups a couple of times.
And that would be a great idea! You should definitely enable feedback on your articles if your help center provides such functionality.

However, you should keep two things in mind:

  1. Usually, the survey rate is quite low.

    It is natural for customers to apply the solution and then get back to whatever they had been doing before the issue occurred, so they rarely return to leave feedback. If the solution didn't work, however, they most probably will come back and share their frustration in the form of a dislike. And that brings us to the second point.

  2. Okay, they could give negative feedback on some articles.

    But why? Was it the wrong solution? Did the customer lack the skills to apply it? Was the solution meant for a different environment or product? Or was it general frustration because the customer had to miss their favorite TV show while fixing the unexpected problem?

    Either way, it is a signal to review the feedback carefully and revise the article.
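Because survey rates are low, a raw like/dislike ratio can be misleading for articles with only a handful of votes. One common technique for ranking by feedback (not specific to any help-desk product) is the lower bound of the Wilson score interval. A minimal sketch in Python, with made-up article names and vote counts:

```python
import math

def wilson_lower_bound(helpful: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the helpful-vote ratio.

    With only a few votes the bound stays low, so sparsely rated
    articles don't outrank well-proven ones.
    """
    if total == 0:
        return 0.0
    p = helpful / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Made-up feedback counts: (article, helpful votes, total votes)
articles = [("Reset password", 9, 10), ("Export data", 2, 2)]
for name, helpful, total in articles:
    print(f"{name}: {wilson_lower_bound(helpful, total):.2f}")
```

Here "Reset password" (9 of 10 helpful) scores above "Export data" (2 of 2), because two votes are too few to be confident the article really works.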

Content Health Check
If you're familiar with the Knowledge-Centered Service (KCS) methodology, you know how articles are assessed to drive the Article Quality Index metric. If you haven't heard of it, check out our article on the topic.

The process ensures the most prominent articles get scored in the following areas:

  • The necessity of the article (whether it was necessary at all, is up to date, etc.)

  • Completeness (whether the solution is complete, covers all aspects, and so on)

  • Ease of use (whether the article is clear, fits the target audience, has all the necessary visual aids, and is easy to read and apply overall)

  • Formatting and styling (whether it is styled and formatted properly, all environments and versions of the product are specified, and any other tags and labels are in place)

  • Accuracy (whether it is relevant and includes the right keywords, the problem description, the cause, and the resolution)

The resulting summary score – the Article Quality Index (AQI) – may serve as a meaningful measure of knowledge base content quality.

Besides, it highlights whether certain content or content creators have areas for improvement, which is a worthwhile outcome too.
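KCS does not mandate a single AQI formula, so the exact calculation varies by team. As one hedged sketch, suppose each sampled article is scored 0–2 on each of the areas above; the AQI can then be expressed as the percentage of the maximum possible score across the sample:

```python
# Hypothetical rubric: each sampled article is scored 0-2 per criterion
# (0 = missing, 1 = partial, 2 = meets the content standard).
CRITERIA = ["necessity", "completeness", "ease_of_use", "formatting", "accuracy"]

def article_quality_index(evaluations: list) -> float:
    """AQI as a percentage of the maximum possible score for the sample."""
    max_score = len(evaluations) * len(CRITERIA) * 2
    total = sum(scores[c] for scores in evaluations for c in CRITERIA)
    return 100 * total / max_score

sample = [
    {"necessity": 2, "completeness": 2, "ease_of_use": 1, "formatting": 2, "accuracy": 2},
    {"necessity": 2, "completeness": 1, "ease_of_use": 2, "formatting": 1, "accuracy": 2},
]
print(f"AQI: {article_quality_index(sample):.0f}%")  # 17 of 20 points -> 85%
```

Tracking the same formula over successive samples shows whether content quality is trending up or down.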
Swarmica provides tools for the content health check process
We have everything you need for the process: selecting sample articles for evaluation, providing a crisp content-standard checklist for evaluators, and aggregating the data into multidimensional reports. So you can have it too!
Article's ability to resolve
Again, when you run KCS practices, your support tickets have verified links to the knowledge base articles used to resolve the requests.

So you immediately get two indicators of your help center efficiency:

  1. Article reuse rate. It shows how many times an article gets linked to support tickets. This means that agents did not have to spend their time inventing a solution; instead, they used documented knowledge.

  2. FCR (First Contact Resolution) rates of those tickets can be attributed to the corresponding articles. A high FCR rate for an article indicates that the article works well: once the customer has it, the problem is resolved.
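Both indicators fall out of ticket records once articles are linked to tickets. A minimal sketch, assuming a hypothetical list of (ticket id, linked article, FCR flag) tuples exported from your help desk:

```python
from collections import defaultdict

# Hypothetical ticket records: (ticket id, linked article, resolved on first contact)
tickets = [
    (101, "kb-1", True),
    (102, "kb-1", True),
    (103, "kb-1", False),
    (104, "kb-2", True),
]

reuse = defaultdict(int)     # how many times each article was linked
fcr_hits = defaultdict(int)  # how many of those tickets were FCR

for _ticket_id, article, fcr in tickets:
    reuse[article] += 1
    fcr_hits[article] += fcr  # bool counts as 0/1

for article in reuse:
    rate = 100 * fcr_hits[article] / reuse[article]
    print(f"{article}: reused {reuse[article]}x, FCR {rate:.0f}%")
```

In this toy data, kb-1 is reused often but resolves only two of three tickets on first contact, which is exactly the kind of article worth a closer look.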
Missing content
Another useful practice is to check what customers are searching for in your knowledge base. It might be the basic list of search strings that most analytics tools provide. With advanced analytics tools, you may even see what customers tried to search for right before they submitted their tickets.

The analysis will reveal two useful things:

  1. The content missing from your knowledge base. Customers are looking for articles that do not currently exist. A hint at what to create next.

  2. Poorly worded articles. They may exist in the knowledge base, but customers cannot find and use them because the symptoms described in these articles don't match the search keywords customers use. Also a hint at articles requiring editors' attention.

When done regularly, the results of this analysis can serve as an indicator of a help center's efficiency.
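The two findings above map naturally onto search analytics: zero-result queries point to missing content, while queries that return results nobody clicks point to articles that need rewording. A small illustration with made-up analytics rows:

```python
# Hypothetical search-analytics rows: (query, results returned, any result clicked)
searches = [
    ("cancel subscription", 0, False),  # nothing to show -> content is missing
    ("invoice pdf", 4, False),          # results shown but ignored -> poor match
    ("reset password", 3, True),        # healthy: found and used
]

missing, poorly_matched = [], []
for query, results, clicked in searches:
    if results == 0:
        missing.append(query)           # candidate for a new article
    elif not clicked:
        poorly_matched.append(query)    # existing articles may need rewording

print("write next:", missing)
print("review wording:", poorly_matched)
```

The field names here are illustrative; most analytics tools expose equivalents (query string, result count, click-through).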
Article views
Another popular approach is to check what articles get the most views.

The logic behind it is: if an article gets many customer views, then it must be a good one. That's correct to some degree.

Remember that an article can be viewed not only by humans but also by search engine robots. And even when it's a human, it's not necessarily your target audience or a customer.

Imagine an automotive company has an article titled "How to change motor oil." Now, how many people of those who managed to find the article with the search phrase above are actually customers of that car brand?
Ticket deflection
Combining all of the above methods and seeing the merits of the "knowledge base efficiency" concept, you will find the gem of deflection. That is, how many tickets were never filed because the knowledge base helped customers resolve their problems on their own.

Some help center software has a built-in ability to measure ticket deflection or integrates with third-party apps that do.

So, how do you measure deflection properly?

There are two different types of deflection:

  1. Direct deflection. Whenever customers try to open a ticket, they are offered a solution and abandon the process shortly after that. It's relatively easy to measure: the percentage of submission attempts abandoned after a solution was suggested, out of all attempts.

  2. Indirect deflection. In many other cases, customers never reach your ticket submission form. If they try to Google the solution first, they might land on an article, apply the solution, and move on with their business. It's impossible to know whether a customer intended to file a ticket before searching for an article. That's the hardest part of the story: calculating something that never happened. Here, only math models and approximations can be used to estimate the percentage of such cases.
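Direct deflection, at least, reduces to simple arithmetic over submission-form events. A sketch with hypothetical event data, where False marks an attempt abandoned after a suggested article:

```python
# Hypothetical submission-form events: True = ticket submitted,
# False = the customer saw a suggested article and abandoned the form.
attempts = [True, False, True, False, False, True, True, True, False, True]

abandoned = attempts.count(False)
direct_deflection = 100 * abandoned / len(attempts)
print(f"Direct deflection: {direct_deflection:.0f}%")  # 4 of 10 attempts abandoned
```

In practice the events would come from your help center's form analytics rather than a hard-coded list, but the ratio is the same.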

Swarmica runs a volume deflection report for you
So, what metrics do we recommend using?

There is no silver bullet that will show whether your knowledge base is helpful or not. We recommend trying all the approaches above and focusing on the ones that work best in your situation.
Max Sudyin
Co-Founder @ Swarmica

Do you have other thoughts on how to measure help center efficiency? Have questions about support workflows? Disagree with any statement above? Drop us a note; we love making anything about customer service better!