Maintaining a perfect Knowledge Base: Content Health Check Process in a nutshell
Once upon a time a good partner of ours pointed us to a core metaphor of the KCS approach: "Maintaining a knowledge base is like tending to a garden: it requires constant weeding."

As with gardening, there are best practices for making your knowledge base blossom. You may want to read about them in detail in the Consortium's guides and then use this article as a cheat sheet.
Tip 1: Define article quality criteria
First, you should brainstorm and define the criteria for high-quality content that makes sense for your product and company. It's easier to understand and adhere to criteria that read like simple statements or yes/no questions.

Here is an example of universal criteria that are valid for most companies and CX organizations:

1. Solution provided
The article should have a real resolution in place (a permanent fix or a temporary workaround) that addresses the declared question or problem.

2. Solution is complete
The solution is valid and covers all aspects of the issue. It must resolve the declared question or problem completely and shouldn't require any actions beyond those mentioned in the article.

3. Instructions are clear
An article should be understandable even to a five-year-old: no complex conditions or if/else branching.

4. Visuals are used
Screenshots, GIFs, or short videos that facilitate understanding of the article are in place.

5. Concise content
The content is as simple to follow and concise as possible.

6. Proper styling
The article corresponds to the structure and style described in the content standard.

7. Taxonomy is correct
All proper labels for version, product name, type, etc. are chosen. It helps customers find and distinguish the right article in their search.

8. Grammar is correct
There should not be any grammar mistakes.

9. Title is accurate
The title must be as unique as possible and summarize the issue accurately. Exact error codes and error messages would be a good starting point.

10. Solution is straightforward
The article must contain a single, self-sufficient solution for the mentioned symptoms. It should not be a troubleshooting guide that refers the reader to other sources to resolve the issue.

11. Symptoms in customer words
Symptoms must be outlined in the way customers see them. It's tempting to describe symptoms so they reflect the root cause, or how the problem looks "under the hood" when a support agent investigates the issue. But customers do not know about the internals; they search for the symptoms they can see on their end.

These 11 criteria cover most of the cases, but you may extend or shorten the list to better fit your current focus.
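Phrasing the criteria as yes/no questions also makes them easy to score. Here is a minimal sketch in Python of such a scorecard; the pass-rate computation is purely illustrative, not part of the KCS methodology:

```python
# Illustrative yes/no scorecard built from the 11 criteria above.
CRITERIA = [
    "Solution provided",
    "Solution is complete",
    "Instructions are clear",
    "Visuals are used",
    "Concise content",
    "Proper styling",
    "Taxonomy is correct",
    "Grammar is correct",
    "Title is accurate",
    "Solution is straightforward",
    "Symptoms in customer words",
]

def score_article(answers: dict) -> float:
    """Return the share of criteria the article passes (0.0-1.0)."""
    passed = sum(1 for c in CRITERIA if answers.get(c, False))
    return passed / len(CRITERIA)

# Example: an article that passes everything except "Visuals are used".
answers = {c: True for c in CRITERIA}
answers["Visuals are used"] = False
print(round(score_article(answers), 2))  # 10 of 11 criteria pass -> 0.91
```

A simple equal-weight share like this is easy to calibrate across reviewers; you could also weight criteria differently if some matter more for your product.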
Tip 2: Select a sufficient sample size
To estimate the state of the knowledge base as a whole, you'll need a representative amount of articles for evaluation.

If your team creates numerous articles, you may use a statistical sample-size calculator to find a proper sample size.

Otherwise, one may simply consider 10-15% a reasonable starting point.
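If you'd like to sanity-check the calculator (or skip it), the underlying math is Cochran's sample-size formula with a finite-population correction. A sketch, assuming a 95% confidence level, a 5% margin of error, and the most conservative proportion of 0.5:

```python
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, proportion: float = 0.5) -> int:
    """Cochran's formula with finite-population correction.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative (largest-sample) assumption.
    """
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(1000))  # 278 articles for a 1000-article knowledge base
print(sample_size(200))   # 132 -- small bases need a much larger share
```

Note that for small knowledge bases the required sample is a far larger share than 10-15%, which is why the percentage rule is only a rough starting point.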

You may ask: 10-15% of what? Good question!

The base is the number of public articles. Only public articles, not drafts, WIPs, etc.
The reason is that your knowledge base should be ranked based on criteria related to your customers and their experiences.

Quality standards for internal articles would typically be much lower, so there is no need to include those in the quality evaluation process.

Since we are talking about the KCS methodology, articles are created, published, and modified as necessary. So an article becomes a candidate for evaluation when either of two events occurs: publication or modification.
Tip 3: Make it a recurring process
Next, define the cadence of the recurring evaluation.

There is no silver bullet: some prefer to run the evaluation on a monthly basis, others do bi-monthly or even weekly runs. It all depends on your product, release cycle, support ticket volume, knowledge base size, and how dynamically your company tracks its KPIs.

Keep in mind that the longer the review period, the more articles pile up for a single run, which means more work for QA managers, so choose wisely.

Sometimes it's easier to spend 10-15 minutes per run a few times a week than to be buried under hundreds of articles for several days if you go with monthly grades.
Tip 4: Involve other people in the process
As we wrote in one of the previous articles, it's crucial to have the right QA manager in the CX organization. However, there are some nuances to their work. As long as the CX organization is relatively small, the QA manager is able to do the whole evaluation on their own.

As your organization – as well as the amount of articles – grows, you may face one of these challenges:

  1. It becomes infeasible for one person to process the whole volume;
  2. Even the best QA person may be biased in some areas, so grades will be skewed accordingly.
What is the solution? Add one more QA person and grow the QA team?

It could be a good idea, but do you remember that a QA person has to be at the Senior Manager level? I'm afraid not every organization has a reserve of such outstanding folks idling around, waiting to be borrowed for the health check process.

The best practice here is to involve team leaders, line managers, and sometimes senior managers in running the weekly evaluations. The QA manager orchestrates the work, still acts as the Supreme Court for disputed grades, and manages this virtual team.

Each member receives a small portion of 5-10 articles per week. The evaluation takes them only a few minutes, while the benefits are tremendous!

  1. Everyone speaks the same language. All managers at all levels are calibrated to the same quality criteria.
  2. Team leads pay more attention to the customer's point of view. Weird as it sounds, some team leads are too focused on the technical side of their job and completely ignore the quality of the knowledge base, thinking that it's someone else's job. Once they are involved in the process, they learn how customers feel and how they view public solutions.
  3. There is more than one viewpoint to an issue. Sometimes "the right approach" raises hot debates between managers, but at the end of the day, it leads to a well-balanced outcome.
  4. Agents see grades from different people with the right level of authority. They perceive the feedback as more justified when it comes from several people.
  5. And last but not least - the virtual team processes and evaluates a lot more articles than one person.
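Splitting the weekly batch across the virtual team is a simple round-robin. A sketch, assuming the article IDs and assessor names come from wherever your knowledge base and team roster live (the names below are made up):

```python
from collections import defaultdict
from itertools import cycle

def distribute(articles: list, assessors: list) -> dict:
    """Round-robin assignment so each assessor gets a near-equal share."""
    assignments = defaultdict(list)
    for article, assessor in zip(articles, cycle(assessors)):
        assignments[assessor].append(article)
    return dict(assignments)

batch = [f"KB-{i}" for i in range(1, 23)]  # 22 articles this week
team = ["QA manager", "Team lead A", "Team lead B", "Line manager"]
per_person = distribute(batch, team)
for name, articles in per_person.items():
    print(name, len(articles))  # 6, 6, 5, 5 -- all within the 5-10 range
```

In practice you might also shuffle the batch first, or exclude an assessor from grading articles their own team wrote, to reduce bias further.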
What to do next?
As they say, practice is the criterion of truth, so take those tips and try them out!

Even running the process manually - pulling the list of recently updated articles and scoring them in a spreadsheet - would already yield enormous value.

If you want to automate the process, you may consider a tool like Swarmica to do the job of selecting articles, evenly distributing them among performance assessors and providing scorecards for AQI and other KPIs.
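Whether you score in a spreadsheet or in a tool, the aggregate KPI boils down to averaging the per-article scorecards. A sketch, assuming AQI is simply the mean per-article score over the evaluated sample (an illustrative definition, not Swarmica's exact formula):

```python
def article_quality_index(scores: list) -> float:
    """Mean of per-article scores (each 0.0-1.0) over the evaluated sample."""
    return sum(scores) / len(scores)

# Each value is one article's share of quality criteria passed.
sample_scores = [1.0, 0.91, 0.82, 1.0, 0.73]
print(f"AQI: {article_quality_index(sample_scores):.0%}")  # AQI: 89%
```

Tracking this number run over run shows whether the garden is getting weedier or healthier, which is the whole point of the recurring process.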

Also, there are experts out there, like our friends at Pro Accessio, who can help you design and plan your KCS implementation and guide you through the process.

And may a well-tended knowledge base help your customers!

UPD (December, 8th 2022): We've put together a Google Form that you can use as a sample tool for evaluating your articles.

Copy the form and its linked spreadsheet to your account and feel free to play around!
Max Sudyin
Co-Founder @ Swarmica

Do you have other thoughts on how to implement the QA process? Have questions about support workflow? Disagree with any statement above? Drop us a note; we love making anything about customer service better!