Knowledge Management in Support: Evolution from Ticket-Centric to Knowledge-Centric Support

I've been running a Support organization in a software company for a few years now. I joined it 20 years ago as a junior support agent and worked my way up to an executive position. Together with the company, I've been through a series of mergers, acquisitions, and rebrandings, witnessing and, sometimes, driving significant changes across the support organization.

During all these years, we've worked a lot with support tools that had to evolve to enable the evolution of our processes and the transformation we were going through. And I could probably tell a thing or two that might help your company cut a few corners.

Anyway, here is our story.
1
First Ticketing System
In the very beginning, when the whole support org barely reached ten agents, we worked in the self-hosted, open-source helpdesk system Request Tracker. Back in 2003, we chose it as a cheap and better-organized alternative to supporting our customers via a shared email box. We got a pretty decent ticketing system that helped us standardize the growing volume of support requests and work as a team.

The major benefit of having an open-source system was its flexibility. Whenever we needed to add a new product, change a workflow, or pull up some integration, we could easily do it with a few dozen (or, honestly, a few hundred) lines of Perl and JavaScript. It was never a problem because we had a dedicated internal development team within our support organization.

Shortly after the launch of the helpdesk system, we launched a Knowledge Base, also built on open-source code.

The initial approach towards knowledge management was very primitive. We thought it would be enough to create an FAQ and — maybe — describe our customers' most popular issues. Although the FAQ part didn't cause any problems, the "most popular issues" part of the plan got stuck at the step of identifying the topics we'd want to cover in our articles. We were quite a geeky support team at the time, so the best approach we could invent was to designate the most experienced and nerdy engineer to write articles that our customers would surely find useful.

"What could go wrong, right?"
Me, 18 years ago
Head of Support at a software company
2
Why Knowledge Base Doesn't Work
Well, maybe it was a good first step, but only as a first step. It was not sufficient at all. The problems started emerging from the very beginning, when "the most experienced engineer" wrote a few articles that were barely understandable even to our junior members, let alone less technical customers.

Indeed, a few geeks liked those articles, but the vast majority either didn't understand how to apply the resolution provided or couldn't even recognize that the article was related to their problem. The reason was that support folks (including "the most experienced agent") interpret issues differently than customers do. They had knowledge and skills that our customers did not have, so they tended to describe the symptoms of a problem from their own point of view, which was totally inaccessible to most customers. That could work for an internal Knowledge Base, but it was far too inefficient for a client-facing help center.

Okay, once we figured that out, we decided to hire an external copywriter to translate that geeky language into our customers' language. It helped just a little bit: the copywriter didn't know how our customers perceived the symptoms of a problem and simply applied friendlier language to the whole article. Articles became easier to read, but search relevance and efficiency remained very low.

Plus, one more problem appeared — although "the most experienced agent" worked hard, the problem coverage of our articles didn't seem to grow fast enough.

Maybe the speed was just not there yet? Perhaps we should have written more articles?
That's what we thought too. So we added one more "experienced person" to the process. We took our two best agents out of ticket processing to work full-time on nothing but knowledge base articles. The rationale behind the idea was the following: "Yes, we'd decrease our ticket-processing capacity, but eventually we'd create a decent number of good articles, our customers would use them to resolve their issues and stop creating more tickets. And we'd finally go play PlayStation."

However, as time passed, the number of published articles skyrocketed, yet customers kept creating more tickets. Yikes.

Why doesn't KB work? We had to figure that out.


3
How to Measure Ticket Deflection
Before we can tell why the Knowledge Base didn't work well, we have to define what "well" means. And there is an excellent industry-standard metric widely used to measure the efficiency of a customer self-service toolkit: ticket deflection.

Deflection = Tickets Avoided / (Tickets Avoided + Tickets Created)

Let's say you have 100 customers who have an issue, and they come looking for support. Fourteen of them get to the Knowledge Base, find the solution, and don't file a ticket. The remaining customers either don't find the answer or don't even try to search — they just report 86 tickets to your team. Using the formula, we can calculate the deflection in this very case: 14%.
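The formula above is simple enough to sketch in a few lines of code. Here is a minimal illustration (the function name is mine, not from any particular tool):

```python
def deflection_rate(tickets_avoided: int, tickets_created: int) -> float:
    """Share of total support demand resolved via self-service.

    Deflection = Tickets Avoided / (Tickets Avoided + Tickets Created)
    """
    total_demand = tickets_avoided + tickets_created
    if total_demand == 0:
        return 0.0  # no demand at all: nothing to deflect
    return tickets_avoided / total_demand

# The example from the text: 14 customers self-served, 86 filed tickets.
print(f"{deflection_rate(14, 86):.0%}")  # prints "14%"
```

Note that the denominator is total *demand* (avoided plus created), not just created tickets — a common mistake that inflates the metric.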

But wait, there is more. Deflection can be direct and indirect. Direct deflection is the deflection that is possible to measure. Imagine 100 customers come to the support web form, and 14 of them transition to the Knowledge Base by following links automatically suggested by the form, and they end up not creating a ticket. It is possible to track these transitions from web-form pages to KB articles and measure the deflection.

Indirect deflection is the deflection that is not possible to measure directly. Take the same 100 customers who want to receive support, but now 14 of them either go directly to an article or find it through Google. They never file a ticket, and we can't possibly know whether they were going to. It is simply not possible to measure the thing that hasn't happened. But some clever math allowed us to estimate it by comparing the real volume of tickets with the projected volume. Rocket science, no less!
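The article doesn't spell out what the "clever math" was, but the idea of comparing actual volume with a projection can be sketched very simply. The version below assumes a constant per-customer ticket rate taken from a pre-Knowledge-Base baseline period — a deliberately naive model, for illustration only:

```python
def estimate_indirect_deflection(
    baseline_tickets: int,
    baseline_customers: int,
    current_tickets: int,
    current_customers: int,
) -> float:
    """Estimate the share of demand deflected indirectly.

    Projects the ticket volume you'd expect if the baseline
    tickets-per-customer rate still held, then treats the gap
    between projection and reality as 'tickets avoided'.
    """
    tickets_per_customer = baseline_tickets / baseline_customers
    projected = tickets_per_customer * current_customers
    avoided = max(projected - current_tickets, 0.0)
    return avoided / projected

# Hypothetical numbers: a baseline of 10,000 customers filing 5,000
# tickets (0.5 tickets/customer). Now 20,000 customers file only
# 8,600 tickets instead of the projected 10,000.
print(f"{estimate_indirect_deflection(5000, 10000, 8600, 20000):.0%}")
```

A real estimate would also control for seasonality, product complexity, and release cycles; a flat per-customer rate is the bare minimum that makes the "projected vs. actual" comparison concrete.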

So, what was our deflection, and why did we think it was not good enough?

You got me here — at that time, we knew nothing about deflection. The only thing we did to evaluate the efficiency of the Knowledge Base was observe the trend of the ticket volume: we were happy when it declined and sad when it grew. That approach didn't take into account the growing number of customers, the increasing complexity of the product, and many other factors.

We realized that, but we thought the plethora of articles we had published would help us reduce the number of tickets. And still, we were sad more often than happy — the volume of tickets kept growing.

4
Why Knowledge Base Doesn't Work: Digging Deeper
There had to be an explanation, so after digging deeper into the analysis, we found a few contributing factors:

  1. Irrelevant content. Our dedicated knowledge base writers created articles they believed to be helpful. In reality, these KB articles were either too basic or, on the contrary, too complex. No one wanted to read "How to retrieve my password? Click the Retrieve my password button," or a multi-step technical guide for troubleshooting sophisticated edge cases that had occurred only a few times. Neither helped solve the problems customers actually had.
  2. Outdated information. In addition to the previous point, articles became outdated quite soon after the release of a new version of the product or after any related technology update. We almost never reviewed the content of the article after it got published.
  3. Writers losing competence. After approximately six months as dedicated writers, "the most experienced" folks suddenly started to realize they were not "the most experienced" anymore. The reason was simple — they no longer participated in resolving customers' issues and, therefore, started losing their product expertise.
Together, these factors made our efforts futile. Worse, we had gotten into a situation where our most efficient and experienced engineers were out of ticket processing (where they generated the biggest value for the company), other engineers had to handle more tickets, and the company had to recruit, hire, and train new employees to cover the gap.

But wait, there had to be the right way to do the Knowledge Base business, right?

5
Making Knowledge Base Work: The Proper Way
It was early 2012 when we found a methodology named Knowledge-Centered Support (KCS®). It turned out that major IT companies had already gone through the same challenges and developed a framework that helps keep a knowledge base relevant and effective.

KCS® introduced a significant shift in our approach and helped resolve the problems above. The crucial change was that all agents became knowledge contributors, including those "most experienced" people, who also kept handling tickets. Check out the consortium docs for detailed info on how the magic works.

It was not easy for us to start new processes and workflows.

How do we train all the agents to work the new way? How would tools support these new workflows?

The KCS® consortium created various guides and docs on how to implement the new workflows, and it's even possible to purchase training from one of their affiliates and certify agents for compliance with KCS® guidelines. So the training part was covered, although studying those numerous guides was quite a task.

And the fundamental question we had before launching the Knowledge-Centered approach was: "ok, we are ready to go, but what tool should we choose?"

All the tools presented and approved by the consortium integrated only with "Big Blue CRM" and were very expensive. Honestly, that's pretty much still the case nowadays, almost a decade later.


6
Our Decision: In-house Toolkit
And we decided:
Image courtesy: Futurama
Yes — we decided to go with our own KCS® tool with KPIs and a flexible feature set :-)

Remember that we had a custom-written helpdesk/knowledge base. Even though the code itself was old and far from optimal, the biggest advantage was that we could add as many new features as we wanted.

We started with the KCS® guides and then passed KCS® practice exams to internalize the framework. I learned and got certified myself; then senior managers, line managers, and team leaders did the same. After that, we built our internal course for support agents and trained them as well.

The development of the KCS® module took another six months, and we launched the new framework in 2013. Yes, overall it took approximately a year, which I still think was a good result considering that we had zero budget and did all the development with a team of two programmers.

Disclaimer: I might not go the same way now, considering the different tools available on the market. Moreover, I now prefer, and suggest to others, finding a solution among existing tools to avoid any custom development.
Why? We are almost there, read on!

We got a tool that allowed us to involve all agents in content authoring. We started to link articles to tickets and publish relevant knowledge to our customer-facing portal. It did resolve most of the problems we had initially, but something was still off.

Support agents didn't quite "buy" the new processes.
7
Why didn't KCS Launch as We Expected?
First, we got a classical case:
Agents tended to think that creating and publishing articles was somewhat unnecessary and "not their" job because they were too busy handling tickets. They saw themselves as top technical experts doing a hands-on job, while the knowledge-maintainer role didn't attract them much.

The second issue was related to the first one — agents, especially senior experts, were afraid that if they shared their knowledge with others, they wouldn't be as unique and appreciated as they used to be.

And the third: all agents, from juniors to seniors, thought that if they created a lot of articles, customers would stop coming to support, and, as a result, agents would lose their jobs.
8
Debunking Fears of KCS
That made me think that our performance review and appraisal system had to change. We needed to transform the classical model, where agents were rewarded for their contribution to ticket handling, quality, and customer satisfaction, into something that would also consider their contribution to the Collective Knowledge.

We've found these KPIs that would help us:

  1. Participation rate (PAR) — the percentage of tickets where either a new article was created while resolving the ticket or an existing article was linked as the resolution that helped. The metric shows how consistently agents follow the process and remember to create and link articles.
  2. Article Quality Index (AQI) — an evaluation metric that indicates the quality of published articles.
  3. The number of articles created, approved, and published. We used these as auxiliary metrics to validate the sample size for the KPIs above.
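To make PAR concrete, here is a minimal sketch of how it could be computed from ticket records. The `Ticket` schema and field names are hypothetical, not from our actual system:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    agent: str
    article_linked: bool  # a new article was created OR an existing one linked

def participation_rate(tickets: list[Ticket], agent: str) -> float:
    """PAR: share of an agent's tickets resolved with an article
    created or linked during resolution."""
    own = [t for t in tickets if t.agent == agent]
    if not own:
        return 0.0  # no tickets handled: no participation to measure
    return sum(t.article_linked for t in own) / len(own)

# Hypothetical sample: alice linked articles on 2 of her 3 tickets.
tickets = [
    Ticket("alice", True),
    Ticket("alice", True),
    Ticket("alice", False),
    Ticket("bob", True),
]
print(f"{participation_rate(tickets, 'alice'):.0%}")  # prints "67%"
```

AQI, by contrast, usually requires human review against a content standard, so it doesn't reduce to a one-liner like this.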
But to implement the new appraisal system, we had to collect and analyze the data, and that was not easy at all. We started by exporting raw data from the database as CSV files, importing them into Excel, and building pivot tables and associated charts. Then we purchased a third-party BI solution that helped us aggregate data across multiple systems and skip the export/import part, bringing the data straight into Excel sheets.

With the new performance review system and the BI solution in place, we resolved the first two problems. Now agents were appreciated for their knowledge management efforts and started to see them as part of their job.

But debunking the third "scary tale" took time. We had to keep talking with people and showing the first results of our work. The customer base was growing quite intensively, but the ticket volume not only stopped growing but even decreased a bit. The team realized that no one got fired, and at the same time, the pressure and stress caused by constant understaffing had gone away.

Agents gradually noticed that the routine, boring typical cases had become rarer, and they got to spend more time on interesting and complex tasks. They now experimented more and helped improve the product by collaborating with R&D folks.

That was a game-changer moment, and finally, we celebrated a victory!
9
Wins from Implementing KCS
Here are some of the benefits we've got:

  1. Relevant content. The knowledge base was always up to date, with real-life coverage of the issues that mattered to customers at every moment in time. As a result, customers found solutions in articles and created fewer tickets. That's what we wanted to achieve from the very beginning!
  2. Low attrition. Agents were less likely to leave because now they had more interesting tasks to do.
  3. Quick onboarding. And when attrition did happen, we could replace agents quickly, as our rich and up-to-date Knowledge base allowed us to hire juniors and get them up to speed in weeks, not months. We got an effect of Swarming Intelligence.
  4. Faster response time. That also drove better First Response Time results and helped resolve tickets faster. In conjunction with volume deflection, it decreased demand for new support headcount in the long term.
  5. Better P&L. Finally, as support organization's costs are mostly about the number of employees, we got our costs reduced, as we needed fewer people to handle bigger demand from our customers.
Was that a happy ending? Then why is this all written in the past tense? Well, there was a second big part of the story, but that will be another article.

Andrew Bolkonsky
CX, Knowledge Manager & KCS enthusiast @ Swarmica

KCS® is a service mark of the Consortium for Service Innovation™