You would probably have bugs associated with your tickets and could simply hand that list to development, ranked by occurrence.
However, some product managers may come to support and ask for statistics on the most problematic areas of the product, areas that could be improved beyond just fixing bugs.
The traditional approach used in the company in the pre-KCS® age was to get a list of product areas from development, pre-configure it in the helpdesk, and then use that taxonomy for tagging each ticket. Then we would calculate the number of tickets in each category, and, voilà, we had our problematic areas.
Would you guess which category had the most tickets? "Other."
In the next iteration, we focused on evaluating the "Other" category to break it down into more specific ones. That approach seemed logical, but as a result, we got another "Other" category at the top. And again after the next iteration. And again.
Developers blamed support folks for not being able to recognize the proper category, so they decided to make customers choose the category on the web form during ticket creation. The result was even worse: in addition to "Other," the percentage of tickets where customers chose an incorrect category increased as well.
The root of the problem was, again, in how each group (developers, support, and customers) perceived it. Developers tended to propose areas connected to the code and wanted to see the cases affected by a specific part of the code. Support engineers could not tell which code was responsible for a problem and tried to classify it by the module involved in the process, as per their best judgment. Customers didn't pay much attention to that part at all and simply chose the category closest to the error message they saw.
Once we implemented KCS®, we also found a permanent solution for this problem.
It worked in the following way: each ticket is supposed to be linked to the corresponding article that describes the symptoms, cause, and resolution. Then it's possible to track how many times a particular article was re-used (i.e., how many tickets it was linked to) at any given moment.
When you sort this list by the most re-used articles, you get the most questionable or buggy areas of the product, which you can then analyze from a product management perspective. Articles describe use cases, not modules or snippets of code, so each party can identify areas for improvement.
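As a rough illustration, that reporting step boils down to a reuse count over ticket-to-article links. The record shapes and IDs below are invented for the example, not taken from any particular helpdesk export:

```python
from collections import Counter

# Hypothetical export of ticket-to-article links: each ticket record
# carries the ID of the KCS article it was linked to.
ticket_links = [
    {"ticket": "T-101", "article": "KB-7"},
    {"ticket": "T-102", "article": "KB-7"},
    {"ticket": "T-103", "article": "KB-3"},
    {"ticket": "T-104", "article": "KB-7"},
    {"ticket": "T-105", "article": "KB-3"},
    {"ticket": "T-106", "article": "KB-9"},
]

def top_reused_articles(links, n=10):
    """Rank articles by how many tickets were linked to them."""
    counts = Counter(link["article"] for link in links)
    return counts.most_common(n)

print(top_reused_articles(ticket_links))
# KB-7 (3 linked tickets) ranks first, then KB-3 (2), then KB-9 (1)
```

The point of the sketch is that no pre-configured taxonomy is involved: the ranking emerges entirely from which articles tickets were linked to.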
We used to split these articles into two buckets:
- Bugs to be fixed
- Feature requests to be implemented
For bug fixing, we also associated the articles that had a bug as their root cause with the corresponding Jira ID, so it became transparent to development what they had to fix.
It's similar to what they had when tickets were linked directly to bugs, with a subtle yet major difference: whenever they wanted to understand the nature of an issue, they didn't have to read through the tons of unnecessary information that normally lives in tickets: greetings, small talk, requests for access, escalations, etc. They would get the gist from the article's summary, and sometimes they'd even port the "workaround" section almost as-is into the code. That drastically improved the speed of bug fixing.
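A minimal sketch of that bucketing, assuming each article carries a root-cause annotation and an optional Jira ID (the field names and values are invented for illustration, not a real helpdesk schema):

```python
# Hypothetical KCS article records; "root_cause" and "jira_id" are
# assumed annotations added by support, not built-in helpdesk fields.
articles = [
    {"id": "KB-7", "summary": "App crashes on export to PDF",
     "root_cause": "bug", "jira_id": "PROD-412"},
    {"id": "KB-3", "summary": "How to bulk-import contacts",
     "root_cause": "feature-gap", "jira_id": None},
    {"id": "KB-9", "summary": "Sync fails behind a proxy",
     "root_cause": "bug", "jira_id": "PROD-587"},
]

# Bucket 1: bugs to be fixed, keyed by Jira ID so a developer can jump
# from the issue tracker straight to the article's summary and workaround.
bugs_by_jira = {a["jira_id"]: a["summary"]
                for a in articles if a["root_cause"] == "bug"}

# Bucket 2: feature requests to be implemented.
feature_requests = [a["summary"]
                    for a in articles if a["root_cause"] == "feature-gap"]

print(bugs_by_jira)
print(feature_requests)
```

The Jira-keyed dictionary is the whole trick: developers look up an issue and land on a condensed symptom/cause/resolution write-up instead of a raw ticket thread.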
For feature requests, it opened an even bigger opportunity: a clear vision of what customers were struggling with. It allowed us to drop the label- and category-based taxonomy altogether.
All the necessary information came from the re-used articles. Whether it was a "how to" question, a third-party component issue, or a product problem, the indicator was the same: if customers were searching for a solution in that area, it could point to bad UX, flawed product architecture, or anything else worth analyzing in detail.
We implemented all this in our old tool, but unfortunately, that part was completely missing in Zendesk.