In-house or SaaS: the Story of Our Support Tool Search
If you've ever thought of switching from an in-house support tool to a SaaS solution, or vice versa, I might save you a couple of bumps down the road by telling our fascinating story. Over the past few years, we've moved from no solution to an on-premise tool, then to custom in-house software, then to SaaS, and then to a combination of SaaS and another in-house solution.

It's a long read, so grab some tea, buckle up and read on!
1. Why switch from in-house to SaaS?
Life went on, and after a few mergers, acquisitions, and splits, the company ended up with one flagship product out of the original seven. We no longer needed the flexibility to accommodate seven different workflows within one tool.

At the same time, we had a troublesome legacy of our in-house solution:

  1. Expensive maintenance. Our custom helpdesk + knowledge base tool was written in Perl, so development resources were quite expensive. And we needed them frequently.
  2. Terrible UI. It had many checkboxes, triggers, forms, and workflow processes that we thought we needed to support the plethora of products. The tool looked like Frankenstein's monster.
  3. Costly reporting. Remember the BI tool connected at the backend? It required special skills to pull up a report. Whenever I needed some insights, I had to ask a data scientist to prepare the report. That took considerable time, and if I wanted to drill in or change something, it went through another round of data science magic, taking more time again and again.
We didn't want to maintain a team of six people just to keep the support systems up and provide data for analysis. We wanted a tool that would accommodate our workflows, provide the necessary reporting, and be easy to use for everyone involved.

2. Criteria and Candidates for a SaaS Helpdesk
I started researching the helpdesk systems available on the market at that moment. My criteria were:

  1. SaaS tool. I wanted to remove all dependencies on internal developers and system administrators.
  2. Flexible reporting. I should be able to build my own reports without involving Data Scientists.
  3. KCS® support. Ideally, I wanted to keep all the benefits we got from our KCS® implementation and migrate those processes "as is."
  4. Integrations. The tool should have out-of-the-box integrations with other tools we used (CRM, Jira for bug tracking, etc.)
  5. Reasonable pricing. We didn't want to grow the costs line in our P&L, so the pricing had to speak to that requirement.
  6. Easy-to-use UI. Needless to say, the satisfaction of agents and the time of onboarding for the new agents heavily rely on the UI/UX of the system, so we had high hopes for that aspect too.
  7. Modern and nice-looking. Of course, it's merely a matter of personal taste, but it's psychologically easier to invest in a good-looking tool. Sadly, not all helpdesk vendors appreciate this point of view :)
If you've ever tried to choose a helpdesk solution, I believe you know the big and niche players on the market. We looked through all of them, so to keep the story short (well, shorter than it could be, anyway), here is a summary of our research:

  1. Salesforce Desk didn't pass criterion #5 and partially #4. It was optimized to work with Salesforce CRM, but we had already migrated to HubSpot. Too bad, because there were KCS® tools tailored for Salesforce Desk.
  2. Atlassian helpdesk didn't pass #6, although it natively integrated with Jira.
  3. VisionDesk/TeamSupport/HappyFox failed criteria #3, #4, #6, and #7. Well, maybe not completely, but taking a piece from each requirement didn't assemble the puzzle for us as a whole.
As finalists, we got Zendesk and Freshdesk. Both fulfilled all the criteria except #3: neither had the KCS® support we wanted. The Freshdesk folks even promised "to develop everything we wished," but for some reason, I could not fully rely on that commitment. Zendesk had some basic functions that could be considered the inception of a KCS® toolkit: it allowed configuring triggers and workflow rules to cover the case where a ticket could be linked to an article.

Our in-house solution also had all the other useful things: proper workflows, roles, statistics, reports, etc. Now we were supposed to recreate all of that ourselves as custom development on top of Zendesk's tooling and reporting module.

So we held a final brainstorm and voted to choose Zendesk, as it completely fulfilled six criteria out of seven, hoping that we'd find some workaround for the KCS® part.
3. Migration to Zendesk
Now we had to think about how to migrate our workflow and processes to the new tool. The big constraint was that Zendesk didn't provide the same features we were used to in our custom helpdesk. That became a problem for some team members who tried to copy everything "as is" and failed because of the differences.

And we've learned our lesson:
Instead of migrating the processes "as is," it's necessary to take the goals we wanted to achieve, "migrate" these goals to the new tool, and then build up new processes using the available feature set.
We managed to build all the necessary workflows, triggers, labels, SLAs, and essential reporting using that approach.

Then we wrote a script that migrated all the knowledge base content from the old system to Zendesk. The old system was reconfigured to redirect old links to the new locations for backward compatibility, because those old-fashioned links were used in the documentation, in the product itself, and in other places we couldn't change.
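The migration script boiled down to an export-transform-create loop plus a redirect map. A minimal sketch of the idea: the Zendesk Help Center endpoint is real, but the export format of the old system, the subdomain, and the field names here are hypothetical.

```python
import json
import urllib.request

ZENDESK_BASE = "https://example.zendesk.com"  # hypothetical subdomain


def to_zendesk_payload(old_article):
    """Map a record exported from the old system (hypothetical schema)
    to the Zendesk Help Center article creation payload."""
    return {
        "article": {
            "title": old_article["title"],
            "body": old_article["html_body"],
            "locale": old_article.get("locale", "en-us"),
        }
    }


def build_redirect_map(old_articles, created_articles):
    """Pair each old article URL with its new Zendesk location, so the
    old system can serve redirects for backward compatibility."""
    return {
        old["url"]: new["html_url"]
        for old, new in zip(old_articles, created_articles)
    }


def create_article(section_id, payload, token):
    """POST one article to Zendesk's Help Center API (real endpoint;
    requires an OAuth token with write access)."""
    req = urllib.request.Request(
        f"{ZENDESK_BASE}/api/v2/help_center/sections/{section_id}/articles.json",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["article"]
```

The returned `html_url` of each created article feeds `build_redirect_map`, which is then exported to the old system's web server configuration.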

We decided to skip Zendesk's telephony and chat tools because we already had Talkdesk and LiveChat in place for those purposes. That was not a big deal, as both had out-of-the-box integrations with Zendesk that took only a few minutes to set up.

On Day X, we switched the DNS and the URLs on our website and started enjoying the new system.
4. Results of the migration
  1. We eliminated the dependencies on Perl developers and successfully fired them. I'm kidding :) They all joined the mainstream product team and started contributing to the product that directly generates profit.
  2. I got a flexible tool with out-of-the-box reports for the most popular metrics and a GUI SQL tool to easily build custom ones. No more waiting for hours or even days whenever I needed to build a report, drill in, or play around.
  3. Good UX helped to optimize the agents' performance.
The major question that remained was the KCS® module. At that time, Zendesk had a rudimentary Knowledge Capture application, which didn't evolve much over the years. It covered the basics: searching, creating, and linking articles right in the support ticket interface. But it lacked overall reporting, agent performance appraisal, and statistics, as well as tooling for product improvement.

Wait, haven't I already told you that KCS® helped our support organization drive changes in the product and impacted the R&D roadmap?

Oh boy, that's important too!
5. Product improvement
Have you ever had a request from your development team like "OK, please give me the top 10 bugs that annoy you, and we'll release a patch to fix them"?
You would probably have bugs associated with your tickets and simply hand that list to development, ranked by occurrence.

However, some product managers may come to support and ask for statistics on the most problematic areas of the product: the ones that could be improved beyond just fixing bugs.

The traditional approach used in the company in the pre-KCS® age was to get a list of product areas from development, pre-configure it in the helpdesk, and then use that taxonomy to tag each ticket. Then we would count the tickets in each category, and, et voilà, we'd have our problematic areas.

Would you guess which category had the most tickets? "Other."

In the next iteration, we focused on evaluating the "Other" category to break it down into more specific categories. That approach seemed logical, but as a result, we got another "Other" category at the top. And again, after another iteration. And again.

Developers blamed support folks for not recognizing the proper category, so they decided to make customers choose the category on the web form during ticket creation. The result was even worse: in addition to "Other," the percentage of tickets where customers chose the incorrect category increased as well.

The root of the problem was, again, in how each group (developers, support, and customers) perceived the problem. Developers tended to propose areas tied to the code and wanted to see which cases were affected by a specific part of it. Support agents could not tell which code was responsible for a problem and classified it by the module involved, as per their best judgment. Customers didn't pay much attention to that part at all and chose whichever category was closest to the error message they saw.

Once we implemented KCS®, we also found a permanent solution for this problem.

It worked as follows: each ticket is supposed to be linked to the article that describes the symptoms, cause, and resolution. Then it's possible to track how many times a particular article has been re-used (i.e., how many tickets it is linked to) at any given moment.

When you sort this list by the top re-used articles, you get the most questionable or buggy areas of the product, which you can then analyze from a product management perspective. Articles describe use cases, not modules or snippets of code, so each party can identify areas for improvement.
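The ranking itself is simple counting over ticket-article links. A minimal sketch of the technique, assuming each ticket record carries the ID of its linked article (the field names, titles, and IDs here are hypothetical):

```python
from collections import Counter


def rank_articles_by_reuse(tickets, articles):
    """Count how many tickets link to each article and sort descending.

    `tickets` is an iterable of dicts with a `linked_article_id` field
    (None when no article was linked); `articles` maps id -> title.
    """
    counts = Counter(
        t["linked_article_id"]
        for t in tickets
        if t.get("linked_article_id") is not None
    )
    return [
        (articles.get(article_id, "<unknown>"), n)
        for article_id, n in counts.most_common()
    ]


# The top of this list points at the most problematic product areas.
tickets = [
    {"id": 1, "linked_article_id": 101},
    {"id": 2, "linked_article_id": 101},
    {"id": 3, "linked_article_id": 202},
    {"id": 4, "linked_article_id": None},  # ticket with no linked article
]
articles = {101: "Login fails after SSO redirect", 202: "Exporting reports to CSV"}
print(rank_articles_by_reuse(tickets, articles))
# [('Login fails after SSO redirect', 2), ('Exporting reports to CSV', 1)]
```

Because the key is an article (a use case), not a code module, the same list is readable by developers, support, and product managers alike.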

We used to split these articles into two buckets:

  1. Bugs to be fixed
  2. Feature requests to be implemented
For bug fixing, we also associated the articles that had a bug as the root cause with the corresponding Jira ID, and it became transparent to developers what they had to fix.

It's similar to what they had when tickets were linked directly to bugs, with a barely noticeable yet major difference: whenever they wanted to understand the nature of an issue, they didn't have to read through tons of unnecessary information that normally lives in tickets, like greetings, small talk, requests for access, escalations, etc. They would get the gist from the article's summary, and sometimes they'd even port the "workaround" section almost "as is" into the code. That drastically improved the speed of bug fixing.

For feature requests, it opened an even bigger opportunity: a clear vision of what customers were struggling with. It allowed us to drop the label- and category-based taxonomy altogether.

All the necessary information came from re-used articles. Whether it was a "how-to" question, a third-party component, or a product problem, the signal was the same: if customers kept looking for a solution in that area, there might be bad UX, flawed product architecture, or anything else worth analyzing in detail.

We implemented all this in our old tool, but unfortunately, that part was completely missing in Zendesk.
6. Back to the stone age
We got a powerful new helpdesk with all its benefits and could have been happy. But we lost KCS®.
We were still starving for:

  1. Various KCS® reports measuring agents' participation in the process: how many articles they create, approve, and publish, with the ability to drill into details, and so on.
  2. How many articles each agent authored, and what product version, labels, age, number of likes, and views those articles have.
  3. Reports on each agent's contribution to the collective knowledge, which we could use to give them feedback and run the performance appraisal process.
  4. Deflection reporting.
  5. And, of course, Taxonomy and Product improvement tooling.
Thank God we hadn't moved the last internal-tools developer to the product team. We assigned him to reinvent the reporting bicycle, this time using the Zendesk API and the available data.

He created a dump of the necessary raw data from Zendesk into a custom database and built a simple web interface to show the reports. And that became our next custom-developed in-house solution.
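At its core, such a dump is an extract-and-load loop: pull records from the API (Zendesk's incremental export endpoints such as `/api/v2/incremental/tickets.json` suit this), then upsert them into a local database. A minimal sketch using SQLite; the table schema and field names are a hypothetical simplification of the real ticket payload.

```python
import sqlite3


def init_db(conn):
    """Create a simplified raw-tickets table (hypothetical schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS tickets (
               id INTEGER PRIMARY KEY,
               status TEXT,
               subject TEXT,
               linked_article_id INTEGER,
               updated_at TEXT
           )"""
    )


def upsert_tickets(conn, tickets):
    """Insert new ticket rows or refresh existing ones, so repeated
    incremental pulls converge on the latest state."""
    conn.executemany(
        """INSERT INTO tickets (id, status, subject, linked_article_id, updated_at)
           VALUES (:id, :status, :subject, :linked_article_id, :updated_at)
           ON CONFLICT(id) DO UPDATE SET
               status = excluded.status,
               subject = excluded.subject,
               linked_article_id = excluded.linked_article_id,
               updated_at = excluded.updated_at""",
        tickets,
    )
    conn.commit()


conn = sqlite3.connect(":memory:")
init_db(conn)
upsert_tickets(conn, [
    {"id": 1, "status": "open", "subject": "Can't log in",
     "linked_article_id": 101, "updated_at": "2020-01-01T00:00:00Z"},
])
# Re-pulling the same ticket after it was solved just refreshes the row.
upsert_tickets(conn, [
    {"id": 1, "status": "solved", "subject": "Can't log in",
     "linked_article_id": 101, "updated_at": "2020-01-02T00:00:00Z"},
])
print(conn.execute("SELECT status FROM tickets WHERE id = 1").fetchone()[0])
# solved
```

The web interface then only has to run plain SQL over this local copy, which is exactly what the custom reports did.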

Do you guys really believe that you've optimized and improved something?
Well, let's evaluate the bottom line.
7. The bottom line
All the KCS® reports and tools described above are custom again. And again they have the same problems:

  1. Inflexible in terms of quick data drill-down. Any change in report parameters requires time and resources.
  2. There are bugs that we have to fix internally.
  3. We have a bottleneck — the one developer who maintains our customizations.
We have somewhat optimized reporting with a data warehouse solution that allows building custom reports and visualizations, but it's still buggy and requires many steps to migrate the data from Zendesk.

And I tried one more time to find a ready-made solution.

Just like five years ago, market research turned up some big enterprise tools with unrealistic costs. And not only for the software itself: maybe half of the budget would go to the professional services needed to implement and launch it. More importantly, these tools were focused on integration with the Big Blue CRM only. Again.

What should small and medium businesses do if they prefer lightweight tools like Zendesk? What if they don't want to spend a fortune in money and centuries in time to launch a solution? What if they want the same experience they had with Zendesk itself?

Do you have these questions too?

Then it's time to join me for the next step in our evolution!
Andrew Bolkonsky
CX, Knowledge Manager & KCS enthusiast @ Swarmica

KCS® is a service mark of the Consortium for Service Innovation™