The Zendesk AI Misconception – Why AI Needs More Human Supervision Than You Think

So, you think you can set artificial intelligence loose and let it do its own thing? That’s a bit of a risky proposition when it comes to customer service.
Most people mistakenly believe that AI is completely autonomous. It’s advanced, but not quite at that stage yet. The truth is that it needs some human oversight to keep it on track.
Even when you have a Zendesk AI tool that handles all your customer queries, you need to check that it's answering them properly. You need to make sure its answers are accurate, and there are ethical considerations to weigh up as well.
In this article, we’ll look at why AI needs more human supervision than you think.
The Rise of AI in Customer Support
AI is changing customer service platforms like Zendesk. It powers chatbots, automated ticket routing and sentiment analysis. It can even go so far as to provide predictive analytics.
The benefits of using AI here are:
- Increased efficiency: AI tools can handle repetitive tasks effortlessly and quickly.
- 24/7 availability: Bots can work all day, every day without a break.
- Reduced response times: Your bots can answer queries in seconds.
- Better accuracy: When you train AI properly, it gives consistent answers. It doesn’t have off days.
- Easier scaling: AI can deal with 100 queries as easily as it can deal with 1,000.
Where Does the Misconception Come In?
Some people believe that AI can handle all customer queries independently. The reality is that not all bots are created equal. If you don’t take the time to train your AI properly, it can be quite inaccurate.
What most people don’t realize is that AI always wants to give you answers. If it doesn’t know the correct one, it’ll usually make something up. This is called hallucinating and is quite common.
According to research by Stanford University, general-purpose chatbots hallucinate 58%-82% of the time on legal queries. AI does this when there’s insufficient training data: the machine fills in the gaps by making assumptions.
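One common guard against hallucination is to only let the bot answer when the question matches something it actually knows, and admit uncertainty otherwise. Here's a minimal sketch of that idea; the knowledge-base entries and matching logic are illustrative assumptions, not any real Zendesk feature.

```python
# Illustrative sketch: answer only from a known knowledge base,
# fall back to a human instead of inventing a reply.
# The KB entries below are made-up examples.

KNOWLEDGE_BASE = {
    "reset password": "Go to Settings > Security and click 'Reset password'.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def answer(question: str) -> str:
    """Return a grounded answer, or admit uncertainty."""
    q = question.lower()
    for topic, reply in KNOWLEDGE_BASE.items():
        if topic in q:
            return reply
    # No match: better to escalate than to make something up.
    return "I'm not sure - let me connect you with a human agent."

print(answer("What is your refund policy?"))
print(answer("Can you advise me on a legal dispute?"))  # falls back
```

A production system would match on meaning rather than keywords, but the principle is the same: no match, no made-up answer.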
Where Can AI Mess Up?
AI is still learning about contextual understanding. We are way ahead of where we were ten years ago thanks to advances in natural language processing. Back then, you had to get the exact keywords right so AI could understand what you were talking about.
That’s no longer the case, but it’s still possible for misunderstandings to occur. Say, for example, a customer says something like, “I don’t want no red shoes.” A human would understand this double negative as a grammatically incorrect way of saying they don’t want red shoes. AI might read it literally and include red shoes in the mix.
AI doesn’t understand emotions the way we do. So, while it can make a guess about what a customer is feeling and how to respond, it doesn’t have true empathy. This could lead to tone-deaf responses that seem insincere.
Finally, people think AI is infallible. It isn’t. It can:
- Miscategorize support tickets
- Give incorrect responses
- Misinterpret customer intent
Why AI Still Needs Human Supervision
So, why do machines still need us? There are several good reasons.
Complexity of Human Emotions and Language
AI struggles to understand sarcasm, slang, and cultural nuances. This can lead to it misinterpreting the sentiment of a query. For example, a customer might say sarcastically, “I’m so happy with the outstanding service you provide,” and then launch into a tirade. AI might read that as a positive statement rather than the negative one it was.
Escalation and Exception Handling
AI doesn’t have the same problem-solving skills humans have. To understand why, we need to look at how we train our large language models for chatbots. We feed in a lot of data that contains examples the AI can learn from. But we’re not teaching the bot to think logically.
Instead, it’s learning patterns. When it gets a query in the real world, it sees how it measures up against the patterns it’s already encountered and then gives an answer. That means it can’t handle more complex issues that don’t fit those patterns.
It’s also very difficult to train AI to deal with all the possible edge cases. These are examples that you seldom encounter, so it’s hard to anticipate them in advance. When your AI runs into them in the real world, it’s baffled.
That’s why you still need human agents that your AI can escalate queries to. You also need to set clear rules for when that should happen.
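Those escalation rules can be surprisingly simple. Here's a minimal sketch of what they might look like; the confidence threshold, trigger phrases, and function name are all illustrative assumptions, not part of any real platform's API.

```python
# Illustrative escalation rules: hand off to a human when the model
# isn't confident or the query touches a sensitive topic.
# Threshold and phrases below are made-up examples.

ESCALATION_PHRASES = {"refund", "cancel my account", "speak to a human", "legal"}
CONFIDENCE_THRESHOLD = 0.75  # below this, the bot shouldn't answer alone

def should_escalate(message: str, model_confidence: float) -> bool:
    """Return True when the query should go to a human agent."""
    if model_confidence < CONFIDENCE_THRESHOLD:
        return True  # the model isn't sure; don't risk a made-up answer
    text = message.lower()
    return any(phrase in text for phrase in ESCALATION_PHRASES)

print(should_escalate("Why is my invoice wrong?", 0.42))   # True
print(should_escalate("What are your opening hours?", 0.95))  # False
```

Real deployments layer more signals on top (customer sentiment, ticket history, repeated contacts), but the shape is the same: explicit rules deciding when the human takes over.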
Bias and Ethical Concerns
If you’re not careful, your AI model can reinforce biases in customer support. If you use training data that contains harmful stereotypes, your bot will use these when giving answers.
For example, say you train AI on older medical texts that describe women as being hysterical. If you have a medical chatbot, it might be more inclined to dismiss issues raised by women as hysteria.
These are issues that will come out in the wash as long as you supervise your AI properly. You’ll need to regularly check its responses.
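In practice, "regularly check its responses" often means sampling a share of the bot's answers each day for human review. Here's a minimal sketch of that audit step; the 10% sampling rate and the data shape are assumptions for illustration.

```python
import random

# Illustrative response audit: sample a fixed share of the bot's
# daily answers for a human to check for bias and accuracy.

def sample_for_review(responses, rate=0.10, seed=None):
    """Pick a random subset of bot responses for human review."""
    rng = random.Random(seed)
    k = max(1, int(len(responses) * rate))  # always review at least one
    return rng.sample(responses, k)

# Pretend log of one day's bot answers (made-up data).
day_log = [{"ticket": i, "answer": f"answer {i}"} for i in range(200)]
review_queue = sample_for_review(day_log, rate=0.10, seed=42)
print(len(review_queue))  # 20
```

Teams often bias the sample toward risky categories (medical queries, complaints, low-confidence answers) rather than sampling purely at random.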
Continuous AI Training and Improvement
AI doesn’t know what it doesn’t know. Think of artificial intelligence like a toddler. You have to explain how everything works and teach it. We need human-in-the-loop systems to make this training as effective as possible.
You could, for example, implement a chatbot for your support team to use. This wouldn’t give answers to customers, but rather to your consultants. The AI might suggest two possible answers, leaving your agents to choose the right option.
The advantage here is that you directly supervise the answers. You also positively reinforce the right ones, ensuring that your AI learns correctly. Over time, you’d be able to let it handle customer queries directly.
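This agent-assist loop can be sketched in a few lines: the model proposes candidate replies, the human picks one, and the choice is logged as a preference signal for later training. Everything below (the function names, the data shape) is a hypothetical illustration, assuming a stand-in for the real model call.

```python
from dataclasses import dataclass, field

# Illustrative human-in-the-loop flow: the AI suggests two replies,
# the agent chooses, and the choice becomes a training signal.

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def record(self, query: str, chosen: str, rejected: str) -> None:
        # Preference pairs like this are raw material for later fine-tuning.
        self.records.append({"query": query, "chosen": chosen, "rejected": rejected})

def suggest_replies(query: str) -> tuple[str, str]:
    """Stand-in for a real model call that drafts two candidate answers."""
    return (f"Option A for: {query}", f"Option B for: {query}")

log = FeedbackLog()
query = "How do I reset my password?"
option_a, option_b = suggest_replies(query)
agent_choice = option_a  # in practice, the human consultant decides
log.record(query, chosen=agent_choice, rejected=option_b)
print(len(log.records))  # 1
```

The design point is that every answer passes through a human before it reaches a customer, and every human decision makes the model a little better.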
Striking the Right Balance Between AI and Human Agents
It’s very tempting to let AI handle as much as possible. You can speed up customer service and improve productivity. But you don’t have to let AI loose on its own. You can use it to support your consultants. It can speed through your policies and procedures and give your agents answers in seconds.
Over time, you can rely on it more and more when it comes to simple queries. The key to getting this right is in properly supervising the AI and having a human team at the ready to handle complex cases.
Conclusion
AI is a powerful tool, but it can’t match human judgment. The future of customer support is a hybrid model where artificial intelligence supports your team. AI will become even more capable over time, but it’s not quite a standalone solution just yet.