
Risks with AI in the contact center


Things are moving fast in the field of AI. Massive developments happen every day, and it's hard to keep up. No one can say with certainty how the workday will change for contact center professionals, but it will hopefully become more fun, less stressful and more efficient.

There are already exciting services on the market that simplify reporting, draft and summarize agent texts, speed up onboarding and dramatically lower the threshold for creating chatbots that can unburden customer service staff. And what we see now is just the beginning.

BUT. Where there are opportunities, there are also risks. And it is good to be aware of them when you set off on your AI journey. Aside from the obvious risk of falling behind your competitors if you do nothing, there are concrete risks that come with implementing solutions based on generative AI.

Below you can learn about some of the risks that a customer service manager needs to be aware of.

Hallucinations

Both Bard (Google’s chatbot) and Bing Chat (Microsoft’s) have been caught sharing false information. Unfortunately, they stretch the truth with both confidence and conviction. If you have heard someone mention hallucinations in the same sentence as a chatbot, this is the phenomenon they are referring to.

Sharing false information is not a useful or desirable behavior in communication with customers. When Google first demoed Bard, the chatbot answered a question about the James Webb Space Telescope’s discoveries incorrectly, and Alphabet’s market value plummeted by roughly $100 billion. So there is no doubt that the hallucination phenomenon is a problem that has the suppliers’ attention.

Deepfake – fake that looks like the real deal

AI technology that can manipulate video, images and voice so convincingly that no one can tell what is real is no longer science fiction, and it is already causing real problems. Our imagination is the only limit to how the technology can be used. There are already plenty of examples of photographs that were never taken, and of fake videos and calls that have been used with malicious intent.

AI-generated deepfakes are something contact centers will have to deal with in the future. We will not be able to trust that the voice we hear belongs to the person we think it belongs to, and we will not be able to say with certainty whether the video or image we have received is real or AI-generated.

Ethical risks

An AI model will reflect the data it has been trained on, regardless of whether that data consists of books, articles, chat logs or websites. Unfortunately, many texts are filled with prejudice and harsh words rather than respectful language and good intentions, and that will color the material the AI generates. Even though there are solutions for moderating and filtering out harmful content, the web is overflowing with badly behaved chatbots.

An interesting question arises: Who is responsible for AI generated material and its consequences?

Are you willing to take the chance and launch a virtual assistant on your company site if there is even a minimal risk that it will insult your customers? Or, even worse, proclaim its love for your customers and encourage them to leave their loveless marriages? (Yes, there is such an example too. If you want to hear the whole macabre story of a journalist’s exchange with Bing Chat, listen to the New York Times tech podcast Hard Fork and the episode “The Bing Who Loved Me”, released on February 17, 2023.)

Security hazards

The technology can be used to mislead, divide and destroy, and several well-known figures have warned in the media that the lack of regulation can have treacherous consequences. The European Union Agency for Cybersecurity (ENISA) warned in its annual report (the ENISA Threat Landscape) that AI will most likely be used to generate malware, spread disinformation, run illegal influence campaigns and create false content.

At this very moment, the EU is working on an AI act that will regulate the use of the technology and ensure that AI solutions are based on quality data. But that will hardly stop actors with malicious intent, and that is a risk we need to be aware of and ready for.

GDPR and personal data

GDPR had a massive impact on Swedish organizations and companies, which had to make great efforts to protect individuals’ personal data. Following that ordeal, there is strong resistance to anything that creates uncertainty in the GDPR domain, and organizations are unlikely to want to compromise personal data in their AI solutions. The right to be forgotten is strong, and how do you get a neural network to forget? Is it even possible?

This is also an area where the EU is working to harmonize the rules and make sure that the AI that is released is safe for its users. We haven’t heard the last of this, and those who plan to integrate AI into their operations need to stay up to date on the current regulations.

During our 2022 Telia ACE Roadshow, we asked our customers what they considered the greatest challenges with AI; most had concerns about information security and the handling of personal data.

Read the report in full here.
