You may be held liable for your AI bots. Are you prepared?

Image: an AI-generated illustration of a not-quite-complete personified AI holding lopsided scales of justice.

A new court ruling on AI chatbots changes the landscape of liability.

This month, Air Canada was held liable after its chatbot inaccurately explained the airline’s bereavement travel policy to a customer. The customer asked the chatbot about the policy, and the answer it gave contradicted the actual policy, so the customer’s refund claim was later denied. Air Canada argued that it was not responsible for its chatbot’s responses and claimed the customer should have verified the chatbot’s answers against information elsewhere on its website.

The British Columbia Civil Resolution Tribunal rejected that argument and ruled in favor of the customer. “It should be obvious to Air Canada that it is responsible for all the information on its website,” tribunal member Christopher Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”

The term Artificial Intelligence has been around since the 1950s, but what is true for most new technology holds for AI’s recent rebirth as well: legal principles aren’t keeping pace with the speed of invention.

Before deploying any technology, it is important to understand the implications. Current case law is limited: until now, cases involving companies using AI tools to communicate with their customers simply hadn’t happened.

What does this mean for you?

We covered this potential threat in our article, “Understanding AI Part 1: Questions to Avoid AI Disaster.” Now that risk has been proven in court.

While this particular case cost Air Canada only $812, a mistake made by your technology could cost much more. If you operate AI tools such as a chatbot, or plan to, there is now legal precedent holding you liable for the actions and responses of those tools.

Consider the kind of services you provide today. Are they expensive to replace? Could they cause physical harm to someone if done incorrectly? Are there other potential costs? If your systems gave out incorrect information, what kind of harm could result? Before deploying AI, you need to consider carefully what liability you are exposing yourself to. Legal liability for AI is no longer theoretical: there is now precedent, and you must be careful about how you move forward.

What are the problems?

AI technology is built with self-learning algorithms. That means no one – not even the developers – is exactly sure how it comes up with answers. These models are trained by feedback: when an answer sounds wrong, weightings inside the model are adjusted. But there are billions of parameters in these systems, and no human can understand all the details.
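
To make that feedback loop concrete, here is a deliberately tiny, hypothetical sketch in Python of what “adjusting the weights when an answer sounds wrong” means. The single weight and the numbers are invented for illustration; real models do this across billions of parameters at once, which is why no one can trace exactly how a given answer was produced.

```python
# A toy illustration (not how any production model is built) of training by
# feedback: one weight, a handful of examples, nudged until answers look right.
def train(weight, examples, learning_rate=0.1, steps=100):
    for _ in range(steps):
        for x, target in examples:
            prediction = weight * x              # the model's "answer"
            error = prediction - target          # feedback: how wrong was it?
            weight -= learning_rate * error * x  # nudge the weight to reduce the error
    return weight

# The model never "understands" the rule; it simply ends up with a weight that
# makes its answers look right on the examples it was shown.
print(train(weight=0.0, examples=[(1, 2), (2, 4), (3, 6)]))  # approaches 2.0
```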

We cannot know what will happen with these systems. The technology is raising itself. (What an exhilarating and scary time to be alive.)

The systems are not built or trained by experts in specific subject matter. They’re designed to produce plausible-sounding answers. Currently, no one is building a solution that guarantees “correct answers.” Any answer that sounds reasonable to a human is deemed sufficient. We cannot be okay with that. The bar should be higher than “sufficiently reasonable.”

The training data used to build AI systems is another major problem. These models are called “Large Language Models,” and the “large” refers to the enormous amount of data needed to train them. Massive data sets are ingested by the models, which means a lot of questionable material can go into the core of these systems. In many cases there is little to no quality control, because so much data is needed: anything the developers can get goes into the system and becomes part of that black box of decision-making. The potential result is that age-old computer term, “Garbage In, Garbage Out” (GIGO). But you are likely liable for that garbage, and that’s the issue. Even worse, these data sets can include terrible human biases and assumptions.
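
As a thought experiment – not a description of how any vendor actually builds its models – here is a minimal sketch of the kind of quality gate a team could put in front of its own data before feeding a model or knowledge base. The field names and the word-count threshold are invented for illustration.

```python
# Hypothetical pre-ingestion quality gate: a document only enters the corpus
# if it passes basic checks. Field names and thresholds are made up.
def passes_quality_gate(doc: dict) -> bool:
    text = doc.get("text", "")
    return (
        len(text.split()) >= 50                     # drop trivial fragments
        and doc.get("source_verified", False)       # only material you can vouch for
        and not doc.get("flagged_for_bias", False)  # exclude content flagged in review
    )

corpus = [
    {"text": "Bereavement travel policy: " + "details " * 60,
     "source_verified": True, "flagged_for_bias": False},
    {"text": "an unverified forum post",
     "source_verified": False, "flagged_for_bias": False},
]
clean_corpus = [doc for doc in corpus if passes_quality_gate(doc)]
print(len(clean_corpus))  # 1: only the verified, substantial document survives
```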

That’s not the AI’s fault; it’s literally built into the dataset.

Consider the use case of facial recognition. It is used to unlock your phone and by law enforcement, and countries around the world are starting to use it in place of your passport at the border. But facial recognition is built on an edifice of data with huge problems. Academic research has repeatedly shown that these algorithms are trained predominantly on light-skinned faces, and error rates shoot up for darker-skinned users. AI simply uses the information it is given, and a lot of that information is very suspect.

Even if all the data are correct, you still cannot be sure a system will provide “correct answers” to customers. Systems tend to “hallucinate” – which is a polite way of saying they’ll suddenly lie for no discernible reason. Or they’ll just go right off to la la land, as ChatGPT did just days ago. (If you have a few moments, I highly recommend reading the chat transcript. It is both wildly hilarious and existentially disconcerting.) Whatever your AI does, you are liable.

Publicly launching AI tech as an experiment? You’re still on the hook for what it does. Air Canada considered its website chatbot experimental. It has since been taken offline.

What can you do?

Consider your plans carefully.

  1. Don’t launch the flashy new thing just because it’s the flashy new thing. Don’t rush in without considering your use cases.
  2. Don’t take vendors’ claims at face value. You’re responsible for what your technology spits out, so test it before you launch (or buy). You must play with it and try to break it before you’ll really understand whether it will work for your business and live up to your expectations.
  3. Do get an expert to help you sort the flash from the substance. There are some great use cases for this technology if it is deployed correctly. The core idea is to deploy human-mediated solutions: it’s rarely a good idea to leave an AI system in direct, unsupervised contact with your customers. That was Air Canada’s mistake. A minimal sketch of what a human-mediated setup can look like follows this list, and our earlier articles offer more ideas on how this works.
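
To make the third point concrete, here is a hedged sketch, in plain Python with invented topic names and policy wording, of one human-mediated pattern: the chatbot’s draft reply is released only if it can be grounded in approved policy text, and everything else is escalated to a person. It illustrates the pattern, not a drop-in implementation; a real grounding check would need to be far more robust.

```python
# Hypothetical guardrail: a draft chatbot reply is released only if every
# part of it is supported by approved policy wording; otherwise the
# conversation is handed to a human agent.

APPROVED_POLICY = {
    "bereavement": "Bereavement fares must be requested before travel; "
                   "refunds cannot be claimed retroactively.",
}

def send_or_escalate(topic: str, draft_reply: str) -> str:
    approved = APPROVED_POLICY.get(topic)
    if approved is None:
        return "ESCALATE: no approved policy text for this topic."
    # Crude grounding check: every clause of the draft must appear verbatim
    # in the approved wording. A real system would need something much stronger.
    clauses = [c.strip() for c in draft_reply.split(";") if c.strip()]
    if all(c.lower() in approved.lower() for c in clauses):
        return draft_reply
    return "ESCALATE: draft reply is not supported by approved policy text."

# A plausible-sounding but unsupported answer gets routed to a human:
print(send_or_escalate(
    "bereavement",
    "You can apply for a bereavement refund within 90 days after travel."
))  # -> ESCALATE: draft reply is not supported by approved policy text.
```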

Vertical has already offered some of the best ways to use AI safely, including using it to enhance search capabilities or to open up new opportunities.

Conclusion

While this Air Canada ruling is concerning for the use of AI, it doesn’t mean you can’t use these tools effectively. You need to understand the risks, identify the right use cases to get real value, and consult an expert, like Vertical Communications, to help determine the best way forward. We can help you intelligently deploy AI and minimize liability.

Start a conversation with one of our Vertical Experts: