AI-AI-Oops: Air Canada's AI blame-game fails to take off

A recent decision of the British Columbia Civil Resolution Tribunal (BCCRT) will cause some turbulence in consumer protection law, and may impact the use and regulation of artificial intelligence (AI) chatbots in Canada.


The case is also amusing and noteworthy because the respondent, Air Canada, employed a ‘pie-in-the-sky’ and ultimately futile defence tactic: it tried to shift blame onto its chatbot, as if the bot were an independent entity.

Written by Joseph Ur, JSU LAW


Pictured above: Air Canada's AI, responsible for its own decisions


Background


This dispute arose when an Air Canada customer sought a partial refund from the airline for the difference between a bereavement fare and a standard fare. The customer believed he was entitled to the bereavement fare based on information provided by Air Canada's own AI chatbot.


In its interaction with the customer, Air Canada’s chatbot said “If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.” However, Air Canada’s actual policy is that bereavement fares cannot be claimed retroactively.


When Air Canada refused to provide the refund, the customer applied to the BCCRT to recover the difference between the full fare and the bereavement fare.


Remarkably, Air Canada sought to distinguish itself from the bot and blame it for the mistake. The airline's audacious defence was that it couldn't be held liable for the chatbot's digital misdirection, likening the bot to an agent or representative somehow distinct from the company itself. ‘It wasn’t us – it was the robots! Take it up with them.’


The BCCRT decried Air Canada's argument, saying:


27.   Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.


Air Canada failed to stick the landing on its novel argument, and the BCCRT found in favour of the consumer. Although the consumer hadn't pleaded it directly, the tribunal held Air Canada liable for negligent misrepresentation, finding that the airline failed to take reasonable care to ensure its chatbot provided accurate information.


Impact and Analysis


While this decision isn’t binding on higher courts, it’s sure to make other businesses and regulators take notice.


The decision shows that, in the right circumstances, businesses can be held liable for their automated systems in customer interactions, and cannot pass liability off onto an AI system as if it were a separate legal entity. Businesses are responsible for providing accurate information to their customers, regardless of the channel.


Many companies have rolled out, or are currently rolling out, increasingly advanced AI chatbots on their websites to automate routine customer interactions. This decision should come as a warning to all of them – AI chatbots require rigorous oversight, careful programming, and testing to ensure the information they provide is reliable.


It will be interesting to see how this area of law develops, and whether companies will be able to avoid liability through fine print, terms of service, or disclaimers. For example, businesses may seek to hide behind warnings about the potential for ‘hallucinations’ or incorrect information, presented to customers at the start of a conversation with an AI chatbot.


This case could also influence legislation and regulation of AI chatbots, particularly given that the decision involved a national corporation of Air Canada's size and prominence. The integration of AI into day-to-day business and consumer interactions is only going to become more prevalent. Canadian legislation needs to keep pace to ensure that companies using these systems act responsibly and cannot blame their own mistakes on an ‘AI error’ – particularly when they own and program the AI.

Need legal advice or support? Get in Touch for a free consultation, or find out more about Our Services.
