I’ve spent much of the last six months trying to understand how AI can integrate into our clients’ legal processes in a way that saves time and money and makes sense in the grand scheme of case success. However, I’ve also kept an eye on how AI is being employed out in the world, the external impacts it can have, and the legal matters it can create.
A recent case in Canada highlights ways in which AI can cause issues rather than solve them, especially with tools designed to mimic human interaction, such as chatbots. The matter I’ll discuss here sheds light on the complexities that emerge when AI, acting in a human-like manner, inadvertently gives rise to legal issues beyond the confines of its programmed functions.
The Case
In 2022, a customer of a major Canadian airline wanted to inquire about the process for getting a bereavement fare and the possibility of a retroactive refund. The customer had suffered a loss in the family, and in his urgency to get a flight home in time to attend the funeral, he planned to purchase a full-priced fare and, in the interest of time, go through the process of getting reimbursed at the bereavement rate later.
He logged onto the airline’s website and, instead of searching for the bereavement policy, opened the chatbot and asked about it. According to a screenshot of his conversation with the chatbot, he was informed he could seek a refund within 90 days of the date his ticket was issued by completing an online form.
Based on this information, the customer booked tickets to and from the family funeral. However, when he applied for a refund a few weeks later, the airline denied the request, stating that bereavement rates did not apply to completed travel, and referred him to the bereavement section of its website.
Oh boy!
When confronted with a screenshot of the chatbot’s response, the airline admitted that the bot had used “misleading words” and committed to updating it. However, the airline continued to deny his reimbursement claim, stating that the policy was clear, that it was available on the website, and that the chatbot’s response was incorrect.
So, obviously, the customer sued the airline for reimbursement, using a screenshot of the chatbot conversation as evidence. In response, the airline claimed that the chatbot was a “separate legal entity” from the airline itself (and not employed by it) and that the airline therefore bore no responsibility for the chatbot’s actions. In the airline’s view, its published policy, not the chatbot’s text, was the authoritative source for the suit.
The Resolution
The Canadian court did not accept this stance and deemed the chatbot, despite its interactive nature, to be part of the airline’s website. The chatbot, and everything it said, was therefore the responsibility of the airline, which is accountable for all information on its website, whether it comes from a static page or a chatbot.
Ultimately, the court ruled in favor of the customer, requiring the airline to pay the difference between the fare he paid and the discounted bereavement fare, as well as court fees and interest. This incident, one of the first of its kind, raises concerns about the oversight companies exercise over automated chat tools as businesses increasingly adopt automation across their services.
In this era of AI-driven advancements, particularly in the realm of chatbots, I think this case serves as a cautionary tale. The seamless integration of AI into customer service carries the inherent risk of unintended consequences. As these chatbots are meticulously trained to emulate human speech and responses, there is a potential for them to inadvertently shape organizational policies in a manner not easily distinguishable from genuine human decision-making. The blurred line between simulated human behavior and actual human acts poses a significant challenge, as demonstrated by this case.
While AI chatbots offer the convenience of 24/7 availability, their capacity to generate policy-like directives demands heightened vigilance and oversight to prevent the inadvertent creation of hallucinated policies. Striking a delicate balance between the efficiency of automation and the pitfalls of simulated human behavior will be imperative for companies navigating the complex landscape of AI in customer service and beyond.
Caragh Landry
Author
Caragh brings over 20 years of eDiscovery and Document Review experience to TCDI. In her role as Chief Legal Process Officer, she oversees workflow creation, service delivery, and development strategy for our processing, hosting, review, production, and litigation management applications. Caragh’s expertise in building new platforms aligns closely with TCDI’s strategy to increase innovation and improve workflow. Her diverse operational experience and hands-on approach with clients are key to continually improving the TCDI user experience. Learn more about Caragh.