Are companies responsible for the actions of their AI Chatbots?

Posted on February 20, 2024



Here are the excerpted facts regarding an airline, a bereaved customer, and an AI chatbot with a mind of its own.

  • Jake Moffatt, a Vancouver resident, had asked Air Canada’s support chatbot whether it offered bereavement rates following the death of his grandmother in November 2022.
  • The chatbot responded by telling the grieving grandson he could obtain the lower bereavement fare up to 90 days after flying by submitting a claim.
  • However, the airline’s actual bereavement policy does not include a post-flight refund. It also says all discounts must first be approved.
  • Moffatt ended up booking a roundtrip flight to Toronto for the funeral for around $1,200, but when he contacted Air Canada for the refund, he was told he wasn’t eligible, according to the court filing.
  • He sent numerous emails with the attached screenshots of his conversation with the chatbot to Air Canada in an attempt to retrieve the money, the complaint said.
  • But on Feb. 8, 2023, an Air Canada representative informed him that the chatbot provided “misleading words” and that the company’s bereavement policy did not apply discounts retroactively.
  • The peeved passenger then filed suit against the airline, which claimed in court that the chatbot was a “separate legal entity” and thus responsible for its own actions.

The customer won the suit – this time. However, there is potentially a bigger question.

When it comes to AI, at what point does human responsibility end in an exchange or transaction?

For example, if a college student uses ChatGPT to write an assignment and the student gets a failing grade, does it mean that information from ChatGPT is unreliable? Should the student speak with the professor, asking to be given a chance to rewrite it?

What if a procurement professional uses AI to create an RFP, and after publishing it, the product or service specs are incorrect? Should suppliers who responded to the first RFP be compensated for their time?

What if you are prescribed a new medication and you access a drug company’s chatbot to find out if there is a conflict with the medication you are already taking, and the chatbot says no? And what if you wanted to be sure and, through a Google search, received the same answer only to experience a bad reaction later? Who is liable?


Posted in: Commentary