From Ars Technica:

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

This feels like a good, important decision. Would Air Canada have fought so hard if it had been a human customer service agent that misled the passenger? There is no practical difference to the customer. Although, to be fair, the ruling suggests that maybe they would have, with the remarkable line that:

Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives

In which case, why even have a customer service department? Are you supposed to guess whether an agent is telling the truth every time you call them up?

In this case, it’s not like we’re talking about something that threatens the existence of the company. Air Canada, a company that measures its quarterly revenue in billions of dollars, isn’t going to go bankrupt if it gives the customer the $880 refund he was expecting.

In any case, if a company is going to replace its human customer service agents with chatbot customer service agents, then it should surely be just as liable for any misinformation they produce.

As the ruling notes:

In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.

If we want to regard chatbots as “responsible for their own actions”, well, some kind of radical legal shift would presumably be needed. It’s also kind of insane. If chatbots have responsibilities, then do they also have rights? Do we put them in chatbot jail if they say something wrong?

If what they actually mean is that whoever sold them the chatbot should be held responsible, well, there are surely already legal routes Air Canada could use to claim damages against another company it believes mis-sold it something.

In no circumstance should it matter that the actual bereavement policy was somewhere else on its website. It’s not the customer’s responsibility to research the same question several times and try to guess which version the company actually meant to provide.

More from the ruling:

While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.

While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled “Bereavement travel” was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website.