A bot's mistakes can spell big trouble for companies
Employees can cause a lot of trouble for their employers if they mislead customers or give them inaccurate information. The same is true for a company's bots, as several recent cases show.
In Utah, homeowner Robert Brown needed repairs to his air conditioner, so he contacted his home warranty company. A technician came out and made temporary repairs but said the unit needed to be replaced.
Brown went online and told the warranty company's chatbot about it and was assured he would soon be getting a check for $3,000, the maximum payable under the warranty.
When the money didn't show up, Brown complained and was told the chatbot made a mistake. The company admitted the chatbot had "miscommunicated" but refused to pay, saying it wasn't legally responsible for the chatbot's errors.
Brown called KSL-TV, and the station contacted the state consumer protection agency, which quickly persuaded the warranty company to pay up.
Similar cases piling up fast
Chatbots haven't been around long, so there's not much of a history yet, but new cases are popping up quickly. In fact, it's becoming clear that, while bots may be annoying to consumers, they can be flat-out dangerous to the companies that use them if not properly handled. In that sense, they're just like people.
Air Canada's bot allegedly told a customer, who had paid full fare to travel to a funeral, that he would be able to apply for a bereavement fare later. When the traveler tried to collect, he was told the chatbot was wrong.
Air Canada argued that it cannot be held liable for inaccurate information provided by one of its agents, servants or representatives, including a chatbot. However, the consumer regulators disagreed. They held that a chatbot is part of Air Canada's website, and Air Canada is responsible for all the information on its website, including information provided by its chatbot.
It makes no difference whether the information comes from a static webpage or a chatbot, the Canadian law firm McMillan recently advised its clients, noting that the consumer authority found that Air Canada was responsible for ensuring the bot's representations were accurate and not misleading.
Wrongful death
In a more tragic case, a 14-year-old boy died of an apparent suicide after "falling in love" with a chatbot modeled on the Game of Thrones character Daenerys Targaryen.
The boy's mother filed a wrongful death lawsuit against Character.AI, arguing its technology is defective and/or inherently dangerous. The case is still pending, but the company insists user safety is a top priority.
Character.AI says that about 20 million people a month interact with its "superintelligent chat bots that hear you, understand you, and remember you," according to an account in People Magazine.
The youth shot himself after telling the bot that he was going to "come home" and the bot allegedly responded, "please do, my sweet king," according to People.
Dangerous advice
In New York City, a chatbot deployed by the local government was intended to assist users in navigating municipal services, but instead provided advice that was both incorrect and unlawful, the law firm FrostBrownTodd said in a recent web advisory.
The chatbot was tasked with offering guidance on a range of issues, from food safety to sexual harassment and public health, but instead of being helpful, it disseminated misinformation that could lead individuals to unintentionally break state and federal laws and possibly face fines or legal repercussions, the firm said. The firm did not specify exactly what the bot told users.
"Such errors ...underscore the precarious balance between innovation and the risk of liability in AI implementations," FrostBrownTodd said.
Who are you, really?
Speaking of misrepresentation, several law firms that advise corporate clients have recently warned that bots cannot falsely present themselves as human.
If asked whether they are bots, they must answer truthfully or the company could face charges of misrepresentation, the firms said.
Companies are also being warned that they could be accused of wiretapping if their bots extract information that is passed on to the company without the consumer's knowledge.
Noting recent lawsuits filed in California and elsewhere, the firm PierceAtwood said the complaints "assert a common theory: that website owners using chatbot functions to engage with customers are violating state wiretapping laws by recording chats and giving service providers access to them." That, the firm said, can be construed as illegal eavesdropping or wiretapping.
"Chatbot wiretapping complaints seek substantial damages from defendants and assert new theories that would dramatically expand the application of state wiretapping laws to customer support functions on business websites," the firm cautioned.
Posted: 2024-11-21 15:58:57