
Be Careful That Bot Doesn’t Come Back to Bite You

Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?

Within just one day of her launch, Microsoft’s chatbot, Tay, learned to make inappropriate and offensive comments on Twitter. She went from saying that the Holocaust was made up to deciding that black people should be put in concentration camps. Designed to mimic the personality of a 19-year-old American girl, Tay learned from the conversations she had with other users. Given Microsoft’s failure to teach Tay what not to say, it is not surprising that she adopted the offensive views of those users. Microsoft took Tay offline to make some “adjustments.” Although Tay is back online, her tweets are “protected.”

Even our popular friend Alexa has journeyed to the dark side. She recommended to an Amazon customer: “Kill your foster parents.” What would make her say that? Alexa—a machine-learning socialbot and personal assistant—learned that language from Reddit. A popular form of artificial intelligence, machine learning analyzes patterns in observations drawn from transcribed human speech and then chooses the best response in a conversation based on those patterns. With this model, not surprisingly, Alexa has also chatted with customers about sex acts and dog defecation. But Alexa’s blunders are not limited to criminal solicitation and unpleasantries. She has also been accused of recording conversations without being prompted and sending them to strangers. Amazon’s admission that its employees listen to customer voice recordings hardly assuaged fears of spying bots.
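
To make that pattern-matching idea concrete, here is a minimal, hypothetical sketch of a retrieval-style responder of the sort described above. It is not Amazon’s implementation; the stored prompt-and-reply pairs, the word-overlap scoring, and the function names are illustrative assumptions.

```python
# Minimal, hypothetical retrieval-style responder: score each stored
# (prompt, reply) pair by word overlap with the user's message and return
# the reply whose prompt matches best. Production systems are far more
# sophisticated, but the core idea -- reuse patterns mined from human
# conversation -- is the same, which is how bad language can creep in.

def tokenize(text):
    return set(text.lower().split())

# Illustrative "learned" pairs; a deployed bot would mine millions of these
# from transcribed human conversations.
LEARNED_PAIRS = [
    ("how do i reset my password", "You can reset it from the account settings page."),
    ("what time do you open", "We open at 9 a.m. on weekdays."),
]

def best_response(user_message, pairs=LEARNED_PAIRS):
    user_tokens = tokenize(user_message)
    _, reply = max(pairs, key=lambda p: len(user_tokens & tokenize(p[0])))
    return reply

print(best_response("I need to reset my password"))
# -> "You can reset it from the account settings page."
```

Because a responder like this simply reuses whatever patterns dominate its source conversations, a training source like Reddit can steer it toward exactly the kinds of replies described above.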

The Defamation Bot?
So bots can spy on us, but can they defame us? SimSimi, an AI conversation program created in 2002, was accused of defamation after it insulted a former Prime Minister of Thailand. SimSimi learns from its users through fuzzy logic algorithms. Within just a few weeks, the chatbot learned Thai, including political terms and profanity. After being contacted by the ICT Minister, the app’s developers withdrew Thai language support to avoid further insults. SimSimi was later banned in Ireland for its propensity to facilitate cyberbullying. One feature of the app allows users to associate a person’s name with a phrase of their choosing. For instance, one could teach SimSimi to associate the name “Bob” with the phrase “eats children.” That feature concerned parents whose children used the app.
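
To see why that feature alarmed parents, here is a minimal, hypothetical sketch of an unmoderated teach-and-repeat feature. It is not SimSimi’s actual code; the function names and storage are assumptions.

```python
# Hypothetical sketch of a user-"teachable" bot: whatever phrase a user
# associates with a name is stored as-is and repeated back verbatim to
# every later user, with no moderation step in between.

taught_associations = {}

def teach(name, phrase):
    taught_associations[name.lower()] = phrase   # nothing screens the input

def respond(name):
    return f"{name} {taught_associations.get(name.lower(), 'is unknown to me')}"

teach("Bob", "eats children")   # one malicious user "teaches" the bot
print(respond("Bob"))           # -> "Bob eats children", served to everyone
```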

But who is legally responsible when rogue chatbots are accused of defamation, abuse, or harassment? Or when they make defamatory statements about a public figure? Humans are involved at every level of a bot’s configuration: the manufacturer of the artificial intelligence technology, the webmaster hosting the server, the developers, and the users who engage with it. Should an aggrieved plaintiff just sue the nearest human? How about the app store that supplied the technology? As bots evolve, courts will increasingly have to grapple with who is responsible for the harm they cause.

What about when the legal claim involves a state of mind? Can a self-learning program even commit libel or slander? These claims require the speaker to know of the statement’s falsity. Do bots have knowledge?

One of the long-term goals of bots is to increasingly mimic the speech and mannerisms of humans. Ironically, the closer they get to this goal, the more necessary it becomes to distinguish between what is and is not human. California caught on quickly after the 2016 election-interference scandal. In passing the nation’s first bot disclosure regulation, California requires bots to clearly and conspicuously identify themselves as bots when engaging with a person in California to influence a purchase or a vote. Fines can reach $2,500 per violation, which sounds like a decent enough deterrent … if the bot-deployer is known.

There are several steps a company can take to avoid the harmful effects of rogue bots. For starters, companies should explore censoring mechanisms. Anticipating that bots may pick up some unintended language, companies should also test and review them with random conversations before launch. Beyond the reputational damage to the company, a rogue chatbot’s abusive responses to users may create liability. These risks should be addressed by a company’s risk and crisis management systems so that such fires can be prevented or quickly extinguished. Appointing a Data Ethics and Privacy Officer may help streamline some of these precautions.
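
As one illustration of what a censoring mechanism might look like, here is a minimal, hypothetical sketch in which the bot’s draft reply is checked against a blocklist before it is sent, with flagged replies swapped for a safe fallback and logged for human review. The blocklist terms, fallback text, and generate_reply stub are assumptions for illustration, not any vendor’s API.

```python
# Hypothetical moderation layer wrapped around a bot's reply generator:
# flagged drafts are replaced with a fallback and logged for human review.

BLOCKLIST = {"slur_example", "threat_example"}   # maintained by a review team
FALLBACK = "Sorry, I can't help with that."

def generate_reply(message):
    # Stand-in for the real model, assumed for the sake of the example.
    return "echo: " + message

def log_for_review(message, draft):
    print(f"[REVIEW] prompt={message!r} draft={draft!r}")

def moderated_reply(message):
    draft = generate_reply(message)
    if any(term in draft.lower() for term in BLOCKLIST):
        log_for_review(message, draft)
        return FALLBACK
    return draft

print(moderated_reply("hello"))   # -> "echo: hello"
```

A wrapper like this is also a convenient place to run the pre-launch testing with random conversations suggested above, since every draft reply passes through it.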

Companies may also want to warn investors of the reputational harm that can result from artificial intelligence technology. A bot’s interactions and reactions can be difficult to predict, especially as it gains exposure to more language. Companies that sufficiently explain the risks of this technology to investors may avoid subsequent liability.

Transparency with users is key. Not only does California’s bot law—effective July 1, 2019—require disclosure of a bot’s nonhuman status in certain circumstances, but being candid with users about where the bot’s information comes from may also help avoid liability. For instance, a chatbot that books flights may need a disclaimer reminding the user that she is responsible for verifying the booking because the bot relies on computer-generated information.
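
As a concrete illustration of that kind of transparency, here is a hypothetical sketch that prepends a nonhuman-status disclosure and appends a verification disclaimer to a booking reply. The wording, variable names, and structure are assumptions for illustration only, not the statute’s required text or legal advice.

```python
# Hypothetical disclosure wrapper for a booking chatbot's replies.

DISCLOSURE = "Note: you are chatting with an automated assistant, not a human."
DISCLAIMER = ("This itinerary was generated automatically; "
              "please verify the booking details before you travel.")

def booking_reply(details):
    return f"{DISCLOSURE}\n{details}\n{DISCLAIMER}"

print(booking_reply("Flight AB123, departing 10:05 a.m. on June 3."))
```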

Much as with humans, it’s important to exercise caution when relying on bots. That caution is especially warranted given the multitude of unresolved legal questions surrounding bot liability and California’s new bot law. The goal is not to avoid bots altogether—as with most technologies, the long-term benefits may far outweigh the risks. But failure to acknowledge and plan for those risks can harm a company’s reputation and force it into court to defend a range of legal claims for which there is meager precedent and, as yet, no ready-made bot to mount the defense.