Chatbots are computer applications programmed to mimic human behaviour using machine learning and natural language processing. They can act autonomously and do not require a human operator. Given this freedom, chatbots do not always act in a manner that is fair and neutral – they can go wild, with unintended consequences. For example, a chatbot “e-shopper” given a budget of $100 in bitcoin quickly figured out how to purchase illegal drugs on the Darknet. Another chatbot, programmed to mimic teenage behaviour using social media data, was firing off rogue tweets by the afternoon of her launch and was taken offline by her developer. Chatbots were pulled from a popular Chinese messaging application after making unpatriotic comments. And a pair of chatbots taught to negotiate by mimicking trading and bartering created their own strange form of communication once they started trading with each other.

Online dating sites can use chatbots to interact with users looking for love and to increase user engagement, while rogue chatbots in chat rooms can extract personal data and bank account information or stoke negative sentiment. Chatbots are also increasingly being used by businesses as customer service agents, and even these legitimate, well-meaning corporate chatbots may go wild.

In June, we introduced the topic of chatbots and highlighted some key risks and concerns associated with this growing area of technology. One business in particular, DoNotPay, recently made headlines by announcing that it would let anyone build legal chatbots for free.

The claim? In a July 14, 2017, posting to the online publishing platform Medium, Joshua Browder, founder of UK-based DoNotPay, writes, “Starting today, any lawyer, activist, student or charity can create a bot with no technical knowledge in minutes. It is completely free.” Sound too good to be true? To be sure, DoNotPay is not the first company to develop law-related chatbots—these bots are already popping up all over the world. But because this technology is still fairly new, chatbots attempting to automate services previously performed by licensed attorneys will almost certainly attract scrutiny.

What is a chatbot? Essentially, it is a computer program that simulates human behaviour online, including on social media. Chatbots are not a new concept, but they are becoming increasingly sophisticated in what they can do and in how closely they can mimic human behaviour, to the point that organisations are increasingly using them in place of humans to populate their social media channels.
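
To make the idea concrete, the sketch below shows, purely for illustration, how the simplest kind of chatbot pairs pattern-matching with canned replies. The keywords and responses here are invented for this example; commercial chatbots of the kind discussed in this piece replace such hand-written rules with machine learning and natural language processing trained on large volumes of conversation data.

```python
import re

# Illustrative only: a handful of hand-written "intents", each a regex
# paired with a canned reply. Real chatbots learn these mappings from data.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bopening hours\b", re.I), "We are open 9am to 5pm, Monday to Friday."),
    (re.compile(r"\b(order|pizza)\b", re.I), "Sure - what would you like to order?"),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye! Thanks for chatting."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"


def reply(message: str) -> str:
    """Return the first canned response whose pattern matches the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK


if __name__ == "__main__":
    # A minimal read-reply loop standing in for a messaging platform.
    print("Bot: Hello! (type 'bye' to quit)")
    while True:
        user = input("You: ")
        print("Bot:", reply(user))
        if re.search(r"\b(bye|goodbye)\b", user, re.I):
            break
```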

Chatbots are widely used by corporations to stimulate conversation, promote products and services, increase consumer engagement and generally enhance the user experience. For example, RBS has announced plans to launch a chatbot, “Luvo”, to help its customers with more straightforward queries; H&M has a chatbot on Kik which learns about a user’s style through viewing photographs and recommends outfits; and Pizza Hut has a chatbot on Facebook and Twitter which allows customers to place orders via those platforms.