What is a chatbot? Essentially, it is a computer program that simulates human behaviour online, including on social media. Chatbots are not a new concept, but they are becoming increasingly sophisticated in what they can do and in how closely they can mimic human behaviour online, to the point that they are replacing humans in running organisations’ social media presences.

Chatbots are widely used by corporations to stimulate conversation, promote products and services, increase consumer engagement and generally enhance the user experience. For example, RBS has announced its intention to launch a chatbot, “Luvo”, to help its customers with more straightforward queries; H&M has a chatbot on Kik, which learns about a user’s style through viewing photographs and recommends outfits; and Pizza Hut has a chatbot on Facebook and Twitter, which allows customers to place orders via those platforms.

‘Rogue’ chatbots are also, unfortunately, prevalent, for example in chat rooms, where they may be used to extract personal data and bank account information, stoke negative sentiment, target individuals and generally cause havoc. A perfectly legitimate and well-meaning corporate chatbot may also ‘go rogue’ – you may remember Microsoft’s chatbot ‘Tay’, which hit the headlines when, within 24 hours of launch, it began posting offensive tweets, including swearing and racist and inflammatory political remarks. The explanation for the rogue tweets was that the chatbot had made unfortunate (and unendorsed) connections between strings of text.

As such, chatbots pose a myriad of legal and risk issues that need to be navigated by those using them. The key legal issues to consider are as follows:

  1. Policies on chatbots. Internal policies must be in place which govern the extent of the chatbot’s permitted activities, the information it will be fed and how, the information it will collect and where that is stored or sent, and the oversight and updating of the chatbot. All ancillary policies should also be reviewed to make provision for chatbots where appropriate.
  2. Website T&Cs and disclaimers. Depending on the type of activity being carried out by the chatbot, consideration should be given to whether the chatbot is referenced in the website/platform T&Cs and whether disclaimers are required. For example, if a chatbot is assisting a user with booking a flight, a disclaimer stating that the service is computer generated and that users are responsible for checking information provided before booking travel may be appropriate (a minimal sketch of how such a disclaimer might be appended to every response appears after this list).
  3. Regulated industries/activities. Where chatbots are used in regulated industries, their activities must be programmed to comply with industry regulations and standards. Where a chatbot is giving advice, for instance, the information fed to it must be kept up to date, appropriate escalation measures need to be in place and disclaimers should be considered. Similarly, where chatbots are used for regulated activities, for example advertising, they will need to be programmed to comply with the relevant regulations (in the UK, the CAP Code administered by the Advertising Standards Authority). Again, appropriate oversight must be in place.
  4. Data collection. Chatbots may be exposed to and collect vast amounts of personal data and other commercial information in the course of interacting with internet users. Data controller registrations and privacy policies must be kept up to date; it must be clear what data is collected and where it will be processed; the relevant controller-processor agreements (if required) must be in place to govern any transfer of data outside the EEA; and technological protection measures must be in place to safeguard the data (one such measure, redacting obvious personal data before transcripts are stored, is sketched after this list).
  5. Defamation/abuse/harassment. Where chatbots are used to stimulate conversation, appropriate measures must be in place to prevent the chatbot’s comments going too far and straying into libel, abuse or harassment. As the party responsible for the chatbot, the corporation will have liability for any comment that crosses the line, so appropriate safeguards and intervention must be in place (a simple outbound-message filter is sketched after this list).
  6. Infringement of third-party rights. Appropriate safeguards must be in place to prevent the chatbot from reproducing copyright-protected content, using third-party trade marks and brands, linking to information or content behind paywalls, or otherwise ‘screen scraping’, where information from third-party sources is extracted and re-used by the chatbot.
  7. Policy on monitoring/human intervention trigger. As well as having policies in place, consideration should be given to the extent to which a chatbot’s activities are monitored, and a human intervention trigger must be in place to stop things before they go too far (one possible trigger is sketched after this list).
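
To make point 2 concrete, the snippet below is a minimal sketch, in Python, of appending a standard disclaimer to every outgoing chatbot reply before it is sent. The function and variable names are illustrative assumptions, not any particular platform’s API.

```python
# Minimal sketch only: names here are illustrative assumptions,
# not any particular chatbot platform's API.

DISCLAIMER = (
    "This service is computer generated. Please check all information "
    "provided before booking travel."
)

def with_disclaimer(reply: str) -> str:
    """Append the standard disclaimer to an outgoing chatbot reply."""
    return f"{reply}\n\n{DISCLAIMER}"

# Example usage:
print(with_disclaimer("Your flight departs at 09:40 on 2 May."))
```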
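
For point 4, one form of technological protection is to redact obvious personal data from chat transcripts before they are stored. The sketch below, again illustrative only, masks email addresses and phone-number-like strings with two simple regular expressions; a production system would need far more comprehensive detection.

```python
import re

# Illustrative patterns only; real deployments need far broader
# personal-data detection than these two regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def redact(text: str) -> str:
    """Mask email addresses and phone numbers before a transcript is stored."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return PHONE.sub("[PHONE REDACTED]", text)

print(redact("Call me on +44 20 7946 0000 or email jane@example.com"))
```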
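
For point 5, a first-line safeguard might be a blocklist check on every outgoing message, with anything suspect held back for human review rather than published. This is a deliberately crude sketch with assumed names and placeholder terms; real systems layer keyword checks with trained toxicity classifiers.

```python
# Deliberately crude sketch; the blocklist and function names are
# assumptions. Real systems add trained toxicity classifiers on top.

BLOCKLIST = {"badword", "slur", "scam"}  # placeholder terms only

def safe_to_send(message: str) -> bool:
    """Pass only messages containing no blocklisted words."""
    return not (set(message.lower().split()) & BLOCKLIST)

def send_or_hold(message: str) -> None:
    if safe_to_send(message):
        print(f"PUBLISH: {message}")  # stand-in for the real platform call
    else:
        print(f"HELD FOR HUMAN REVIEW: {message}")

send_or_hold("Happy to help with your order!")
send_or_hold("this offer is a scam")
```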
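
Finally, the human intervention trigger in point 7 can be as simple as a running counter: once a conversation has tripped the moderation check a set number of times, or the bot’s confidence in its answer drops too low, control passes to a human operator. The thresholds and names below are assumptions for illustration; setting them is a policy decision, not a technical one.

```python
from dataclasses import dataclass

MAX_FLAGS = 2          # moderation flags tolerated before a human takes over
MIN_CONFIDENCE = 0.6   # below this, the bot should not answer on its own

@dataclass
class Conversation:
    flags: int = 0  # running count of moderation flags in this conversation

def needs_human(convo: Conversation, confidence: float, flagged: bool) -> bool:
    """Decide whether to hand this conversation to a human operator."""
    if flagged:
        convo.flags += 1
    return convo.flags >= MAX_FLAGS or confidence < MIN_CONFIDENCE

convo = Conversation()
print(needs_human(convo, confidence=0.9, flagged=False))  # False: bot continues
print(needs_human(convo, confidence=0.4, flagged=True))   # True: escalate
```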

As chatbots become more sophisticated and prevalent, our ability to police them and to comply with the various areas of legislation and regulation must keep pace.