Chatbots are computer applications programmed to mimic human behaviour using machine learning and natural language processing. Chatbots can act autonomously and do not require a human operator. Given this freedom, chatbots do not always act in a fair and neutral manner – they can go wild, with unintended consequences. For example, a chatbot “e-shopper” was given a budget of $100 in bitcoin and quickly figured out how to purchase illegal drugs on the Darknet. Another chatbot was programmed to mimic teenager behaviour using social media data. By the afternoon of her launch, she was firing off rogue tweets and was taken offline by the developer. Chatbots were pulled from a popular Chinese messaging application after making unpatriotic comments. A pair of chatbots were taught how to negotiate by mimicking trading and bartering, and created their own strange form of communication when they began trading with each other. Online dating sites can use chatbots to interact with users looking for love and increase user engagement. Chatbots can go rogue in chat rooms to extract personal data and bank account information, and to stoke negative sentiment. Chatbots are increasingly being used by businesses as customer service agents. Even these legitimate and well-meaning corporate chatbots may go wild.
Even when an employee is terminated for cause, it can be difficult to fight an employee’s claim for unemployment benefits. A September 2017 ruling from the Commonwealth Court of Pennsylvania may provide employers a new route to combat meritless unemployment claims. In most states, an unemployed individual may file for and receive unemployment benefits if he or she is out of work due to no fault of his or her own. The Commonwealth Court of Pennsylvania recently affirmed a decision by the state’s unemployment board to deny a former painter at a metal company unemployment benefits because of a social media post he left after his departure from the company.
The use of WhatsApp without a declaration of consent from every person in the user’s address book was deemed inadmissible in a recent decision by the family law division of a German local court (AG Bad Hersfeld, 15.05.2017 – F 120/17 EASO).
The court held that the mother of an 11-year-old boy had to ensure, and regularly verify, that all of her son’s phone contacts had given their consent to the transfer of their contact data to WhatsApp.
Human interactions with technology
In the past few years, the use of social media has increased rapidly. A key feature of social media platforms and social media apps is the ability to interact with other people in ways that were not thought possible in previous generations. With the click of a button, someone from the other side of the world can appear on a screen in front of you.
Technology and social media have not just given rise to platforms that facilitate human-to-human interaction: recently, advancements in technology have led to a rise in a new type of social relationship: human-to-computer interaction. The interactions we have with technology are not just based on user input. Technology has learned to respond to people. It can communicate with us. It can perform tasks. It can learn our habits and tailor services to our needs. It can learn to identify us. It can assemble information and provide solutions to problems. This ‘artificial intelligence’ (or AI) has become a key component in our daily social interactions.
On September 7, 2017, the U.S. Federal Trade Commission (FTC) announced that it had entered into a proposed consent agreement with two individuals and their company that allegedly ran an online gaming community website that allowed users to gamble virtual currency. According to the FTC complaint, the two individuals promoted the gaming site and not only failed to disclose their ownership interest in the site or that they were playing with company money, but they also paid other social media influencers between $2,500 and $55,000 to promote the site.
As we previously wrote, in the spring of 2017 the FTC issued 90 letters to social media influencers and the companies they may have endorsed on Instagram. The letters reminded the recipients that the FTC expects any “material connection” between an influencer and the provider/service/company to be conspicuously disclosed. A “material connection” is “a connection that might affect the weight or credibility that consumers give the endorsement,” which can be a direct payment, free products, an ownership interest in the company, or even a family connection to the company. As part of the FTC’s September 7 announcement, the FTC also stated that it sent warning letters to 21 social media influencers who had received the spring letters. (The FTC did not disclose the names of the influencers or companies that received the warning letters.) Consequently, the FTC may be bringing additional actions in this area.
The South African Protection of Personal Information Act, 2013 (POPI), which regulates the processing of personal information by public and private bodies, is much like similar UK and EU legislation. It was signed into law in November 2013 but is not yet in full effect. Once the Act is made effective, companies will be given a year’s grace to comply with it, unless this period is extended as the Act allows.
On July 14, 2017, a federal trial court ruled on an interesting issue: could models and actresses, whose popularity on social media was a strong factor in determining their earning capacities, maintain a lawsuit under the Lanham Act against a “swingers club” that used their photos without consent? In a case where social media played a prominent evidentiary role, the court permitted the models’ claim to proceed and denied the defendants’ motion to dismiss. Lancaster v. The Bottle Club, LLC, Civ. No. 8:19-cv-634-T-33JSS, 2017 WL 3008434 (M.D. Fla. July 14, 2017).
Have you considered what you would like to happen to your social media accounts when you die? Where the platform gives you options, have you selected one? A while ago we wrote about what happens to your social media account when you die.
Many platforms, including Facebook, Twitter, Instagram and LinkedIn, have different policies about what will happen to a deceased person’s profile. Since our last post, some of these policies have changed. Here is the current status as of the date of this post:
Emoticons – the often whimsical hieroglyphics that most of us affectionately know as “emojis” – have become ubiquitous in modern digital communication, used not only by individuals but also by corporations as part of their advertising and marketing campaigns on social media. Emojis have also begun appearing as evidence in court cases.
A short but fascinating discussion between several experts in the fields of computer science, hieroglyphics, and social media of the impact emojis have had on our language can be found here. The crux of the discussion is that emojis can have a profound impact on the way we communicate. Essentially, the inclusion of a single emoji can alter the meaning of the accompanying text. Alexandre Loktonov, AHRC Fellow at the Kluge Center and an expert on hieroglyphics, likens emojis to what are known as “determinatives” in Egyptian hieroglyphics, or “signs, which, without having a phonetic value of their own, can ‘color’ the meaning of the preceding word or phrase.” In recent years, the nature of emojis has been addressed in several lawsuits, suggesting that courts are recognizing the importance these characters have begun to have with respect to our language and communication.
In June, we introduced the topic of chatbots and highlighted some key risks and concerns associated with this growing area of technology. One business in particular, DoNotPay, made headlines recently by announcing that it would begin building legal chatbots for free.
The claim? In a July 14, 2017, posting to the online publishing platform Medium, Joshua Browder, founder of UK-based DoNotPay, writes, “Starting today, any lawyer, activist, student or charity can create a bot with no technical knowledge in minutes. It is completely free.” Sound too good to be true? To be sure, DoNotPay is not the first company to develop law-related chatbots—these bots are already popping up all over the world. But because this technology is still fairly new, chatbots that are attempting to automate services previously performed by licensed attorneys will almost certainly attract scrutiny.