As we discussed in a recent post, “Social media overload”, social media has grown exponentially over the past decade, changing how businesses operate and make decisions. It has quickly become one of the most important marketing platforms, offering companies a convenient way to reach broad audiences.

Often, the popularity or success of an entity or individual on social media is judged by some quantitative measure, such as likes or followers. The more likes or followers (or similar metrics) an account has, the more likely it is to be perceived as popular, successful, or relevant to the ongoing social conversation. This phenomenon has given rise to numerous services that offer likes or followers by the thousands at extremely low cost. These services create or provide “bots” that mimic real social media users and engage with a client’s account by following it, liking it, or even re-sharing or commenting on its posts. They offer entities an easy way to gain large numbers of followers, likes, and the like, which may earn the account attention as “trending” or simply get it noticed on a social media platform.

On the other hand, companies do not want to risk losing credibility by having it revealed that some, or even most, of their followers are fake accounts run by bots. Companies also need an accurate accounting of their social media activity and followers, a task that is nearly impossible when an unknown number of bot accounts is present.

While platforms like Instagram and Twitter have tried to purge fake accounts themselves, it is nearly impossible for them to keep up, and harder still for smaller social media platforms. Fortunately, several services can help identify bots and fake or spam accounts, though none can guarantee they will always be right. Bots have become increasingly sophisticated: some limit their activity to mimic normal human sleep patterns, while others “follow” each other to create the appearance of legitimacy. Such features make it difficult for a computer program to distinguish the real from the fake, and they raise the risk of wrongly flagging a real follower as a bot, which may be even more problematic than simply letting the bots remain.
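To make the detection problem concrete, here is a minimal sketch of the kind of heuristics such services might combine. Everything in it is hypothetical: the `Account` class, the `bot_score` function, and the thresholds are invented for illustration, and no platform is known to use these exact rules. It also shows why false positives happen, since a real but lopsided account can trip the same signals.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical snapshot of an account's public activity."""
    followers: int
    following: int
    post_hours_utc: list  # hour of day (0-23) for each recent post

def bot_score(acct: Account) -> float:
    """Rough 0.0-1.0 suspicion score from two naive signals.

    Signal 1: the account follows far more users than follow it back.
    Signal 2: posts are spread across nearly every hour of the day,
    i.e. no "sleep" gap, which round-the-clock automation produces
    (and which sophisticated bots now deliberately avoid).
    """
    score = 0.0
    # Signal 1: lopsided follow ratio (threshold is arbitrary).
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.5
    # Signal 2: activity in 20+ distinct hours suggests no sleep gap.
    if len(set(acct.post_hours_utc)) >= 20:
        score += 0.5
    return score

# A round-the-clock account with 100 followers and 5,000 followings
# trips both signals; a typical human account trips neither.
suspect = Account(followers=100, following=5000,
                  post_hours_utc=list(range(24)))
human = Account(followers=300, following=280,
                post_hours_utc=[9, 12, 18, 20, 21])
```

Here `bot_score(suspect)` returns 1.0 and `bot_score(human)` returns 0.0, but a bot that sleeps eight hours and buys a few followers would slip under both thresholds, which is exactly the arms race the paragraph above describes.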

The best way to address the bot issue while avoiding (or at least minimizing) false positives may be to do it the old-fashioned way: through hands-on, human labor. People who are familiar with a company’s social media account and its regular, active followers are in the best position to spot a bot. Companies that already have employees tasked with managing their social media accounts and Internet presence should leverage those experts’ knowledge to identify bots and fake or spam accounts. Others may take a more creative approach and crowdsource the work: companies with loyal, active followers can enlist their help in identifying and reporting fake accounts as they encounter them during normal social media use.

Whether through computer-based services, employees, or active social media users, companies will have to find some way to tackle the issue of fake accounts and bots—at least until the social media platforms or regulators develop more sophisticated methods to take down such accounts or prevent them from being created in the first place.