Spotting a social bot might be harder than you think

By Allie Nicodemo, August 14, 2017

Onur Varol, postdoctoral research associate at Northeastern's Network Science Institute, estimates that 9 to 15 percent of profiles on Twitter are likely social bots. Photo by Adam Glanzman/Northeastern University

How do you spot a fake friend? Sometimes it's easy. Other times, fake friends are far better disguised as real ones. The same is often true of social "bots."

Social bots are accounts, predominantly on Twitter, that pose as people but are in fact the products of automated software. In some cases, a single person can be the "master" behind an army of hundreds of thousands of bots, says Onur Varol, a postdoctoral researcher at Northeastern's Network Science Institute.

Varol recently joined the Center for Complex Network Research, which is directed by Albert-László Barabási, Robert Gray Dodge Professor of Network Science and Distinguished University Professor. He has been studying bots and their influence on social media for several years, beginning with his doctoral work at Indiana University. Within 30 to 40 seconds of looking at a Twitter profile, he can usually tell whether it's a bot.

Some of the giveaways? One is an account that follows a large number of people who don't follow back. Another is tweets that are simple, repetitive, and singularly focused on a certain topic, such as promoting a weight-loss supplement or disparaging a political candidate. (These two signals are sketched in code below.)

Most savvy internet users might think they could spot a bot with ease. In some cases, that's true. "The simpler versions of the social bots can't argue intelligibly with people on social media, giving clear answers or making arguments," Varol says.

However, scanning every profile on a user's timeline is not practical. And there are tricks a "bot army master" can employ to make the accounts feel more human. Since most automated bots can't hold a conversation, the person behind the army might intervene every now and then to join a discussion or introduce templates of arguments and responses, Varol explains. And with one tweak to an algorithm, thousands of bots can be retargeted to tweet about an entirely new topic, shifting online attention.

These tactics can and do throw people off the bots' tracks, and they may help explain why so many readers are susceptible to fake news. "If something is fake but tweeted by lots of accounts, it creates an illusion of reality," Varol says. "We might think, 'maybe this is true because so many people are talking about it.'"

Identifying sophisticated bots in a systematic way has proven difficult, which is part of the reason Twitter hasn't done more to crack down on fake accounts. If the platform mistakenly suspends or bans a real person, it's like a store wrongly accusing a customer of stealing; the ordeal might be more trouble than it's worth.

However, the Twitter community has mounted crowdsourced grassroots efforts to police online activity, Varol says. When Daesh was becoming increasingly active on the platform last year, users worked together to identify more than 20,000 bot accounts, and Twitter closed them all within a day or two. And yet the vigilance of real people who want a bot-free social media experience hasn't been enough to snuff out the problem.
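To make those giveaways concrete, here is a minimal, purely illustrative sketch of the two signals Varol describes, written against a hypothetical, pre-fetched profile record. The field names and thresholds are assumptions for the sake of the example; real detectors such as Botometer combine many more features with machine learning.

```python
# Illustrative sketch only: two simple bot signals mentioned in the article,
# applied to a hypothetical profile record. Thresholds are arbitrary.

def looks_bot_like(profile, recent_tweets):
    """Flag an account that (1) follows far more people than follow it back
    and (2) posts simple, repetitive, single-topic tweets."""
    # Signal 1: lopsided follow ratio (many followed, few followers)
    follow_ratio = profile["following_count"] / max(profile["followers_count"], 1)
    lopsided_follows = follow_ratio > 20

    # Signal 2: low vocabulary diversity across recent tweets suggests
    # repetitive, single-topic posting
    words = [w.lower() for tweet in recent_tweets for w in tweet.split()]
    diversity = len(set(words)) / max(len(words), 1)
    repetitive_content = diversity < 0.2

    return lopsided_follows and repetitive_content


# Example with made-up numbers
profile = {"following_count": 5000, "followers_count": 12}
tweets = ["Lose weight fast! #weightloss"] * 50
print(looks_bot_like(profile, tweets))  # True
```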
Varol and his colleagues estimate that 9 to 15 percent of Twitter accounts exhibit bot behavior. To make identifying bots easier, they built a detection system called Botometer. The site lets anyone plug in a Twitter handle and instantly get a rating for the account; the higher the rating, the more likely the account is a bot. Since its launch, Botometer has handled more than 30 million requests from the public.

Varol hopes that by giving people a way to easily identify social bots, the tool will help shut them down. That is, until new ones come along. "This is like a never-ending arms race," Varol says.
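For readers who want to query Botometer programmatically rather than through the website, the sketch below shows roughly how the team's Python client might be used. The credential names, constructor parameters, and response fields shown are assumptions that have varied across client and API versions, and the snippet requires valid Twitter and Botometer API keys to run.

```python
# Rough sketch: scoring an account with the Botometer Python client.
# Parameter and response-field names are assumptions and may differ by version.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_BOTOMETER_API_KEY",  # key parameter name has changed across versions
    **twitter_app_auth,
)

# Score a single account by handle; higher scores mean more bot-like behavior
result = bom.check_account("@example_handle")
print(result.get("display_scores"))  # assumed response field
```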