
Social media: uncovering the bots

07 May 2014

Indiana University computer scientists have developed a tool for uncovering bot-controlled Twitter accounts.

Complex networks researchers at Indiana University (IU) have developed a tool that helps anyone determine whether a Twitter account is operated by a human or an automated software application known as a social bot.

The tool stems from US Department of Defense-funded research at IU's Bloomington School of Informatics and Computing to counter technology-based misinformation and deception campaigns.

'BotOrNot' analyses more than 1,000 features drawn from a user's friendship network, Twitter content and temporal activity, all in real time, and then calculates the likelihood that the account is a bot.
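
To make the idea concrete, here is a minimal sketch of this kind of feature-based classification in Python. The feature names and the choice of a random-forest model are assumptions made for illustration; this is the general technique, not the BotOrNot implementation.

    # Illustrative sketch only - not the BotOrNot code. The feature names
    # and the choice of a random-forest model are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-account features: followers/friends ratio,
    # mean seconds between tweets, fraction of tweets containing URLs
    X_train = np.array([
        [0.90, 3600.0, 0.2],   # human-like accounts
        [1.20, 7200.0, 0.1],
        [0.01,   30.0, 0.9],   # bot-like accounts
        [0.02,   15.0, 0.8],
    ])
    y_train = np.array([0, 0, 1, 1])  # labelled examples: 0 = human, 1 = bot

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score an unseen account: the class-1 probability is the 'bot likelihood'
    account = np.array([[0.05, 20.0, 0.85]])
    print("Bot likelihood: %.2f" % model.predict_proba(account)[0, 1])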

The US National Science Foundation and the US military are funding the research after recognising that the increased flow of information - through blogs, social networking sites and media-sharing technology - along with the rapid proliferation of mobile technology, is changing the way communication, and possibly misinformation campaigns, are conducted.

Applied to the task of uncovering deception, network science takes advantage of the structure of social and information diffusion networks, along with linguistic cues, temporal patterns and sentiment data mined from content spreading through social media. BotOrNot analyses each of these feature classes.

Alessandro Flammini, an associate professor of informatics and principal investigator on the project, says the demonstration illustrates some of these features and how they contribute to the overall ‘bot or not’ score of a Twitter account.

“We have applied a statistical learning framework to analyse Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” he says.

Using these features, together with examples of Twitter bots provided by Texas A&M University professor James Caverlee's infolab, the researchers trained statistical models to discriminate between social bots and humans; according to Flammini, the system is quite accurate. Measured by an evaluation metric called AUROC, BotOrNot scores 0.95, where 1.0 would be perfect accuracy.
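
AUROC is the area under the receiver operating characteristic curve: the probability that a randomly chosen bot receives a higher score than a randomly chosen human, so 0.5 is chance level and 1.0 is a perfect ranking. As a quick illustration with made-up scores (not the project's evaluation data):

    # Computing AUROC for a handful of made-up bot-likelihood scores
    from sklearn.metrics import roc_auc_score

    y_true  = [0, 0, 1, 1, 0, 1, 1, 0]   # ground truth: 1 = bot, 0 = human
    y_score = [0.1, 0.3, 0.8, 0.9, 0.75, 0.7, 0.95, 0.2]  # classifier scores

    # Prints 0.94 here: one human outscores one bot, costing 1 of the
    # 16 bot-human comparisons (15/16 = 0.9375)
    print("AUROC: %.2f" % roc_auc_score(y_true, y_score))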

“Part of the motivation of our research is that we don't really know how bad the problem is in quantitative terms,” says Fil Menczer, the informatics and computer science professor who directs IU’s Center for Complex Networks and Systems Research, where the new work is being conducted as part of an information diffusion research project called Truthy.

“Are there thousands of social bots? Millions?" asks Menczer. "We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumours, spam, malware, misinformation, political astroturf and slander.”

Flammini and Menczer are convinced that these kinds of social bots could be dangerous for democracy, cause panic during an emergency, affect the stock market, facilitate cybercrime and hinder advancement of public policy. Their goal is to support human efforts to counter misinformation with truthful information.

Les Hunt
Editor

