
Photo illustration: The study utilized state-of-the-art machine learning and language processing to identify social media bots. Credit: Diego Cárdenas

In a recent study, Penn and Stony Brook University researchers found that social media bots may be identifiable due to their similarities, despite appearing human on an individual level.

Engineering professor Lyle Ungar and Ph.D. student Salvatore Giorgi worked with Stony Brook professor H. Andrew Schwartz to examine how successfully social spambots — automated social media accounts that emulate humans — can mimic 17 human attributes, including age, gender, personality, sentiment, and emotion.

Published in Findings of the Association for Computational Linguistics, the study utilized state-of-the-art machine learning and language processing to explore how these spambots interact with genuine human accounts across over 3 million Twitter posts, Penn Engineering Today reported. The posts were drawn from 3,000 bot accounts and an equal number of genuine human accounts.
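To give a flavor of that kind of language analysis — this is an illustrative sketch, not the authors' actual pipeline, and it assumes NLTK's off-the-shelf VADER analyzer rather than the models used in the paper — one of the 17 attributes, sentiment, could be estimated per account like this:

```python
# Illustrative sketch only, not the study's pipeline: scores one of the 17
# attributes (sentiment) per account with NLTK's off-the-shelf VADER analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

def account_sentiment(tweets: list[str]) -> float:
    """Average VADER compound score over an account's tweets, in [-1, 1]."""
    scores = [sia.polarity_scores(t)["compound"] for t in tweets]
    return sum(scores) / len(scores)

# Hypothetical usage: the study found bot language skewed strongly positive.
print(account_sentiment(["Love this!", "What a great day :)"]))
```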

“If a Twitter user thinks an account is human, then they may be more likely to engage with that account. Depending on the bot’s intent, the end result of this interaction could be innocuous, but it could also lead to engaging with potentially dangerous misinformation,” Giorgi told Penn Engineering Today. 

Giorgi said there is a lot of variation in the types of accounts people can encounter on Twitter, ranging from genuine humans to human-like clones posing as people to obvious robots.

The study builds on an emerging body of work that aims to better understand how spambots infiltrate online discussions, often fueling the spread of disinformation about controversial topics like COVID-19 vaccines and election fraud.

Ungar and Schwartz, who have previously collaborated on studies about social media’s effects on mental health and depression, worked with Giorgi to integrate language processing techniques with spambot detection — something that few studies have done, according to their paper.

After testing how spambots displayed traits and emotions such as agreeableness, sadness, surprise, and disgust, the researchers concluded that spambot behavior defied their initial hypothesis.

“The results were not at all what we expected. The initial hypothesis was that the social bot accounts would clearly look inhuman,” Giorgi told Penn Engineering Today.

Their unsupervised bot detector, however, revealed that although individual bot accounts looked reasonably human, at the population level they appeared to be clones of the same person.

“Imagine you’re trying to find spies in a crowd, all with very good but also very similar disguises,” Schwartz told Penn Engineering Today. “Looking at each one individually, they look authentic and blend in extremely well. However, when you zoom out and look at the entire crowd, they are obvious because the disguise is just so common.”
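That "zoom out" idea can be sketched in a few lines of code. The following is a minimal hypothetical illustration, not the detector the researchers built: it assumes each account is represented as a vector of its estimated human attributes, then flags accounts that sit in unusually dense clusters — individually plausible, but collectively near-identical.

```python
# Hypothetical sketch of population-level "clone" detection, assuming each
# account is summarized as a vector of estimated human attributes.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_clone_like_accounts(features, k=10, quantile=0.05):
    """Flag accounts whose attribute vectors sit in unusually dense clusters.

    features: (n_accounts, n_attributes) array, one row per account
              (e.g. estimated age, sentiment, personality scores).
    Returns a boolean mask: True = "a clone in the crowd".
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    distances, _ = nn.kneighbors(features)         # column 0 is self (dist 0)
    mean_knn_dist = distances[:, 1:].mean(axis=1)  # avg distance to k neighbors
    return mean_knn_dist < np.quantile(mean_knn_dist, quantile)

# Synthetic usage: diverse "humans" vs. a tight cluster of near-identical "bots".
rng = np.random.default_rng(0)
humans = rng.normal(0.0, 1.0, size=(900, 17))
bots = rng.normal(0.5, 0.05, size=(100, 17))
mask = flag_clone_like_accounts(np.vstack([humans, bots]))
print(mask.sum(), "accounts flagged")
```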

The bots examined in the study seem to mimic a person in their late 20s and are overwhelmingly positive in their language. As spamming technologies mature, research on bots’ humanlike traits will continue to be important for detection efforts, according to the study.

“The way we interact with social media, we are not zoomed out, we just see a few messages at once,” Schwartz told Penn Engineering Today. “This approach gives researchers and security analysts a big picture view to better see the common disguise of the social bots.”