Yesterday I attended a guest lecture by Justin Cheng, a PhD candidate at Stanford University, titled “Antisocial Computing: Explaining and Predicting Negative Behavior Online.” One important note is that the definition of trolling used in his work is “behavior that falls outside acceptable bounds defined by those communities.” Here are my main takeaways:
Most trolls are just regular people having a bad day
The common perception of an internet troll is a cackling misanthrope sitting in their basement intentionally trying to ruin someone’s day. But while there certainly is a segment of trolls who fit this model, it appears that most antisocial behavior online is highly situational.
Justin conducted an experiment where people were given a quiz and then asked to engage in an online discussion. When people were given a hard quiz (or when told they did poorly regardless of their actual results), they made more negative comments in response to a political article. Justin also referred to research on mood fluctuations over time, which found that patterns of negative comments on news sites closely matched dips in people’s mood throughout the day and week.
Downvotes actually promote bad behavior
Common sense might lead one to believe that people chastised for breaking community standards would make an effort to behave better. Justin found that the opposite is actually true! When comparing users with comparable posting histories, he noted the following:
- People write worse after negative feedback, with little change after positive feedback.
- Negatively evaluated users actually post more frequently, and they downvote others more often.
- This leads to a downward spiral that causes communities to get worse over time, as shown by a decline in the number of upvotes.
The spread of bad content is predictable
There has been a lot of attention paid to fake news lately. The spread of these low-quality articles seems chaotic: it can die down and then recur after long periods of time. But Justin found that sharing cascades are predictable, as are recurrences in popularity.
I admit I didn’t understand the finer points of his formula, but the gist was that once a post reaches the median number of shares, whether its popularity will double can be predicted.
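To make the idea concrete, here is a toy sketch of doubling prediction as a binary classification problem. Everything here is invented for illustration: the simulated cascades, the single “time to reach the median” feature, and the threshold classifier are my own simplifications, not Justin’s actual model or features.

```python
import random

random.seed(0)

MEDIAN_SHARES = 10  # hypothetical observation window: the first 10 shares

def simulate_cascade():
    """Toy cascade: faster early growth tends to mean a larger final size.
    Purely synthetic -- not the features or model from the actual research."""
    speed = random.uniform(0.1, 1.0)           # shares per time unit
    time_to_median = MEDIAN_SHARES / speed     # time to reach the median share count
    # Final size loosely correlated with early speed, plus noise.
    final_size = int(MEDIAN_SHARES * (1 + 2 * speed * random.uniform(0.5, 1.5)))
    return time_to_median, final_size

data = [simulate_cascade() for _ in range(1000)]
# Label: did the cascade at least double after reaching the median size?
labels = [final >= 2 * MEDIAN_SHARES for _, final in data]

# Trivial "classifier": cascades that reached the median faster than
# the typical cascade are predicted to double.
threshold = sorted(t for t, _ in data)[len(data) // 2]
preds = [t < threshold for t, _ in data]

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"doubling rate: {sum(labels) / len(labels):.2f}, accuracy: {accuracy:.2f}")
```

The appeal of framing the question at the median size is that roughly half of cascades double, so the two classes are balanced and even a single speed feature beats chance in this synthetic setup.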
Over the years, I’ve come to appreciate the extent to which people’s behaviors are influenced by their environments. Justin Cheng’s research suggests that this is no different for online spaces. Just as run-down surroundings and unpleasant stimuli may result in increased aggression, online communities must be thoughtfully designed and maintained to function smoothly.
There is a commonly referenced usability heuristic that there should be a match between digital systems and real-world conventions. With the anonymity and immediacy of the internet, many of the factors that constrain our behavior in face-to-face interactions are absent, which can sometimes lead to abusive behavior. Can solutions be designed to simulate or take the place of in-person social cues?
Justin proposed some ideas based on his findings: removing downvoting, implementing reputation systems, improving comment-screening algorithms, strengthening community rules and moderation, and delaying comment posting. With trolling and negative online behavior coming into the spotlight, it will be interesting to see whether these and other solutions have the power to make the internet a friendlier place.