Busyboxes-as-a-Service


As a long-time poster, I invariably spend a lot of time thinking about internet harassment.

Maybe I’m not the right person to write about this, but I’ve been online for over three decades now and would like to think that I’ve at least witnessed what can only be described as the bottom falling out of the social contract of the ‘net. I don’t think this is necessarily due to the external factors that are commonly ascribed to it, such as the increasing polarization of our politics or social atomization. I’d like to posit that the single biggest factor is that we’ve stopped designing “pressure relief” into our online social systems.

In short, there have been no successful forms of large-scale online communication that didn’t include some way to occupy and divert antisocial behaviors and users. Sometimes it’s as simple as flamewars and plonking your posting nemeses on Usenet. Coordinated bannings and ignores served as a social pressure mechanism to ensure that antisocial twits would, indeed, log off. As our social tools became more complex, and the user base grew, new techniques such as hellbanning and user reputation systems grew in popularity. The 2000s are really where these tools diverged from their original purpose, however, as the design shifted to focus on mitigating and preventing spam rather than antisocial behavior.

We’ve done a pretty good job at the spam mitigation part, to the point where it’s effectively an extremely low-level and ignorable background hiss. It still exists, obviously, and it’s gotten more advanced - but it’s also become more human. Automated content and harassment campaigns now go hand-in-hand, because volume has replaced all of our other metrics for judging the relative importance of things. This has become a weaponized tactic by antisocial users, who use a combination of automated (botting, etc.) and manual (coordinated messaging campaigns, etc.) methods to bully and harass people they don’t like online.

I don’t want to get too much into the weeds about the details here (although there’s a lot of devils in them), but I do want to suggest that one reason we’ve seen this sort of antisocial behavior proliferate is that it’s the only way for trolling to have much of an impact anymore. We’ve defined “user safety” in a rather narrow way; I would suggest that modern ideas about user safety directly contribute to the explosion of “fake news” or whatever you’d like to call it, because user safety has come to mean that you should be able to construct a perfect filter bubble. If that’s the end result, then of course the antisocial response is going to be to crush your bubble, and the only way to do that is by – essentially – DDoSing it.

There’s a spectrum here, obviously. Should people that do really stupid shit face consequences? Absolutely! Don’t come crying to me when you get fired because you decided to livestream yourself trying to overthrow the government. Wear a fucking mask, dumbass.

That said, there’s a whole ocean of lower-level harassment that takes place online that doesn’t rise to this level of consequence, but because the only tools we have to show displeasure are shitposting or popping someone’s bubble, that’s what people do. We need some sort of pressure relief in our social tools that can occupy trolls, rather than the constant brinksmanship and escalation that the current user safety model is designed for.

I’ve joked several times online that the best feature the SA Forums have is the ability to pay money to change someone else’s profile picture. It’s ingenious in some ways, because it acts as both a pressure relief valve on tensions and a way to communicate trust and reputation. If you spend actual money on replacing someone’s profile picture and text with a giant red warning that they’re an asshole, then you’d better have a good reason for it, because why would you care so much as to spend real human money on something unless it was important? Sometimes it’s comic, sometimes it’s poignant, and sometimes it’s highly offensive (although keep in mind, this system is constrained by the other bounds of the system; if you change someone’s picture to something that would get them banned, then you’re going to be the one who catches that ban – there’s a self-regulation here) – but most critically, it provides a busybox for trolls. It’s something that occupies them but, at the end of the day, doesn’t actually impact the user experience of the target, because if they have profile pictures (avatars, or AVs) turned off, they’d never know you did it in the first place. You could waste five bucks on something your target doesn’t even know about! gg at being mad online, nerd.
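For illustration’s sake, here’s a rough sketch of how that mechanic hangs together. To be clear, this is not how the SA Forums actually implement it, and every name in it (User, buy_avatar_change, render_profile, AV_CHANGE_PRICE) is made up for this post, but it captures the three properties that make the busybox work: the purchase costs real money, the moderation liability lands on the buyer, and the target’s own settings decide whether they ever see the result.

```python
from dataclasses import dataclass

AV_CHANGE_PRICE = 5.00  # real money is the commitment signal

@dataclass
class User:
    name: str
    avatar: str = "default.png"
    custom_title: str = ""
    avatars_visible: bool = True  # the viewer's own setting, not the poster's
    balance: float = 0.0

def buy_avatar_change(buyer: User, target: User, new_avatar: str, new_title: str) -> dict:
    """Purchase flow: the buyer pays, the target's profile changes, and any
    rule-breaking content is attributed to the buyer, not the target."""
    if buyer.balance < AV_CHANGE_PRICE:
        raise ValueError("no money, no busybox")
    buyer.balance -= AV_CHANGE_PRICE
    target.avatar = new_avatar
    target.custom_title = new_title
    # Self-regulation: the change is logged against the buyer, so if the new
    # avatar would earn a ban, the buyer is the one who catches it.
    return {"liable_user": buyer.name, "content": (new_avatar, new_title)}

def render_profile(viewer: User, profile: User) -> str:
    """What a given viewer actually sees. If the viewer (including the target
    themselves) has avatars turned off, the five bucks bought nothing visible."""
    if not viewer.avatars_visible:
        return profile.name
    return f"{profile.name} [{profile.avatar}] {profile.custom_title}"

# The punchline: the troll spends five bucks, and a target with avatars
# turned off never sees a thing.
troll = User("mad_online", balance=10.00)
target = User("some_poster", avatars_visible=False)
receipt = buy_avatar_change(troll, target, "red_warning.png", "this poster is an asshole")
print(receipt["liable_user"])          # -> "mad_online"
print(render_profile(target, target))  # -> "some_poster"
```

The last two lines are the whole joke: the troll is out five dollars, the buyer is on the hook with the mods, and the target’s experience is literally unchanged.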

Is this the answer for the entire internet? No, obviously not. That said, it flies in the face of our modern understanding of user safety in some ways. Would Twitter be better if I could pay five dollars to turn Ross Douthat’s profile picture into an image of a pig with poop on its balls? I mean, it’d be funnier.

In a more general sense, we need to decide what kind of internet we’d like to have, because we can’t really “have it all”. The moderation challenges of a Facebook or Twitter (or a Parler or Gab) should be proof enough - centralized, global platforms are broadly incompatible with close-knit, personalized experiences and bubbles. The potential for context collapse isn’t simply “too high”; it’s an as-intended feature of these platforms. AI/ML-based moderation won’t save it, trying to get people to be nicer by banning the Nazis won’t save it (although we should just ban the Nazis as a matter of course, get wrecked nerds), and a bunch of warnings about fake news and engagement-limiters won’t save it. You can’t design something for maximal engagement and perfect safety; the two are incompatible.

Give the trolls something else to do, because they’re never going to go away, and things are only going to get worse.