Moderating with Empathy

Gareth Coles

This post was imported from my old blog at gserv.me. It was originally posted on the 3rd of June 2022, while I was still part of Quilt’s community team – so that’s the perspective it’s been written from. However, things are likely to have changed since then, especially given that I decided to leave my Community Manager position just over a year later.

This post has also been reviewed for typos and other grammatical mistakes, as part of the process of moving the blog here.


I guess it’s about time I posted something here, huh? I’ve been doing moderation for an awfully long time, but a situation I was part of recently really changed my perspectives on a few things. To explain this, though, I need to go back over some of my history.

If you have some insight on the subjects discussed in this blog post, please feel free to contact me on Twitter or the Fediverse. It’d be great to try to push our moderation approaches forward.

A History Lesson

Don’t worry, we’re not going back 16 years for this one.

October 2021

I’ve occasionally been known to say that if you’re not pissing anyone off, you’re probably not moderating strictly enough. While this is obviously a hyperbolic joke, it’s often interesting (and occasionally a touch scary) to see just how upset people can get when they’ve been forced to deal with the consequences of their actions.

As some of my readers are undoubtedly aware, this was the month that I fell victim to an attempted doxxing. This was orchestrated by a user that I’d banned from Quilt spaces back in June of the same year, following a ban from a satellite community over some “light” Islamophobia – a ban well within Quilt’s defensive moderation policies.

While the exact details of this event aren’t too important, it really put into perspective the work that Quilt does, and how important it is to act quickly – swinging the hammer early when issues appear. When we’d figured out what was going on, who was responsible and the kinds of communities they were publicly exposing my information within, we realised that a formalised Quilt Community Collaboration program was on the horizon. It isn’t an exaggeration to say that this event is responsible for a good chunk of how Quilt’s moderation works today.

That’s not to say that everything we do there is perfect, though – there’s certainly room for improvement, some of which I’ll talk about later on.

Community Notes

It’s worth noting that the community that this doxxing was orchestrated within (and that was responsible for a lot of targeted harassment of Quilt and its staff members) was not a particularly healthy one. While searching through it, we found:

  • Lots of casual racism and antisemitism, including non-ironic uses of slurs such as the N-word
  • Blatant transphobia, homophobia, and disparagement of the LGBT community, including harassment of people that were themselves part of this community
  • Violent sexism and anti-feminism
  • Users referring to themselves as Nazis
  • Attempts to radicalise the community’s members, turning them against progressive communities and each other

In some ways, this isn’t too surprising – groups like this exist in niches all over the Internet, and Minecraft is the biggest game of its type. However, it’s still striking how deep-seated and strongly worded the discussions we found were; it was as if the people in that community were defined by their political viewpoints, rather than being there because they enjoyed the same things or had some other common trait or interest.

It’s particularly poignant that most of the people who were active in that community had views and supported actions that would likely get them banned from most other communities, should they engage in them there.

Present Day

Since October, many things have happened. Most notably, by this point, I’d been contacted by several people that were part of the group mentioned above – people that had clearly suffered a lot at the hands of that group’s members, and were looking to “escape”, recentre, and reform themselves. These people understood that the environment they were stuck in was harming them, and decided to leave on their own initiative.

I’m pleased to say that everyone who’s done this has seen huge improvements, both in their behaviour and their quality of life. People that I’d originally assumed to be irredeemable racists (for example) have turned themselves around and, surprising almost everybody, shown that they were actually just lovely people that had allowed themselves to be manipulated into fitting in with the group.

Of course, it’s been wonderful to oversee these transformations, and it’s always great to see that these people have been able to find themselves. However, these occurrences raise more questions than they answer – for example, how did we get here?

A Hypothesis

I imagine that by this point, some people reading this have already come to the same conclusion as me – but let’s think about the circumstances around situations like this. If we examine the group we’ve been talking about above, we can draw the following points of data:

  • The community used to just be built up around a Minecraft server, but that hasn’t operated in a very long time
  • A sizeable chunk of the users have been banned from Quilt spaces, and some of them have been banned more widely
  • The narrative used to justify the community (before Discord banned it) was that it was a place of free speech, away from communities that judge you for your opinions and that ban you across hundreds of Discord servers for a simple misunderstanding

Of course, that last point was a complete fabrication, and it was pretty clear that the people pushing it knew that. However, it made sense for them to push it because, without a backing narrative, they would be unable to grow their community – given that it had no other pulling factors or reasons to further exist, aside from being a place for people that were already there (and who already knew each other) to hang out.

Something else that’s worth considering is what happens to people when you ban them – they don’t just cease to exist or immediately become non-bigoted or friendly towards minorities. Generally, most of the people you’ll ban from your communities will disagree with their ban or ban reason, whether that’s in bad faith or not. Removing someone from your community often signals one or more of the following, depending on the person and what happened:

  • You’ve decided that the user has no room for improvement, or that they’d cause too much harm while working on themselves
  • You have no interest in dealing with them or entertaining their presence in your community
  • You’re removing them to curate the overall viewpoints of your community members (e.g. pushing a progressive narrative)
  • You believe the user is a detriment to your community or the people around them

There are quite a few other possibilities here, but the main thing to take away is that they’re not going to like you any better for banning them – often, they’ll find a narrative that allows them to blame you for banning them, rather than admitting their own faults and taking responsibility for their actions. While it’s true that all moderators will make mistakes occasionally, and that this doesn’t cover every person you’ll ban, it’s what I’ve observed in most trolls, bigots and other explicitly problematic users – the people you ban quickly, basically.

People with this kind of mindset are not likely to learn from the ban. You’ve removed them from your community, but doing so will make them someone else’s problem, or potentially push them towards radicalising groups (like the one described above) because they’ll be accepted there – regardless of how much that group may suck, dislike or bully the person, or show harmful behaviour, they’ll at least avoid banning them.

What We Can Learn

If we combine this hypothesis with the information given earlier, we can draw some interesting conclusions that may help to inform how we approach moderation going forward. I’ve had a few ideas myself:

  • Make sure your appeals system is accessible to anyone you’ve banned, and do your best to keep your documentation and language around it non-judgemental. Make it clear that you expect and encourage self-improvement, and that users should return when they feel like they’re a better match for your community.
  • Approach moderation with empathy – you will always need to ban people, but don’t approach it with a level of prejudice that makes you appear “holier than thou”. Instead, try to focus on what the user can do to improve themselves, and never remove their ability to appeal unless you absolutely have to.
  • Share intel, but don’t recommend actions (especially if the intel is old) – if you’re part of a collaborative moderation group, it’s important to inform everyone, but don’t try to stop them from coming to their own conclusions. Pushing people into action via your own narratives can cause more harm than good, both long- and short-term.
  • Train your staff’s attitude towards infractions – set out to encourage them to avoid snark, putting down infracted users, showing enjoyment when banning users, and so on. They should keep things neutral and professional, showing that they’re committed to the job, rather than just doing it for personal enjoyment. Make sure they understand that problematic users have the potential for personal growth, but don’t let that stop them from banning users as required.
  • If you find yourself in a neutral space with someone you’ve banned, talk to them – they may have their own viewpoint to share, and you may be able to help them understand why they were banned, how your moderation approach works, and why you approach it that way. They’re still a person, after all, and you may be able to help them make some progress by doing this.

Framing is an essential tool for people on both sides of this issue. Ultimately, moderation is a job – often a thankless, unpaid one, but an essential one – and it should be approached with professionalism; those engaging in it should be trying to make people’s lives better, not worse.

Conclusions

Ultimately, we can’t truly know anyone without sitting down with them for a while, and speaking with them as equals. In my research, I’ve noticed that a lot of the worst offenders have either been radicalised, or are dealing with other things – like trauma, parental abuse, difficult home lives, being LGBT in an environment that requires them to be closeted, and so on. It’s important to understand this because, although this won’t always be the case, all actions and viewpoints come from somewhere. Even the most infamous people in history often thought they were doing the right thing, after all.

While you will always have people who come to your communities to engage in bad faith, deliberately cause problems, and just try to hurt people – and you will still have to ban them, as you always have done – I believe that approaching these situations with the right framing and the right mindset can help to minimise harm to the people you’ve banned, other communities, and groups of people that they may try to harass.

If you’re in moderation (and especially if you’re not being paid for it), you probably have an explicit goal: to endeavour to make the world a better place. This is a complex task, a burden that shouldn’t be underestimated, and it’s important that everyone is accounted for – even if you don’t like them, or you’ve banned them.

There’s a balance to strike here. I’m not sure what that balance is yet, but we won’t find it by doing nothing, either.