How should moderation work?

I think there are two intended paths for moderation built into Discourse: the moderator group can directly intervene, and users can flag posts. Apparently flagging can trigger automated interventions (for example, if 3 users flag a post for the same reason, it will be hidden and the author will be messaged about it). I haven’t seen flagging explained or encouraged on the forum so far, and I think it could be helpful if the forum rules are clear about

  • which rules are meant as general guidelines and good discussion practice
  • how users can collectively put these into practice by flagging
  • which breaches will trigger direct moderator intervention

An inspiring speech: Rowan Atkinson Defends The Freedom To Offend


also inspiring, and fun that YouTube suggests watching this next :wink:


Still wanted to elaborate on my initial post a bit further:

I think there are at least two distinct layers of moderation. One is more akin to policing, because it’s about filtering behaviour that’s clearly threatening, abusive, promoting illegal content and such. I assume most people agree with that kind of moderation, because without it you typically lose the conditions for civil and peaceful debate very quickly. The other is much more ‘opinionated’. And there, before long, objections will be raised that this is over-policing and unfairly restricting free speech and freedom of opinion. I’d say these objections mostly miss the point, because we’re not looking at free speech in general and society at large. We want to build a particular community and need direction to get there. But at the same time there has to be a recognizably different and much softer approach to moderation.

Translated to the recent episode in People using Couchsurfing for dating/hookups/casual sex: when you want to build a community that is safe and supportive for female guests and will not end up with 9 male hosts out of every 10, I believe it’s absolutely the right thing to silence a voice that suggests the burden is on women to handle unwelcome advances and harassment. But I’d also recognize it’s not a clear-cut instance of policing bad speech. That’s why I think we need to find a way to moderate direction appropriately and not only put (volunteer) moderators on a line that will easily get confrontational. From how I understand the platform’s software so far, it offers such a way in the form of community flagging. Flagging is not only about raising issues to moderators. It’s also about encouraging more speech between users (switching to direct messages instead of steering a discussion off-topic) and designing community-driven moderation through numbers in flagging. That’s why I suggested giving more expression to the desired direction of the community, being more supportive of community moderation towards this direction, and restricting direct moderation to very clearly definable breaches.

Would love to hear more thoughts on this :slight_smile:


So this is the way to troll you all?
I just have to create 2 other fake profiles and then flag all your posts?

I am not sure this is smart to tell users here. CS ambassadors/employees tend to visit this forum as well and could sabotage you this way.


i think discourse (the platform’s software) has generally considered abusive approaches like that. there are user trust levels (none/0, basic/1, member/2, regular/3, leader/4), and flags are weighted depending on these levels. by default it takes something like 3 trust level 1 users, 2 trust level 2 users, or 1 trust level 3 user to trigger an automated action. that means at trust level 0 you can’t flag at all, and trust level 1 still requires moderator review. to reach level 2, you have to put in some effort and receive recognition for how you participate in the forum. so you couldn’t just use some throwaway burner accounts for flagging.

if there’s indeed some concerted action to troll, these settings can also be further adjusted.
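as a rough sketch of how trust-weighted flagging like this could work (the weights and threshold below are made up for illustration and are not Discourse’s actual implementation; they’re just chosen to match the defaults described above):

```python
# Illustrative sketch of trust-weighted flagging (not Discourse's real code).
# Weights are invented so that 3 trust-level-1 flags, 2 trust-level-2 flags,
# or 1 trust-level-3 flag each reach the action threshold.

FLAG_WEIGHT = {0: 0, 1: 2, 2: 3, 3: 6, 4: 6}  # trust level -> flag weight
ACTION_THRESHOLD = 6

def should_auto_hide(flagger_trust_levels):
    """Return True if the combined flag weight triggers an automated action."""
    total = sum(FLAG_WEIGHT.get(level, 0) for level in flagger_trust_levels)
    return total >= ACTION_THRESHOLD

# three basic (level 1) users together trigger the action:
should_auto_hide([1, 1, 1])  # True
# a single regular (level 3) user is enough:
should_auto_hide([3])        # True
# new users (level 0) carry no flag weight at all:
should_auto_hide([0, 0, 0])  # False
```

the point of weighting is visible in the sketch: burner accounts stuck at level 0 contribute nothing, however many of them you create.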

but they could also take this away as inspiration to reduce trolling on cs, and everyone wins :innocent:


For those interested, there’s more detail on flagging on the Discourse forum:
What are flags and how do they work?
So what exactly happens when you flag?
User reputation and flag priorities


I personally am completely against flagging at all. Even trolls can be funny. And it should also be shown on the forum that we are diverse in our thinking!
I replied to someone today and then it got deleted. I hope the person who wrote it, and not someone else, deleted it. We have different backgrounds and morals/values. This shouldn’t be deleted, because in reality we can’t do so either.

So when someone says something that’s against my morals: I have the possibility to ignore them or to educate them. I am also for the freedom to offend :wink:

Who are these moderators with their “higher” morals?

CS had an ongoing problem with blocking and suspending for years! You have the chance to do better. Red-flagging should not be one of the first solutions advised when clashing with another culture, even if it’s just in a forum. You are also creating “Super Members” with this system. “Superhosts” to me on CS weren’t those receiving more requests, but those silencing others through the misuse of features or powers (Ambassadors).


I think it’s important to realise that everyone has different thresholds, and for communities or individuals who already feel marginalised and discriminated against, creating an environment where unsubstantiated hostility is tolerated is not ideal. Not only that, I think it might be a bit cruel.

I am 100% for constructive conversation on this forum. I am 100% against words that can be used to hurt people needlessly, ESPECIALLY when it can be avoided. Yes, things can be funny for some, but may not be so amusing for others. Let’s keep discourse on here kind and productive :slight_smile:


I completely agree with you. It’s just that the world is cruel…
I would already feel safer seeing a standpoint here before I use the service at all. I don’t want anybody to feel marginalised and discriminated against; nevertheless, this will happen. I can even be part of a marginalised group and still discriminate against others. It can happen that I say something ignorant without even knowing it. English is not my first language and my culture might differ from yours. It would be very kind of you to simply reply to me then and not red-flag me.

I don’t want to put all the work and burden on volunteer moderators, but to build a community. Why should I red-flag someone when I can simply reply? And even if I feel attacked here, I would feel way more connected to Couchers when I see other members replying. I want (cultural) misunderstandings to be visible in the forum, because that is life. I want you to enter, and not close, the conversations around sexism/racism/discrimination. And I want all members to feel equally accepted, and not to create undercover spy “Super Members”.


Precisely, flagging someone when they are saying something rude and discriminatory IS putting the onus on the community to moderate themselves. It’s not creating “undercover spy Super Members”, it’s allowing the community to have a say on what is ok to say and what is not. For example, personal attacks are not ok. These are flagged. Constructive replies and discussion will not be flagged - especially not by members of the community who have proven themselves to be able to speak kindly. Wouldn’t you agree on this?

Perhaps we have different views on this, but if this was to become a cesspool of people who are aggressive, have discriminatory thoughts, and start flame wars on threads where I want to see ACTUAL discussion on features, then no, I wouldn’t use this service at all. Not only would it be a waste of my time being in such a hostile environment, I would be unhappy that the company won’t take a stand on protecting vulnerable communities, ESPECIALLY if I am part of these communities. I wouldn’t feel safe. It doesn’t matter that there are 1 or 2 people replying to the racist/sexist dude on my side; being in this environment already turns me off enough. I believe there are certain things that are universally unacceptable across cultures: victim-blaming people who have been sexually assaulted, for example.

CS had a problem because power was distributed unfairly among so-called “Super Members”, the ambassadors. That’s a completely different issue. WHO were the people they were silencing? Normal members who had done nothing and only wished to enjoy CS. What we’re talking about here is community moderation of HATEFUL speech, which of course should be clearly outlined in the forum rules.

Just because the world is cruel doesn’t mean we shouldn’t take steps to make progress. Just decades ago, women couldn’t vote in many countries. Does that mean we should accept that “the world is cruel” and turn a blind eye? No, we take steps to address the problem. Of course, this discussion is about how we should address the problem, and I completely disagree that we should “leave it to society, chalk it up to cultural differences, and let the keyboard wars take their natural course”.

Sorry for the emotional wall of text :joy: I just don’t see how letting rude people and petty arguments stay will improve the platform.


Anyway, back to the discussion at hand. I think rather than basing trust levels on “moderator reviews”, perhaps we could shift the onus to the community as well?

Is it possible to automatically tie a user’s trust level to the number of hearts/likes they have? That way, we can give more weight to users who have actually been contributing in a way that’s representative of the community, and have proven themselves to be more involved/passionate in making the project work.

Yep, there are a bunch of settings we can adjust to tune trust levels, including things related to:

  • time on forum
  • posts read
  • posts made
  • posts flagged
  • likes given
  • likes received
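
For illustration, a promotion check based on those criteria might look something like this (the thresholds here are invented for the example; Discourse’s actual defaults differ and every one of them is configurable):

```python
# Hypothetical sketch of a trust-level promotion check built on the criteria
# listed above. The numeric thresholds are made up for illustration only.

from dataclasses import dataclass

@dataclass
class UserStats:
    days_on_forum: int
    posts_read: int
    posts_made: int
    posts_flagged: int
    likes_given: int
    likes_received: int

def qualifies_for_member(stats: UserStats) -> bool:
    """Check whether a user meets example thresholds for trust level 2 ('member')."""
    return (
        stats.days_on_forum >= 15
        and stats.posts_read >= 100
        and stats.posts_made >= 3
        and stats.likes_given >= 1
        and stats.likes_received >= 1
    )
```

The point is that promotion is driven by a mix of reading, posting, and likes rather than any single number, so a user can’t game one metric to gain flagging power.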

@kellyt so trust levels are already based on engagement and hearts by default. and yes, as @itsi pointed out, you can then adapt this further. though i think the defaults are also well considered. one thing i like a lot about the defaults is that engagement is mostly based on posts read, not posts made :wink:

The current setup of trust levels is explained here: Understanding user trust levels

@womxn thanks for all your objections!

Generally, I don’t think it’s desirable to have a forum where any position can be brought up anytime. I rather think a meaningful community gravitates around particular values and it doesn’t improve the conversation if these are repeatedly questioned in any topic.

Now it’s true that these values are not fully written out here. And I totally agree that the values should be clearly written out at some point. But looking at a project that’s just a couple of months in the works, I’d also find it plainly unreliable to base my trust, or my sense of the project’s appeal, on a document. I’d always rather look at the people involved and their energy. That’s why I think it’s a good thing this forum is already here, even while its formal rules and processes are still in the making.


on a more pragmatic note, i’d take away the following for now for the forum’s roadmap:

  • give more visibility to the community values and start a discussion on them
  • give more visibility to users at trust level ‘member’, so new users get a better idea of who is invested in the project
  • provide information on the forum’s background workings right on the forum (trust levels, flagging)
  • find creative ways to absorb the massive mistrust stemming from cs history, so it doesn’t affect most discussions

thanks for sharing more thoughts! :+1:


I did not see any trolls, just future members, so far. Nevertheless, there have been misunderstandings already. For sure, personal attacks are not ok, but I just saw misunderstandings, no attacks, so far. Maybe moderators are already doing a good job cleaning them out, idk… so far I have trust in your future members.

So you are able to see the future and read ALL future members? Wow!

I don’t know. I think I speak very kindly and didn’t attack anybody; my English just differs from yours. But when a comment using rude words, to which I had replied, got deleted, that kind of triggered me. This tactic was used in CS to make my postings look dumb. But this forum is already better, because it’s still visible that I replied to a deleted answer… (nevertheless it makes me look dumb, because I wouldn’t use “weirdo” + “freaks” myself)

Yes we have. Because all I see is future members, people who want to build something great, and no aggression. I am highlighting problems, in my words, from my perspective, which you judge negatively. In CS so many REAL members were called trolls. Example: they just nicely greeted others in groups.

That’s why I asked: who are the moderators? As long as I don’t know them, I don’t trust them, because I got burned before. Are you one?
But whenever I see someone being ignorant or hurting others here, I will reply to them (and not report them). Can’t you accept me being different from you?

You are right. I just did not see this behaviour here so far. There will be cases when a moderator has to remove comments, but I want this to be open, like “moderator xy removed this because of…”.

Just like @nolo wrote. Thank you. I see you heard and understood my concerns.

You are right. I was not thinking further. Right now you need all kinds of input; even trolls can open up different ways of thinking. And future members who are unwillingly ignorant might get pushed away when you censor them instead of opening conversations with them.


You can always see the current moderators of the forum here :slight_smile:


I understand now why you feel triggered about the deleted comment, and agree with this. Well, we are in the process of discussing and defining how things are going to be moderated as a community here, so there may be some mistakes here and there. :wink: I think the rules governing the forum will change based on your feedback too!


I absolutely agree with this. When I moderate on Reddit and need to remove a post, I like that I can choose to make a public note as to why I (or someone else on the moderation team) removed a particular post.

A couple generic examples from my experience on Reddit:

  1. Someone re-posts a link/topic that was already submitted very recently by someone else.
    • Removal note: “Repost - see initial post here:” with a link to that post.
  2. Someone makes a post in which they call another person an expletive and insult them.
    • Removal note: “Rule 1” with a link to the rule, in this case: Be respectful. Don’t insult, name-call, disrespect, humiliate, or harass.

In most cases a removal note helps both the poster and the community understand why something was removed. But sometimes a removal note is not necessary. For example, if the post is a particularly egregious violation of a rule, like commercial spam, a threat of violence, hate speech, etc., there is no need for a removal note, as further action/follow-up with the poster will be needed anyway.