Reporting and Moderation

We will include an option to report users if they breach the terms of use (to be defined later). Cases may include people trying to advertise or make a profit, or more serious situations like racism or violence.

While for the most part we want to avoid policing users, there will be some situations where it is necessary. So how should we approach those reports? I think we want to avoid a centralised moderation team, as it isn’t scalable if we have a lot of users. Rather, is there some way we could distribute moderation responsibility to trusted members of the community?

What are the punishments for violating the terms of use? What kind of appeal process should there be?

1 Like

When we were running a small-scale forum we used centralized moderators. The main issue there was timing: since it was not someone’s dedicated job, the response time to reports often exceeded acceptable limits.

The punishment for violating the terms is a tricky one; it depends on the violation. If the foul is in the penalty area it’s a red card, but the same foul in midfield is only worth a yellow card. Therefore we should identify the cases carefully and add them to the terms of use from the beginning.
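To make the yellow card / red card idea a bit more concrete, here is a rough sketch of a graduated sanctions table. All the categories and penalties below are illustrative placeholders only; nothing has been decided anywhere:

```python
from enum import Enum


class Violation(Enum):
    # Illustrative categories only; the real list belongs in the terms of use.
    ADVERTISING = "advertising"
    HARASSMENT = "harassment"
    VIOLENCE = "violence"


class Sanction(Enum):
    WARNING = "warning"          # the "yellow card"
    TEMPORARY_BAN = "temp_ban"
    PERMANENT_BAN = "perm_ban"   # the "red card"


# Hypothetical first-offence sanctions; repeat offences could escalate one level.
SANCTIONS = {
    Violation.ADVERTISING: Sanction.WARNING,
    Violation.HARASSMENT: Sanction.TEMPORARY_BAN,
    Violation.VIOLENCE: Sanction.PERMANENT_BAN,
}
```

Writing the cases down as an explicit table like this would also make the appeal process easier to reason about, since everyone can see which sanction a given violation maps to.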

1 Like

That’s a really good point about response times: some things really require a swift response. Maybe we can have a centralised “triage” team, who can mark something as non-urgent so it gets passed to the local community leaders?

You need to have a safety team at some point to deal with these abuse reports.

1 Like

How about having certain types of reports go to an ‘urgent’ bucket that requires a response within 24 hours, and other types into another bucket?
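As a rough sketch of how those buckets might be wired up (the report types, bucket names and response targets below are all made up for illustration, not part of any existing design):

```python
from dataclasses import dataclass
from datetime import timedelta
from enum import Enum


class ReportType(Enum):
    # Hypothetical categories; the real list would come from the terms of use.
    SAFETY = "safety"   # violence, harassment, abuse
    SPAM = "spam"       # advertising, profit-seeking
    OTHER = "other"


@dataclass
class TriageRule:
    bucket: str                # who handles it: safety team or community leaders
    respond_within: timedelta  # target response time


# Hypothetical mapping: safety reports are "urgent" (24h or less),
# everything else goes to a non-urgent queue for community moderators.
TRIAGE_RULES = {
    ReportType.SAFETY: TriageRule(bucket="urgent", respond_within=timedelta(hours=24)),
    ReportType.SPAM: TriageRule(bucket="non-urgent", respond_within=timedelta(days=7)),
    ReportType.OTHER: TriageRule(bucket="non-urgent", respond_within=timedelta(days=7)),
}


def triage(report_type: ReportType) -> TriageRule:
    """Route a new report to a bucket with a response-time target."""
    return TRIAGE_RULES[report_type]
```

The centralised triage team would only need to pick the report type; the routing and the deadline then follow from the table.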

It’s a tricky one, and

  • I like things to be moderated without much “bureaucracy”
  • I like it to be as invisible as possible

In other words: moderators, do your thing firmly and fairly, but keep a low profile on the forum.

I did this in the past for a few years.
Respect to those who put their energy into it; I don’t envy them.

1 Like

This is a tough one, and it might be helpful to speak with BW volunteers about how they do it. Essentially it’s all about platform size and scalability.

In years past I’ve been a group moderator on CS, but absent any normal forum tools it was difficult to enforce anything as a moderator: you needed to rely on well-written group guidelines, the platform’s terms of use, and mostly on Trust & Safety to take action based on them.

Hi all :hugs: I think there are two parts to this post:

  1. Abuse and violence
  2. Spamming

I think what Couchsurfing is doing in regards to point 1 is good and should be the first thing we do: let people know that they should notify the police, because this is out of our scope and there’s not much we can do. We could perhaps add a mark to the profile immediately, indicating to other users that there is an open case; that way we avoid the waiting time to some extent. This is a bit tricky, because if someone does this by mistake, or on purpose to sabotage someone, it could get really annoying, but we could punish those people by blocking their account or marking them as a potential saboteur. I think leaving this to the community, by allowing people to mark accounts like this until support steps in, is a great idea.
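Just to illustrate the “mark on the profile” idea, here is a rough sketch of what such a flag could look like; all the names are hypothetical and nothing like this exists in any codebase yet:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class OpenCaseMark:
    """Hypothetical flag shown on a profile while a report is unresolved."""
    reported_user: str
    reported_by: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    cleared_by_support: bool = False  # only the support/safety team can clear it


def profile_shows_open_case(marks: list[OpenCaseMark]) -> bool:
    """Other users see a warning while any mark is still uncleared."""
    return any(not m.cleared_by_support for m in marks)
```

The sabotage case would then be handled at the point where support clears the mark: if the report turns out to be malicious, the reporter’s own account could be flagged or blocked.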

Regarding point 2, I think we could put a limit on the number of messages one can send in a given period of time; Trustroots has been playing around with that recently. I’m reluctant to block people’s ability to send many requests though, so I’m not sure I like this approach…
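A minimal sketch of the message limit idea, assuming a simple sliding-window counter (the threshold and window below are made-up numbers, and I have no idea how Trustroots actually implements theirs):

```python
import time
from collections import defaultdict, deque

# Hypothetical limits: at most 20 messages per user per 24 hours.
MAX_MESSAGES = 20
WINDOW_SECONDS = 24 * 60 * 60

_sent: dict[str, deque] = defaultdict(deque)


def allow_message(user_id: str, now: float | None = None) -> bool:
    """Sliding-window check: return True if the user may send another message."""
    now = time.time() if now is None else now
    window = _sent[user_id]
    # Drop timestamps that have fallen outside the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_MESSAGES:
        return False
    window.append(now)
    return True
```

A softer variant could let the message through but queue the account for review once it crosses the threshold, which might address the worry about blocking people who legitimately send many requests.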

2 Likes

This. Generally, make it absolutely clear in the documentation, in the FAQ, during signup, etc. that if you’re in an emergency situation you should call the police first and then contact the site, in that order.

3 Likes

This needs to be considered very carefully. In some countries a woman who is sexually assaulted is blamed for it, and in other countries being gay is against the law and gay people face hatred.

4 Likes