Talk:WMF Global Ban Policy/Archives/2018

Commons

Hi all,

Wikipedia is usually defined as a commons: in short, a shared resource with shared governance. The French version of the article on this subject uses Wikipedia as an example, saying that governance is by community consensus. I feel the WMF Global Ban procedure implies that Wikipedia and the other Wikimedia projects are not commons, because the procedure is not negotiable by the community on which it acts. Is that correct, or did I miss the place where this procedure can be discussed? Noé (talk) 08:25, 17 January 2018 (UTC)

Wikipedia is a website. The Wikimedia Foundation hosts that website. While most functions are left to the community, WMF global bans are one area where the website operator needs to step in to deal with certain cases. Honestly, WMF global bans are more of a feature than a bug. They've handled a number of cases where the community wouldn't have been able to respond adequately, and full-time staff can play whack-a-sock much better than an ensemble of community volunteers can. There is also some community oversight of the global bans process, though it is restricted to trusted user groups. – Ajraddatz (talk) 09:06, 17 January 2018 (UTC)
As I said on the French Wiktionary, the Foundation thinks that community governance and autonomy are really important, but there are times when, as the host, we need to take action. This means that there are areas where issues are not subject to complete consensus. Many of these situations revolve around legal or safety issues. For example, while we had a long discussion about the Terms of Use the last time it was updated, it was ultimately a decision for the legal department and the Board of Trustees (including many members elected by the community). No community is able to opt out (without creating a new site, or forking) unless the specific situation allows it (such as for the paid editing rules).
The Office Action policy and the Global Ban policy fall under this as well. We have tried to mitigate the issues with as much transparency about the process as we're able to provide. This is why, for example, both the Office Action policy and the Global Ban policy were updated recently to try to explain more about what we may do and the process we go through when we do it. As Adrian points out, we have also tried to put community oversight in place where we can. We tell Stewards more about global bans or situations, for example, and sometimes tell other trusted groups, like Arbitration Committees, where it makes sense. Sadly, we can't be fully transparent; safety still comes first.
For the record, while community consensus is not always possible on these issues, I do hope to do more in the way of community discussion about how to solve issues like this with as much community support as possible. These will be long discussions with a lot to work out, so they likely won't happen until much later in the year, but I think there is a need for some sort of middle ground. The Stewards aren't the best group to handle cases of harassment, for example (they have enough other work), and there are a number of cases where privacy and security may be important but the WMF doesn't necessarily need to be handling them. If the community can handle a situation, I want to allow them to do so and make sure they have what they need to be able to. Jalexander--WMF 02:15, 18 January 2018 (UTC)

Audition of the accused

Hello,

The policy and procedure do not mention whether the accused is consulted personally during the process, whether by email, video call, or in-person meeting. Could you please let me know how you collect his/her point of view in this procedure? Thank you   Noé (talk) 12:11, 10 January 2018 (UTC)

Hi Noé, while for many office actions we try to get the accused's point of view, this is not always possible, and that's especially true for cases like global bans, which is why it is not a normal part of the process. The most important thing for us is user safety, and too often the situation is that reaching out to the accused (especially with specific questions, or asking them to justify specific incidents) would place others in harm's way. Retaliation against reporters is, sadly, all too common in these cases, and we believe we are obliged to give people as safe a reporting mechanism as we can. One of the reasons we do independent investigations and require multiple levels of review is to compensate for this issue. We don't take a reporter's word for granted either; we need to be able to verify that we can trust it and back it up with additional information. Jalexander--WMF 19:28, 17 January 2018 (UTC)
Thanks for the clarification. I still think it is unethical not to hear the accused's point of view in a process of eviction. The penalty is very strong, and it harms both the person and the community without any chance to clarify the problem. I think excluding someone is not the best way to be an inclusive community. We all gain a lot when we collaborate with people with odd behavior and different points of view, and we would do better to create safe places to express our feelings than to bleach our communities. Well, the French Wiktionary community reacted very strongly because we have a lot of weirdos in our community, and we like them despite some attitudes that might lead to a global ban. Now we fear your shadow, and that's not healthy. I shared the story with the English Wiktionary, but please do not look too closely at that community; they may lose some admins too if you do. So, I appreciate the material you offer to help people deal with harassment much more than this process of authority delegation you picked. Noé (talk) 09:04, 12 February 2018 (UTC)

Why is all the text bold?

I can't see the reason to do so. --Liuxinyu970226 (talk) 08:56, 5 May 2018 (UTC)

Not for me. iOS 11.3.1 Safari. — regards, Revi 09:23, 5 May 2018 (UTC)
In the visual editor's edit mode it is all bold. Stryn (talk) 13:27, 5 May 2018 (UTC)
Fixed. I never click/enable/use VE, so you'll have to verify it's fixed in VE. — regards, Revi 08:04, 6 May 2018 (UTC)

"Opportunity to discuss"

Google's new code of conduct template (they are using the Contributor Covenant) says "We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken." [1] Does the Foundation attempt to communicate with a user before imposing a ban? —Neotarf (talk) 21:46, 9 June 2018 (UTC)

@Neotarf: There is currently no universal policy on this for Wikimedia Foundation bans, but where possible we have tried to do so. I say "where possible" because we have not done so in two instances:
  1. where we believed that it would place victims at a higher risk (and we were confident with the level of evidence we were able to obtain on our own), or
  2. where the likelihood of recovery was deemed impossible: for example, if their actions were so extreme that there was nothing they could say to change the action taken.
We have, however, reached out to a number of users during our investigations to get their side of the story. Some of those users were ultimately given Foundation Global Bans, but others were given, for example, conduct warnings or had user rights removed. Sometimes no action was taken at all. We have internally been considering a more formal policy of this type of outreach and will be reviewing this as part of the annual plan program for Improving Trust and Safety processes. WMFOffice (talk) 21:29, 15 June 2018 (UTC)
Thank you for the response. —Neotarf (talk) 23:32, 12 July 2018 (UTC)

Logging of sockpuppet blocks

I think it would be helpful if, in the WMFOffice log, blocks of alternate accounts of globally banned users included the name of the primary account in the block summary. This would make it easier to verify that the manual log didn't miss anything, and would be good for transparency. Would this be possible? --Yair rand (talk) 19:34, 16 September 2018 (UTC)