

Undue emphasis on blocking

Your introductory "background" section deals almost exclusively with blocking and implies a consensus of volunteers to "improve" blocking tools. But looking at the links gives a different picture of the community discussion.

  • The 2015 Community Wishlist survey is two years old and has only 35 votes, not all of them supports. Little reason is given for the 'support' votes, but one reason 'against' stands out:
"Blocking the worst problem users is a pipedream. Helping out the NSA and its ilk by doing mass surveillance of regular users and occasional visitors by storing troves of user-agent data or sending people out on the web bleeding fancy hacked Evercookie type data polluting their browser ... that's the reality. Just don't."
What effect would this have on the privacy and safety of users who edit from inside the borders of repressive regimes? Or on admins/checkusers who have political positions within repressive regimes?
The other linked "community discussion" is in German:
"Setzen eines „Vandalismus-Cookies“ bei Benutzersperren. Die Cookie-Verfallszeit entspricht dabei der Sperrdauer, maximal aber 1 Tag. Beim Aufruf des Wikieditors Überprüfung, ob ein entsprechendes Cookie vorliegt. Wenn ja, dann keine Bearbeitung zulassen und entsprechende Meldung ausgeben. (/als Zombie-Cookie) (Bug 3233)"
which means something like:
"Set a "vandalism cookie" on user blocks. The cookie expiry time corresponds to the block duration, but at most 1 day. When the wiki editor is loaded, check whether such a cookie is present. If yes, do not allow editing and display a corresponding message. (/as a zombie cookie) (Bug 3233)"
Like the other "community discussion", it seems to support implementation of vandalism cookies rather than user-agent tracking.
  • As for the Phabricator requests, see this ticket. If I am reading it correctly, the cookie-based ones are already implemented, but not the more controversial user-agent ones.
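For illustration, the block-cookie mechanism in the quoted German proposal amounts to something like the following. This is a hypothetical Python sketch of the logic only; the cookie name and helper functions are invented for this example and are not MediaWiki's actual implementation.

```python
from datetime import timedelta

# Per the proposal: the cookie lifetime matches the block duration,
# capped at one day.
MAX_COOKIE_LIFETIME = timedelta(days=1)

def block_cookie_max_age(block_duration: timedelta) -> int:
    """Return the cookie Max-Age in seconds: block duration, capped at 1 day."""
    return int(min(block_duration, MAX_COOKIE_LIFETIME).total_seconds())

def may_edit(request_cookies: dict) -> bool:
    """On editor load, refuse editing if the block cookie is present."""
    return "vandalism_block" not in request_cookies
```

A 3-hour block would set a cookie lasting 3 hours, while a week-long block would still set a cookie lasting only one day, which is why the proposal mentions "zombie cookie" techniques as a way to make the marker harder to clear.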

Ironically, the 2014 Inspire Campaign referenced in that section has two mentions of blocking:

  • "There is a double standard between admins and editors where admins are allowed to get away with conduct that would cause an editor to be blocked"
  • "There is a double standard between female and male editors where men are allowed to get away with conduct that would cause a woman to be blocked or banned"

While everyone can be sympathetic to the idea of not having "disruptive" users, the sad fact is that there have been quite a number of bad blocks, and that blocks have become more and more political. Any admin can indef any good faith contributor, with no discussion, and no one there to see, unless someone has their talk page watchlisted. There is no mechanism for reviewing blocks, and there is virtually no mechanism for reviewing these "admins for life". —Neotarf (talk) 23:44, 26 January 2017 (UTC)

Neotarf: You're right, there is too much emphasis on blocking in that section. I'll have to correct that. The "What this initiative is about" section is a better overview of what we'll be working on -- improving blocking is part of it, but by no means everything.
I think that the reporting and evaluation tools are going to be key to this project, helping administrators who care about harassment to get good information about what's happening, so they can make good decisions. Right now, the main tool for evaluating harassment cases is diffs pulled out of user contributions, and it's difficult to see the whole picture. We want to work with admins and others to figure out what kinds of tools they need. Do you want to talk some more about the problem with reviewing blocks? I know that people ask to be unblocked, but there's a lot I don't know yet. -- DannyH (WMF) (talk) 00:14, 27 January 2017 (UTC)
So "what this initiative is about" is about finding solutions to bullying and harassment. At this point there seems to be quite a bit of agreement that it will take both social and technical tools, and that the social part will be both the most difficult and the most important. For the moment, I am willing to go along with that, until better information comes along. For some context about social fixes, see "Advice for the Accidental Community Manager" by Jessamyn West and "If your website’s full of assholes, it’s your fault", by Anil Dash, both well known in the social media world.
If your approach is going to be "find out what admins want and give them everything they ask for", you are going to be coming in on one side of a culture war, and you will become part of the problem.
So, what culture war? When I first started on enwiki, there was a divide between admins and non-admins. There was a perception that the bullies had taken over the playground, and the "abusive admins" were harassing the "content creators", who did the proofreading and the actual work of writing articles. In all fairness, there were some real problems at the time, and although the situation did improve, the meme was slow to die.
Then about 2 or 3 years ago, the conflict shifted to one between professional and non-professional editors. The professionals were researchers, educators, program leaders, and software developers, many of whom came to the project through GLAM activities. They found the editing environment hostile, and described it as a "buzzsaw culture". While previous conflicts often saw the WMF in conflict with the community, these editors viewed the WMF as allies. They saw the admins as being very young--some are as young as twelve years old--and mainly interested in Pokémon. There is overlap of course; some very savvy admins started at a very young age, and there are professionals with a keen Pokémon knowledge base.
Others may have a different view--I do still consider myself a newbie, in terms of experience and edit count. I'm not sure what you mean about "the problem with reviewing blocks", when there isn't such a mechanism, but I have a few thoughts about technical solutions I will try to put together later. —Neotarf (talk) 03:03, 28 January 2017 (UTC)
Okay, trying to answer your question about blocking...there have been any number of snarky comments about blocking and banning on the Wikipedia criticism sites, if I had time to look for them, but maybe the easiest thing is to link to Sue Gardner's 2011 editor retention talk to Wikimedia UK where she talks about Wikipedia as a video game [Link -- excerpt is some time after 23:30]: "Folks are like, playing Wikipedia like it’s a video game, and their job is to kill vandals, right, and then we talk about how every now and then a nun, or a tourist, wanders in front of the AK47 and they just get murdered, but in actual fact, what we think now is that it’s all nuns and tourists, right, and it’s a big massacre, right, and there’s one vandal running away in the background, you know, and meanwhile everybody else is dead." [Audience:“Yes”]
So what you can get is situations where, for instance, the staff is trying to provide a creative brainstorming type of situation and the admins and stewards step in and try to make it so the newbies who come to Wikipedia for the first time and make their very first edit to the grant proposal page are told how crappy their proposal is. Does anyone really think the grant team does not recognize a viable proposal? Or that some people are just using the grant process invitation to make some comment to the Foundation where they don't have another venue to do so? You can also get admins demanding that newcomers have conversations about reproductive organs with them, in defiance of community consensus, or arguing with the staff against their "safe space" guidelines, actually disrupting the grant process, or putting into place alternative "free speech" policies that have not been voted by the community, arguing that policy is whatever the admins decide to enforce. This latter statement is probably not meant as hubris, but as a true statement of how things work. So you can see how the staff, and really the whole project, is being held hostage by the admins, or rather a structure that does not allow admins to be held accountable, except in very egregious circumstances, or more and more, for political reasons. Of course they need someone to keep the lights turned on, so when you have the same admins respond to something like the recent situation of compromised Wikimedia accounts, you have comments like this, disparaging "civility" and recommending "Better administrative tools, to help keep out the people that administrators and other people with enforcement authority have already decided should be excluded from Wikimedia sites." An interesting perspective on that here, and I've wasted a bit of time trying to figure this one out.
When I first started editing, there were a lot of proposals floating around to make admins more accountable, but none has gained any traction. At one time I envisioned something like a congressional scorecard that would set specific criteria for admins' actions, but such a thing seems unlikely on WP projects, since few people are willing to risk getting on the bad side of an admin. It could probably be done on one of the criticism websites, but those have only shown interest in criticizing the worst cases, even as they criticize WP for doing the same thing. There have also been periodic proposals for unpacking the admin toolkit, which seems more promising to me, so why hasn't it been done?
I have also heard that admins are not all that necessary anymore, as most vandalism is now reverted by bots.
A longish answer, and one that perhaps only raises more questions than it answers. Regards, —Neotarf (talk) 22:47, 28 January 2017 (UTC)
I have no idea of the percentages of vandalism reversal by bots vs reversal by ordinary editors (obviously you don't need Admins to revert), but I revert a lot of vandalism on a daily basis that hasn't been noticed by bots and probably never would be. Some is a bit subtle, some is glaringly in your face. And I haven't noticed much change in the need to block over the last few years. If anything it's getting worse with the current political climate, and I don't just mean the US situation. Doug Weller (talk) 18:53, 6 February 2017 (UTC)
If you wanted numbers you could probably look at something like this, but vandalism is not the same as harassment, which is more like this (if you haven't seen it already). My point was not about harassment but about the WMF being held hostage by the necessity for admins, for instance this, which I think is an unfortunate exchange, and how many admins are actually needed, and how many of their tasks can be automated. —Neotarf (talk) 02:18, 10 February 2017 (UTC)

Need some English Wikipedia forums moderated only by women

Community health initiative is a good idea. Too bad it will die due to being organized on the Meta wiki, the place where ideas go to die. Few people go to the Meta wiki. If you want more participation put the Community health initiative pages on English Wikipedia.

I initially found out about this on this rarely visited blog:

It does not allow comments via Wikimedia login though. Why even have a Wikimedia blog that only allows easy login via Facebook, Twitter, and WordPress? When I started to log in there via Facebook it immediately asked to data-mine me. I cancelled out.

I came there via a Google search looking for forums addressing the problems of women editing Wikipedia. I see that the problem is from the top to the bottom. From WMF to the editors. No forums moderated only by women that are dedicated to these problems on the most watchlisted Wikipedia, the English Wikipedia. The Teahouse has a majority of male hosts.

Foundations have been throwing money at related problems for years, but the money is wasted due to lack of participation, because the projects usually go through the Meta wiki. I have seen so much money wasted on projects organized through the Meta wiki. --Timeshifter (talk) 03:14, 28 January 2017 (UTC)

Hi Timeshifter, we will be doing a lot of work and discussion on English Wikipedia, once we really get started. Meta is the home site for the Community Tech team, because our team works across all projects -- check out the 2016 Community Wishlist Survey for an example of a successful project that's organized on Meta. :) But the community health initiative is primarily focused on English WP, so we'll be making some new pages there, once we've got the new team together.
Oh, and those are good points about the Wikimedia blog, and having forums moderated by women. I'll pass on your concern about the blog to the folks who work on that. We'll have to talk and work more on the female-moderated forums, to figure out how we can help. Thanks for your thoughts. -- DannyH (WMF) (talk) 19:08, 30 January 2017 (UTC)
Thanks for replying and for passing on info and ideas. Note that I haven't replied until now, because unless something is on my English Wikipedia watchlist I tend to forget about it. Please see related discussion here:
Grants talk:IdeaLab/Inspire/Meta - see section I started.
Community Wishlist Surveys tend to be ignored in my experience. A cross-wiki watchlist was in the top ten on one of those lists from a previous year. Still no popular cross-wiki watchlist. There was one that was close to becoming useful, but it was abandoned right when it was getting interesting. I hear there is another one in the works. But like I said, if it is being developed on Meta, I just will not know of it. --Timeshifter (talk) 20:59, 9 February 2017 (UTC)
Timeshifter, we shipped five features from last year's Community Wishlist Survey -- a bot to identify dead external links so they can be replaced, a change to diffs that made changes in long paragraphs appear more consistently, a tool to help users identify and fix copyright violations, a change to category sorting that makes numbers work properly, and a Pageviews Stats tool. The cross-wiki watchlist was on last year's survey, and we're still working on it. It requires some database changes that make it a longer process than we'd hoped, but we're making progress and I expect it'll be finished within 2017. I hope you come back sometime and see this reply, but I guess it's up to you whether you're interested in the answer or not. :) -- DannyH (WMF) (talk) 23:23, 9 February 2017 (UTC)
I know there is great work being done, but it just isn't what I want. ;) Everyone probably says that concerning something. When I said that Community Wishlist Surveys were ignored, I meant relative to the far greater participation it would get if it were posted on English Wikipedia. My 2 points are interrelated. People buy into what they participate in. --Timeshifter (talk) 03:05, 10 February 2017 (UTC)
There is already a gender forum at ; before you start another one you might think about why no one posts there. The six arbitration cases listed in the sidebar might be a clue, as might the comments here from the late Kevin Gorman. —Neotarf (talk) 02:30, 10 February 2017 (UTC)
In my opinion there needs to also be a female-only version of the en:Wikipedia:Teahouse. But I am speaking as a guy, so what do I know. --Timeshifter (talk) 03:05, 10 February 2017 (UTC)
Late to conversation. Typing from phone... I had a similar opinion and proposed this - - and created this - The proposal topped the leader board for that campaign, but also had a lot of opposition. Both efforts died, mostly, I believe, because of the efforts of two Wikipedia camps: pro-gun editors and their wiki friends, and foul-mouthed editors - including those deemed "valued content contributors" - and their friends. They ultimately got me site banned from Wikipedia. Lightbreather (talk) 17:24, 19 February 2017 (UTC)
I only have so much time and energy, so I haven't read up on all the particulars of your situation. But I think it is insane that only userspace can be used by women to talk amongst themselves about Wikipedia issues without interference from men. I did not realize until just now how backward Wikipedia and the Wikimedia Foundation are. This is enlightening:
Without weighing in on the larger question about how to provide safe spaces so that all users are comfortable participating in Wikimedia projects, I wanted to clear up the misunderstanding related to the WMF non-discrimination policy. In WMF Legal's opinion, the non-discrimination policy does not prohibit users from setting up a women-only discussion in their user space, because the policy was passed by the Foundation board to apply to acts taken by the Foundation and Foundation employees, not individual users. Other policies may, of course, apply. [1]
-- Luis Villa, Deputy General Counsel, Wikimedia Foundation, 7 February 2015
See: Non discrimination policy: "The Wikimedia Foundation prohibits discrimination against current or prospective users and employees on the basis of race, color, gender, religion, national origin, age, disability, sexual orientation, or any other legally protected characteristics. The Wikimedia Foundation commits to the principle of equal opportunity, especially in all aspects of employee relations, including employment, salary administration, employee development, promotion, and transfer."
How twisted that a policy to prevent discrimination against women and others is used by some to stop forums for women.
For example: Wikipedia:Miscellany for deletion/User:Lightbreather/Kaffeeklatsch.
--Timeshifter (talk) 03:25, 21 February 2017 (UTC)
@Timeshifter: Just to note that Wikipedia:User:Lightbreather/Kaffeeklatsch exists although it's defunct. Doug Weller (talk) 11:56, 21 February 2017 (UTC)
I was referring to the deletion discussion about it. Where some people were trying to use the WMF non-discrimination policy in order to discriminate against the women trying to set up a forum for women to discuss Wikipedia-related things amongst themselves. --Timeshifter (talk) 01:26, 22 February 2017 (UTC)
@Timeshifter, I'm not sure why you would want such a group, but you may be interested in the discussion about creating a sub-forum for women on MetaFilter some time ago. There was also some discussion of "castle projects" (German: "Stammtisch") on Lightbreather's proposal. A Stammtisch could be "inclusive to anyone supportive of its goals, but could quickly remove anyone disrupting it." As a result, I started User:Neotarf/Stammtisch, also defunct. —Neotarf (talk) 20:42, 2 March 2017 (UTC)

Comments from BethNaught

This is an epic project and I hope that it can be well executed and have a significant impact on harassment. As an English Wikipedia administrator who spends a large proportion of their on-wiki time dealing with a single troll via edit filters and rangeblocks, I am particularly interested in the "Blocking" and "Detection" parts of the initiative. I would like to keep up to date with the project: will there be a newsletter to sign up to? Also, where can I find more details on the planned changes to the AbuseFilter extension? I hope that the scoping exercises will cover these very technical issues as well as general community policies. Thanks, BethNaught (talk) 19:09, 30 January 2017 (UTC)

Hi BethNaught, I'm really glad you're interested; we'll need to work with a lot of admins on this project. We're currently hiring people to work on the project -- the official start is March 1st, but as people join the team, we'll be creating more documentation and starting to make plans.
Community Tech has been doing some work on blocking tools over the past few months, based on a proposal on the 2015 Wishlist Survey and interest from the WMF Support and Safety team. The two features we've worked on are setting a cookie with each block (phab:T5233), and creating a Special:RangeContributions tool (phab:T145912). We're going to finish both of those pretty soon, and then that work will be continued as part of the new community health project.
Right now, the plans for feature development -- like changes to AbuseFilter -- are intentionally vague, because we want to partner with admins and others on English WP to figure out what needs to be improved. For now, bookmark this page -- there'll be a lot more updates coming soon. Let me know if you have more questions... -- DannyH (WMF) (talk) 19:55, 30 January 2017 (UTC)
@BethNaught: I can speak for enwiki that the administrators' newsletter will keep you informed :) Any major technical changes will also be announced in the Tech News. MusikAnimal (WMF) (talk) 23:38, 7 February 2017 (UTC)

Harassment is intrinsic to the WP system

Harassment, and more globally taking care of "community health", is a very important topic to cover and address. It is really time.

But from my point of view harassment is not due to the behaviour of some bad contributors, uneducated people, aggressive personalities, trolls, ... who should be blocked as soon as they are identified and then efficiently banned.

Harassment (which should be understood broadly as any serious behaviour that contradicts the 4th pillar) is an unavoidable consequence of the working principles of the WP project, in which numerous people have to interact in *writing* when human communication is based on *visual* interaction. This is accentuated by the fact that they have to achieve a *common result* (the content of the article on which they discuss) and that there are no precise and defined rules of *decision* (the rule is that there should be a consensus, which means nothing).

These conflictual situations generate heated discussions that can only degenerate into conflicts (by the principle of the spiral of violence). In these circumstances, the solutions set up to restrain conflicts are:

  • discussion with a third party, which is good but not easy to manage and rarely efficient, again due to the nature of the communication (*writing*)
  • coercion, which means failure but unfortunately is the only way of solving issues that works today.

In other words: Wikipedia has offered people a place to develop articles and discuss their content, but has not set up efficient mechanisms to rule and manage these discussions, stating that it is the community's business to manage them. Expecting that it would work that way was a utopia (due to human nature). When people are put in a situation of pure chaos, it degenerates into conflict (some sort of struggle for life to get one's point across). And this however goodwilled, good-faith or just smart they may be... And humans are just normal...

It should not be forgotten either that harassment also comes from "good" and well-established contributors (and sysops and institutions...), who, to "survive" in this system, gather together to protect themselves and just become stronger. (And this is not a conspiracy theory -> that is an obvious sociological behaviour)...

Have a look at en:Stanford prison experiment and related experiments. That's what we face, but in another context. (And here is an example illustrating this)

Not new: by the way, this is not new. Citizendium was launched based on the same assessment, and its solution was to force contributors to interact under their real identity, to give editorial decision power to some contributors, and to focus on and privilege expertise... It would be interesting to understand whether Citizendium failed because contributors don't feel motivated by a "project" when they can't fight anonymously and when there are experts to take the final decisions... That's not directly linked, but it's linked anyway.

Proposal: experts in en:systems psychology should be involved to study the Wikipedia system, understand its mechanisms, identify its weaknesses and the origin of aggressivity and harassment inside it, and suggest solutions.

Short term remedies: in the short term, and before a deeper diagnosis can be made, I think the actions to take should be:

  • consciousness: all contributors should, one way or the other, be informed or invited to become aware of the system in which they are "playing", what the rules of such systems are, the difficulty of making it work, the uselessness and counterproductivity of "fighting/stress/aggressivity" in such a system, ...
  • highest standards should be expected (and, if needed, unfortunately enforced) in terms of respect in contributor interaction (both other people and other ideas should be respected), and therefore any kind of aggressivity, from the smallest level, should be prevented (corrected, prevented & sanctioned).
  • practical proposals: 0RR - formalisation and standardisation of communication protocols - establishment of content-reviewer committees, ...

Pluto2012 (talk) 10:20, 5 February 2017 (UTC)

NB: I fear I am too late. That's not a problem for engineers. That's a problem for sociologists. Good luck.

Smart comments above. Engineers and computer scientists cannot solve this problem alone. Psychologists and those in the social interaction academic fields should be at the forefront. 07:04, 28 February 2017 (UTC)
And the solution that is foreseen, using 'bots' and other 'automatic filters', is even worse. The real issue is just denied. Instead of trying to educate people, thinking about the system and making it evolve, we are just going to "shoot" at and shut up everybody who will not "play the game"... I don't feel at ease with this.
People just need to be listened to and understood, and then informed. Here, they will be rejected. We are going to generate cyberterrorists... :-( Pluto2012 (talk) 20:37, 1 March 2017 (UTC)

LTA Knowledgebase and abuse documentation

Over at the English Wikipedia, there are talks (permalink) to implement an off-wiki tracking system for long-term abuse. The idea is that by hiding this perceivably sensitive information from the general public, we avoid BEANS issues where the abusers could use the info to adapt their M.O. and avoid detection. This seems similar to what WMF had in mind for the health initiative, so I thought I'd start a discussion here so we can discuss plans and see if it makes sense to team up.

As I understand it, the project for the English Wikipedia tool is being led by Samtar and Samwalton9. A demo for the tool can be seen here. I am personally quite impressed with the idea and the interface, but I imagine if we were to adapt it for the anti-harassment project we may need to broaden the scope (so it's not just LTA cases), add multi-project support and internationalization. That is something I'm sure Community Tech would be willing to help with, if we do decide we should join forces with the Sams. TBolliger (WMF) (Trevor) is the product manager for the health initiative project, and can offer some insight into the WMF plans.

So what does everyone think? The RfC on enwiki seems to be going well, so there at least is support for the idea. I know the health initiative is still in its early stages, but do we have an idea of what we would want in an off-wiki private tool? Does it align with what enwiki is working on? I don't think we have to decide right now, but I also don't want to hold back the enwiki project. MusikAnimal (WMF) (talk) 18:10, 8 February 2017 (UTC)

Hello! And thank you, Leon, for looping everyone in. I'm still getting my bearings here at the WMF and we're still assembling the rest of the team for the CHI, so I don't have anything more concrete than what's written about on the Community health initiative page and the embedded grant document. One large question we (WMF + wiki project communities and users) will need to work through will be finding an appropriate balance between transparency and privacy for potentially sensitive content. (For example, is a public talk page the most appropriate place to report and discuss incidents of harassment?) Your work on this LTA list is a great pilot program for the concept of hosting community related content off-wiki, and could potentially be expanded/connected to our efforts in the years to come.
I admire the initiative you two have shown and I will be watching your work. Please contact me if I can be helpful in any way. --Trevor Bolliger, WMF Product Manager 🗨 20:09, 8 February 2017 (UTC)
Jumping in here at a later date, it appears consensus is forming for some sort of improvement to the current LTA page setup, but that an off-wiki tool is not looking favourable. A private wiki (like the English Wikipedia Arbcom wiki) has been suggested a couple of times -- samtar talk or stalk 11:27, 20 February 2017 (UTC)
Thanks, Samtar. If visibility is really the only problem to solve (as opposed to searching, sorting, or filtering, or efficacy in data entry) then simply moving the existing templates and content to a private wiki seems to be an appropriate decision. Riffing off this decision — and this idea would need plenty of research and consultation — what if we built a mechanism to make a single wiki page private on an otherwise public wiki? This functionality already exists for some admin or checkuser features. — Trevor Bolliger, WMF Product Manager 🗨 01:32, 28 February 2017 (UTC)

Better mechanisms for normal editors too

I am not an administrator but have been an active editor on the English Wikipedia for over ten years. In recent months, my involvement on WikiProject Women in Red has led a number of editors to report serious cases of unjustified aggression and stalking to me. One of the main problems cited has been continued attacks by an administrator which has resulted in several of our most productive editors either leaving the EN Wiki completely or cutting back considerably on creative editing. From what I have seen, the editors concerned have been unable to take any effective action in their defence and other administrators have been reluctant to do much about it. Furthermore, while several editors have been following these developments carefully, with one exception, the administrators responsible do not appear until now to have been aware of the extent of the problem. I think it is therefore essential for editors to have reliable reporting mechanisms too so that they can communicate their problems without risking punitive consequences themselves (as has often been the case). In the Women in Red project, we have tried to attract and encourage women editors. It is heartbreaking to see that some have been constantly insulted or completely discouraged for minor editing errors. Further background can be seen under this item on the Women in Red talk page.--Ipigott (talk) 16:39, 26 February 2017 (UTC)

@Ipigott: As an English Wikipedia administrator, I'm obviously rather interested in hearing more - could you email me some more information and I'll see what sort of action could be taken -- samtar talk or stalk 19:19, 27 February 2017 (UTC)
Maybe it's time for another reminder from the late Kevin Gorman, who used to moderate the gender gap mailing list:

I get about twenty emails a week from women Wikipedians who don’t want to deal with any of the process on Wiki, because every arbitration committee case that has involved women in the last two years, has involved all of them being banned.—Kevin Gorman

That goes for Meta too. —Neotarf (talk) 00:36, 28 February 2017 (UTC)

Hi Ipigott. Thank you for your questions and for sharing your project's story. It really is heartbreaking. We hope the tools we build and decisions we make will help avoid the problems your project recently went through. No person should be harassed or bullied into reducing their enjoyment, safety, satisfaction, or pride of contributing to Wikipedia. All contributors (admins and non-admins) can be the victim of harassment and all victims deserve the same resources and respect to report their incident for investigation. And I wholeheartedly agree that the current reporting processes open victims to additional harassment. This is unacceptable.

As for admin unresponsiveness — one of our theories is that many admins do not participate in investigating conduct disputes because it is time consuming and difficult to track incidents across multiple pages. We hope to counteract this by building an interaction history feature so admins don't have to dig through diffs, histories, and talk pages. Likewise, we want to make some improvements to the dashboards that admins use to monitor incoming reports of harassment. But these are just tools — we also want to help equip our admins with the skills to better identify harassment and make more informed decisions in how to respond to both the victims and the harassers.

Most importantly — all our work will be built on top of in-depth research into the current tools, workflows, and policies on English Wikipedia (at a minimum) as well as consultations with both affected projects and users like WikiProject Women in Red and administrators. As we learn more, our plans will change but the guiding principles and focus areas will remain the same. — Trevor Bolliger, WMF Product Manager 🗨 01:32, 28 February 2017 (UTC)

TBolliger (WMF): Thanks for these reassurances. I'm glad you agree with the need for improvements in the reporting process. You are also quite right in concluding that administrators seldom have the time or patience to go through long lists of past edits but at the moment without doing so it is impossible to draw up a meaningful history of abuse. I look forward to seeing whether your new tools will make it easier to overcome some of these constraints. I'm afraid we are faced with the choice of submitting complaints to ArbCom on the EN Wiki (with the risk of little or no action) or waiting to see if the health initiative can provide a more positive environment for overcoming the problems we have been facing. Perhaps you can provide some estimate of how long we will need to wait for the proposed tools to become accessible. Or whether there is any effective forum ready to examine our findings in the interim.--Ipigott (talk) 16:54, 28 February 2017 (UTC)

The WMF is working on solidifying our FY 2017-2018 plan so nothing is set in stone, but our current draft timeline can be read at Community health initiative#Schedule of work. The reporting system is currently scheduled for 2018 (which is certainly and unfortunately too far away to help your current problems) but if we hear during our consultations and conversations that the reporting system should be fast-tracked, we will rearrange our plans to better serve our communities.

As for your current problems, I do know the Trust and Safety team does review specific cases of harassment, and that they can be contacted through — Trevor Bolliger, WMF Product Manager 🗨 18:09, 28 February 2017 (UTC)

Google's Jigsaw/Perspective project for monitoring harassmentEdit

In connection with the above, I happened to come across news of Google's recently launched Perspective technology as part of its Jigsaw safety project. In particular, I was interested to see that Google had analyzed a million annotations from the Wikipedia EN talk pages in order to study how harassment had been followed up. Apparently "only 18 percent of attackers received a warning or a block from moderators, so most harassment on the platform went unchecked." Has there been any collaboration with Google on this? It seems to me that it would be useful to draw on their analysis, possibly adapting the approach to identifying aggressors and preventing unwarranted abuse.--Ipigott (talk) 14:22, 3 March 2017 (UTC)

1,000,000 comments were rated by 10 judges...
That makes roughly 1 full-year of work for each of them, assuming they have done nothing else but rating these. Poor guys...
This link is easy to rate as 'fake' ;) Pluto2012 (talk) 16:03, 3 March 2017 (UTC)
This is not fake. It has been mentioned on the WMF blog and widely reported in the press.--Ipigott (talk) 17:14, 3 March 2017 (UTC)
Indeed, "ten judges rated each edit" is not the same as "ten judges rated a million edits", although it may sound similar in English. For details of the methodology, the text of the study is here: [1]. There is also a more comprehensive writeup at the google blog: [2]Neotarf (talk) 20:49, 3 March 2017 (UTC)
I understood, but it is fake news or a lie...
1,000,000 / 10 = 100,000 edits
To rate one edit, let's assume it takes 1 minute (you have to read and understand)
100,000 / 60 / 8 = 208 days, i.e. 1 year of work.
Pluto2012 (talk) 06:11, 4 March 2017 (UTC)
Pluto2012: You're assuming that there were only 10 people working on the panel, reviewing every edit. According to the journal article, they had a panel of 4,053 people, which reduces the workload considerably. :) -- DannyH (WMF) (talk) 19:50, 7 March 2017 (UTC)
DannyH (WMF): I just read what is in the article: "Ten judges rated each edit to determine whether it contained a personal attack and to whom the attack was directed."
Neotarf first said that the sentence was not clear, and that I was wrong to assume that each edit was analysed by ten judges when I should have read it as ten judges doing the whole job. Now you tell me there were 4,000+. As stated, I just read what is written, and that is clearly impossible... Pluto2012 (talk) 04:54, 9 March 2017 (UTC)
Pluto2012: Yes, the sentence in the TechCrunch article was unclear. It should have said, "Each edit was rated by ten judges" [out of 4,000+] instead of "Ten judges rated each edit". There were 4000+ people on the panel.[3] -- DannyH (WMF) (talk) 19:24, 9 March 2017 (UTC)
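For what it's worth, the two readings imply very different workloads. A quick sanity check (assuming, as in the thread, one minute per rating and an eight-hour working day, with the panel size of 4,053 taken from the cited paper):

```python
# Back-of-the-envelope comparison of the two readings of
# "ten judges rated each edit". Assumptions from the thread:
# 1 minute per rating, 8-hour working day.

EDITS = 1_000_000

# Reading A (as computed above): ten judges split one million edits.
edits_per_judge_a = EDITS / 10                     # 100,000 edits each
days_per_judge_a = edits_per_judge_a / 60 / 8      # ~208 working days each

# Reading B (per the paper): a panel of 4,053 judges,
# with each edit rated by ten of them.
total_ratings = EDITS * 10                         # 10,000,000 ratings in total
ratings_per_judge_b = total_ratings / 4053         # ~2,467 ratings each
days_per_judge_b = ratings_per_judge_b / 60 / 8    # ~5 working days each

print(round(days_per_judge_a), round(days_per_judge_b, 1))
```

Under the second reading, each panelist's share is about a week of work rather than a year, which is why the 4,000-person panel resolves the apparent impossibility.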

Yes, a researcher here at the WMF was working with Jigsaw on this project. His findings can be found on Research:Detox. This machine-learning based API can only detect blatant harassment at the moment (e.g. "you are stupid" scores as 97% toxic whereas "you know nothing" scores 26%) so claiming that only 18% of harassment is mediated is not entirely accurate — it may be an even lower percentage. At the moment I don't think this exact measurement is worth monitoring — some situations can be self-resolved without the help of an admin, sometimes an aggressive comment is part of a healthy debate (e.g. "we're all being stupid" scores 92%) and there is the opportunity for false positives (e.g. "this constant vandalism is fucking annoying" scores a 97%.)

At the end of the day, we want to build tools that empower admins to make the correct decisions in conduct disputes. We are certainly going to explore whether using an AI like this makes these tools even more powerful. — Trevor Bolliger, WMF Product Manager 🗨 18:30, 3 March 2017 (UTC)

Apparently "only 18 percent of attackers received a warning or a block from moderators, so most harassment on the platform went unchecked." Yes, good catch, the assumption behind this project has been that admins are unable to detect blatant harassment on their own. What if there is something else going on?
Also, I notice that after I mentioned "undue emphasis on blocking", the comments on blocking were merely moved to a different section. No, I meant "blocking" as being the only tool in the playbook. I know it is the Wiki Way to find the biggest stick possible, and then find someone to hit with it, but what about *talking*? I have always found it ironic that there are five different levels of informational templates for vandals who can barely communicate above the level of chimpanzees typing randomly on a keyboard, but one admin acting alone has the authority to indef long term good-faith contributors without any discussion for any reason or for no reason. Why not simply remove the harassing comment? Why give it a platform at all? —Neotarf (talk) 21:30, 3 March 2017 (UTC)
"there is the opportunity for false positives (e.g. "this constant vandalism is f*...." A big "citation needed" there. If dropping the f-bomb is viewed by the WMF as a desirable manner for admins and employees to conduct themselves, and to defuse potentially inflammatory situations, maybe the next question should be whether this project team intends to be responsive to the needs and values of the editing community as a whole, or just expects to impose its own interpretations and explanations on marginalized groups, as it does not seem to be a very demographically diverse group. —Neotarf (talk) 22:09, 3 March 2017 (UTC)
Would you agree that, in the end, having an AI automatically sanctioning harassment would be disastrous, or do you think this has to be nuanced? Pluto2012 (talk) 06:20, 4 March 2017 (UTC)
I would agree that any system that purely judges how aggressive a comment is and auto-blocks wouldn't work on any Wikipedia or any other wiki project. It's too open to both false positives and false negatives. But I wouldn't say 'disastrous' — there's room for exploring some preventative solutions. (EN Wikipedia and others already use some preventative anti-abuse features such as AbuseFilter, auto-proxy blocking, etc.) — Trevor Bolliger, WMF Product Manager 🗨 19:15, 7 March 2017 (UTC)
Permit me to insist that you try to get some support from experts in the social sciences.
You don't seem to understand what I mean and the consequences of what you build... Pluto2012 (talk) 04:56, 9 March 2017 (UTC)
As a social science researcher, I understand the social sciences are messy. Nothing is ever black and white and there is a whole lot of grey. Communities are always different and definitions and values are generally unique. Harassment, however, is universally toxic to communities. I personally was excited when I first read about this study. It is the intersection of topics very important to me: community, harassment, education equality, and I've always had a love for computer learning. I honestly spent a few hours reading about this tool and playing with it trying to sort out what might be false positives, or false negatives. I did find some potential holes (with terms generally directed toward women), but that does not mean it is all a loss. In social science research, sometimes you don't know what you'll find, and sometimes those findings surprise you. Even cooler is when some of those findings actually help other people. I'm hopeful this will help a whole lot of people. For now, how about we assume good faith and see how this impacts the community health for current and future contributors. Jackiekoerner (talk) 03:00, 10 March 2017 (UTC)
Of course: harassment is universally toxic. That's not the point. The point is that harassment is not spontaneous, nor does it come from bad people joining the project; it is generated by the Wikipedia system acting on normal people.
The problem is therefore not how to fight it but how to prevent it from arising!
See eg: [2]
-> For each tool developed, the targeted people will find alternatives to express their 'feelings'
-> Other forms of harassment (and feelings of harassment) will appear if you counter these
Pluto2012 (talk) 05:39, 5 October 2017 (UTC)


  1. Wulczyn, Ellery; Thain, Nithum; Dixon, Lucas (27 October 2016). "Ex Machina: Personal Attacks Seen at Scale" – via 
  2. "When computers learn to swear: Using machine learning for better online conversations". 23 February 2017. 
  3. Wulczyn, Ellery; Thain, Nithum; Dixon, Lucas (27 October 2016). "Ex Machina: Personal Attacks Seen at Scale" – via 

Some technical queriesEdit

Since this group seems only inclined to answer technical queries without regard to ethical and organizational implications, there are a few technical issues I have not been able to find an answer to; perhaps someone here has time to give some feedback on some of these.

1. I see auto-blocks mentioned above and just had a conversation about it a few days ago with a user who is blocked on enwiki: "... whenever I copy the contents from EnWP it triggers the autoblock if I am logged in. Which means I have to log out, cut the content to something like notepad and then log back in. It's too much effort..." The problem with this is that when the autoblock triggers, it also makes the name of the blocked user visible to anyone else at that IP, thus linking their identity in real life with their user name, which by my reading is a violation of the privacy policy. There was a fairly high-profile situation a few years ago where someone triggered an auto-block at a conference. One of the Wikipedia criticism sites is reporting that admins do not think the privacy policy applies to blocked users, but it seems like at the very least such users should be informed in advance that their IP will be linked to their user name.
2. Is it possible to turn off the extended edit information on the edit count feature on enwiki? Mine seems to be locked in.
3. The official website for the Syria Direct news service is globally blocked, due to an edit filter for the nonexistent site —Neotarf (talk) 00:19, 10 March 2017 (UTC)

English Wikipedia discussion regarding harassed administratorsEdit

I thought it might be interesting to those working on and following this initiative to follow a discussion I opened on the English Wikipedia regarding returning administrator rights after a user is harassed to the degree that they undergo a clean start on a fresh account. See here. Samwalton9 (talk) 08:51, 19 March 2017 (UTC)

Hi Sam, thanks for sharing this. It's certainly an interesting topic with a lot of insightful side conversations occurring. One of the goals of the Community Health Initiative is to empower more admins to be confident in their dispute resolution decisions. I've been thinking of this in two dimensions: motivation & ability. 'Ability' will manifest as training and tools so admins can make accurate and fair decisions. 'Motivation' is a little more difficult — it is learning why some admins never participate in dispute resolution and why some cases on ANI are entirely ignored, and providing resources to combat these reasons. Fear of harassment as retribution is definitely a hurdle for involvement, and even worse is a disgraceful result for hard working admins who are legitimately trying to make Wikipedia a healthier environment for collaboration. — Trevor Bolliger, WMF Product Manager 🗨 17:59, 20 March 2017 (UTC)
Hm. Some admins only care about content and not "community management", that's entirely normal. We have a problem if a user cares about handling hairy stuff such as w:en:WP:ANI but doesn't "manage" to; not if they don't care at all. Nemo 19:45, 21 March 2017 (UTC)
I believe you're absolutely correct. I suspect most (if not all) admins became admins because of their involvement in the content building/management. (Browsing a dozen or so RfAs reinforces this suspicion.) I don't expect all admins to participate in community management/moderation, but with 100,000+ active monthly contributors there is certainly the need for some people to perform this work. And those people should be equipped and prepared to be successful. — Trevor Bolliger, WMF Product Manager 🗨 21:11, 21 March 2017 (UTC)

Too technical maybe?Edit

I want first to congratulate the authors of this proposal and I hope that it goes through.

My small contribution to its improvement:

I suspect that this is a computer expert driven initiative, hoping that some tools will help Wikimedia users to better detect harassment. These tools will definitely work... at first. So I support this proposal.

However, I believe (and I have experienced this myself, badly) that harassment can take place in many more covert ways. These cannot be dealt with by tools and software. I would really put much more effort into

  • making a diagnosis on each and every wiki
  • training administrators (yes, I saw that you propose this as part of the job of one person)
  • training for all users to better handle harassment.

--FocalPoint (talk) 12:32, 26 March 2017 (UTC)

Hello, FocalPoint, and thanks for sharing your perspective. I agree — software will only go so far. We view half of the initiative as being software development (named Anti-Harassment Tools) and the equally important second half will be resources (named Policy Growth and Enforcement). This current wiki page is heavy on the software plans only because I've been working on it since January while SPoore_(WMF) (talk · contribs) just started two weeks ago. That content will grow as we develop our strategy. You may see some of the next steps of this on Wikipedia(s) first as we begin more proactive communication methods.
Could you please share a little more of your thoughts around what you mean by "making a diagnosis"? It's too vague for me to completely understand what you mean. Thank you! — Trevor Bolliger, WMF Product Manager 🗨 21:27, 27 March 2017 (UTC)

Hi Trevor,

making a diagnosis on each and every wiki


  • Stage 1: a simple general diagnostic questionnaire - impersonal and with easy questions, example:
    • what do you like best when contributing?
    • do you participate in discussions?
    • have you ever felt uneasy when contributing to articles?
    • have you ever felt uneasy during discussions?
    • a bit more intrusive but still easy questions

It will probably be easier with multiple-choice answers.

Look Trevor, I am no expert, but I know that when you want to diagnose whether kids live in a healthy family, psychologists ask them to make a drawing of the family; they do not ask whether their father or mother is violent.

  • Stage 2: A few weeks, or even months, after the first, issue a targeted questionnaire with harder questions:
    • have you ever seen any discussion that you believe is harassment?
    • did anyone do anything about it?
    • have you been accused of harassment during the previous xx months?
    • have you felt harassed during the previous xx months?
    • what did you do about it?


Stage 3: Initiate on-wiki discussion

Stage 4: Create focus groups and study in person their reactions and interactions when discussing life on wiki.

I hope I gave you a rough idea of a process which will not only provide valuable information, but will also prepare people, making them more sensitive to community health issues (even before the "real thing" starts).

As for el-wiki, my home wiki, please see the discussion on the future of Wikipedia (use machine translation, it will be enough): el:Συζήτηση Βικιπαίδεια:Διαβούλευση στρατηγικής Wikimedia 2017. Already 6 users have supported the text entitled "A Wikipedia which is fun to contribute to", where we ask for an inclusive, open and healthy environment without harassment, without biting newcomers. Out of the total of 8 participants on the page, six support these thoughts.

With about 49-50 active editors, 6 is already 12% of contributors. A loud message.

The project proposed here is really important. --FocalPoint (talk) 21:06, 29 March 2017 (UTC)

Yes, I'll definitely have a read, thank you for sharing those links! And thank you for expanding on your suggestion. We already have a lot of background information from Research:Harassment_survey_2015 but that was primarily about understanding the landscape — our first steps will be to test the waters on how we can successfully effect change to build a healthier environment for all constructive participants. — Trevor Bolliger, WMF Product Manager 🗨 22:52, 29 March 2017 (UTC)


Have you considered hiring psychologists in order to determine what the root causes are, culturally, of the common harassments? Do you have a thorough understanding of what the common harassments consist of, and the contexts in which they arise? Have you considered offering counselling services to users who probably need it?

While some of the tools would definitely benefit from more love, many of the problems we have with harassment, especially on the larger projects, are cultural in nature, and thus should likely be addressed as such, or at very least understood as such, first. How this varies across projects is also something you should be looking into, as that is going to be relevant to any new tools you do produce. -— Isarra 01:49, 28 April 2017 (UTC)

+1. Pluto2012 (talk) 04:33, 7 May 2017 (UTC)

"Civil rights" vs "social justice power play"Edit

Re: "The project will not succeed if it’s seen as only a “social justice” power play." at Community input. Is there some reason this alt-right anti-Semitic dogwhistle was used in a grant application? Is there some reason it isn't just referred to as "civil rights"? This really jumped out at me when I read the grant application, that it shows such a contempt for justice and such a zero-sum-game approach to - of all things - writing an encyclopedia. Someone else questioned this on the Gender Gap mailing list as well, but received no response.—Neotarf (talk) 22:06, 28 April 2017 (UTC)

This is a direct effect, I think, of that word being thrown into the faces of those who engage in these kinds of topics with some regularity. If I had written this, I would have been inclined to formulate it like that as well. However you might be right that it is better to reword this fragment. —TheDJ (talkcontribs) 08:32, 4 May 2017 (UTC)

Getting rid of harassment reports by getting rid of the harassedEdit

About the section on "potential measures of success": "Decrease the percentage of non-administrator users who report seeing harassment on Wikipedia, measured in a follow-up to the 2015 Harassment Survey." This kind of loophole may create more problems than it solves.

If your goal is just to get rid of harassment reports, the fastest way is to just get rid of the people who might be inclined to report harassment, that is, anyone who is not white, male, and heterosexual. Since many of the people who reported harassment in the survey said that they decreased their activity or left the project as a result of the harassment, they may already be gone. More left after they faced retaliation for speaking out, or witnessed retaliation against others. There are a few at the highest levels of the movement who have enough money to be unaffected by policies or who have the money to influence policies, but many professionals will no longer engage in these consultations unless it is face to face and they can see who they are talking to. More and more, such consultations occur in venues that are open only to those who are wealthy enough to be in a donor relationship with the foundation and so receive scholarships to attend. This avenue has been further narrowed since members of websites that are known to dox and publish hit pieces about Wikimedians have started showing up at such consultations. —Neotarf (talk) 23:22, 28 April 2017 (UTC)

I agree that changes in such a metric (relative number of self-reported "victims" of harassment) are extremely unlikely to provide any meaningful information about changes in the real world, because they can be caused by so many things, like: different ways to explain/define the ultra-generic term "harassment", different perception of harassment even if nothing has changed, participants invitation method, willingness of users to alter the results because they like/dislike this initiative, retaliation from users who got blocked or have some other dissatisfaction, other selection biases.
I don't know how you're so sure that some of the groups you mention (like non-heterosexuals) are more harassed than others, but I imagine that at least on some wikis, rather than being eliminated, they're more likely to be discouraged by some entrenchment on non-NPOV views for some content which will disgust large sectors of the population, and this could alter statistics.
In general, as several people noted on Research talk:Harassment survey 2015, the survey was especially susceptible to biases of all kinds. I'm not sure it's easy to manipulate the survey with actions, because it's impossible to predict how any change in reality would be reflected in the survey: an increase in harassment may discourage people from engaging in a survey; an elimination of harassers may make them flock to the next survey; blocked users may organise in some way to skew the results if they feel they have leverage or vice versa that the situation is getting worse for them. I think the results will just continue being meaningless.
There is certainly a risk that certain specific people or groups will be targeted even more strongly than they currently are, by people feeling empowered (or threatened) by this initiative, but I'd expect this to happen mostly for old grudges. Almost certainly someone will try to get rid of some old enemy in the name of the greater good, but this will happen under the radar. --Nemo 13:21, 2 June 2017 (UTC)

English WikipediaEdit

Thanks for specifying what is specific to the English Wikipedia. I hope the needs of (some parts of) the English Wikipedia will not dominate the project. --Nemo 12:55, 14 June 2017 (UTC)

@Nemo bis: You're welcome, and thank you for participating. English Wikipedia presents unique complexities due to its size and history, but at the end of the day we want all the tools we build to work effectively for all wikis who desire to use them. — Trevor Bolliger, WMF Product Manager 🗨 17:26, 14 June 2017 (UTC)

Exploring how the AbuseFilter can be used to combat harassmentEdit

The AbuseFilter is a feature that evaluates every submitted edit, along with other logged actions, and checks them against community-defined rules. If a filter is triggered, the edit may be rejected, tagged, or logged, and the filter may display a warning message and/or revoke the user's autoconfirmed status.

Currently there are 166 active filters on English Wikipedia, 152 active filters on German Wikipedia, and 73 active filters here on Meta. One example from English Wikipedia would be filter #80, “Link spamming” which identifies non-autoconfirmed users who have added external links to three or more mainspace pages within a 20 minute period. When triggered, it displays this warning to the user but allows them to save their changes. It also tags the edit with ‘possible link spam’ for future review. It’s triggered a dozen times every day and it appears that most offending users are ultimately blocked for spam.
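As a rough illustration, the conditions of a filter like #80 can be modeled as follows. This is a simplified Python sketch, not actual AbuseFilter rule syntax; the function and field names are invented for the example, and the real filter handles details (what counts as an external link, namespace checks) omitted here:

```python
from dataclasses import dataclass, field

@dataclass
class EditHistory:
    """Mainspace pages to which a user added external links.
    Each entry is a (page_title, timestamp_in_minutes) tuple."""
    link_additions: list = field(default_factory=list)

def trips_link_spam_filter(is_autoconfirmed: bool, history: EditHistory,
                           now: float, window: float = 20.0,
                           threshold: int = 3) -> bool:
    """Rough model of filter #80: a non-autoconfirmed user adds external
    links to `threshold` or more distinct mainspace pages within
    `window` minutes."""
    if is_autoconfirmed:
        return False
    recent_pages = {page for page, ts in history.link_additions
                    if now - ts <= window}
    return len(recent_pages) >= threshold

# A new account that linked to three pages in 15 minutes trips the filter:
h = EditHistory([("Article A", 0.0), ("Article B", 5.0), ("Article C", 12.0)])
print(trips_link_spam_filter(False, h, now=15.0))  # True
```

When a filter like this trips with the "warn" action, the user sees the warning message but may still save; with "tag", the edit is marked for later review, matching the behavior described above.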

AbuseFilter is a powerful tool and we believe it can be extended to handle more user conduct issues. The Anti-Harassment Tools software development team is looking into three major areas:

1. Improving its performance so more filters can run per edit

We want to make the AbuseFilter extension faster so more filters can be enabled without having to disable any other useful filters. We’re currently investigating the current performance in task T161059. Once we better understand how it is currently performing we’ll create a plan to make it faster.

2. Evaluating the design and effectiveness of the warning messages

There is a filter on English Wikipedia — #50, “Shouting” — which warns when an unconfirmed user makes an edit to mainspace articles consisting solely of capital letters. When the filter is tripped, it displays a warning message to the user above the edit window:

From en:MediaWiki:Abusefilter-warning-shouting. Each filter can specify a custom message to display.

These messages help dissuade users from making harmful edits. Sometimes requiring a user to take a brief pause is all it takes to avoid an uncivil incident.
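For illustration, the trigger condition of a "shouting"-style filter amounts to a check like the following. This is a hedged Python sketch, not the filter's real rule; the length cutoff and uppercase ratio are invented for the example:

```python
def looks_like_shouting(added_text: str, min_letters: int = 20) -> bool:
    """Approximate 'shouting' check: the added text consists (almost)
    solely of capital letters. Very short additions are ignored so that
    acronyms and abbreviations are not flagged."""
    letters = [c for c in added_text if c.isalpha()]
    if len(letters) < min_letters:
        return False
    upper = sum(1 for c in letters if c.isupper())
    return upper / len(letters) > 0.9  # illustrative threshold

print(looks_like_shouting("THIS ARTICLE IS COMPLETELY WRONG AND EVERYONE KNOWS IT"))  # True
print(looks_like_shouting("NASA launched a new probe last week"))  # False
```

A warning tied to a check like this fires before the edit is saved, which is what creates the "brief pause" mentioned above.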

We think the warning function is incredibly important but are curious whether the presentation could be more effective. We’d like to work with any interested users to design a few variations so we can determine which placement (above the edit area, below, as a pop-up, etc.), visuals (icons, colors, font weights, etc.), and text most effectively convey the intended message for each warning. Let us know if you have any ideas or if you’d like to participate!

3. Adding new functionality so more intricate filters can be crafted.

We’ve already received dozens of suggestions for functionality to add to AbuseFilter, but we need your help to winnow this list so we can effectively build filters that help combat harassment.

In order to do this, we need to know what types of filters are already successful at logging, warning, and preventing harassment. Which filters do you think are already effective? If you wanted to create a filter that logged, warned, or prevented harassing comments, what would it be? And what functionality would you add to AbuseFilter? Join our discussion at Talk:Community health initiative/AbuseFilter.

Thank you, and see you at the discussion!

— The Anti-Harassment Tools team (posted by Trevor Bolliger, WMF Product Manager 🗨 23:07, 21 June 2017 (UTC))

Changes we are making to the Echo notifications blacklist before release & Release strategy and post-release analysisEdit


I've posted #Changes we are making to the blacklist before release and #Release strategy and post-release analysis for those interested in our Echo notifications blacklist feature. Feedback appreciated! — Trevor Bolliger, WMF Product Manager 🗨 18:34, 23 June 2017 (UTC)

Combat harassment ?Edit

For the record:

it is really a pity that the people in charge of this project, nicely named the "community health initiative", will use aggression to prevent aggression.

There is no harassment on Wikipedia. There are frustrations and misunderstandings, due to poor communication (no body language) and the poor definition of behavioural rules, which generate aggression, one form of which is harassment.

As explained a few months ago already, I think this project lacks psychologists or specialists in the social sciences. I think the leader(s) do(es) not have the human competences to manage and lead this project, and they should report this concern to their own management.

With your bots and filters you will create more damage than "community health" or sanity.

Pluto2012 (talk) 10:36, 12 July 2017 (UTC)

@Pluto2012: You are arguing that there is harassment on Wikipedia in the form of aggressiveness (which is supported by evidence), so this general argument is not persuasive. I also don't think there's a lot of evidence to suggest that this aggressiveness stems from legitimate misunderstandings. In terms of whether the right kind of expertise is present on this initiative, I believe that having folks with the right combination of on-wiki experience, tool-building, awareness of research in this area, and ability to communicate with contributors is apt. Expertise in psychology could be useful, but probably isn't strictly necessary to develop responsible and community-supported approaches to address these problems reported by Wikimedia communities. I JethroBT (talk) 21:55, 15 September 2017 (UTC)

Our goals through September 2017Edit

I have two updates to share about the WMF’s Anti-Harassment Tools team. The first (and certainly the most exciting!) is that our team is fully staffed to five people. Our developers, David and Dayllan, joined over the past month. You can read about our backgrounds here.

We’re all excited to start building some software to help you better facilitate dispute resolution. Our second update is that we have set our quarterly goals for the months of July-September 2017 at mw:Wikimedia Audiences/2017-18 Q1 Goals#Community Tech. Highlights include:

I invite you to read our goals and participate in the discussions occurring here, or on the relevant talk pages.


Trevor Bolliger, WMF Product Manager 🗨 20:31, 24 July 2017 (UTC)


Even before "policing", I like projects which help self-defense. seems ok, focusing on the most common mistakes people make which put them at real risk (though among the resources they link I only know ). --Nemo 06:31, 11 August 2017 (UTC)

I like these projects too. I think there's room for a "security/privacy/harassment check-up" feature which walks users through their preferences and more clearly explains the trade-offs that different settings allow. Additionally, I think features like our Mute and User page protection features would work in this regard. — Trevor Bolliger, WMF Product Manager 🗨 20:04, 12 August 2017 (UTC)

Update and request for feedback about User Mute featuresEdit

Hello Wikimedians,

The Anti-harassment Tool team invites you to check out the new User Mute features under development and to give us feedback.

The team is building software that empowers contributors and administrators to make timely, informed decisions when harassment occurs.

With community input, the team will be introducing several User Mute features to allow one user to prohibit another specific user from interacting with them. These features equip individual users with tools to curb harassment that they may be experiencing.

The current notification and email preferences are either all-or-nothing. These mute features will allow users to receive purposeful communication while ignoring non-constructive or harassing communication.

Notifications muteEdit

With the notifications mute feature, individual users can control their on-wiki Echo notifications in order to stop unwelcome notifications from another user. At the bottom of the "Notifications" tab of user preferences, a user can mute on-site Echo notifications from individual users by typing their username into the box.

The Echo notifications mute feature is currently live on Meta Wiki and will be released on all Echo-enabled wikis on August 28, 2017.

Try out the feature and tell us how well it is working for you and your wiki community, suggest improvements to the feature or documentation, and let us know if you have questions about how to use it: Talk:Community health initiative/User Mute features

Email Mute listEdit

Soon the Anti-harassment tool team will begin working on a feature that allows one user to stop a specific user from sending them email through Wikimedia Special:EmailUser. The Email Mute list will be placed in the 'Email options' section of the 'User profile' tab of user preferences. It will not be connected to the Notifications Mute list; it will be an entirely independent list.

This feature is planned to be released to all Wikimedia wikis by the end of September 2017.

For more information, see Community health initiative/Special:EmailUser Mute.

Let us know your ideas about this feature.

Open questions about user mute featuresEdit

See Community health initiative/User Mute features for more details about the user mute tools.

Community input is needed in order to make these user mute features useful for individuals and their wiki communities.

Join the discussion at Talk:Community health initiative/User Mute features, or if you want to share your thoughts privately, contact the Anti-harassment tool team by email.

For the Anti-harassment tool team, SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 20:25, 28 August 2017 (UTC)

Community health: Definition neededEdit

Please define the term "community health". It's not at all clear what this is. If you could cite relevant science, that would be helpful. Thank you. --MatthiasGutfeldt (talk) 13:59, 11 September 2017 (UTC)

Hello MatthiasGutfeldt, The way that Wikimedia uses the phrase “community health” is explained on this page. It mentions the first times that the phrase was used around 2009, including the name for task force for the Wikimedia Strategy 2010, Community health task force. Since then the term has been used to study whether there is a good working environment in the Wikimedia community. See Community health workshop presentation slides for another explanation about the term.
So you can see that the term was adopted for this initiative based on prior use. But as far as I’m concerned, if the word is too confusing or does not translate well from English into another language then another term can be substituted that conveys a similar meaning. Currently, there is a discussion on dewiki about the use of the phrase.
I’m interested in knowing your thoughts about the use of the term. You can respond here on meta or the dewiki. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 23:47, 11 September 2017 (UTC)

Focus on devianceEdit

This project has a serious problem: it seems to focus primarily on deviant behavior. My experience is, however, that the gravest incidents of harassment come from respected and experienced members of the communities, not seldom admins. Tools for blocking etc. will be of little use in these cases, as they tend to be applied against people at the margins of the communities. A "community health project" should assume a critical stance toward the norms and mores that are dominant in the cores of the communities, not just try to make the exclusion of problematic outsiders more effective. This means it should also deal with harassment coming from admins and regular users (who can hardly be reached by exclusion). If the project neglects the core users' part in harassing, it will either fail or have negative outcomes.

In a German Wikipedia discussion, it became clear that the concept of health is very problematic, for similar reasons. There is already a tendency to qualify "disturbers" as pathological, which usually will not be accepted if the targets of such attacks are regular and experienced users. But if the targets are users with little or bad reputation, such attacks are more frequent and often hardly criticized. We should not equate good behavior with "healthy" behavior and bad or unwanted behavior with illness.--Mautpreller (talk) 14:48, 13 September 2017 (UTC)

Honestly, no matter what wording you pick, you will probably not cover everything appropriately, and definitely not across multiple languages. I note that health has a very broad scope. Fitness is health too. Food can be healthy. If the German wiki calls some people's behavior pathological, then that seems like a problem, but it's not really connected to this choice of wording, in my personal opinion. —TheDJ (talkcontribs) 20:12, 13 September 2017 (UTC)
@Mautpreller: You're right, we need to address all forms of harassment that are occurring and be able to respond to new forms that may arise in the future. This includes blatant harassment (e.g. personal insults and universally unacceptable insults), harassment from newcomers who are acclimating to the encyclopedia's content standards, and harassment from seasoned editors who've grown tired of low-quality edits and vandalism. (Yes, this is a reductive list, intentionally so for brevity.)
Personally, I think the biggest problem we need to solve is how to properly sanction highly productive editors while retaining their productivity. Full site blocks are extreme — what other low- or mid-level punitive responses can we build? Our team tries not to villainize users who need to be sanctioned — collaboratively building an encyclopedia is hard work and emotions can get the best of any of us. How can we create an environment where incidents of incivility are opportunities for learning and self-improvement? — Trevor Bolliger, WMF Product Manager 🗨 22:26, 13 September 2017 (UTC)
H'm. "Our team tries not to villainize users who need to be sanctioned", that is good. However, I definitely see a problem that this might occur even against your will, simply because of the issues of power asymmetry and social dynamics. Moreover, I am not so sure that I like the idea that "an environment where incidents of incivility are opportunities for learning and self-improvement" should be created. Think of Jimmy Wales. He often uses incivility as a provocation. I can't imagine that he will use it as an "opportunity for learning and self-improvement". --Mautpreller (talk) 09:45, 14 September 2017 (UTC)
Hello Mautpreller, thank you for raising the issue of power asymmetry. As the Community health initiative designs solutions to address harassment and conflict resolution, it is important to consider the social dynamics of the community. Going back to your first statement, it is true that to be successful this Community Health Initiative needs to look for ways to support people at the margins of the community. As a Community Advocate my job is to make sure all stakeholders are considered, including the marginalized people who are not well represented on Wikimedia Foundation wikis today. To do this, I'm arranging for the Anti-harassment tools team members to speak to active and less active contributors, long-term contributors and newer ones, and also community organizers who are attempting to find new contributors from less well represented groups. And as feasible, the Anti-harassment tools team is speaking to people at the margins of the community, too. We are doing private one-on-one conversations, group interviews, surveys, and formal and informal community consultations (both on and off wiki) in order to learn from many different types of stakeholders. We know that there are many individuals and groups in many different language wikis that need to be considered. The team is committed to expanding our reach as much as we practically can. It is challenging work because there is no feasible way to repeatedly hold large-scale, meaningful, multilingual conversations on hundreds of wikis. So, we greatly appreciate you finding us on meta and sharing your thoughts with the international Wikimedia movement.
I'm following the German Wikipedia discussion about the Community health initiative with Christel Steigenberger's assistance. She will update me next week about the discussions happening there. The phrase "community health" might not work well in some wiki communities because of preexisting cultural interpretations of the words. In these communities, an alternative name for the initiative can be discussed and agreed on.
Mautpreller, your concerns are reasonable and I'm glad you are sharing them now. Our team doesn't want to build tools that will make the marginalization of some groups of new users worse than it is now. With good communication with all stakeholders, our team aims to foster a more welcoming editing environment for a more diverse community. Please continue to join our discussions (on meta and your home wiki) and invite others to participate, too. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 14:20, 19 September 2017 (UTC)
SPoore (WMF) just a small note concerning your last paragraph: marginalization isn't necessarily restricted to new users. --HHill (talk) 12:39, 20 September 2017 (UTC)

Translation tagsEdit

Hey folks, I just fixed some translation markup that was causing some of the links to other pages to not work properly. Feel free to adjust the names for the <tvar|> tags if needed, as I just set names that I thought made sense for the links. I JethroBT (talk) 21:39, 15 September 2017 (UTC)

Help us decide the best designs for the Interaction Timeline featureEdit

Hello all! In the coming months the Anti-Harassment Tools team plans to build a feature that we hope will allow users to better investigate user conduct disputes, called the Interaction Timeline. In short, the feature will display all edits by two users on pages where they have both contributed in a chronological timeline. We think the Timeline will help you evaluate conduct disputes in a more time efficient manner, resulting in more informed, confident decisions on how to respond.

But — we need your help! I’ve created two designs to illustrate our concept and we have quite a few open questions which we need your input to answer. Please read about the feature and see the wireframes at Community health initiative/Interaction Timeline and join us at the talk page!

Thank you, — CSinders (WMF) (talk) 19:48, 3 October 2017 (UTC)

Anti-Harassment Tools quarterly updateEdit

Happy October, everyone! I'd like to share a quick summary of what the Anti-Harassment Tools team accomplished over the past quarter (and our first full quarter as a team!) as well as what's currently on the docket through December. Our Q1 goals and Q2 goals are on wiki, for those who don't want emoji and/or commentary.

Q1 summary

📊 Our primary metric for measuring our impact for this year is "admin confidence in resolving disputes." This quarter we defined it, measured it, and are discussing it on wiki. 69.2% of English Wikipedia admins report that they can recognize harassment, while only 39.3% believe they have the skills and tools to intervene or stop harassment and only 35.9% agree that Wikipedia has provided them with enough resources. There's definitely room for improvement!

🗣 We helped SuSa prepare a qualitative research methodology for evaluating Administrator Noticeboards on Wikipedia.

⏱ We added performance measurements for AbuseFilter and fixed several bugs. This work is continuing into Q2.

⚖️ We've begun on-wiki discussions about Interaction Timeline wireframes. This tool should make user conduct investigations faster and more accurate.

🤚 We've begun an on-wiki discussion about productizing per-page blocks and other ways to enforce editing restrictions. We're looking to build appropriate tools that keep rude yet productive users productive (but no longer rude).

🤐 For Muting features, we've finished & released Notifications Mute to all wikis and Direct Email Mute to Meta Wiki, with plans to release to all wikis by the end of October.

Q2 goals

⚖️ Our primary project for the rest of the calendar year will be the Interaction Timeline feature. We plan to have a first version released before January.

🤚 Let's give them something to talk about: blocking! We are going to consult with Wikimedians about the shortcomings in MediaWiki’s current blocking functionality in order to determine which blocking tools (including sockpuppet, per-page, and edit throttling) our team should build in the coming quarters.

🤐 We'll decide, build, and release the ability for users to restrict which user groups can send them direct emails.

📊 Now that we know the actual performance impact of AbuseFilter, we are going to discuss raising the filter ceiling.

🤖 We're going to evaluate ProcseeBot, the cleverly named tool that blocks open proxies.

💬 Led by our Community Advocate Sydney Poore, we want to establish communication guidelines and a cadence which encourage active, constructive participation between Wikimedians and the Anti-Harassment Tools team through the entire product development cycle (pre- and post-release).

Feedback, please!

To make sure our goals and priorities are on track, we'd love to hear if there are any concerns, questions, or opportunities we may have missed. Shoot us an email directly if you'd like to chat privately. Otherwise, we look forward to seeing you participate in our many on-wiki discussions over the coming months. Thank you!

— The Anti-Harassment Tools team (Caroline, David, Dayllan, Sydney, & Trevor) (posted by Trevor Bolliger, WMF Product Manager 🗨 20:53, 4 October 2017 (UTC))

Submit your ideas for Anti-Harassment Tools in the 2017 Wishlist SurveyEdit

The WMF's Anti-Harassment Tools team is hard at work on building the Interaction Timeline and researching improvements to Blocking tools. We'll have more to share about both of these in the coming weeks, but for now we'd like to invite you to submit requests to the 2017 Community Wishlist in the Anti-harassment category: 2017 Community Wishlist Survey/Anti-harassment. Your proposals, comments, and votes will help us prioritize our work and identify new solutions!

Thank you!

Trevor Bolliger, WMF Product Manager 🗨 23:57, 6 November 2017 (UTC)

Implicit bias study grant proposalEdit

FYI   Grants:Project/JackieKoerner/Investigating the Impact of Implicit Bias on Wikipedia, a proposed qualitative study of implicit bias on Wikipedia. Very close to final decision, with committee comments noted on talk page. I imagine it would be helpful to have some feedback on how such a project would fit into this initiative. (not watching, please {{ping}}) czar 20:10, 24 November 2017 (UTC)

Anti-Harassment Tools team goals for January-March 2018Edit

Hello all! Now that the Interaction Timeline beta is out and we're working on the features to get it to a stable first version (see phab:T179607) our team has begun drafting our goals for the next three months, through the end of March 2018. Here's what we have so far:

  • Objective 1: Increase the confidence of our admins for resolving disputes
    • Key Result 1.1: Allow wiki administrators to understand the sequence of interactions between two users so they can make an informed decision by adding top-requested features to the Interaction Timeline.
    • Key Result 1.2: Allow admins to apply appropriate remedies in cases of harassment by implementing more granular types of blocking.
  • Objective 2: Keep known bad actors off our wikis
    • Key Result 2.1: Consult with Wikimedians about shortcomings in MediaWiki’s current blocking functionality.
    • Key Result 2.2: Keep known bad actors off our wikis by eliminating workarounds for blocks.
  • Objective 3: Reports of harassment are higher quality while less burdensome on the reporter
    • Key Result 3.1: Begin research and community consultation on English Wikipedia for requirements and direction of the reporting system, for prototyping in Q4 and development in Q1 FY18-19.

Any thoughts or feedback, either about the contents or the wording I've used? I feel pretty good about these (they're aggressive enough for our team of 2 developers) and feel like they are the correct priority of things to work on.

Thank you! — Trevor Bolliger, WMF Product Manager 🗨 22:41, 7 December 2017 (UTC)

Update: We've decided to punt one of the goals to Q4. Here are the updated goals:
  • Objective 1: Increase the confidence of our admins for resolving disputes
    • Key Result 1.1: Allow wiki administrators to understand the sequence of interactions between two users so they can make an informed decision by adding top-requested features to the Interaction Timeline.
    • Key Result 1.2: Allow admins to apply appropriate remedies in cases of harassment by beginning development on more granular types of blocking.
    • Key Result 1.3: Consult with Wikimedians about shortcomings in MediaWiki’s current blocking functionality in order to determine which improvements to existing blocks and new types of blocks our team should implement in the first half of 2018.
  • Objective 2: Reports of harassment are higher quality while less burdensome on the reporter
    • Key Result 2.1: Begin research and community consultation on English Wikipedia for requirements and direction of the reporting system, for prototyping in Q4 and development in Q1 FY18-19.

Trevor Bolliger, WMF Product Manager 🗨 01:36, 20 December 2017 (UTC)

I suggest following the advice at writing clearly. Does point 1.1 actually mean "Add some highly requested features to the Interaction Timeline tool so that wiki administrators can make an informed decision with an understanding of the sequence of interactions between two users"? Or will administrators add something to the timeline tool content? --Nemo 13:53, 26 December 2017 (UTC)
The Anti-Harassment Tools software development team at the WMF will add the new features. I format these team goals with the desired outcome first, to help us keep in mind that our software should serve users, and that we're not just building software for the sake of writing code. — Trevor Bolliger, WMF Product Manager 🗨 18:42, 2 January 2018 (UTC)

Anti-Harassment Tools status updates (Q2 recap, Q3 preview, and annual plan tracking)Edit

Now that the Anti-Harassment Tools team is 6 months into this fiscal year (July 2017 - June 2018), I wanted to share an update about where we stand with both our 2nd Quarter goals and our Annual Plan objectives, as well as provide a preview of our 3rd Quarter goals. There's a lot of information, so you can read the in-depth version at Community health initiative/Quarterly updates or just these summaries:

Annual plan summary

The annual plan was decided before the full team was even hired and was very eager and optimistic. Many of the objectives will not be achieved due to team velocity and newer prioritization, but we have still delivered some value and anticipate continued success over the next six months. 🎉

Over the past six months we've made some small improvements to AbuseFilter and AntiSpoof and are currently in development on the Interaction Timeline. We've also made progress on work not included in these objectives: some Mute features, as well as allowing users to restrict which user groups can send them direct emails.

Over the next six months we'll conduct a cross-wiki consultation about (and ultimately build) Blocking tools and improvements and will research, prototype, and prepare for development on a new Reporting system.

Q2 summary

We were a bit ambitious, but we're mostly on track for all our objectives. The Interaction Timeline is on track for a beta launch in January, the worldwide Blocking consultation has begun, and we've just wrapped up work on stronger email preferences. 💌

We decided to stop development on the AbuseFilter but are ready to enable ProcseeBot on Meta Wiki if desired by the global community. We've also made strides in how we communicate on-wiki, which is vital to all our successes.

Q3 preview

From January-March our team will work on getting the Interaction Timeline to a releasable shape, will continue the blocking consultation and begin development on at least one new blocking feature, and begin research into an improved harassment reporting system. 🤖

Thanks for reading! — Trevor Bolliger, WMF Product Manager 🗨 01:29, 20 December 2017 (UTC)

Do I understand correctly that an "interaction timeline" tool has become the main focus of the project for an extended number of months? It's a bit weird: the idea that interaction history between two users has such a prime importance makes it look like we're encouraging users to get personal or that conflict resolution is actually a divorce tribunal. --Nemo 13:58, 26 December 2017 (UTC)
'Evaluation' is one of the four focus areas for our team's work, in addition to Detection, Reporting, and Blocking. We have found that many reported cases of harassment are so complex that administrators or other users will not investigate or get involved because it is too much of an (often thankless) time commitment. We believe the Interaction Timeline will decrease the effort required to make an accurate assessment so more cases will be properly handled. More information on what led us to prioritize this project can be found at Community_health_initiative/Interaction_Timeline#Project_information. — Trevor Bolliger, WMF Product Manager 🗨 18:42, 2 January 2018 (UTC)

Pet (stalking) projectsEdit

One problem that seems to pop up quite often is that some otherwise good user has a pet project, which sometimes is about stalking some person off-wiki. Often that person has done something semi-bad or fully stupid. It is very tempting to give examples, but I don't think that is wise. Those contributors seem to focus more on collecting bad stuff about the persons in their biographies than on writing real biographies. Asking the users to stop that behaviour usually does not work at all. Giving the person a topic ban could work, but initiating a process like that would create a lot of anger and fighting.

So how can such a situation be solved? You want the user to continue, but not to continue with stalking the off-wiki person, and in such a way that you don't ignite further tension. Now this kind of situation can be solved by blocking the user, but I don't believe that is what we really want to do.

I've been wondering if the situation could be detected by inspecting the sentiment of the page itself, as it seems like those users use harsh language. If the language gets too harsh, then the page can be flagged as such, or even better, the contributions in the history can be flagged. Lately it seems like some of them have moderated their language, but shifted to cherry-picking their references instead. That makes it harder to identify what they are doing, as it is the external page that must be the target for a sentiment analysis. In this case it is the external page that should somehow be flagged, but it is still the user that adds the questionable reference.

Another idea that could work is to mark a page so it starts to use some kind of rating system, make it possible for any user to activate the system, and then make it impossible for the involved stalking user to remove it. Imagine someone turns the system on, and then it can only be turned off by an admin when the rating is good enough. There would be no user to blame; someone has simply requested ratings on the page. It would be necessary to have some mechanism to stop the stalking user (or friends) from gaming the system. A simple mechanism could be to block contributing users from giving ratings simply by inspecting the IP address. The weight of a given rating should be according to some overall creds, so a newcomer would be weighted rather little while an oldtimer would be weighted more.

Both could be merged by using the sentiment rating as a prior rating for the article. Other means could also be used to set a prior rating. — Jeblad 01:52, 27 December 2017 (UTC)
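As a rough illustration of how these two ideas might fit together, here is a minimal sketch. Everything in it is a hypothetical assumption for discussion purposes: the harsh-word lexicon, the flagging threshold, and the log-based experience weighting are illustrative stand-ins, not an existing MediaWiki feature or a real sentiment model.

```python
import math

# Hypothetical lexicon; a real system would use a trained sentiment model.
HARSH_WORDS = {"idiot", "liar", "fraud", "disgraced"}

def harshness(text):
    """Fraction of words in the text that appear in the harsh-word lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in HARSH_WORDS for w in words) / len(words)

def flag_revision(text, threshold=0.05):
    """Flag a revision for review if its harshness exceeds the threshold."""
    return harshness(text) > threshold

def weighted_rating(ratings, prior=3.0, prior_weight=2.0):
    """Combine (rating, rater_edit_count) pairs into one page score.

    Veterans count more via a slowly growing log weight, and a prior
    rating (which could be derived from the sentiment score) anchors
    the result so a handful of ratings cannot swing it too far.
    """
    num = prior * prior_weight
    den = prior_weight
    for rating, edit_count in ratings:
        weight = math.log10(10 + edit_count)  # newcomer ~1.0, oldtimer grows slowly
        num += rating * weight
        den += weight
    return num / den
```

The merge Jeblad describes would amount to feeding a sentiment-derived score in as the `prior` argument, so a page that already reads harshly starts from a lower baseline rating.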

@Jeblad: I moved this section here from Talk:Community health initiative/Blocking tools and improvements because it was not on-topic about blocking. It has more to do with other areas of our work, such as Detection or Reporting.
Aside from harassment, I agree that we could use deeper automated content analysis and understanding across all our wiki pages. Which pages are too complex for a standard reading level? Which pages seem promotional (and not encyclopedic)? Which pages are attack pages (as you suggested)? This type of system is outside of our scope, as our team has no Natural Language Processing software engineers.
The AbuseFilter feature is often used to identify unreliable references and/or flag blatant harassing language, but we found that blatant harassment is far less common than people using tone or dog-whistle words to harass or antagonize another user. — Trevor Bolliger, WMF Product Manager 🗨 01:52, 3 January 2018 (UTC)

Reporting System User InterviewsEdit

The Wikimedia Foundation's Anti-Harassment Tools team is in the early research stages of building an improved harassment reporting system for Wikimedia communities, with the goals of making reports higher quality while lessening the burden on the reporter. Interest in building a reporting tool has been expressed in surveys, IdeaLab submissions, and on-wiki discussions, from movement people requesting it to us as a team seeing a potential need for it. Because of that, Sydney Poore and I have started reaching out to users who have expressed interest over the years in talking about harassment they've experienced and faced on Wikimedia projects. Our plan is to conduct user interviews with around 40 individuals in 15-30 minute interviews. We will be conducting these interviews until the middle of February, and we will write up a summary of what we've learned.

Here are the questions we plan to ask participants. We are posting these for transparency; if there are any major concerns we are not highlighting, let us know.

  1. How long have you been editing? Which wiki do you edit?
  2. Have you witnessed harassment and where? How many times a month do you encounter harassment on wiki that needs action from an administrator? (blocking an account, revdel edit, suppression of an edit, …?)
  3. Name the places where you receive reports of harassment or related issues? (eg. arbcom-l, checkuser-l, functionaries mailing list, OTRS, private email, IRC, AN/I,….?)
    • Volume per month
  4. Name the places where you report harassment or related issues? (eg. emergency@, susa@, AN/I, arbcom-l, ….?)
    • Volume per month
  5. Has your work as an admin handling a reported case of harassment resulted in you getting harassed?
    • Follow-up question about how often and for how long
  6. Have you been involved in different kinds of conflict and/or content disputes? Were you involved in the resolution process?
  7. What do you think worked?
  8. What do you think are the current spaces that exist on WP:EN to resolve conflict? What do you like/dislike? Do you think those spaces work well?
  9. What do you think of a reporting system for harassment inside of WP:EN? Should it exist? What do you think it should include? Where do you think it should be placed/exist? Who should be in charge of it?
  10. What kinds of actions or behaviors should be covered in this reporting system?
    • an example could be doxxing or COI or vandalism etc

--CSinders (WMF) (talk) 19:16, 11 January 2018 (UTC)

Translation: <tvar|audiences> is brokenEdit

My translation shows <tvar|audiences> under Quarterly goals as a broken template. Any fix available? --Omotecho (talk) 15:57, 31 January 2018 (UTC)

I think it is fixed now. :) Joe Sutherland (Wikimedia Foundation) (talk) 19:00, 31 January 2018 (UTC)

New user preference to let users restrict emails from brand new accountsEdit


Wikimedia user account preference set to not allow emails from brand new users

The WMF's Anti-Harassment Tools team introduced a user preference which allows users to restrict which user groups can send them emails. This feature aims to equip individual users with a tool to curb harassment they may be experiencing.

  • In the 'Email options' of the 'User profile' tab of Special:Preferences, there is a new tickbox preference with the option to turn off receiving emails from brand-new accounts.
  • For the initial release, the default for new accounts (once their email address is confirmed) is ticked (on), meaning they will receive emails from brand-new users.
    • Use case: A malicious user is repeatedly creating new socks to send User:Apples harassing emails. Instead of disabling all emails (which blocks Apples from potentially receiving useful emails), Apples can restrict brand new accounts from contacting them.

The feature to restrict emails on wikis where a user had never edited (phab:T178842) was also released the first week of 2018 but was reverted the third week of 2018 after some corner-case uses were discovered. There are no plans to bring it back at any time in the future.

We invite you to discuss the feature, report any bugs, and propose any functionality changes on the talk page.

For the Anti-Harassment Tools Team SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 00:43, 9 February 2018 (UTC)

English Wikipedia Administrators' Noticeboard/Incident Survey UpdateEdit

During the month of December, the WMF's Support and Safety team and the Anti-Harassment Tools team ran a survey targeted at admins about the English Wikipedia Administrators' Noticeboard/Incidents and how reporting harassment and conflict is handled there. For the past month of January, we have been analyzing the quantitative and qualitative data from this survey. Our timeline toward publishing a write-up of the survey is:

  • February 16th: rough draft with feedback from SuSa and Anti-Harassment team members
  • February 21st: final draft with edits
  • March 1st: release report and publish data from the survey on wiki

We are super keen to share our findings with the community and wanted to provide an update on where we are with this survey analysis and report.

--CSinders (WMF) (talk) 01:15, 9 February 2018 (UTC)

Auditing Report ToolsEdit

The Wikimedia Foundation's Anti-Harassment Tools Team is starting research on the ways harassment reports are made across the internet, with a particular focus on Wikimedia projects. We are planning to do 4 major audits.

Our first audit focuses on reporting on English Wikipedia. We found 12 different ways editors can report, which we divided into two groups: on-wiki and off-wiki reporting. On-wiki reporting tends to be incredibly public, while off-wiki reporting is more private. We've decided to focus on 4(ish) spaces for reporting that we've broken into two buckets, 'official ways of reporting' and 'unofficial ways of reporting.'

Official Ways of Reporting (all are maintained by groups of volunteers, some more ad hoc than others, e.g. AN/I)

  • Noticeboards: 3RR, AN/I, AN
  • OTRS
  • Arb Com Email Listserv
    • We've already started user interviews with ArbCom

Unofficial Ways of Reporting:

  • Highly followed talk page (such as Jimmy Wales)

Audit 2 focuses on Wikimedia projects such as Wikidata, Meta and Wikimedia Commons. Audit 3 will focus on other open source companies and projects like Creative Commons and GitHub. Audit 4 will focus on social media companies and their reporting tools, such as Twitter, Facebook, etc. We will be focusing on how these companies interact with English-speaking communities and their policies for English-speaking communities, specifically because policies differ country to country.

Auditing Step by Step Plan:

  1. Initial audit
  2. Write up of findings and present to community
    • This will include design artifacts like user journeys
  3. On-wiki discussion
  4. Synthesize discussion
    • Takeaways, bullet points, feedback then posted to wiki for on-wiki discussion
  5. Move forward to next audit
    • Parameters for the audit given to us from the community and the technical/product parameters

We are looking for feedback from the community on this plan. We anticipate gaining a deeper understanding of the current workflows on Wikimedia sites so we can begin identifying bottlenecks and other potential areas for improvement. We are focusing on what works for Wikimedians while also understanding what other standards and ways of reporting exist elsewhere.

--CSinders (WMF) (talk) 17:14, 2 March 2018 (UTC)

Just noticed this project through a mention in The Guardian. Where is the feedback page?
I was a 7-year editor until I finally lost it at constant Wikipedia harassment in 2014 and was booted. So glad to see this is happening. I'm still too behind on personal projects, plus another wiki I work on, to try again. But I use Wikipedia every day, so I'm glad for everything that improves its content! Carolmooredc (talk) 05:50, 16 March 2018 (UTC)

Critical essay about the Community health initiativeEdit

Community health initiative

Stanford prison effect

Prima facie, the Community health initiative ("CHI") places a substantial burden on Wikipedia administrators to both investigate[1] and enforce[2] anti-harassment policies on the Wikipedia project. However, as demonstrated by the Stanford prison experiment, those who perceive themselves to be in positions of authority are likely to abuse it, especially in the absence of effective oversight from higher-level authorities[3]. The entire CHI proposal assumes that administrators will not themselves be the individuals engaging in harassing behaviours. The proposal does not set out what a Wikipedia user with a grievance against an administrator who may have abused their position of authority can do to raise their complaint and seek a remedy, creating the impression that a reasonable person has no avenue of redress for abuse of power. This in turn strengthens the perception of unfettered authority among those who would seek to abuse it.[4]

Absence of victim support provisions

Finkelhor et al. (2000) found 31% of online users reported being very or extremely upset, 19% were very or extremely afraid, and 18% were very or extremely embarrassed by online harassment. Ybarra (2004) discovered a positive relationship between electronic bullying and clinical depression. At the extreme end of the scale, electronic bullying can be a predominant factor in suicide[5][6]. This suggests that those who are subject to electronic harassment are likely to need emotional support after an incident; however, the CHI proposal does not contain any provisions which help victims of harassment and bullying on Wikipedia find a pathway to assistance with treatment or management of the potential harm. This creates a purely reactive system which does little to address the negative impact of bullying on users.

Isolation and internalisation of abuse regulation

Those who experience harassment which causes them distress generally have the right to report it to the relevant policing authorities. In many countries, including the United Kingdom, there are robust systems in place to deal with harassment, which include the provision of support services[7]. Using established judicial procedures mitigates the risk of a Stanford prison effect, owing to the greater oversight at multiple levels within these systems. However, the CHI proposal does not mention the creation of machinery that would engage with legitimate judicial processes, and instead proposes a system of "internal abuse regulation" similar to that deployed by the Roman Catholic Church in response to high levels of abuse of children amongst its membership[8].

Moving the burden of proof

A new harassment reporting system that doesn't place the burden of proof on or further alienate victims of harassment

— Community health initiative, reporting

To those well versed in law, a statement which advocates shifting the burden of proving innocence of a criminal offence such as harassment onto the accused is alarming. The principle of ei incumbit probatio qui dicit, non qui negat ("the burden of proof is on the one who declares, not on one who denies") is enshrined in the presumption of innocence in Article 11 of the Universal Declaration of Human Rights.

The issue of defining harassment

Harassment, in most jurisdictions, is a criminal offence, especially when it causes alarm or distress. As a result, various sources of law, depending on jurisdiction, define what constitutes harassment. For there to be order in the enforcement of Wikipedia harassment policies, the term itself must first be unambiguously defined. The question of who or what defines the term is very important: establishing an interpretation that is exclusive to the application of Wikipedia policy creates the risk of Wikipedia moving from an encyclopedia to a non-authoritative dictionary, or even a pseudo-court. If the interpretation is to be sought elsewhere, then the question of where becomes relevant: will it come from British case law, US case law, a literal definition provided by Collins Dictionary, etc.?

Risk of becoming a pseudo-court

As mentioned above, for Wikipedia to undertake self-regulation with regard to harassment, it must first place itself in a position of authority to define the term itself, or select which outside source to use. It will then be incumbent on administrators to apply an interpretation of harassment when deciding whether a person is guilty of it, thus becoming judges of whether a person has committed a criminal offence. While no competent court would accept the judgement of a Wikipedia administrator in determining the guilt of an alleged offender, this still risks creating the impression among Wikipedia users that administrators exercise excessive powers which extend into ruling on criminal matters.

Putting Administrators at risk

Another important element which seems to have been omitted from the Community Health Initiative proposal is how Wikipedia administrators may be protected from harm inflicted on them in revenge for carrying out their duties regarding harassment. The World Wide Web is a vast expanse which exists beyond, below, around and above Wikipedia, and those who harass others online have access to all manner of tools which can be used to attack another person. The wise administrator would be all too aware of this, and that knowledge itself may influence their decisions on whether and when to act, especially when there are no systems of protection to fall back on.

Those who have engaged in serious harassing behaviours have demonstrated the mens rea and actus reus of a criminal offence, and there is little to nothing preventing that same person from escalating the severity of their criminality, especially on a platform which proposes internal solutions to resolving the conduct that do little more than offer the deterrent of a ban from Wikipedia.

This problem can also present itself between those with authority on the Wikipedia project. For example, if Administrator A notices Administrators B, C and D harassing user E, then Administrator A may be afraid to act if Administrators B, C and D appear to have the power to do them harm, such as to their reputation. As there are no clearly defined internal checks and balances mentioned in the CHI, this situation is more likely to occur.


  1. "Community health initiative - Meta". Retrieved 2018-08-05. 
  2. "Community health initiative - Meta". Retrieved 2018-08-05. 
  3. "Stanford Prison Experiment". Retrieved 2018-08-05. 
  4. Hodson, Randy; Roscigno, Vincent J.; Lopez, Steven H. (2006-11-01). "Chaos and the Abuse of Power: Workplace Bullying in Organizational and Interactional Context". Work and Occupations 33 (4): 382–416. ISSN 0730-8884. doi:10.1177/0730888406292885. 
  5. Raskauskas, Juliana; Stoltz, Ann D. (May 2007). "Involvement in traditional and electronic bullying among adolescents". Developmental Psychology 43 (3): 564–575. ISSN 0012-1649. PMID 17484571. doi:10.1037/0012-1649.43.3.564. 
  6. Hoff, Dianne L.; Mitchell, Sidney N. (2009-08-14). "Cyberbullying: causes, effects, and remedies". Journal of Educational Administration 47 (5): 652–665. ISSN 0957-8234. doi:10.1108/09578230910981107. 
  7. "Stalking and Harassment | The Crown Prosecution Service". Retrieved 2018-08-05. 
  8. Keenan, Marie (2013-07-18). Child Sexual Abuse and the Catholic Church: Gender, Power, and Organizational Culture. Oxford University Press. ISBN 9780199328970. 

Hey Wikipedia. I've written the collapsed essay regarding this CHI proposal. I hope that it offers another perspective on some issues which can be discussed further. Thank you. Dogs curiosity (talk) 21:35, 7 August 2018 (UTC)

Too much deja vu all over again! I'm having flashbacks! Wikipedia just a tiny corner of a much bigger problem. Sigh... Carolmooredc (talk) 21:55, 7 August 2018 (UTC)
Hello Dogs curiosity, thanks for your interesting essay. There is a variety of topics to think about. One point I want to respond to is regarding support for people. The Trust and Safety team, which is part of the Community health initiative, has developed a page with Mental health resources. In particular I recommend Samaritans' Supporting someone online page as a useful resource. As to some of your other points, I will come back to them when I have time to give a more thorough reply. SPoore (WMF) (talk) , Trust and Safety Specialist, Community health initiative (talk) 22:21, 7 August 2018 (UTC)

How often do blocked users attempt to edit? We measured!Edit

The new mobile block notice.

A few months back our team revamped the design of the mobile web "you are blocked" notice (screenshot to the right). Despite it being a tiny little pop-up on a tiny little screen, it was more complicated to implement than we thought. We had to abandon formatting the block reason because so many reasons are templates (e.g. {{schoolblock}}), so we decided to measure how often those notices actually display to see if it mattered at all.

Short answer: enough people see the "you are blocked" message on mobile to warrant fixing block reasons, but it's not urgent.

I've compiled some data about how often the "you are blocked" messages appear to our users on desktop and mobile on Community health initiative/Blocking tools and improvements/Block notices. There are some graphs and a table of raw data as well as some synthesized findings. (Take a look at the Persian Wikipedia chart, it's mesmerizing! 🌈) Here are the main takeaways:

  • Block notices appear very often on the largest Wikimedia projects, sometimes outnumbering actual edits. (6.2 million blocked edit attempts occurred on English Wikipedia over a 30-day period.)
  • The desktop wikitext editor accounts for the vast majority of impressions. (98% on English Wikipedia, 89.5% on Spanish, and 98.7% on Russian.)
  • The VisualEditor and mobile block notices may occur less frequently, but still display to thousands of people every month.

We scratched this curiosity itch (and per usual have more questions than before!) but this lets us know that yes — blocks are stopping people. LOTS of people. — Trevor Bolliger, WMF Product Manager 🗨 00:17, 19 January 2019 (UTC)

Detox deprecatedEdit

The Detox tool, cited by the initiative, has been found to produce racist and homophobic results, and has been deleted. Conclusions based on its use are likely to be flawed. DuncanHill (talk) 01:43, 26 June 2019 (UTC)

@DuncanHill: I can clarify -- nobody at the WMF is using the Detox tool. Our Research team collaborated with Jigsaw on training Detox in 2016-2017, and found some promising-looking initial results. In 2017, the Anti-Harassment Tools team tried out using Detox to detect harassment on Wikipedia, and we found the same kinds of flaws that you have. The tool is inaccurate and doesn't take context into account, leading to false positives (flagging the word "gay" as aggressive even in a neutral or positive context) and false negatives (missing more nuanced uses of language, like sarcasm). As far as we know, the model hasn't really improved. I believe there's a team at Jigsaw who are still investigating how to use Detox to study conversations at Wikipedia, but nobody at the WMF is using Detox to identify harassers on Wikimedia projects. I'm glad that you brought it up; I just edited that page to remove the outdated passages that mention Detox. Other surveys, studies and reports are sufficient to establish that harassment is a serious problem on our platform. -- DannyH (WMF) (talk) 19:10, 29 June 2019 (UTC)