Grants:IdeaLab/Identify and quantify toxic users

Identify and quantify toxic users
Toxic users appear to be a source of demotivation for conflict-averse users, who then discontinue editing. With such a measure, administrators and arbitration committees could target these users specifically with a set of behavioural rules that include possible sanctions.
theme: not sure
created on: 10:28, 11 July 2018 (UTC)

Project idea


What Wikimedia project(s) and specific areas will you be evaluating?


Is this project measuring a specific space on a project (e.g. deletion discussions), or the project as a whole?
All Wikimedia projects.

Describe your idea. How might it be implemented?


Provide details about the method or process of how you will evaluate your community or collect data. Does your idea involve private or personally identifying information? Take a look at the Privacy Policy for Wikimedia’s guidelines in this area.

Analyse users' block histories; analyse the occurrence of users on the page 'Administrator intervention against vandalism', both as reported users and as reporting users; and analyse rollback behaviour.
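A minimal sketch of how these three signals could be combined into a single per-user score (Python). The event lists would come from the block log, the AIV page history, and rollback data; the weights and the one-year window below are illustrative placeholders, not calibrated values:

```python
from datetime import datetime, timedelta

def toxicity_signal(block_log, aiv_reported, aiv_reporting, reverts_made,
                    window_days=365, now=None):
    """Combine the three signals named above into a crude per-user score.

    block_log     -- list of (timestamp, action) pairs; action is one of
                     "block", "reblock", "unblock"
    aiv_reported  -- timestamps at which the user was reported at AIV
    aiv_reporting -- timestamps at which the user reported others at AIV
    reverts_made  -- timestamps of rollbacks/reverts performed by the user
    All weights are illustrative, not calibrated values.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)

    blocks = sum(1 for ts, action in block_log
                 if action in ("block", "reblock") and ts >= cutoff)
    reported = sum(1 for ts in aiv_reported if ts >= cutoff)
    reporting = sum(1 for ts in aiv_reporting if ts >= cutoff)
    reverts = sum(1 for ts in reverts_made if ts >= cutoff)

    # Being blocked or reported weighs most; heavy reverting is a weaker
    # signal, and merely reporting others weaker still.
    return 5.0 * blocks + 3.0 * reported + 0.5 * reverts + 0.2 * reporting
```

Such a score would only rank users for human review; it is too crude to act on by itself.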

Other Possible Ideas (by Lonehexagon)

  • Allow users to be reported directly for harassment, breaking the rules, following users around and reverting their changes, or other extremely rude behavior. If a user is flagged by enough users, maybe 20 different people within a certain time period, it would trigger a community review of that user account, up to once a year per user.
  • If an admin is reported multiple times within a certain time period, have it trigger a community review just like a review for a new admin. This could happen up to once a year per admin.
  • Anybody who reports a user wouldn't be allowed to vote in that user's community review. They also wouldn't be allowed to report the same user more than once.
  • IP users would be completely unaffected. They would not be allowed to flag or vote to prevent abuse, but also wouldn't be able to get flagged.

Just some ideas to try to allow people to report problematic users, without any one user having much power. Lonehexagon (talk) 06:28, 21 July 2018 (UTC)
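The flagging rules sketched above could be prototyped roughly as follows (Python). The 90-day flag window is an assumption the idea leaves open, and `is_ip` only approximates "IP user" by checking whether the username parses as an IP address:

```python
import ipaddress
from datetime import timedelta

FLAG_THRESHOLD = 20                    # distinct reporters suggested above
FLAG_WINDOW = timedelta(days=90)       # assumed window; the idea leaves it open
REVIEW_COOLDOWN = timedelta(days=365)  # at most one review per user per year

def is_ip(username):
    """IP users can neither flag nor be flagged."""
    try:
        ipaddress.ip_address(username)
        return True
    except ValueError:
        return False

def review_triggered(flags, last_review, now):
    """flags: (reporter, timestamp) pairs against one flagged user.
    Returns True when a community review of that user should open."""
    if last_review is not None and now - last_review < REVIEW_COOLDOWN:
        return False                   # already reviewed this year
    recent_reporters = set()
    for reporter, ts in flags:
        if is_ip(reporter):            # IP flags are ignored
            continue
        if now - ts <= FLAG_WINDOW:
            recent_reporters.add(reporter)  # each reporter counts once
    return len(recent_reporters) >= FLAG_THRESHOLD
```

Using a set of reporter names enforces the one-report-per-user rule automatically; the cooldown check enforces the once-a-year limit.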

Are there experienced Wikimedians who can help implement this project?


If applicable, please list groups or usernames of individuals who you can work with on this project, and what kind of work they will do.

How will you know if this project is successful? What are some outcomes that you can share after the project is completed?


An improvement in new-user retention statistics?

How would your measurement idea help your community make better decisions?


After you are finished measuring or evaluating your Wikimedia project, how do you expect that information to be used to benefit the project?
Identifying the problematic behaviour of toxic users and managing it before more damage is caused.

Do you think you can implement this idea? What support do you need?


Do you need people with specific skills to complete this idea? Are there any financial needs for this project? If you can’t implement this project, can you scale down your project so it is doable?
I'm very sorry, I'm not a programmer.

Get Involved


About the idea creator


An author, mainly in the natural sciences; a member since 2014.


  • Volunteer I am familiar with the issue and can provide evidence, constructive ideas and links to related projects. For example, I recently provided input to Research:Civil Behavior Interviews. And I did much of the early work on the article The No Asshole Rule, which details the issue in a wider context. Andrew D. (talk) 13:01, 20 July 2018 (UTC)
  • Volunteer Come up with ideas on how to implement this Bennpenn777 (talk) 15:08, 20 July 2018 (UTC)
  • Volunteer I'd love to identify and tag these users Fire emblem fan 776 (talk) 06:08, 21 July 2018 (UTC)
  • Volunteer I would love to volunteer however I can. I think a friendly and welcoming community is crucial to Wikimedia's success. Lonehexagon (talk) 06:12, 21 July 2018 (UTC)
  • Volunteer Hey, I recently started editing Wikipedia more seriously, especially after seeing how it and the awesome Wiktionary were helpful in my language-learning activities. A possible idea I have for the project is to monitor the use of swear words and expletives in discussion areas. I think it is reasonable to assume that combative and toxic editors are more likely to use swearing against other users. A bot could be used to measure the amount of swearing and alert human volunteers whenever a user is using an abnormally large amount. The volunteer would then evaluate whether the swearing is intended to disparage other editors or is just incidental. If the editor is evaluated as toxic, an intervention follows whereby they are warned by an administrator (who is alerted by the volunteer) or, in extreme cases, even blocked. I am willing to work on a bot or other technical aspects if need be. Ido66667 (talk) 16:36, 22 July 2018 (UTC)
  • Volunteer Spread kindness. Seacolor88 (talk) 23:28, 23 July 2018 (UTC)
  • Volunteer   Oppose I would like to join to make sure that there is an opt-out: any attempt to quantify users is subjective. I would also encourage creating several ratings (a separate rating for content edits, for talk-page edits, for user-talk edits, etc.); several ratings may make it easier for others to see that a user's rating is subjective and not a measure of superiority, but even then I think this would be rather dangerous in a group as large as the English Wikipedia. Gryllida 01:15, 26 July 2018 (UTC)
  • Volunteer Your idea sounds good. I'm in. Krishna23456 (talk) 18:22, 27 July 2018 (UTC)
  • Volunteer Implementing this idea 6669726520656d626c656d (talk) 19:35, 30 July 2018 (UTC)
  • Volunteer A user was harassing me here recently, and when I tried to speak up about it, things got worse. So I would like to help other people avoid experiencing the same thing.
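Ido66667's bot idea above could be sketched like this (Python). The word list and the three-standard-deviation cutoff are illustrative assumptions; the bot only surfaces outliers for a human volunteer to evaluate, exactly as proposed:

```python
import re
from statistics import mean, pstdev

# Illustrative placeholder list; a real bot would need a curated lexicon.
SWEAR_WORDS = {"damn", "hell"}

def swear_rate(messages):
    """Fraction of words in a user's discussion messages that are swears."""
    words = re.findall(r"[a-z']+", " ".join(messages).lower())
    if not words:
        return 0.0
    return sum(w in SWEAR_WORDS for w in words) / len(words)

def flag_outliers(rates_by_user, z_cutoff=3.0):
    """Return users whose swear rate sits z_cutoff standard deviations above
    the community mean, for a human volunteer to review. The bot itself
    never warns or blocks anyone."""
    rates = list(rates_by_user.values())
    mu, sigma = mean(rates), pstdev(rates)
    if sigma == 0:
        return []
    return [u for u, r in rates_by_user.items() if (r - mu) / sigma > z_cutoff]
```

Flagging relative to the community distribution, rather than with a fixed count, avoids punishing incidental swearing that is common across the whole wiki.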



+1 - Hello Ghilt, but please do not forget the toxic administrators, referees, oversighters and bureaucrats (including former ones). In the German Wikipedia, I could name a few users who are characterised by a greed for power and are often the cause of conflict escalations. The ongoing confrontations around the topic of stumbling stones (Stolpersteine) are just one example, to this day. Good luck - Bernd - die Brücke (talk) 07:20, 17 July 2018 (UTC)

Good luck with your idea. -jkb- 18:44, 17 July 2018 (UTC)

  • It's a drag to have people leave, for whatever reason - even if they are minor contributors. Projects like this depend on cooperation. It takes time for someone to join the project, get acquainted with it and devote their time to the work. On the other hand, it's extremely easy to discourage, criticize and/or delete someone's work, mock it for an alleged level of unprofessionalism (which can be relative), etc. Each new contributor should be treasured, and every measure should be taken to keep them and even motivate them.
And I agree: there are admins in higher positions who are bad communicators, bad-tempered, and bad people in general. I don't even know how they got there. Is there that much desperation to get people to work? (A question especially relevant to international/local wikis that have fewer users.) I don't care how professional someone is if they spread negativity, confusion and sparks of conflict while they work. The good old "quantity or quality" issue. Measuring the work of the most responsible "inner" people (or anybody here) based only on the quantity of their input (while they may have a bad personality) is a technocratic, robotic approach, and we are humans. -Jozefsu (talk) 14:45, 20 July 2018 (UTC)
  • May this not go too far, or we'll end up banning everything that's non-conformist. Seba.ds (talk) 17:56, 20 July 2018 (UTC)
  • I've seen this problem over the years and I agree that it can scare away or demotivate many editors. It is very hard to fix, but worth trying. Codrinb (talk) 19:21, 20 July 2018 (UTC)
  • I agree. Toxic and rude editors are demotivating and harmful to this community. Jane955 (talk) 00:01, 21 July 2018 (UTC)
  • I agree with the above about the higher-ups. Wikipedia has become too strict and bureaucratic in the last few years. How many attempts were made to prove that Avicii had died before an admin finally accepted it? I understand erring on the side of caution, but Avicii's death had already been all over the media. We need checks and balances for higher-ups. PF4Eva (talk) 01:43, 21 July 2018 (UTC)
  • I quit editing Wikipedia for several years over a poorly resolved conflict about a silly interlinks dispute. Wikipedia definitely needs more attention to rules and guidelines on that. 01:45, 21 July 2018 (UTC)
  • I have been put off by a user who followed my edits and reverted them. His name is Eric. I tried to talk to him but he gave me a lecture instead about how I needed to understand wiki processes and editing. Kelly222 (talk) 03:24, 21 July 2018 (UTC)
  • I saw and, for a short while, experienced some toxic users. My usual strategy is to avoid these types of users and go along with whatever they want; however, watching a toxic user continue with his harassment and manipulation of other users saddens me. Rochelimit (talk) 04:41, 21 July 2018 (UTC)
  • I agree, such users discourage people from contributing to Wikipedia and spoil the atmosphere. MonsterHunter32 (talk) 04:56, 21 July 2018 (UTC)
  • Endorse and, as per Bernd - die Brücke, please do not forget the toxic administrators, referees, oversighters and bureaucrats. Lotje (talk) 05:07, 21 July 2018 (UTC)
  • Endorse When I was a new editor, I worked on a page for several weeks about a topic related to feminism. One day, an administrator came, tagged just about everything on the page as having some sort of problem (although everything was cited) and started reverting most of my edits as soon as I made them, even when I was trying to improve the problems that were pointed out. For a while, I started speaking with the admin on their Talk page to get permission before making changes. After several rejections of a new lede, I got another editor to help me write an impartial lede in hopes that having multiple people work on it would prevent a revert. I proposed the new lede on the article's talk page, and tagged the admin specifically. They did not make any comments or objections to the proposed change for over a week, but as soon as I made the change, they reverted it within minutes. I felt so hurt because I'd worked so hard on the lede for weeks, and thought I had at least tacit approval from the admin, but they reverted my edit almost instantly. After that, I abandoned the article completely and I've never gone back to even see what's going on with it. The admin won. To this day it was the most negative experience I've ever had on Wikipedia. Lonehexagon (talk) 06:09, 21 July 2018 (UTC)
  • Being able to identify problem users based on anonymous flags from many other users would be a good feature to have. The alternative is following them around and gathering evidence of their misbehavior manually, filing official reports, becoming a target of their harassment, etc. which is tedious and tiresome and not good for the non-confrontational. — Omegatron (talk) 00:57, 22 July 2018 (UTC)
  • Strongly endorse I don't know if this will help new editors, but it will certainly help retain experienced editors. I have been editing for 11 years and the few times I have become discouraged it is because of a tiny number of long-term editors who create entropy way, way out of proportion to their numbers. I've never run into a bad admin, but I recognize the toxic rogue editor gallery described above: the editor who doesn't write any content himself but makes a career of reverting others' edits on trivial grounds; the single-issue POV editor who won't give up; the chronically bad-tempered, curt, confrontational editor who gets his way because good editors don't want to waste time engaging with him. I dealt with an editor who WP:OWNED a series of articles for 5 years, adding pseudoscientific WP:fringe content which was sourced by fringe books in the bookstore he owned (WP:COI). He stayed below the radar, quietly reverting every effort to correct the article. Because it was a content dispute about a very technical esoteric subject and he always copiously sourced his edits, no admin wanted to wade into it. It took 5 of us editors 5 years and two ANIs to get him to stop. Toxic editors consume enormous amounts of time and patience of good editors. If they could be stopped or, better, converted, it would hella improve retention rates. Part of the problem is the uneven distribution of admins among subject areas. A process like this one, to block toxic editors without requiring a decision by an outside authority, is what we need. Chetvorno (talk) 05:03, 22 July 2018 (UTC)
  • I think it is necessary to facilitate a check mechanism by which somebody who is not acting within the expected conduct rules can be corrected or in the worst case, expelled. Strong endorsement. Prateek Pattanaik (talk) 16:52, 22 July 2018 (UTC)
  • support - i would suggest a pivot to metrics about toxic behavior, with tactics to counter the behavior. i fear targeting users will feed the adversarial environment; better to de-escalate. Slowking4 (talk) 01:16, 23 July 2018 (UTC)
  •   Weak support This seems like a good train of thought, though I caution against abusive applications of this. Seems like it could become a morality score.  — Mr. Guye (talk) (contribs)  03:05, 23 July 2018 (UTC)
  • It seems like a no-brainer. Uumlout (talk) 13:55, 23 July 2018 (UTC)
  • I concur with most of the sentiments being expressed here. It is entirely too easy for editors to revert a deserving edit on presumed WP guidelines even though the judgement call is not supported. An alternative to summary reverts could be for the second editor to help correct the erroneous segment, making it a collaborative endeavour. Some collaboration does occur in sections, but not as frequently as deletions of one's work on the judgement call of another. Editors with a number of reversals in a predetermined amount of time should be subjected to edit evaluation. The Pudit (talk) 03:16, 28 July 2018 (UTC)
  • This idea has its value if it is embedded in Grants:IdeaLab/Editor rescue and does not have the purpose of banning these people. --Havang(nl) (talk) 11:14, 28 July 2018 (UTC)
  •   Support Anything that reduces conflict and toxicity in a project is welcome, as long as it remains inclusive of those who are creating the conflict in the discussion (or their ideas, which may be valuable) and the resulting decision is seen as a consensus. Ouranista (talk) 15:34, 28 July 2018 (UTC)
  •   Weak support I agree toxic users are probably Wikipedia's worst problem. However, measurement is a gnarly problem when the aim is to quantify things we don't want in the community. On one hand I'm thinking of something like Reddit's up/down votes for page changes; I see we can already "thank" an editor for changes in the article history. On the other hand, we'll inevitably get people gaming the system for upvotes and gold, which might well be worse. I also wonder if anonymity is a problem; I know there's furious debate about it, but it remains true that toxic online behaviour is often due to anonymity. I would wager that the same behaviour in a pub, much less a room full of people trying to work together on something, would be resolved fairly quickly...! So this boils down to psychology, accountability, and the eternal problem of trying to re-create or facilitate normal, real-life human social contracts and behaviour on online platforms. Gosh, sorry to rabbit on... Jonathanischoice (talk) 23:23, 28 July 2018 (UTC)
  • My contribution here was moved to the discussion page, which already makes me wonder whether this initiative is up to facing the real problems. Is it really necessary to censor or clean this page of everyone who speaks about them in a rather concrete way? As an effective measure, I suggest depriving users who abuse administrative power of a certain status for some years, since that status did nothing good for them or for anybody else! Unfortunately, we have to wait until it has already allowed them to do a fair amount of damage here. I am quite confident it will help them to improve their communicative skills without excluding them from constructive contributions (which are even possible for somebody who is not registered here). I have observed that quantitative research here is often meant as an insufficient substitute for a rather concrete measure. I am not against it, but it might be a way to check how serious the situation really is (for the platform in a certain language). But this is about how to interpret a set of data, and usually it has to prove what was originally intended. Is this really so much more objective? A discussion about strong endorsement is rather about prompt action and how to organise it. In the end you have to deal with human beings (who cannot be reduced to those of their actions which are easy to anticipate). It can hardly be avoided. Platonykiss (talk) 15:01, 31 July 2018 (UTC)
  •   Support As one of the most active mentors in the German Wikipedia, I think this is a very important problem. I read here about spoiled articles. Every author on the wiki gives his article as a present to the community; therefore it must be handled like one. So I think we should grant the main author of a comprehensive article the right to have his/her self-created work deleted by an oversighter - with reference to the cause (the username of the causer, etc.) - if the author wants to take back his present (in cases where it has been handled badly). This would take away the toxic users' pleasure in tormenting, and identify them at the same time. How else should a user - especially a new user - defend against the attack of two or three others? You can hardly expect the author to permanently stand by his own, now botched, article here and still work happily. Greetings Redlinux (talk) 11:16, 3 August 2018 (UTC)
  • I am sure the bully author of a certain article might develop into a toxic user as well; concerning the German Wikipedia in particular, this is a well-known problem. In general, Wikipedia uses very modern technology (usually better developed than most of the professional encyclopaedia portals I know). Thus, authors can inform the world about available resources like digitised manuscripts or new publications in the field. I never saw myself as alone here; I also asked colleagues to have another look to develop an article, and somehow it is a tradition that you build on former paragraphs by earlier authors (with the exception of those which are simply not correct, but even in that case their thought is still there!). But I understood (please correct me if I am wrong!) that we are talking, as far as strong endorsement is concerned, about more radical problems: groups of users who even pay others, just for propaganda aims, to get control over a whole portal in a certain language, and who voluntarily support the exclusion of other users who might have brighter visions. It is not about the usual conflict or an argument between two users who are working on one article at the same time (quite naturally it is not so easy to collaborate in a symbiotic way, but nobody says it is impossible; just try it out). In such a case you will argue with the other user. I mean something else: even doubting the correct use of a template which has the long-term effect of deleting certain articles can sometimes cause very weird reactions, because a certain propaganda does not like the public being informed about certain issues (which is exactly what a free encyclopaedia stands for!). Such a group tries to establish a kind of censorship, because the usual censorship of blocking the whole portal in a certain country no longer works. It is enough that certain administrators exclude you immediately, and what is fishy about them is that they do not even try to communicate with you.
It is not about somebody who is angry with you because she or he changed an article that you regard as yours. I am talking here about users to whom you have to communicate that there is a certain article, and that if they try to delete it, they will be observed and might get in trouble! The usual way is to improve articles, but these users do not do anything of that kind; they are not even interested in talking about the article. Platonykiss (talk) 12:02, 7 August 2018 (UTC)
  • Toxic users delete contents, toxic users block and ban those who create it, and most of all toxic users write rules that enable all of it. The infallible immortals who run the house. Identified and quantified ... then what?! Retired electrician (talk) 14:14, 7 August 2018 (UTC)
  • I advise using a platform where you know the language well but where you have rarely been active (because otherwise you end up with misunderstandings, which is the cardinal argument of those users who are notoriously abusive). Choose a delicate article about something that really stinks in that country, and you will be surprised how easy it is to catch a toxic user like a big fish. It does not matter when they ban you, because Wikipedia is transparent enough that you can watch every insane step they take, and I would also advise against investing too much energy in any kind of fight after they have already done the stupid thing we expected them to do! But your description is precise: they will behave like the headmaster and ban you, and I also believe that the majority of them is male (sorry for being so sexist, but I still think that Wikipedia suffers from a lacking gender balance here; just as a hint about who might be a suitable substitute as administrator). I wrote a precise protocol (by the way, in addition to the usual quantitative research, which missed its point), because I asked: if you think that so many users try to get anonymous access, what exactly is their motive for doing so (not what is the next step to prevent it, because what kind of free encyclopaedia is this supposed to be if the only answer is to put barbed wire around it)? It is fishy, whatever way you look at it! The fish I caught followed me even here to Meta-Wiki and tried to delete my protocol. Hence, I saved it temporarily on my discussion page (you might have a look there). My suggestion is, if quantitative research already points at problems, to patrol as a group of users (why not independently?) in order to make a systematic list of articles which toxic users have marked for deletion, and to go exactly to those articles. The next step is to make a list of those fishes in order to prevent them from doing further damage.
As I said, in less drastic cases it is enough to deprive them of any administrative privileges, so that they can learn to communicate with other users at eye level; but if they even try to hack your account (as happened in my case), I suggest this is already a category of irregular behaviour for which such a user should be excluded under his/her former identity and, more importantly, from the administrative privileges connected with it (at least for some years). But these rules have already been established; it is not necessary to reinvent the wheel. Platonykiss (talk) 16:22, 7 August 2018 (UTC)
  • Too much vandalism, sometimes by the same people in disguise. Saederup92 (talk) 16:57, 7 October 2018 (UTC)

Expand your idea


Would a grant from the Wikimedia Foundation help make your idea happen? You can expand this idea into a grant proposal.

Expand into a Rapid Grant

No funding needed?


Does your idea not require funding, but you're not sure about what to do next? Not sure how to start a proposal on your local project that needs consensus? Contact Chris Schilling on-wiki at I JethroBT (WMF) (talk · contribs) or via e-mail at cschilling for help!