Grants talk:IdeaLab/Identify and quantify toxic users

Automatisation

Yes, great idea - on the one hand, Wikipedia lacks proper communication structures; this is actually one of the main critiques coming from newbies etc. Communication means that people get involved, discuss and interact. Why on earth should it be helpful to have a programme/app/bot/whatever analyzing humans on WP? I am strictly against this kind of automatisation: you can't label someone as "toxic" by relying on software summaries. The results would have to be analyzed anyway. And honestly: I am quite convinced that there are a couple of users, at least on the German WP, who do this job properly just by reading edits ;). --AnnaS.aus I. (talk) 14:55, 17 July 2018 (UTC)

Hi Anna, imho this idea is meant as an indicator that can be used as an additional advisory tool by arbcoms and admins, not as an automatism. All the best, --Ghilt (talk) 15:18, 17 August 2018 (UTC)

Opposition

-1. In the run-up to any promotion of this idea, Ghilt should clarify which data of a user will be processed, and in which way. Is it planned to identify styles of communication and collaboration? Such a tool can seriously influence the fate of authors. In my opinion it will partly replace human engagement with a topic. I think that parts of the responsibility of an admin etc. will be transferred to an algorithm. It could be more promising to organize further training for admins and interested user groups. In this way personal competence can be developed. Belladonna

Hi Belladonna, well, we'll have to find out which quantifiable parameters are useful in describing toxic behaviour. --Ghilt (talk) 15:15, 17 August 2018 (UTC)

Different

Hello! Sorry, this is a very bad idea! When I see the initiator, I can already see the final version! But nobody in the wikiversum should be judge, attorney and executioner in one person! Where are the users here, or their lawyers? I'm the German no. 87 TU - but also no. 93 by most edits in 12 years! That's the mistake! We are not POISON, we are the WIKIPEDIA!!! Our work is a very big part of the project. Poison kills! Fertilizer also - who is the expert who can differentiate between these two things? OK, some people wish for their works the silence of libraries, but this is a step towards the silence of a grave. The last one closes the coffin. Oliver S.Y. (talk) 13:41, 19 July 2018 (UTC) PS: Sorry, I don't speak English, but these few words had to be written

@Oliver S.Y.: These comments are somewhat difficult for me to understand, but I do comprehend that you have serious concerns about the consequences of this idea. However, there's not yet a plan in place or a method by which these users would be identified, so this reaction seems somewhat premature. I'd also like to point out that in some communities there are already quantifiable ways that problematic behavior is identified. On some projects, for example, editors are not supposed to revert each other more than a specific number of times regarding article content, because this is disruptive in most cases. There is consensus that this measurement is an appropriate way to gauge problematic behavior, so it is fair to say that there may be other ways to measure this. I JethroBT (WMF) (talk) 15:44, 20 July 2018 (UTC)
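For illustration, here is a minimal sketch (in Python) of how such a revert limit could be checked mechanically. It assumes revert events have already been extracted as (page, editor, timestamp) records; the three-revert threshold, the 24-hour window, and the data format are assumptions made for the sketch, not any project's actual rule engine.

```python
from collections import defaultdict
from datetime import datetime, timedelta

REVERT_LIMIT = 3              # illustrative threshold, e.g. a "three-revert rule"
WINDOW = timedelta(hours=24)  # illustrative sliding window

def editors_over_limit(reverts):
    """Return (editor, page) pairs exceeding REVERT_LIMIT within any WINDOW.

    `reverts` is an iterable of (page, editor, timestamp) tuples, one per
    revert; deriving these from real revision histories is not shown here.
    """
    by_key = defaultdict(list)
    for page, editor, ts in reverts:
        by_key[(editor, page)].append(ts)
    flagged = []
    for key, times in by_key.items():
        times.sort()
        for i, start in enumerate(times):
            # count the reverts inside the window opening at `start`
            if sum(1 for t in times[i:] if t - start <= WINDOW) > REVERT_LIMIT:
                flagged.append(key)
                break
    return flagged

sample = [("Some article", "UserB", datetime(2018, 7, 20, h)) for h in (9, 11, 14, 16)]
print(editors_over_limit(sample))  # -> [('UserB', 'Some article')]
```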

Comments from DragonflySixtyseven

There's a serious problem: toxic individuals rarely consider themselves to be toxic; rather, they consider that those whom they vex are overreacting, and that those who try to mediate and discipline are the toxic ones. DS (talk) 11:46, 20 July 2018 (UTC)

Hi DragonflySixtyseven, interestingly, when this topic was discussed on de.wp, my two main opposing discussion partners were imho users with problematic behaviours. So some of them seem to be aware... --Ghilt (talk) 15:21, 17 August 2018 (UTC)

Comments from Dweller

Strong oppose Bad idea. I can't think of a more divisive idea. It'll become an utter drama-fest. --Dweller (talk) 11:54, 20 July 2018 (UTC)

Hi Dweller, I disagree; this idea is intended to deliver an additional argument, one which is rather objective. All the best, --Ghilt (talk) 15:23, 17 August 2018 (UTC)

What is toxic behavior

The core problem is to figure out what toxic behavior is, and how it can be identified and then measured. It isn't as simple as what someone likes or dislikes; that is a subjective measure. What is needed is some kind of objective measure, and that isn't simple to implement.

One possibility is to measure whether user A keeps contributing to the same page after an encounter with user B. If such statistics are collected for all the users B encounters, and the overall trend is negative, then that is an indication that something weird is going on. If the users B encounters leave the project altogether, the indication could be even worse.

These statistics aren't as simple as they might seem: a user welcoming new editors will do so on a lot of pages where there will never be a reply. That would produce statistics indicating the welcoming user is highly toxic, because there is no continued activity from the welcomed users.

Several proposed measures of what makes a toxic user have similar flaws. I don't say a measure of toxicity can't be found; I just say it is difficult. — Jeblad 15:11, 20 July 2018 (UTC)
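To make the above concrete, here is a minimal sketch of the retention-after-encounter statistic in Python, including a crude correction for the welcome-message flaw Jeblad describes. The interaction-log format, the User-talk filter, and the minimum sample size are all assumptions made for the sketch.

```python
from collections import defaultdict

def retention_after_encounter(interactions, min_encounters=10):
    """For each user B, the fraction of encountered users A who kept editing
    the same page afterwards.

    `interactions` is an iterable of (page, user_a, user_b, edits_after)
    tuples, where edits_after counts A's edits to the page after first
    interacting with B there. User-talk pages are skipped as a crude filter
    for welcome messages, which would otherwise make greeters look toxic.
    """
    kept = defaultdict(int)
    total = defaultdict(int)
    for page, user_a, user_b, edits_after in interactions:
        if page.startswith("User talk:"):
            continue
        total[user_b] += 1
        if edits_after > 0:
            kept[user_b] += 1
    # Only report users with a minimum sample size; a handful of
    # encounters says nothing either way.
    return {b: kept[b] / total[b] for b in total if total[b] >= min_encounters}
```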

Hi Jeblad, I agree. Maybe ask Jimbo what he meant here, maybe ask the arbcoms and some psychologists? All the best, --Ghilt (talk) 15:11, 17 August 2018 (UTC)

Huge problems with admins

Wikipedia has turned into a very closed circle of admins who set their own interpretation of the rules and use their vast knowledge thereof to expel users they don't like. Of course those users are labeled "toxic" and are quickly and effectively banned using loopholes in the rules. I encountered this behavior in both the Russian and the English Wikipedia. I was provoked and banned quickly and easily when I tried to raise concerns about how Wikipedia is managed. The whole idea of a collaboration of enthusiasts is long gone and only exists in Jimbo's imagination. It is in fact a very corrupt organization where people fight for power, scheme and eliminate opposition. And the saddest thing: I don't even expect to get any response to this. When I told the Russian admins that I'd complain to someone above them, they just laughed. Wikipedia is ruled by people who do very little for the encyclopedia and therefore have lots of spare time to write hundreds of pages on "meta". People who actually want to write articles are drowned in endless "discussions" that only waste their time and energy. Admins have enormous power and are not controlled by anyone. Everyone who is supposed to oversee them is elected by them and from them. There is absolutely NO oversight of admins' actions. And that's what kills it. Wikipedia has become the laughing stock of the world. Anyone citing Wikipedia as a source always adds something like "I'm sorry it's from Wikipedia". It has lost any credibility. As an idea of friendly collaboration the project is DEAD. It has been for a very long time. Le Grand Bleu (talk) 21:20, 20 July 2018 (UTC)

@Le Grand Bleu: I can appreciate that you've had a profoundly bad experience with Wikipedia relating to administrators, and I agree that the conduct of administrators is relevant to the topic here, but I don't think this kind of editorial commentary on Wikipedia is particularly helpful here. Discussion here should be focused on commenting on and improving the idea. There are other venues to discuss what is or is not happening to Wikipedia more broadly. I JethroBT (WMF) (talk) 21:39, 20 July 2018 (UTC)
I saw an invitation to express my views on the "community". I followed some links and ended up here. Being permanently banned from said community, I have limited means to express it anywhere else. If it doesn't belong here, please feel free to just delete it. Like I said, I don't expect to have any proper response. Splitting money is far more important, I understand. Le Grand Bleu (talk) 18:21, 21 July 2018 (UTC)
@I JethroBT (WMF): Hi Jethro, can you give any pointers to where the problem of administrators' behaviour can be discussed? I'm talking about a general trend rather than reporting a specific admin. Thanks. Great floors (talk) 18:17, 25 July 2018 (UTC)
@Great floors: I think if it's about a general trend, and you have an idea for how to evaluate or measure how and when that behavior occurs on a project or a particular space (like in deletion discussions for instance), that kind of submission is welcome in this campaign. I am reluctant to permit open speculation about admin behavior in this campaign; these discussions tend to be unfocused, unproductive, and typically result in people insulting each other. The point of this campaign is for communities to decide on ways to gather data, evidence, or personal experiences so a more focused discussion can happen on local projects. I JethroBT (WMF) (talk) 19:26, 25 July 2018 (UTC)
@I JethroBT (WMF): I've no proposal for this campaign, and I agree that this is not the place for open discussion about admin behaviour, but on the topic of admins using procedural knowledge to end discussions or push contributors away you said "There are other venues to discuss [that]". I've been looking for such a venue for years; can you tell me where it is? Thanks. Great floors (talk) 23:56, 25 July 2018 (UTC)

You can't be serious?

There is no way on god's earth that a charitable foundation should be providing funding to 'research' aimed at labelling identifiable people as 'toxic contributors'. Not unless the objective is to lose charitable status, or to see how much money can be lost in lawsuits. Or both.

If you want to engage in research, find someone who understands how to do it without pre-judging the conclusions, without resorting to loaded terminology, and without violating the rights of individuals. 86.147.197.31 21:48, 20 July 2018 (UTC)

There is already ongoing research related to identifying disruptive behavior and evaluating its impact on users. This idea doesn't seem all that different in concept, though I agree the focus should be on behavior generally rather than on particular users. However, behavior always has a person attached to it, and I think the idea creator is accurate in saying there are contributors who exhibit obvious, long-term patterns of conduct that cause harm and waste volunteers' time and energy. Right now, this idea isn't in a state to be funded through our grant programs because it lacks important details, such as how this information (e.g. block user history) would be used differently than it is now; I don't think there is any need to be alarmed, but it is inappropriate to assert that this kind of conceptual research simply cannot be done, because it is already happening. I JethroBT (WMF) (talk) 02:37, 21 July 2018 (UTC)
I wasn't aware that the WMF had already funded 'research' aimed at confirming their preferred conclusions. Getting it wrong once is no reason to do it again. Proper research into 'toxicity' (whatever that is) would begin by asking whether there were structural reasons for undesired behaviour, rather than assuming it is an issue with individuals. That however would require proper open-ended research which could well result in uncomfortable conclusions, so I think it is safe to assume that the WMF will instead continue to fritter charitable donations away on pretend 'research' apparently based on the premise that one can find technological solutions to social problems. 86.147.197.31 18:49, 21 July 2018 (UTC)
Two comments: 1) If you have a different approach for research that you would like to undertake or suggest, then I'd recommend submitting an idea during this campaign that you could develop with other volunteers. Just based on my personal knowledge of current and existing research, structural factors seem like they could use more attention in how they inform editing behavior and interactions. 2) You're welcome to believe whatever you wish about Wikimedia Foundation staff and research. I can't say I agree with your characterization of the researchers or the work itself, but whether I agree with you or not is unimportant. What is relevant is that this is a discussion page of a volunteer's idea, not a platform to discuss the Wikimedia Foundation broadly. You need to contact the researchers involved with Research:Detox if you want to suggest a different approach. I JethroBT (WMF) (talk) 21:21, 21 July 2018 (UTC)
Ad hominem (and ... ad organizatum?) arguments aside, the anon actually has two very valid points that should be taken seriously here: you can't fix people with technology, and it may be the technology that's a large part of the problem. There is no doubt whatsoever that there are structural and institutional and community/subculture enabling factors at work here. It's too easy to filibuster/stonewall, the system is set up to give people the benefit of the doubt over and over again, people don't even have to create accounts to edit about 99% of pages, and we can't, under the assume good faith principle, even question the motives for disruptive and iffy behavior until it rises to a very severe level. It's rather as if the Christian fantasy of "turning the other cheek" (over and over again) has actually been implemented as a weird social experiment. As Wikipedia transitions into the next phase of the organizational lifecycle, this curious system, a relic of WP's earliest days, is necessarily going to finally break irreparably and have to be replaced. We'd be better off getting on with it. The community has already evolved, as an escape valve, consensuses like w:en:WP:DUCK, w:en:WP:SPADE, w:en:WP:ROPE, etc. – some ignoring of rules against the AGF principle – but these are just essays, and are not sufficient. Something has to make the system less attractive to, and less abusable as a playground by, people who are not really here to write an encyclopedia [or dictionary, or whatever the wiki in question is], nor competent to collaboratively do so. PS: Yes, I'm aware all these shortcuts don't pertain to this wiki and that one; but they tend to have equivalents in some form or another on other busy wikis.  — SMcCandlish ¢ >ʌⱷ҅ʌ<  17:40, 22 July 2018 (UTC)

What could go wrong with this idea?

In the Russian Wikipedia, most of the administration consists of deleters, and writers very rarely get administrative status. So the powerful deleters very often label writers who argue and disagree with them as "toxic" and block them. (Idot (talk) 06:32, 22 July 2018 (UTC))

This does seem a risky idea. It all depends on how it's implemented. Any new ability to get others given such a strong label as "toxic" is likely to be abused, for example by factions such as deletionists. If, though, the data is anonymised, with the only publicly visible data being something like the total number of "toxic" users recorded over time, then perhaps it would be an interesting metric to track along with other stats. FeydHuxtable (talk) 15:14, 22 July 2018 (UTC)
Without any objective measure of what a toxic user is, this will be abused. — Jeblad 10:43, 18 August 2018 (UTC)

Something based on objective stats might work, but more "drama-board" stuff could play into PoV hands

In response to Activist's ping on the grants page: I'm well aware of the problem. This proposal seems like a more formalized return of the now-defunct w:en:Wikipedia:Requests for comment/User conduct process (RFC/U), which may still have equivalents on other wikis (en.WP shut it down in late 2014, as it wasn't very effective, though that was mostly because nothing it did was binding; its findings were just used as weak evidence in later dispute resolution proceedings like w:en:WP:ANI, w:en:WP:RFARB, w:en:WP:AE, and other noticeboards). Something more useful could come of this, though it's already veering a bit in dubious directions.

I can't speak as to other wikis, but I know that en.WP does have a problem effectively dealing with more subtle long-term abusers of the system; w:en:WP:LTA doesn't even really get at this at all, being focused on vandals, trolls, really obvious PoV-pushers – mostly those returning with sockpuppetry after bans and indefinite blocks. The real problem for WP's long-term viability is the "civil-PoV" types, the "long-game" con, the "slow-editwar" approach, which sometimes plays out over years, even an entire decade. "Working" the system is an attractive goal for more and more parties, as WP's importance as a global information provider increases. Unfortunately, it's comparatively easy for someone with an agenda to make just enough constructive edits, to use just polite enough wording, to stop just short of edit-warring sanctions, to have just enough w:en:WP:SANCTIONGAMING tricks up their sleeves, that their overall viewpoint-pushing can go unaddressed year after year. The motivation varies topically – often nationalism, religion, racialism, fringe theories, political activism (left, right, and other), and commercial promotion. If it's done carefully, it might not even preclude them from adminship if they're particularly good at it and use a light touch early on.

But I'm not sure this proposal is the right approach. I don't care for the labeling and witch-hunting tone of this (which I take to be a first-draft proposal). Here on the talk page and in the grants page posts, people are already raising objections. One I've not seen articulated is that such a process/tool could easily be used for w:en:WP:POVRAILROADing. If you've already got a "civil PoV and slow-editwar" tagteam, against some actual encyclopedists whose patience is running thin and who are thus easily goaded into verbal explosions and other [not really] "disruptive behavior", you could use a process like this to carefully frame them.

There's the germ of a good idea in here, but I think it needs to be re-formulated. It can't be motivated by hunting down and branding "toxic users". The criteria have to be objective, e.g. based on administrative, arbitration, and community sanctions, which have not been successfully appealed or otherwise overturned. It should be based on auto-generated stats, not on someone "reporting" you. We already have noticeboards for that; they're very hit-or-miss, and they often punish the wrong party. (This is partly because manipulators understand the politics of the game, while fair-minded people have more of a sense of justice about it, which can trap them into a "protect the content by any means necessary" pattern, in which following the core content policies is held forth as an excuse for decreasingly civil or consensus-building action, a handy trap for PoV railroading. While I've written about this from a tongue-in-cheek perspective of "how not to get yourself in trouble", at w:en:WP:CAPITULATE, there is this other entrapment aspect to it.)

A successful proposal here will need to be something different from more noticeboard "drama". The base proposal seems compatible with doing something stats-based, but the "Other Possible Ideas" material really does just look like more drama.
 — SMcCandlish ¢ >ʌⱷ҅ʌ<  17:17, 22 July 2018 (UTC)

Comments in endorsements section

Hi folks-- the endorsements section is not the right place for extended discussion of ideas (this is why we have a discussion page), so I'm moving replies (with context for clarity) to this section here. Thanks, I JethroBT (WMF) (talk) 19:44, 25 July 2018 (UTC)

Hello Ghilt, but please do not forget the toxic administrators, arbitrators, oversighters and bureaucrats (including former ones). In the German Wikipedia, I could name a few users who are characterized by greed for power and are often the cause of conflict escalations. The ongoing confrontations around the topic of stumbling stones (Stolpersteine) are just one example, continuing to this day. Good luck - Bernd - die Brücke (talk) 07:20, 17 July 2018 (UTC)

Hi Bernd, this idea is not specific to any subgroup. Your example of the conflict surrounding the 'Stolpersteine' is imho not adequate, since there is one toxic user in the midst of it, Meister und Margarita. But this is off-topic here. --Ghilt (talk) 15:37, 17 July 2018 (UTC)

When I was a new editor, I worked on a page for several weeks about a topic related to feminism. One day, an administrator came, tagged just about everything on the page as having some sort of problem (although everything was cited) and started reverting most of my edits as soon as I made them, even when I was trying to address the problems that were pointed out. For a while, I started speaking with the admin on their Talk page to get permission before making changes. After several rejections of a new lede, I got another editor to help me write an impartial lede in hopes that having multiple people work on it would prevent a revert. I proposed the new lede on the article's talk page, and tagged the admin specifically. They did not make any comments or objections to the proposed change for over a week, but as soon as I made the change, they reverted it within minutes. I felt so hurt because I'd worked so hard on the lede for weeks, and thought I had at least tacit approval from the admin, but they reverted my edit almost instantly. After that, I abandoned the article completely and I've never gone back to even see what's going on with it. The admin won. To this day it was the most negative experience I've ever had on Wikipedia. Lonehexagon (talk) 06:09, 21 July 2018 (UTC)

This is a rather interesting usage pattern. User A makes some good-faith edits, and user B starts reverting, and continues to do so over time without providing any additional content, while user A provides content. This could be solved by a conditional temp ban of user B after a few reverts, given that the edits can't be classified as vandalism, blocking him from simply removing user A's contributions. That would force user B into providing content, or at least a meaningful dialogue of some sort. Not sure if it will work though. — Jeblad 14:22, 21 July 2018 (UTC)
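A rough sketch of how the pattern described above might be detected before any conditional temp ban is considered. The Edit record format and the thresholds are invented for illustration, and deriving revert relationships from real revision histories is the hard part, which is not shown.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Edit:
    author: str
    reverted_user: Optional[str]  # whom this edit reverted, if anyone
    bytes_added: int              # net content contributed by this edit
    marked_vandalism: bool        # was the reverted edit flagged as vandalism?

def should_trigger_review(edits: List[Edit], reverter: str, target: str,
                          max_reverts: int = 3) -> bool:
    """True when `reverter` has repeatedly undone `target`'s non-vandalism
    edits on a page while contributing no content of their own; the point
    at which a conditional restriction on reverting might be considered."""
    reverts = [e for e in edits
               if e.author == reverter
               and e.reverted_user == target
               and not e.marked_vandalism]
    own_content = sum(e.bytes_added for e in edits
                      if e.author == reverter and e.reverted_user is None)
    return len(reverts) > max_reverts and own_content <= 0
```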

I don't know if this will help new editors, but it will certainly help retain experienced editors. I have been editing for 11 years, and the few times I have become discouraged it was because of a tiny number of long-term editors who create entropy way, way out of proportion to their numbers. I've never run into a bad admin, but I recognize the toxic rogue editor gallery described above: the editor who doesn't write any content himself but makes a career of reverting others' edits on trivial grounds; the single-issue POV editor who won't give up; the chronically bad-tempered, curt, confrontational editor who gets his way because good editors don't want to waste time engaging with him. I dealt with an editor who WP:OWNED a series of articles for 5 years, putting in pseudoscientific WP:fringe content which was sourced from fringe books in the bookstore he owned (WP:COI). He stayed below the radar, quietly reverting every effort to correct the article. Because it was a content dispute about a very technical, esoteric subject and he always copiously sourced his edits, no admin wanted to wade into it. It took 5 of us editors 5 years and two ANIs to get him to stop. Toxic editors consume enormous amounts of the time and patience of good editors. If they could be stopped or, better, converted, it would hella improve retention rates. Part of the problem is the uneven distribution of admins among subject areas. A process like this one, to block toxic editors without requiring a decision by an outside authority, is what we need. Chetvorno (talk) 05:03, 22 July 2018 (UTC)

  • @SMcCandlish: There are clearly many paid editors who use generating toxicity as a tactic to keep those who oppose their agendas, tasks, client list members, etc., from participating in editing. Wikipedia's dogged resistance to outing paid or COI editors works against the health of the community when this is allowed, in the interests of preserving anonymity. Some balance needs to be established, or enthusiastic and competent editors will increasingly abandon the task because a few make it so inhospitable. Lee Fang busted them years ago, but I'm unaware whether any action has ever been taken to avoid or contain this problem. https://thinkprogress.org/koch-industries-employs-pr-firm-to-airbrush-wikipedia-gets-banned-for-unethical-sock-puppets-6570bbd615bd/ Activist (talk) 16:12, 22 July 2018 (UTC)
    I'll give some thoughts on the talk page. I think there are some subtleties that need to be addressed, but it's a bit long for what looks to be a voting section.  — SMcCandlish ¢ >ʌⱷ҅ʌ<  17:08, 22 July 2018 (UTC)

I think it is necessary to facilitate a check mechanism by which somebody who is not acting within the expected conduct rules can be corrected or, in the worst case, expelled. Strong endorsement. Prateek Pattanaik (talk) 16:52, 22 July 2018 (UTC)

Thank you for these very interesting observations. I have many years of generally positive experience as far as the English and other Wikipedia platforms are concerned, but I recently had a very strange experience at the Italian Wikipedia.
What struck me as odd was that certain administrators excluded me immediately (several times, and with long-term effect), but without following the usual protocol or offering any feedback to me as the excluded user (it was the combination of abusive administrative actions and the lack of exchange, as already mentioned here, that really surprised me). I admit that from the very beginning I had the impression that something was fishy, and what I experienced confirmed my suspicions exactly.
The scale included rudeness (attacking me with the pretext that I could not express myself, which is ridiculous, because I live in Italy, and as a foreigner you learn how to face any kind of rudeness), censorship, bullying other users with rules they do not really care about, up to two warnings (within an interval of two months) that other users had tried to hack my user account. I always try to de-escalate in a certain way, but this strategy also has its limits (and cannot be regarded as a general rule that works in the rare case of strong endorsement).
My point is that I would rather be excluded and observe what certain users do (after they have clearly overdone it; I personally do not like this either, and one must do a lot to get to this point). I told them from the very beginning that they should not feel too comfortable and unobserved. If people are paid to behave that way, that could be an explanation. But I regard this as a very serious problem which needs better suggestions than simply avoiding an ethical attitude! The project of creating a free platform for knowledge is great, and without any doubt an ethical one, but certain forms of organisation (even ones involving payment for behaving unethically) call for special strategies to face these problems effectively. And it cannot be done by just one user alone. Platonykiss (talk) 18:34, 23 July 2018 (UTC)

Being able to identify problem users based on anonymous flags from many other users would be a good feature to have. The alternative is following them around and gathering evidence of their misbehavior manually, filing official reports, becoming a target of their harassment, etc. which is tedious and tiresome and not good for the non-confrontational. — Omegatron (talk) 00:57, 22 July 2018 (UTC)

  • "Popularity contest" stuff will never work. Anyone working to enforce policies and guidelines in a controversial topic will simply get voted down by all the PoV pushers, while people who aren't editwarriors will be off elsewhere doing constructive non-drama things and not notice, thus not come to the defense of the right-acting editor about to be "PoV railroaded". The system you're proposing would probably get me banned within the month.  — SMcCandlish ¢ >ʌⱷ҅ʌ<  17:46, 22 July 2018 (UTC)Reply
I agree; in certain cases a concrete measure is required. The way most of the established members juggle rules and bother others with an internal "community bureaucracy", something the community never asked for, has become a very efficient way to convince newcomers to stay away from here forever. And that is exactly my concern. Strong endorsement is certainly a much more particular case and also needs individual and concrete measures, which ideally should be found by more than just one member. Platonykiss (talk) 15:13, 31 July 2018 (UTC)
I JethroBT (WMF), I am not sure it is wise to move all opposing comments to the discussion page. It now gives the impression that this isn't really an open discussion, but something that has already been decided. The only thing that remains is to get a positive vote for this as a good idea. In fact I wonder if this is a pretty bad idea, as it formulates no clear theory of how the toxic users should be identified and quantified; it only makes claims about toxic users. — Jeblad 13:56, 1 August 2018 (UTC)
@Jeblad: This is not the first time this concern has been raised, and I agree it would be a fair concern if submissions to IdeaLab operated like an RfC on English Wikipedia, but they do not. While endorsements do indicate the level of general interest in an idea, there are many other, often more substantial factors required for that idea to actually go anywhere, especially through our grant programs. Furthermore, concerns and criticism are better left on the discussion page, because this usually results in discussion and sometimes changes to the idea, whereas endorsements typically do not. As you say, this idea isn't really going anywhere until a more concrete plan around how this identification / quantification would work is defined and has had some community discussion around it. I JethroBT (WMF) (talk) 15:31, 1 August 2018 (UTC)
@Jeblad: One thing that could usefully come from this idea, however, is improving how this information is organized for, say, administrators. Block history is in one place, searching for how often someone has been reported to AIV is in another, looking at overall rollback behavior is possible but tedious, etc. So there may be something to go on with this idea related to consolidating this information about particular editors so it is easier to surface and form conclusions. I JethroBT (WMF) (talk) 15:53, 1 August 2018 (UTC)
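For illustration, here is a minimal sketch of that kind of consolidation against the public MediaWiki API. The logevents and usercontribs modules and the parameters shown are real API features; the summary format is invented for the sketch, and AIV report history has no dedicated API module, so it would have to be mined from the noticeboard's page history separately.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def conduct_summary(username):
    """Pull an editor's block log and rollback activity into one summary."""
    session = requests.Session()
    # Block log entries affecting this user
    blocks = session.get(API, params={
        "action": "query", "list": "logevents", "letype": "block",
        "letitle": f"User:{username}", "lelimit": "max", "format": "json",
    }).json()["query"]["logevents"]
    # Edits by this user tagged as rollbacks
    rollbacks = session.get(API, params={
        "action": "query", "list": "usercontribs", "ucuser": username,
        "uctag": "mw-rollback", "uclimit": "max", "format": "json",
    }).json()["query"]["usercontribs"]
    return {
        "username": username,
        "times_blocked": sum(1 for e in blocks if e["action"] == "block"),
        "rollback_edits": len(rollbacks),
    }

print(conduct_summary("ExampleUser"))
```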

Issues

A similar project on EN Wikipedia

Several years ago I started, and after a while stopped, a slightly similar project on the English Wikipedia. It was the most contentious thing I have ever done on wiki, and that's despite our attempts not to name and shame people, and our focus on an area where policy was fairly clear - we created a bunch of articles that did not meet the deletion criteria and watched their fate. If you are going to publicly identify toxic editors on any Wikipedia, I would advise that you rewrite the concept from the start and have a filter for good, bad and ambiguous behaviour. Remember that we have problems with people vandalising pages, adding copyright violations, spamming or changing to the "wrong" version of a language, so our systems actively encourage editors to follow such problematic editors and revert their edits. Obviously that becomes a problem if the reverted edits were ones that should not have been reverted, but people who do that are rare. Any project seeking to identify those who follow others and revert their edits needs to start from the knowledge that such behaviour is usually what we are encouraging. Similarly with AIV reports, though there is an easier metric there: you could look at people making AIV reports that administrators dismiss as false accusations of vandalism, but I wouldn't worry about those whose reports at AIV are not acted on because the IP vandal had gone quiet by the time an admin looked at them. I doubt there is much point looking for people who are reported at AIV; either they stop vandalising, or they continue vandalising till they get blocked.

Where I suspect much of the toxicity arises is in the areas where policy is not clear, or where ruthless editing is seen as necessary by a large part of the community. One such area on the English Wikipedia is the adding of unsourced material. Policy, especially where it concerns material on living people, is not keeping up with community norms. I suspect we could get consensus to reject on sight edits that replace cited information with uncited information; I'm sure there are plenty of community members who do this, and there are even some who will revert unsourced edits regardless. But there are people out there who would rather add up-to-date but uncited information, or even leave the referencing intact and just change the information to be more up to date. Others would count that as vandalism if the result is that the text now contradicts the source it cites. I frequently get allegations of Wikipedians being toxic and overly deletionist; my first response to such allegations nowadays is to ask people what sources they cited for their content, and usually that ends the discussion. WereSpielChequers (talk) 13:03, 30 September 2018 (UTC)
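A small sketch of the dismissed-AIV-reports metric suggested above. It assumes report outcomes have already been parsed out of the noticeboard's history into (reporter, outcome) records; the outcome labels and the minimum sample size are invented, and the parsing itself, the hard part, is not shown.

```python
from collections import defaultdict

def dismissal_rates(reports, min_reports=5):
    """Share of each reporter's AIV filings declined as unfounded.

    `reports` is an iterable of (reporter, outcome) pairs with outcome one of
    "blocked", "declined", or "stale". Per the comment above, "stale" reports
    (the vandal simply went quiet) are not counted against the reporter.
    """
    declined = defaultdict(int)
    counted = defaultdict(int)
    for reporter, outcome in reports:
        if outcome == "stale":
            continue
        counted[reporter] += 1
        if outcome == "declined":
            declined[reporter] += 1
    return {r: declined[r] / counted[r]
            for r in counted if counted[r] >= min_reports}
```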

An indicator could help both

Ideally, your idea could be handy for semi-harassing contributors, too, who don't realise they are discouraging good-faith editors. Here is a model of a "too enthusiastic" hardworking contributor with the least evil intentions, but pride. From my own experience: I have been repeatedly checked by the same person, who watches for specific errors; in my case, the error is submitting edits containing citation parameter errors.

  • positive: that person is not toxic in the policy sense; they abide by WP policy, with no vandal acts.
  • positive: very productive; has handled 3,000+ parameter errors and brought them down to around 300.
  • positive: sticks to some category as its keeper, watches for new errors entering that category, and responds with only a minor time lag.
  • negative: pushes too hard, expecting everybody to follow the criteria that person sees as "the standard".
  • negative: loses sense of personal space at times and scolds/nags, holding the impression that the other party is a repeat offender producing the same senseless errors.

We have exchanged amicable talks, but honestly, I have been intimidated; still, I replied in very nice tones so that I could draw out hints and understand concrete how-tos for solutions. When my counterpart started screaming at me (or so I felt), they did not give up on me and searched for clues as to why I kept making the same error; then the checking person found out that a Preferences choice, reviewing your edits "without reloading the page", made me see no error while previewing my edits. I changed my Preferences, thinking at the back of my mind that this would mean, to my counterpart, that I had no excuse to make parameter errors anymore. That episode also tempted me to turn away from WP.

If that person had a speedometer-like indicator, would it act as a yellow lamp, a soft warning to "slow down"? Could they then imagine that their enthusiastic activity could bother people, before they turn into a toxic user?

I must note that I quite respect my counterpart for not abandoning me too soon. Could one check the number of her/his well-intentioned checking actions against how counterparts like me distanced themselves from editing WP, on a basis of days or weeks, maybe via the retention rate of the person who was checked? Keeping errors as low as possible is very much appreciated, I must say, and I firmly believe that.

As people have pointed out already, my opinion is that the key would be to what extent the script/tool could be objective. -- Omotecho (talk) 22:45, 29 October 2018 (UTC)

Return to "IdeaLab/Identify and quantify toxic users" page.