Talk:Community health initiative/Blocking tools and improvements/Archive1

This is an archive page of Talk:Community health initiative/Blocking tools and improvements

General discussion

  • Problems 2 and 3 could be minimized by instituting an admin standard of practice of blocking for days, not years, for single incidents. A school template or a group IP should get a warning, not be a target. Slowking4 (talk) 02:22, 8 December 2017 (UTC)
    • This is true. Is there any way the Special:Block interface could be improved to help administrators set more appropriate lengths for blocks? Human judgement is the primary driver for setting a block length, but the software could help reinforce community policies or act as a check. — Trevor Bolliger, WMF Product Manager 🗨 19:59, 8 December 2017 (UTC)
      • You could incorporate a suggested block length, similar to Huggle, with a semi-automated button to engage with the editor and the length pre-selected. However, experience at Huggle is that people blow through the suggestions to get to the warning templates. Slowking4 (talk) 11:35, 11 December 2017 (UTC)
        • The problem with human judgement is that the longest possible blocks will always be given: most first-time blocks are indefinite, and users are rarely informed how and where to appeal. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:18, 11 December 2017 (UTC)
          • I don't know where such experiences come from, so please do not generalise. I sometimes read that we block users for too short a time and delay indefinite blocks (hoping that the situation will improve, even if objectively we have no reason to hope for it). Overly long blocks are IMHO rare and are usually the result of actions by inexperienced admins (and such blocks are shortened by more experienced ones in most cases). Wostr (talk) 18:20, 11 December 2017 (UTC)
  • I already said ( ) what, in my opinion, is the primary source of the issue. As long as this is not addressed, blocking random vandals and LTAs will always be a game of whack-a-mole. In addition, we can look at how other Internet communities deal with issues of this kind. Shadowbanning the most obvious vandals is one idea – I do believe it may work pretty well. The most extreme measure would probably be to restrict editing to accounts older than a week – I believe this could deter most vandalism, but I wouldn't try it unless all other options are exhausted. Also, blocking on user agent... won't it cause many false positives? Marcgal (talk) 02:21, 10 December 2017 (UTC)
    • Shadowbanning is difficult on collaborative spaces like wikis. We've built preferences to Mute emails and notifications from individual users, but hiding log entries and edits to articles would be more difficult to implement and keep secret. ACTRIAL on English Wikipedia will be an interesting insight into how walling off contributions from brand-new users affects the encyclopedia, but I'm fairly confident that requiring all accounts to hit autoconfirmed before editing will never be accepted. However, I can see the argument for setting an aggressive IP range block that only allows edits from autoconfirmed users. — Trevor Bolliger, WMF Product Manager 🗨 01:38, 12 December 2017 (UTC)
      • I'll note that, since the English Wikipedia 2005 "experiment" of preventing unregistered users from creating pages, we still have to prove that we're able to act on evidence collected (i.e. to change our minds when facts prove us wrong). Nemo 08:54, 14 December 2017 (UTC)
  • I'm happy to see you're collecting more information and thoughts on the topic. Can you confirm that no changes will be made unless confirmed to be desired and in compliance with practices outside the English Wikipedia? Changing technical capabilities around blocks is especially dangerous, because technical possibility shapes social practice (cf. doi:10.1109/MIS.2011.17). Nemo 08:54, 14 December 2017 (UTC)
    • @Nemo bis: I agree that what works well for one wiki may not work well for another. Smaller wikis may not need the granularity or complexity of tools that larger wikis need. Our team believes it is imperative to get wide input from many wikis on this topic, since blocking tools are used on all projects of all languages. We began this discussion on Meta (a neutral ground for all wikis to discuss) and are proactively inviting specific users from the top 20 wikis to join the conversation (phab:T180071). At this point, we don't know what software we will build — my team is hoping the direction of this conversation will help inform that decision. Depending on what is prioritized, the new software may be global (e.g. phab:T152462) or enabled on a per-wiki basis (e.g. phab:T2674), depending on how it is built and on feedback. — Trevor Bolliger, WMF Product Manager 🗨 23:17, 15 December 2017 (UTC)

Wishlist topics related to blocking

We're watching the 2017 Community Wishlist, and there are three proposals related to blocking that we're following closely: Smart Blocking, Per-page user blocking, and Allow further user block options ("can edit XY" etc.). We expect there will naturally be a large amount of overlap in the ideas and support between those three proposals and this consultation — this is 100% welcome! — Trevor Bolliger, WMF Product Manager 🗨 23:34, 7 December 2017 (UTC)

There was also one to extend global blocks to named accounts. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:16, 11 December 2017 (UTC)

Disorganized thoughts by MER-C

This is part one of two; I'll send the second via email for BEANS reasons.

Problem 1
  • Block by user agent -- definitely useful against spambots. Against someone using the latest version of Chrome on Windows 10? Not so much. One would have to go deeper into the HTTP headers to uniquely identify a browser, posing potential privacy problems. I don't see this as much of an issue; you already cough up this information. One should have the option of a cookie-only block for users of heavily shared IPs.
  • Block by device ID -- I'm not sure whether this passes the privacy policy, and iOS randomizes MAC addresses. But yes, I want this.
  • Global blocks for usernames -- useful for blocking, say, incompetent users. The unfriendliness of locking is sometimes a feature (especially regarding LTAs and spammers).
  • Add "Prevent account creation" to global block -- support. I thought this was already the case.
  • Cookie blocks for anons -- support. Useful for technically illiterate users on large or frequently used ranges with dynamically assigned or shared IP addresses. This has already been cleared by Legal and code exists, making this the lowest-hanging fruit in terms of implementation.
  • Autoblocks longer than a day -- support. Cookie blocks are ideal for this as they target individual computers as opposed to IPs. Unless it's a public computer, it's safe to make the cookies last longer. Not all blocked users return in a day.
  • Proactively block open proxies. Strong support. See my comment about the searchable WHOIS database below.
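As a rough illustration of how the cookie-block and longer-autoblock ideas above fit together, here is a minimal sketch (not MediaWiki's actual implementation; the cookie name, in-memory storage, and function names are all invented for illustration):

```python
import secrets
import time

BLOCK_COOKIE = "wiki_block_id"   # hypothetical cookie name
blocked_tokens = {}              # token -> expiry timestamp (stand-in for a DB table)

def apply_cookie_block(response_cookies, duration_seconds):
    """On blocking, drop an opaque token in the browser so the block
    follows the device even if its IP address changes."""
    token = secrets.token_hex(16)
    blocked_tokens[token] = time.time() + duration_seconds
    response_cookies[BLOCK_COOKIE] = token

def is_cookie_blocked(request_cookies):
    """On each edit attempt, honor the cookie regardless of current IP."""
    token = request_cookies.get(BLOCK_COOKIE)
    if token is None:
        return False
    expiry = blocked_tokens.get(token)
    return expiry is not None and expiry > time.time()
```

The point of the token indirection is that the cookie can outlive an IP-based autoblock entry, so a blocked user who hops to a new IP on a shared range still carries the block with them, while innocent users of the same range are unaffected.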
Problem 2
  • Require all accounts created in an IP range to confirm their email address before [doing anything else] (not just edit!) -- support. This will make it more annoying for LTAs on unblockable ranges to bulk register sockpuppets, especially with the two additional solutions below.
  • Prevent the use (or flag incidents) of blacklisted email addresses being associated with new user accounts. A poorly thought-out implementation exists at MediaWiki:Email-blacklist as part of the Spam Blacklist extension. This list should not be public. Policy needs to catch up here. Do we ban registration using disposable email addresses? Since we can't see email addresses, how can we tell what to blacklist?
  • Throttle account creation and email sending per browser as well as IP address. Support as proposer. The aim of the throttle is to restrict account creation by a person. Targeting a browser with cookies is better than targeting an IP address (though we should do both, as always).

Potential additional solutions:

  • Should MediaWiki prevent registration of new accounts with an identical email address to a blocked account (maybe with a distinct block parameter)?
  • Should we require verified email addresses to be unique or capped at no more than N accounts per email? This works very well when forcing all accounts in a range to verify their emails.
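The email-based ideas in this section (banning disposable domains, capping accounts per verified address, and normalizing away appended “+tag” variants) could be combined roughly as follows. This is a sketch with invented names and an illustrative domain list, not the Spam Blacklist extension's real logic:

```python
# Illustrative only; a real deployment would keep this list private.
DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com"}
MAX_ACCOUNTS_PER_EMAIL = 1

registered_emails = {}  # normalized email -> number of accounts using it

def normalize(email):
    """Fold case and strip '+tag' suffixes so foo+1@x.org == foo@x.org."""
    local, _, domain = email.lower().partition("@")
    local = local.partition("+")[0]
    return f"{local}@{domain}"

def may_register(email):
    """Reject disposable domains and addresses already at the cap."""
    e = normalize(email)
    domain = e.partition("@")[2]
    if domain in DISPOSABLE_DOMAINS:
        return False
    return registered_emails.get(e, 0) < MAX_ACCOUNTS_PER_EMAIL

def record_registration(email):
    e = normalize(email)
    registered_emails[e] = registered_emails.get(e, 0) + 1
```

Normalizing before counting is what makes the per-email cap meaningful; without it, `foo+1@example.org` and `foo+2@example.org` would count as distinct addresses.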
Problem 3
  • Block a user from [stuff]. Support all of the above including combinations thereof. Regular expression support for pages and categories would be useful. One needs to be careful that some of these measures cannot be circumvented by moving pages.
  • Block that only expires when a user has read a specified page. The only way current non-communicative users can be dealt with is the banhammer. This would be helpful, but there is a very strong caveat -- I see block appeals all the time where blocked users claim to have read guidelines and policies when it is blatantly obvious they don't even understand the block message. And how does one determine whether a page has actually been read (as opposed to merely visited), let alone understood? With the current system, you are forced to place an unblock request demonstrating your understanding (or to sockpuppet, which we want to stop).
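Regular-expression support for page and category blocks, as suggested under Problem 3, might look like the following sketch (the rule format is invented; note MER-C's caveat that title-based rules stop matching as soon as a page is moved):

```python
import re

def build_page_block(patterns):
    """Compile a list of title regexes into a single checker function."""
    compiled = [re.compile(p) for p in patterns]
    def is_blocked_from(title):
        # fullmatch so a rule covers exactly the titles it names
        return any(rx.fullmatch(title) for rx in compiled)
    return is_blocked_from

# Block a user from one article, its talk page, and a category subtree.
check = build_page_block([
    r"(Talk:)?Example article",
    r"Category:Disputed topic(/.*)?",
])
```

Using `fullmatch` rather than `search` keeps a rule like `Example article` from accidentally covering every title that merely contains that string.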
Problem 4
  • Logging warnings may be a problematic harassment vector.
  • "Allow admins to set a block date range via datetime selector" -- yes

Potential other solutions:

  • w:User:Timotheus_Canens/massblock.js should be implemented in core MediaWiki as opposed to a JavaScript hack.
  • If I understand phab:T174553 correctly, Community Tech plans on making a local DB of geolocation and WHOIS data. I've been dealing with a case of VPN spamming spread over at least 20 distinct ranges, and it would be a lot faster to block the entire VPN in one go, as opposed to hunting down the individual ranges via other means. The ability to search through the WHOIS db for certain registrants would be greatly appreciated.
  • New block option: notify me when this block is due to expire. Useful for renewing range blocks of webhosts or following up on temporary blocks so that problematic behavior does not recur.
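On the searchable WHOIS database idea above: once ranges can be filtered by registrant, collapsing them into the minimal set of CIDR blocks is straightforward with Python's standard library, as this sketch with invented WHOIS rows shows:

```python
import ipaddress

# Pretend these rows came from a WHOIS lookup (data invented for illustration).
whois_rows = [
    ("203.0.113.0/25", "ExampleVPN Ltd"),
    ("203.0.113.128/25", "ExampleVPN Ltd"),
    ("198.51.100.0/24", "Some School District"),
    ("192.0.2.0/24", "ExampleVPN Ltd"),
]

def ranges_for_registrant(rows, registrant):
    """Return the minimal CIDR blocks covering one registrant's ranges."""
    nets = [ipaddress.ip_network(cidr) for cidr, who in rows if who == registrant]
    # collapse_addresses merges adjacent /25s into one /24, etc.
    return [str(n) for n in ipaddress.collapse_addresses(nets)]
```

This is the "block the entire VPN in one go" workflow: one registrant query yields a short list of range blocks instead of twenty hand-collected ones.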

Prioritization: Hmmm, I really don't know. If I were to pick four from the above list, I'd say require confirmation of email addresses, cookie blocks for anons (because it's easy), blocking users from [stuff] and the searchable WHOIS database. MER-C (talk) 10:59, 8 December 2017 (UTC)

You realize that confirmed email is a Larry Sanger idea? Slowking4 (talk) 16:05, 8 December 2017 (UTC)
@Slowking4: I didn't even bother to include "require email confirmation on account registration" or "disable IP editing" on this consultation because they're nonstarters. (Although they certainly would make it more difficult for people to evade blocks!) The concept of requiring all accounts created in an IP range to confirm their email address before editing is on the table though. It's in a grey area, but arguably could be used in situations for highly-shared IPs such as schools. — Trevor Bolliger, WMF Product Manager 🗨 22:35, 8 December 2017 (UTC)
I agree it is a non-starter, and I thought we had a broad consensus that we are not Citizendium. It is a failed model to keep in mind. Slowking4 (talk) 00:21, 9 December 2017 (UTC)
@MER-C: These are organized: you responded in the same groups/order that's on the article page. :) Thanks for all your comments on the strengths and weaknesses of each idea, your new suggestions, and your proposed prioritization.
I agree that if we're going to use email addresses as a method to prevent block evasion we need to go all-in (e.g. solve for disposable email addresses, appended characters, etc.), since emails are as disposable as IP addresses. For blocks to be effective we need to find something identifiable about the user, or the blocker(s) need to agree that the collateral damage of problem #2 is an acceptable side effect. We'll check everything we do against our Privacy Policy with our Legal department, but in general I think we need to be aggressive, as dealing with block evasion is a waste of everybody's time.
It looks like Allow further user block options ("can edit XY" etc.) is a strong performer in this year's Wishlist and feels to me like an obvious evolution for the blocking software. And I wholeheartedly agree that requiring users to read a warning/policy/etc before editing again is easily pencil-whipped. (When's the last time you ever read a software agreement, or even a splash screen on a new app?) — Trevor Bolliger, WMF Product Manager 🗨 22:35, 8 December 2017 (UTC)
I would like to see more about how to use existing tools to avoid disruption, including A/B testing and ORES. I see a lot of whack-a-mole and doubling down, and we do have tools to provide metrics to develop a standard of practice. Slowking4 (talk) 00:21, 9 December 2017 (UTC)
@Slowking4: For this discussion we're thinking about how the WMF can build software to address two types of people: those who are entirely not welcome back (in which case full site blocks are appropriate, fair, and should be less circumventable) and users who are causing disruption but not to the extent of deserving a full site block. For this second group, each case is handled differently: some users are blocked, others are not. These cases are not consistently categorized or identifiable, so generating data on them would require manual data sorting, which our team is not equipped or staffed to perform. Earlier research on Detox found that only 18% of identifiable attacks on English Wikipedia were responded to with a block, and numerous on-wiki discussions (and Wishlists from the past three years) have shown there is a large appetite for a less severe type of block, which we hope we can design and build effectively. — Trevor Bolliger, WMF Product Manager 🗨 23:44, 15 December 2017 (UTC)
Good points. I would say that it has been 15 years of technical solutions, including cookies. Maybe some metrics, including standards of practice and UX, would be a better way. You could try hashtags to track practices (volunteer tracking, not A/B testing), but it is still better to include observation of results rather than proceeding by the seat of the pants. As User:EpochFail can attest, sometimes results are counter-intuitive. And it may be how the tools are used, so learning patterns and training would be in order. Slowking4 (talk) 23:56, 15 December 2017 (UTC)
I've been thinking a lot about how many workflows suffer from a lack of precedent. If an admin works hard to reach a fair, agreed-upon decision in a dispute, another admin in a similar case shouldn't have to reinvent the wheel. A lot of this could be automated to help accelerate discussions, but it will take some preliminary time to better tag and categorize blocks so a baseline can be established. With this type of data, we could then analyse which types of blocks work (e.g. the user returns as a constructive contributor, and/or the user does not evade their block) and which are not effective (blocks are evaded, users return and are blocked again). — Trevor Bolliger, WMF Product Manager 🗨 00:12, 16 December 2017 (UTC)

Disorganised criticisms and suggestions by Donald Trung

Well, the culture around blocking also has to change. I agree with Slowking4 that blocks lasting years shouldn't be immediately given, but the culture (and this page by extension) exists to make those blocks longer, not shorter, and most admins and stewards only believe in a “you only get one chance” mentality.

Also, when it comes to blocks it’s not “Debate ideas, not people”, as blocks don't concern ideas. A good example would be LBHS Cheerleader, who returned after five years to make only good edits and was immediately blocked again after it was discovered that the prior block wasn't appealed, because block-evading good-faith editors should apparently be treated worse than actual vandals.

(1) Create a class of users to actually oversee administrative actions; as there are edit patrollers, there should be a class of users to patrol admin log actions. Although on paper administrative actions can be reviewed by other administrators, in reality these things are rarely (if ever) brought up. Just look at [ ARichardMalcolm], who was denied talk page access or basically any right to appeal because the blocking admin didn't inform him that donating images from his book is considered “spam” on Wikimedia Commons.

(2) Hold administrators to actual policy; a good example that comes to mind is Wikimedia Commons.

Another example from Wikimedia Commons would be this indefinite block, which was only made indefinite because of disruptions on other projects. Is this covered in the Commons blocking policy? No, but things like this happen every day, and administrators dealing in blocks like this should be able to be sanctioned by losing the ability to (site-)block users for a certain period of time. In fact, this doesn't break just one policy, it breaks several, as the Commons blocking policy currently states: “Only prevent the blocked user from using their talk page or sending e-mail if they are likely to abuse these privileges.” This user hasn't demonstrated that, but as many administrators view appealing blocks as “abuse”, it's quite clear why it's rare for users to actually retain talk page access post-blocking.

On Dutch Wikipedia we have this rather curious case where the only blocking reason provided is “see English Wikipedia”, so this person is banned for life from a Wikimedia project that they’ve never edited. Why? No particular reason; there isn't any policy on Dutch Wikipedia that actually prescribes this, and this is far from an isolated case. If I were to appeal this block for them, the moderators would threaten me with a permaban from all Wikimedia projects for bringing it up; of course they have no policy to back this up, and there are even policies that go against this, but again, the policies only apply to people without the tools.

Or on that same wiki, where, despite there being no local policy stating that global locks are global bans, certain moderators still enforce them as such, even though official policy states: “This list does not include accounts that have been globally locked on charges of cross wiki disruption, spamming, or vandalism. Such users are not globally banned, per se. If they create new accounts and are not disruptive with those accounts, they will not be locked again merely because it is discovered that they were previously globally locked.” (List of globally banned users). This either shows that content isn't important because of Wikimedia policies (that don't exist) or that administrators may make up their own policy whenever they wish. A similar case to my own was Graaf Statler prior to his Foundation ban: he was not banned from Dutch Wikisource but was still barred from it because of a global lock. So stewards may circumvent community discussion and policies whenever they wish? The answer is “yes”. But for users who go around accusing others of pedophilia, calling them Nazis, and making legal threats, like INeverCry, global locks don't seem to be “appropriate” without community discussion. It's almost as if (former) administrators are allowed to do whatever they want as long as no community global ban has been enacted; just look at how a global lock for Classiccardinal was denied, while for other users with more edits, like Reguyla (prior to his Foundation ban), these are enacted.
Also, no policy dictates that stewards may globally lock non-spambot, non-vandalism-only accounts, but if I were to, say, make a second account on Dutch Wikipedia (where I'm not sanctioned, though administrators claim that I am locally sanctioned without ever citing policies or guidelines) without clearly linking it to this account, and never engage in any disruptive behaviour, this would get me a permanent global ban; yet INeverCry was allowed to call people “Pedo-Nazis” and no steward dared to lock them without discussion.

(3) Cookie blocks are useless against those really intent on disruptive behaviour; in practice they only prevent good-faith editors from evading their blocks (which are bans, even though they're not called that). Cookie blocks can easily be removed by simply clearing one's cookies or opening a new incognito tab. Those intent on disruption will disrupt, but a good-faith editor who made, say, a formatting mistake in their first edit, accidentally removing a large chunk of text, can be blocked immediately and indefinitely if that first edit is spotted by an admin who assumes the user is “a vandalism-only account” (and will likely disable talk page access and e-mail functionality). That user would then never be able to edit from that device again, and most users are probably unaware of things like the UTRS, so as far as they're concerned they have no way to appeal.

(4) Anti-“spam” blocks are by far the worst offenders among bad-faith blocks; in fact, the whole Wikimedia definition of “spam” is bad faith. Take the nonsensical removal of references simply because the link to a reliable source was placed by a user the “spam” fighter hates. What’s the definition of “spam” on Wikimedia projects? Very simple: links placed by people administrators hate. Getting a link removed from the blacklist is next to impossible; if you get blocked, then automatically you’re “a paid editor”. According to “spam fighters”, proper attribution to reliable sources is “spam”, and they wonder why “citation needed” is placed everywhere. It's because reliable sources get removed, and the abuse filter will permanently ban you if you dare to properly source content.

So what is the Wikimedia definition of “spam”? Well, if you properly source content, and you do it on two wikis, and you get globally banned for whatever reason, you become “a paid editor”, and all links on every Wikipedia to that website have to be removed, even if those links are all references placed there by other people, even if they're in the user space of another user; references are “spam” if they’re ever used by someone you don't like. The moment you are accused of being a paid editor with “an obvious COI”, you are “a paid editor with an obvious COI”, and if you defend yourself against bad-faith claims that you’re a spammer, you will be censored. So every time you see that “a spammer” gets blocked, you can assume it’s a good-faith editor who sourced their content properly but got caught in some bad-faith abuse filter.

Or let's talk about the abuse filter: is this user a long-term Chinese spammer? Unless all of this is Chinese spam, there wouldn't be a reason to block a user with 0 edits. In fact, some admin pats himself or herself on the back thinking that they've “stopped abuse” while all they probably did was prevent a good-faith edit; this is also why there should be a special trusted user group to hold admin actions accountable. If an admin gets caught by the spam filter, they can always unblock themselves, so obviously no admin would ever see collateral damage as “a problem”, as they would never fall victim to it.

(5) No more secret trials.

Just look at Cosmic Colton, who has now been blocked for five years. Why? Because admins suspect that he’s a banned/sanctioned user; not a particular sanctioned user, just a sanctioned user. Imagine if Wikipedia were written like this: “we don't have any evidence to back up our claims” or “I will only show people my source on IRC or a closed-off mailing list”. Other examples would be Technical 13 or Tarc; a more recent one would be @The Devil's Advocate:. ArbCom trials should be made public unless they concern sensitive information, but currently e-mails sent to the ArbCom of English Wikipedia aren't published on-wiki, and on Dutch Wikipedia the AC only publicises the initial e-mail of a trial; all further discussion and reactions of ArbCom members among each other are never made known to either the person appealing their block/ban or the community. Wikimedia communities are not open, and the majority of block- and ban-related matters are handled off-wiki, where the harmed party rarely (if ever) learns of any conversation regarding their ban. Such secrecy is a bane to community discussion, and ArbComs should always publicise e-mails (including internal e-mails) unless they contain private information.

(6) Actually try to inform blocked users of their options. As stated before, despite policies stating that things like talk page access and e-mail access should officially only be removed if abuse is likely, these are often disabled without any sign that they will be abused. How many people know that the UTRS exists? How many people know of #wikipedia-en-unblockconnect? Well, you only get to see that the UTRS exists after an appeal on your talk page was denied; as for the IRC channel, you only get to see that it exists after you’re banned from the UTRS. Blocked users are never properly informed of their venues of appeal, and though the chances of actually getting unblocked (even if you fully understand what you did wrong) are next to nothing, deliberately hiding these things only encourages block evasion: someone who only recently joined a Wikimedia project, concerns themselves with content, and only reads policies relating to content and not “vandal fighting” wouldn't be aware of what they can do to get back to editing, and will evade.

I thought of an idea that could basically serve as “the Miranda rights of Wikipedia” (or “the Blockbox”): after a user or IP gets blocked by an admin (so not rangeblocks or autoblocks but direct blocks), a bot would automatically leave this message on their talk page (even if the talk page was previously deleted). This could start off on English Wikipedia and then, if it turns out to be successful, be “exported” to other wikis.

It appears that you have been blocked.

Please read the guide to appealing blocks.

* If you are currently unable to access your talk page you may request an unblock via the Unblock Ticket Request System or the #wikipedia-en-unblockconnect chat channel.
* Checkuser and Oversight blocks may be appealed to the Arbitration Committee.
* If you were blocked by Jimbo Wales then you may appeal directly to him and/or the Arbitration Committee.
* If this is a sockpuppet account, you should disclose your main account (or IP) now so you may receive a reduced penalty.
* If you have been blocked for a username violation, you can simply create or request a new account, or request to be renamed here or at #wikimedia-renameconnect; if, however, the username was made in bad faith, first request a rename and then you may appeal the block. Further reading: Wikipedia:Changing username.
* If you have been blocked for adding promotional material or spam then please read about our policy on this and our external links policy before requesting an unblock.
** If you continue to violate this policy, the duration of your next block will increase. If you believe the link(s) you added aren't spam, you may request for them to be removed from the blacklist.
* If your options are currently still unclear then please read the more technical how to appeal a block.

Notes: Currently the standard offer is 6 months, but perhaps if a user confesses this could be reduced to 3 months or something in that direction; this would give sockpuppeteers more of an incentive to co-operate.

The above are all the blocking criteria I am currently aware of; this is just a suggestion and open to improvement.

(7) No nonsense blocks. I remember in August seeing a thread on the English Wikipedia’s administrators’ noticeboard that was closed as “WP:DENY”, where a user was angry at another user for constantly nominating articles for deletion. Rather than explain to them the conventions of what articles are acceptable on Wikipedia, or inform them about undeletion requests and that the deletionist wasn't breaking any policies, this person was permabanned under “WP:NOTHERE” with no talk page access (which was never abused, and this person never insulted anyone), no e-mail capabilities, and basically no right to appeal. This person is now essentially banned for life for one, at most misinformed, incident, and it's very likely that they were an anonymous editor for years who worked on some of the articles that were deleted; but instead of anyone actually informing this person what to do and where, they essentially got a ban for life for simply bringing up a valid point that deletionism is harming the encyclopedia.

If admins don't have the patience to deal with disgruntled users, then they shouldn't be admins in the first place. This is also why nonsensical things like the Standard Offer exist, despite the fact that most people who get thrown the standard offer pose no threat to the content of the encyclopedia; blocks are never preventive and always punitive.

(8) Let “community bans” be decided where actual community members are. Though the majority of editors will probably never see a talk page, most people who concern themselves with the actual content on Wikipedia are more likely to frequent the village pumps and help desks (or teahouses) than the administrators’ noticeboard, yet all “community” bans are only voted on by administrators and admin wannabes like rollbackers and new page patrollers. Or are only “vandal fighters” members of the community?

(9) Create a reasonable amount of time after which previous blocks become irrelevant if the behaviour isn't continued. Though this is more policy-related than technical, one could just as well argue that “blocking is a biological and not a technical limitation” based on how blocks are enforced; a good recent example is this user, who was blocked in 2010. Blocks don't exist to fight vandalism; they exist to prevent good-faith editors from editing.

(10) Either extend global blocks to named accounts, or actually force stewards to follow policy. Why is Diegusjaimes globally banned when this user is only blocked on one wiki? This person doesn't qualify for a global ban, yet has been waiting for a global unlock since July. And yes, if they were to mail a Wikimedia staff member, the response would be “wait six months, then try again”, because apparently no one with user rights has to follow policy, and policies are only suggestions. I already proposed the idea to extend global blocks to named accounts in the 2017 survey, but this was conveniently omitted from the “ideas concerning blocks” above.

(11) No punitive blocks. Just look at Slowking4: this user is actually interested in building an encyclopedia and has done so by evading their block. Obviously the many articles closing the gender gap they created should be mass-deleted without discussion. From what I can tell, Slowking4 has never vandalised a single article, and their block is because of incivility issues that haven't played up in years, so how would a site-wide block benefit the encyclopedia? Obviously “block evasions waste everyone’s time”, but I highly doubt that any WP:READERS looking for information about either Bascha Mika or Christine Huber would agree; but then again, the readers should always come last, and hoaxes should be restored if they’re removed by sockpuppets.   --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:14, 11 December 2017 (UTC)

Hi @Donald Trung: Many of your comments are about the various on-wiki practices and policies. I firmly believe that the MediaWiki software should work hand-in-hand with fair, understandable, and enforceable policies. I also agree that there is likely a need for additional training for moderators who donate their time by evaluating difficult reports. It's hard, thankless work, and the WMF wants to support them by making sure they're equipped with the best possible training and tools. But for now, we're discussing how the Anti-Harassment Tools product development team can improve the blocking tools that exist or build much-needed additional tools. We are focusing the conversation around the four outlined problems as we believe this is where improved software can help.
You have a few points where I can see the software potentially helping. For (2), the block interface could be updated to help the blocking administrator match an appropriate block length with the reason for the block. If successful, this would result in a decreased workload for the administrator, as the software would better help them understand past precedent for similar policy infractions and more appropriate block lengths for the blocked users. And I agree that (6) the block appeal workflow could be improved from a usability standpoint. Are there other points where you think the MediaWiki software is lacking? — Trevor Bolliger, WMF Product Manager 🗨 01:28, 12 December 2017 (UTC)Reply[reply]

bureaucratic and authoritative

Please reduce complexity instead of inventing +++buttons. I don't think this sort of authoritative micromanagement is useful for users or admins; people will get lost in this wonderland of special buttons. Take a short look at en:Wikipedia:List of guidelines and en:Wikipedia:List of policies and think of how useful it would be to add hundreds of new guidelines and policies to explain the new buttons. I don't think everybody is comfortable with this kind of improvement. Do you know the Iron Law of Bureaucracy? Once installed, there's no way back. Cheers --Sargoth (talk) 12:19, 11 December 2017 (UTC)Reply[reply]

It's better than entire site-bans or (unilateral) global bans that are against policy; if a full site-ban doesn't improve the encyclopedia, why initiate one? And it would take a lot of time for those users obsessed with hounding others to see if they broke their sanctions in any way. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:31, 11 December 2017 (UTC)Reply[reply]
In what way is it better? In what way is blocking to prevent imminent or continuing damage and disruption to Wikipedia not improving Wikipedia? Any studies you can cite? In my eyes, it's a system of belief: the majority believes “much helps much”, that many buttons will improve a lot. My own experience is that blocking a user is good and communicable. You didn't address my reasoning, btw, but added something that has nothing to do with it, as if I had said I appreciate hounding. I've said in a nutshell: blocking disruptive users is useful. Inventing specific micromanagement buttons will cause bureaucratic overkill, will have to be justified by hundreds of new policies and guidelines, and is not easy to communicate. --Sargoth (talk) 13:52, 11 December 2017 (UTC)Reply[reply]
@Sargoth: Thank you for your comments. In general I favor simple processes and tools, and I hope/anticipate that any new tools we build will be tactical and effective. We've also called-out the fourth problem "The tools to set, monitor, and manage blocks have opportunities for productivity improvement" to reduce complexity in the existing workflows. I believe the MediaWiki software can do more of the heavy lifting and better inform admins as they make a decision to block or not.
In the end we will likely build only one or two of these ideas (or another yet-to-emerge idea.) Our team has two developers and our Community health initiative has other obligations we need to pursue later in 2018. I organized this page under four problems to solve rather than jumping straight to tools to build — I'd love to hear if you think these four are actually problems at all, and which problem we should address first. — Trevor Bolliger, WMF Product Manager 🗨 00:51, 12 December 2017 (UTC)Reply[reply]
I'm relieved that you prefer simplicity and that only few of the ideas will be realised. Banning users from specific pages might be helpful. Thanks & good luck --Sargoth (talk) 10:31, 12 December 2017 (UTC)Reply[reply]


New tools, based on ORES machine learning, could look at all human edits, human reverts, and human blocking of editors.
ORES would then accumulate statistics and learn from humans to evaluate edits and the attitudes of editors and patrollers.
Humans would also continue to correct ORES activity, and ORES would modify itself for better efficiency.
That would greatly help patrollers, who will always have the last decision. --Rical (talk) 15:23, 12 December 2017 (UTC)Reply[reply]
Yes, our team's philosophy for the tools we build is "software should do the heavy lifting, but the human should make the final judgement." We're currently exploring a few ways ORES could help us: detecting edit wars, stalking/hounding, and even individual harassing comments. These could eventually be built into evaluation workflows to help administrators and other users make more informed, timely decisions.
I also think there could be a system which could make the decision-making of block type and length easier on administrators once fault or a violation of a policy has been determined. For example, if an administrator decides that a user should be blocked for vandalism, the Block tool could recommend a fair block length based on previous cases of vandalism and the user's past block and contribution history. — Trevor Bolliger, WMF Product Manager 🗨 23:51, 15 December 2017 (UTC)Reply[reply]
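To make the recommendation idea concrete, here is a minimal sketch in Python (MediaWiki itself is PHP). The function name and the shape of the block-history data are hypothetical, invented purely for illustration; a real implementation inside Special:Block would draw on the block log and likely weigh the user's own history as well.

```python
from statistics import median

def recommend_block_length(reason, prior_blocks):
    """Suggest a block length in days for a given block reason,
    using the median length of past blocks made for the same reason.
    `prior_blocks` is a list of (reason, length_in_days) tuples."""
    lengths = [days for r, days in prior_blocks if r == reason]
    if not lengths:
        return None  # no precedent: leave entirely to admin judgement
    return median(lengths)

# Hypothetical block history for a wiki:
history = [("vandalism", 1), ("vandalism", 3), ("vandalism", 7), ("spam", 30)]
print(recommend_block_length("vandalism", history))  # median of 1, 3, 7 -> 3
```

The median is used rather than the mean so that a few unusually long historical blocks do not inflate the suggestion; the admin would remain free to override it.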
AI is definitely something to be investigated, but ORES is only as good as its training. For small wikis, training it might be problematic because of the low number of admins and the low number of blocks. There are also all kinds of corner cases, such as block wars between sysops, unblocks for procedural reasons rather than substantive ones, etc. While on larger wikis these get ironed out by the clear-cut blocks, on smaller wikis they might represent a significant percentage of all blocks, tricking the tools.--Strainu (talk) 00:53, 16 December 2017 (UTC)Reply[reply]

Interaction ban by abuse filter

Hi all, the phabricator ticket [1] was opened a while ago to create a variable for the abuse filter, so that administrators and arbcoms can separate harassing users or user groups. All other arbcoms were informed. Could this be added as Problem 5, please? --Ghilt (talk) 09:39, 14 December 2017 (UTC)Reply[reply]

Hello @Ghilt: Thanks for joining the discussion! How is this problem statement different from Problem 3: "Full-site blocks are not always the appropriate response to some situations"?
We will not be building anything on top of AbuseFilter as the result of this consultation, due to the complexity and fragility of its code base. But we could build a different, stand-alone tool for interaction blocks. For example, "block this user from editing pages that have been edited by User:X in the past N days." — Trevor Bolliger, WMF Product Manager 🗨 00:00, 16 December 2017 (UTC)Reply[reply]
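The quoted rule can be sketched in a few lines of Python (purely illustrative; the function name and the edit-list shape are hypothetical, not MediaWiki code):

```python
from datetime import datetime, timedelta

def may_edit(page, protected_user_edits, days=30, now=None):
    """Interaction-block rule: disallow editing `page` if the protected
    user (User:X) has edited it within the last `days` days.
    `protected_user_edits` is a list of (page_title, timestamp) pairs."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return not any(title == page and ts >= cutoff
                   for title, ts in protected_user_edits)

# Hypothetical recent contributions of the protected user:
now = datetime(2017, 12, 16)
edits = [("Example article", datetime(2017, 12, 1)),
         ("Old article", datetime(2016, 1, 1))]
print(may_edit("Example article", edits, days=30, now=now))  # False
print(may_edit("Old article", edits, days=30, now=now))      # True
```

In practice such a check would run at save time against the protected user's contribution history, which is why it can be built as a stand-alone tool without touching AbuseFilter.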
Hi Trevor, that would be great, the desired properties are in the phabricator ticket, cheers, --Ghilt (talk) 00:04, 16 December 2017 (UTC)Reply[reply]
We are inviting people to participate in this discussion until mid-January, at which point we'll pivot toward winnowing the possible list of features we will build. I'll be sure to update the Phabricator ticket if we move forward with this concept. — Trevor Bolliger, WMF Product Manager 🗨 00:16, 16 December 2017 (UTC)Reply[reply]

Wishlist results

As the specific block proposal came in at #12, does this mean that it won't be considered for development next year? There have been calls to reform the blocking options for years; it would be sad to see the status quo never change. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 14:12, 15 December 2017 (UTC)Reply[reply]

Not necessarily, but it is definitely a strong indicator of where there is strong support for improved blocking software. We are still inviting users from many wikis to come participate in this discussion on this page, so we are avoiding jumping to any decisions until everyone has an opportunity to participate. — Trevor Bolliger, WMF Product Manager 🗨 00:01, 16 December 2017 (UTC)Reply[reply]

Full summary of feedback, December 22, 2017

Here is my in-depth summary of all the feedback we’ve received on the English Wikipedia and Meta Wiki consultations for Blocking tools and improvements. We will incorporate feedback from discussions on other language wikis when we identify users who can help us translate.

Problem 1. Username or IP address blocks are easy to evade by sophisticated users
  • Anything our team (the Wikimedia Foundation’s Anti-Harassment Tools team) builds will be reviewed by the WMF’s Legal department to ensure it complies with our privacy policy. We will also use their guidance to decide if certain tools should be privileged only to CheckUsers or made available to all admins.
  • Many users expressed support for adding UserAgent and/or Device ID matching to IP range blocks. However, others expressed concern that these may trigger too many false positives and cause collateral damage. This will also need to be checked with Legal for compliance with the privacy policy.
  • If data cannot be surfaced directly to admins or CheckUsers, it could potentially be hashed and stored, and the permissioned users could see a similarity score (e.g. User:A’s and User:B’s devices are 96% identical.)
  • There was a request to build AI that can compare editing patterns and language to detect/predict possible sockpuppet accounts.
  • There were several requests to implement cookie blocking for IP addresses.
  • There were several requests to proactively globally block open proxies or to create a system to share identified open proxy IPs.
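The hashed-similarity idea above can be sketched as follows. This is a toy illustration only: the attribute list, hash choice, and similarity measure are assumptions, and a production design would need Legal review as noted.

```python
import hashlib

def fingerprint(attributes):
    """Hash each device attribute so the raw values never need to be
    shown to (or stored in plain text for) permissioned users."""
    return {hashlib.sha256(a.encode()).hexdigest() for a in attributes}

def similarity(fp_a, fp_b):
    """Jaccard similarity of two hashed fingerprints, as a percentage:
    shared hashes divided by all distinct hashes across both devices."""
    if not (fp_a | fp_b):
        return 100.0
    return 100.0 * len(fp_a & fp_b) / len(fp_a | fp_b)

# Two hypothetical devices differing only in browser language:
a = fingerprint(["Mozilla/5.0 ...", "Windows 10", "en-US", "1920x1080"])
b = fingerprint(["Mozilla/5.0 ...", "Windows 10", "de-DE", "1920x1080"])
print(f"{similarity(a, b):.0f}% identical")  # 3 shared of 5 distinct -> 60%
```

A CheckUser would see only the percentage, never the underlying UserAgent or device data, which is the privacy property the proposal is after.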
Problem 2. Aggressive blocks can accidentally prevent innocent good-faith bystanders from editing
  • There is a lot of energy around using email addresses as a unique identifiable piece of information to either allow good-faith contributors to register and edit inside an IP range, or to cause further hurdles for sockpuppets. (Note: Requiring email confirmation for all users is a much larger question which fundamentally goes against our privacy policy and mission. This talk page isn’t the right space to debate that decision.)
    • Several people mentioned that the email addresses would need to be unique (e.g. only confirmed on one Wikimedia account) and we would have to identify tricks to deduplicate email addresses per provider (e.g. Gmail supports the dot and plus-sign tricks.)
    • Several people mentioned we would need to prohibit disposable email domains via a blacklist, or require the use of a major email provider via a whitelist. The blacklist should not be public.
    • It was suggested to allow CheckUsers to check email addresses against each other, without actually revealing the email address itself (e.g. only display the domain and a similarity score between the local part.)
    • Email addresses can be just as disposable as IP and Wikimedia accounts. Sophisticated users will be able to evade these requirements. However, the additional speedbump may slow down sock farms and deter lower-level block evasions. While it won’t be watertight, it still might help alleviate the work spent on sock hunting.
    • The more identifiable pieces of information the better: A combination of UserAgent, IP and email may be a high enough hurdle to prevent most evasion.
  • There was a request to require two-factor authentication for edits in certain IP ranges.
  • There was a request to convert Twinkle and/or Huggle from gadgets to extensions, and to increase their accuracy and eliminate common bugs (e.g. missing previous warnings, not working on all wikitext editors, etc.)
  • There was support for extending autoblocks for longer than one day.
  • There was support for throttling account creation and email sending per UserAgent as well as IP address.
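The email deduplication mentioned under Problem 2 (the Gmail dot and plus-sign tricks) can be sketched like this. The function is hypothetical and covers only Gmail's documented aliasing rules; other providers would need their own entries, and the disposable-domain blacklist is out of scope here.

```python
def normalize_email(address):
    """Canonicalize an email address for duplicate checks. For Gmail,
    dots in the local part are ignored and anything after '+' is a
    throwaway suffix, so both must be stripped before comparing."""
    local, _, domain = address.strip().lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
        domain = "gmail.com"
    return f"{local}@{domain}"

print(normalize_email("John.Doe+wiki@Gmail.com"))  # johndoe@gmail.com
print(normalize_email("johndoe@gmail.com"))        # johndoe@gmail.com
```

Two registrations would be flagged as potential duplicates when their normalized forms match, without the raw addresses ever being shown to a human reviewer.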
Problem 3. Full-site blocks are not always the appropriate response to some situations
  • There is a lot of support for building per-page blocks and per-category blocks. Many wikis attempt to enforce this socially but the software could do the heavy lifting.
    • One user expressed support for regex blocks on page or category names.
    • Another user suggested that any granular blocks should also contain sub-options such as preventing account creation, applying to IPs, to also optionally apply to talk- and sub-pages.
  • It was requested that if we add new types of blocks, to not overcomplicate the Special:Block interface.
  • Dutch Wikipedia, in some cases, sets a maximum number of edits per day per namespace. It has been requested that this could be enforced in software.
  • There is concern that “Block a user from viewing Special:Contributions of another user” may cause more harm than good.
  • Several users requested that we extend or duplicate AbuseFilter to work on a per-user level so the same logic rules that exist today can be applied to enforce per-user sanctions. This would be a very powerful system, but our software development team will unfortunately not be able to commit to such a large undertaking due to AbuseFilter’s technical complexity.
  • A request for a tool to prevent users from writing about themselves.
  • A request for a tool that blocks users from certain actions (revert, upload, etc.) that could be applied to only a certain area of the wiki (e.g. disallow reverting on edits about sports.)
  • It was cautioned that requiring a user to read a specific page before a block is lifted is easily skipped. If built, it would need to be more clever than just “User:A looked at page X.”
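The granular blocks discussed under Problem 3 (per-page, per-category, per-action) could share one simple enforcement check. The sanction format below is entirely hypothetical, just to show how the variants compose:

```python
def may_perform(action, page, page_categories, sanctions):
    """Check a hypothetical per-user sanction list before allowing an
    action. Each sanction is a dict such as
    {"type": "page", "value": "Some article"},
    {"type": "category", "value": "Politics"}, or
    {"type": "action", "value": "revert"}."""
    for s in sanctions:
        if s["type"] == "page" and s["value"] == page:
            return False
        if s["type"] == "category" and s["value"] in page_categories:
            return False
        if s["type"] == "action" and s["value"] == action:
            return False
    return True

# A user topic-banned from politics and banned from reverting:
sanctions = [{"type": "category", "value": "Politics"},
             {"type": "action", "value": "revert"}]
print(may_perform("edit", "Weather", ["Meteorology"], sanctions))  # True
print(may_perform("edit", "Election", ["Politics"], sanctions))    # False
```

The appeal of this shape is that new sanction types (regex on titles, talk/sub-page inclusion, edits-per-day quotas) become new branches rather than new interfaces, which speaks to the request not to overcomplicate Special:Block.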
Problem 4. The tools to set, monitor, and manage blocks have opportunities for productivity improvement
  • It was requested that we add a "Redact username from edits and lists" tickbox to Special:Block for administrators just like Oversighters have. It was also requested that we create revdeluser and suppressuser permissions and reassess which usergroups should have access. (Note: These are possible but will require in-depth RfCs. I look forward to seeing if these are further supported.)
  • There has been lengthy discussion and concern that blocks are often made inconsistently for identical policy infractions. The Special:Block interface could suggest block length for common policy infractions (either based on community-decided policy or on machine-learning recommendations about which block lengths are effective for a combination of the users’ edits and the policy they’ve violated.) This would reduce the workload on admins and standardize block lengths.
    • Countless blocks have been made on Wikimedia wikis over the past 16 years, but the block reasons are often blank or inconsistent, so running automated analysis on which block lengths are effective for which policy infractions is currently not possible. If we standardized blocks, this data could eventually be gathered. Potentially ORES or other machine learning systems could eventually make recommended block lengths.
  • One user cautioned that publicly logging warnings may be seen as a potential harassment vector.
  • It was requested that we improve the date picker on Special:Block.
  • It was requested that we allow admins to opt in to notifications when blocks expire.
  • It was requested that we build a better way to set mass blocks.
  • It was requested that the block appeal process could be improved to reduce the work required from admins.
  • It was requested to display on Special:Block whether a user is currently blocked on another wiki.
  • Any blocking tools we build will only be effective if wiki communities have fair, understandable, enforceable policies to use them.
  • Shadowbans can work as alternatives to blocks, but the collaborative nature of wikis complicates this.
  • What works for one wiki might not work for all wikis. As such, our team will attempt to build any new features as opt-in for different wikis, depending on what is prioritized and how it is built.
  • Keep our solutions simple; we should avoid over-complicating these problems.
  • As we design our solutions, think about how we can stop/slow block evasions at every step of the workflow for those who evade blocks. Try to avoid jumping straight to solutions.
  • Cross-wiki harassment is a problem that local blocks cannot remedy. We should improve the workflows and tools for Stewards to manage cross-wiki harassment (and for local wikis to understand if harassment is spilling over from a different wiki.)

I’ve also posted an abridged summary on the primary talk page. Please let me know if I’ve misstated anything or missed anything. Thank you! — Trevor Bolliger, WMF Product Manager 🗨 20:16, 22 December 2017 (UTC)Reply[reply]

TBolliger, what are good-father contributors? I tried to find a translation but was not able to find one. Thanks --Sargoth (talk) 15:09, 4 January 2018 (UTC)Reply[reply]
Most likely a father is a person you have faith in, which is to say that good-faith contributors were meant. → «« Man77 »» [de] 15:54, 4 January 2018 (UTC)Reply[reply]
Whoops! Spelling mistake. It is supposed to read "good faith" not "good father." 😆 Thank you for pointing this out — I've corrected it. — Trevor Bolliger, WMF Product Manager 🗨 18:00, 4 January 2018 (UTC)Reply[reply]