Community Wishlist Survey 2021/Admins and patrollers

Admins and patrollers
25 proposals, 409 contributors, 972 support votes
The survey has closed. Thanks for your participation :)



Overhaul spam-blacklist

  • Problem: The current blacklist system is archaic; it does not allow for levels of blacklisting, and it is confusing to editors. The main problem is that the spam blacklist is indiscriminate of namespace, user status, material linked to, etc. The blacklist is a crude, black-and-white choice: it is not possible to block additions only by non-autoconfirmed editors, or to allow additions only by admins, nor to allow links in certain namespaces or on certain wikis (or certain wiki flavours, e.g. disallow a link everywhere except on all of wikitravel). Giving warnings is not possible either (on en.wikipedia we implemented XLinkBot, which reverts and warns; warning IPs and 'new' editors that a certain link violates policies/guidelines would be a less bitey solution).
  • Who would benefit: Editors on all Wikipedias
  • Proposed solution: Basically, replace the current mw:Extension:SpamBlacklist with a new extension with an interface similar to mw:Extension:AbuseFilter, where instead of conditions, the field contains a set of regexes that are interpreted like the current spam-blacklists, providing options (similar to the current AbuseFilter) to decide what happens when an added external link matches the regexes in the field (see more elaborate explanation in collapsed box).

    Note: technically, the current AbuseFilter is capable of doing what would be needed, except that in this form it is extremely heavyweight to use for the number of regexes that is on the blacklists, or one would need to write a large number of rather complex AbuseFilters. The suggested filter is basically a specialised form of the AbuseFilter that only matches regexes against added external links. Alternatively, it could be incorporated into the current AbuseFilter as a specialized and optimized 'module'.

description of suggested implementation


  1. Take the current AbuseFilter, create a copy of the whole extension, name it ExternalLinkFilter, take out all the code that interprets the rules ('conditions').
  2. Replace the 'conditions' field with two fields:
    • one text field for regexes that block added external links (the blacklist). Can contain many rules (one on each line, like current spam-blacklist).
    • one text field for regexes that override the block (a whitelist overriding this blacklist field; that is generally simpler and cleaner than writing one complex regex, and not everybody is a regex specialist).
  3. Leave all the other options:
    • Discussion field for evidence (or better, a talk-page like function)
    • Enabled/disabled/deleted (don't just turn a rule off when it is no longer needed; delete it when obsolete)
    • 'Flag the edit in the edit filter log' - it may be nice to be able to turn this off, to avoid logging the real rubbish that does not need to be logged
    • Rate limiting - catch editors that start spamming an otherwise reasonably good link
    • Warn - could be a replacement for en:User:XLinkBot
    • Prevent the action - as is the current blacklist/whitelist function
    • Revoke autoconfirmed - make sure that spammers are caught and checked
    • Tagging - for certain rules to be checked by RC patrollers.
    • Consider adding a button to auto-block editors on certain typical spambot domains (a function currently performed by one of Anomie's bots on en.wikipedia).

This should overall be much more lightweight than the current AbuseFilter (all it does is regex testing, as the spam blacklist does now, whereas the AbuseFilter would have to cycle through maybe thousands of filters). One could consider expanding it so that rules can be blocked or enabled on only certain pages (for heavily abused links that should really only be used on their own subject page). Another consideration would be a 'custom reply' field, explaining to the editor blocked by the filter why the edit was blocked.
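
The rule evaluation being proposed is simple regex matching. As a minimal sketch in Python (the function name and data shapes are hypothetical, not actual extension code):

```python
import re

def link_blocked(url, blacklist_patterns, whitelist_patterns):
    """Return True if an added external link matches a blacklist regex
    and is not rescued by a whitelist regex (illustrative sketch of the
    proposed ExternalLinkFilter rule evaluation)."""
    # A whitelist hit overrides the blacklist, as described above.
    for pattern in whitelist_patterns:
        if re.search(pattern, url):
            return False
    return any(re.search(p, url) for p in blacklist_patterns)

# One hypothetical rule: block example-spam.com everywhere
# except its /about page.
blacklist = [r'\bexample-spam\.com\b']
whitelist = [r'\bexample-spam\.com/about\b']
```

The whitelist check short-circuits the blacklist, mirroring the "whitelist overriding this blacklist field" behaviour described above.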

Possible expanded features (though highly desired)
  1. create a separate user right akin to AbuseFilterEditor for being able to edit spam filters (on en.wikipedia, most admins do not touch, or do not dare to touch, the blacklist, while there are non-admin editors who often help on the blacklist).
  2. Add namespace choice (checkboxes like in search, so one can choose not to blacklist something in one particular namespace, with the addition of an 'all', a 'content-namespace only' and a 'talk-namespace only' option).
    • some links are fine in discussions but should not be used in mainspace, others are a total nono
    • some image links are fine in the file-namespace to tell where it came from, but not needed in mainspace (e.g. flickr is currently on revertlist on en.wikipedia's XLinkBot)
  3. Add user status choice (checkboxes for the different roles, or like the page-protection levels)
    • disallow IPs and new users to use a certain link (e.g. to stop spammers from creating socks, while leaving it free to most users).
    • warn IPs and new users when they use a certain link that the link often does not meet inclusion standards (e.g. twitter feeds are often discouraged as external links when other official sites of the subject exists; like the functionality of en:User:XLinkBot).
  4. block or whitelist links matching regexes on specific pages (disallow linking throughout except on the subject page) - coding akin to the title blacklist
  5. block links matching regexes when added by a specific user/IP/IP range (disallow specific users to use a domain) - coding akin to the title blacklist
Downsides

We would lose a single full list of blacklisted material (the current blacklist is known to work as a deterrent against spamming). Such a list could, however, be generated independently from the current rules (e.g. by bot).


Modular approach: make the current AbuseFilter 'modular', where upon creation of a new filter you can define a 'type' of filter. That type can be a module like the current AbuseFilter, or a specialised module like the spam-blacklist filter described above.


Discussion

Voting

Create an extension for fixing parent IDs

  • Problem: As of MediaWiki 1.31, when revisions are restored, they keep their old parent IDs. But this also means that revisions whose parent IDs were changed by undeletions in earlier MediaWiki versions cannot revert to their original parent IDs.

Also, other problems with rev_parent_id could occur too:

  • Undeleting revisions that were deleted prior to MediaWiki 1.5 could cause all of them to have rev_parent_id 0 (see the history of Template talk:Db-g1/Archive 1 on Wikipedia, for example).
  • Undeletions of revisions deleted prior to MediaWiki 1.5 and imports could cause the restored or imported revisions to all unexpectedly have the latest revision ID at the time of the undeletion or import as the parent ID (see the histories of Joshua Claybourn, Talk:Netherlands, and California on Wikipedia, for example).
  • Page deletions in MediaWiki 1.18 and earlier did not fill in the ar_parent_id column, so undeleting revisions that were deleted in MediaWiki 1.18 or earlier could cause unexpected results (see the histories of Eshay and Sembcorp on Wikipedia, for example).
  • Importing could cause the imported revisions to have a parent ID that does not correspond to the one on the original source wiki (see the history of MediaWiki:Gadget-formWizard-core.js on Wikipedia, for example).
  • Who would benefit: Users patrolling page histories for size differences
  • Proposed solution: Create an extension for fixing parent IDs and install it on Wikimedia wikis. The following are the (collapsed) descriptions from the Phabricator tasks:
Extended content

T223343:
When revisions are imported, we should attempt to preserve the parent revision from the source wiki. This means that if rev_id m has rev_parent_id n or 0 on the source wiki, then rev_id m' would have rev_parent_id n' or 0 on the target wiki, where the primes mean the corresponding imported revision IDs on the target wiki. If the parent ID on the source wiki is a deleted revision ID or has a different rev_page, then we would either have to fall back to using the preceding revision ID as rev_parent_id or insert dummy "ancestor" rows into the archive table.
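
As a sketch under the task's assumptions (the helper name and data shapes are invented for illustration), the remapping could look like this, using a fall-back to 0 for parents that were not imported:

```python
def remap_parent_ids(source_parents, id_map):
    """source_parents: {source_rev_id: source_parent_id or 0}.
    id_map: {source_rev_id: target_rev_id} for the imported revisions.
    Returns {target_rev_id: target_parent_id}, using 0 when the source
    parent was deleted or not imported (one of the fallback options
    the task mentions; the other would be the preceding revision ID)."""
    result = {}
    for src_id, src_parent in source_parents.items():
        tgt_id = id_map[src_id]
        # Map the parent through the same ID table; fall back to 0.
        result[tgt_id] = id_map.get(src_parent, 0)
    return result
```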

T223342:
Before creating the extension, we should make the "populateParentId" script in MediaWiki also populate missing ar_parent_id fields, at least for those archive rows that have a non-null ar_page_id field, where it is assumed that the equivalent of "has the same rev_page" for the archive table is "has the same ar_namespace, ar_title, and ar_page_id combination". Dealing with the null ar_page_id case is a bit trickier, because we need to know when each revision was deleted, so it is best to leave ar_parent_id null for such deleted revisions for now.
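
The chronological backfill described here can be sketched in miniature (illustrative names and data shapes; not the actual populateParentId maintenance script):

```python
def populate_parent_ids(rows):
    """rows: list of dicts with 'page', 'rev_id', 'timestamp' and
    'parent_id' (None when missing). Fills each missing parent ID
    with the previous revision's ID within the same page group,
    ordered chronologically, with 0 for a page's first revision."""
    by_page = {}
    for row in rows:
        by_page.setdefault(row['page'], []).append(row)
    for revs in by_page.values():
        revs.sort(key=lambda r: (r['timestamp'], r['rev_id']))
        prev_id = 0
        for r in revs:
            if r['parent_id'] is None:
                r['parent_id'] = prev_id
            prev_id = r['rev_id']
    return rows
```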

We now keep the old parent ID when restoring revisions, but this previously wasn't the case. We should start fixing rev_parent_id for all old restored revisions.

First, the extension needs 2 globals named "$wg(ExtensionName)119date" and "$wg(ExtensionName)131date". These should respectively be the date the wiki started using MW 1.19 (when deleted revisions started to have parent IDs saved in the archive table) or later and the date the wiki started using MW 1.31 wmf.15 (when undeletions started to keep the old parent ID) or later. We also need 2 tables named "backlog_temp_page" and "backlog_temp_revision". The former will have columns named "btp_id", "btp_namespace", "btp_title", and "btp_timestamp". The latter will have columns named "btr_id", "btr_rev_id", "btr_btp_id", "btr_old_parent_id", "btr_new_parent_id", and "btr_table".
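
For illustration only, the two tables could be sketched in SQLite from Python; the column types and constraints are assumptions, since the proposal only names the columns:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE backlog_temp_page (
    btp_id        INTEGER PRIMARY KEY,
    btp_namespace INTEGER NOT NULL,
    btp_title     TEXT NOT NULL,
    -- earliest relevant restore/import log-entry timestamp
    btp_timestamp TEXT NOT NULL
);
CREATE TABLE backlog_temp_revision (
    btr_id            INTEGER PRIMARY KEY,
    btr_rev_id        INTEGER NOT NULL,
    btr_btp_id        INTEGER NOT NULL REFERENCES backlog_temp_page(btp_id),
    btr_old_parent_id INTEGER NOT NULL,
    btr_new_parent_id INTEGER NOT NULL,
    -- which table the revision currently lives in
    btr_table         TEXT NOT NULL CHECK (btr_table IN ('revision', 'archive'))
);
""")
```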

Next, we need to create a script that will dump some pages and revisions (including deleted ones) into the 2 tables. All pages that have a "restored page" log entry with a timestamp on or earlier than the "$wg(ExtensionName)131date" global will be dumped into the "backlog_temp_page" table, with "btp_timestamp" being the log entry's timestamp (the earliest one if there is more than one such entry). The same will also be done for pages with later "restored page" log entries if they also have at least one "deleted page" log entry with a timestamp on or earlier than the "$wg(ExtensionName)119date" global, as well as pages with import log entries. Again, if a page has more than one "restored page" or import log entry, or both, then it will only be added once to the table, and "btp_timestamp" will be the timestamp of the earliest such entry. Targets of merge and move log entries from titles already in the table will also be added to the table, with "btp_timestamp" being the timestamp of the earliest such entry, and this will be done recursively. Merge and move log entries with timestamps earlier than the "btp_timestamp" for the source page will be ignored.

Once the "backlog_temp_page" table is completely filled, it is then time to fill in the "backlog_temp_revision" table. All live and deleted revisions for pages in the "backlog_temp_page" table will be dumped into the "backlog_temp_revision" table, with "btr_old_parent_id" and "btr_new_parent_id" both initially being the current value of rev_parent_id or ar_parent_id, and "btr_table" being either "revision" or "archive".

If there is a page that one thinks also needs repair, but is not automatically added to the "backlog_temp_page" table, then one can just visit a special page named "Special:FixParentIDsRequest" to request that the page be manually added to the table, and another administrator will then approve or decline the request. After approving the request, all of the page's live and deleted revisions will then be dumped into the "backlog_temp_revision" table, following the same rules as the above script. If one thinks that a declined request should have been approved, then one can just make another request for the same page, and the new request should then be approved by a different administrator from the one who declined the original request.

While the extension is ongoing, it needs some hooks. When a page listed in the "backlog_temp_page" table is deleted, the "btr_table" field must be changed from "revision" to "archive" for all rows corresponding to the revisions that had just been deleted. When such a page is undeleted, the "btr_table" field must be changed from "archive" to "revision" for all rows corresponding to the revisions that had just been restored. When such a page is moved, the new title must replace the old one in the "backlog_temp_page" table, and the old title will be re-added to the table if it still has some deleted revisions. In the latter case, the "btr_btp_id" field will be replaced with the ID of the newly inserted "backlog_temp_page" row for all rows in the "backlog_temp_revision" table corresponding to the deleted revisions for the old title. Finally, when Special:MergeHistory is used with a source page that is already in the "backlog_temp_page" table, the target page must be added to the table if it is not already there, and the "btr_btp_id" field must be updated for all rows corresponding to the revisions that had just been merged. When all revisions are merged, the source page will be removed from the "backlog_temp_page" table if it does not have any deleted revisions.

Finally, we need a special page named "Special:FixParentIDs". The special page will require a page from the "backlog_temp_page" table to be fixed. Then, all of the page's revisions from the "backlog_temp_revision" table will be listed on the special page, with the timestamp, author, edit summary, and "minor edit" status shown. For each revision, there will be 2 radio buttons below it. One of them will say to keep the current parent ID, and the other one will say to change the parent ID to whatever the user thinks it should be. There will also be 2 buttons named "Save settings" and "Fix page". The former will only update the "btr_new_parent_id" fields, while the latter will also immediately fix rev_parent_id or ar_parent_id for all of the page's live and deleted revisions and remove the page and its revisions from the "backlog_temp_page" and "backlog_temp_revision" tables.

After completing a request to fix parent IDs for revisions from a particular page, messages will be left on the user talk pages of the affected editors to let them know that the parent ID has successfully been fixed for one or more of their edits.

The message will look something like the following in English (as usual, the four tildes will automatically be replaced with a signature and a timestamp):

== Check out the following page: Affected page ==

Hi, {{ROOTPAGENAME}}. I would like to let you know that I have fixed parent IDs for one or more of your edits to the page [[Affected page]]. The affected revision ID(s) is/are the following: (List of affected revision IDs).

Please check the history of the page to confirm that the size diff numbers have successfully been fixed. ~~~~

{{ROOTPAGENAME}} is used here so that it remains displayed correctly if the user had been renamed or if the message had been archived. Also, if the user talk page is a redirect to another page, then the message will be posted at the target page instead, and the possessive pronoun "your" will be replaced with the possessive form of the original editor's username (which might, for example, be a bot username). For usernames that do not end with "s", "'s" will be added automatically; for those that do, one must decide whether or not "'s" should be added. For imported edits with usernames having an interwiki prefix, no message will be posted.

Other languages will of course need a translated version of the message.

With the example "User:Calliopejen1/Bronces de Benín" below, Millars would receive messages on both the English Wikipedia and the Spanish Wikipedia that say that the parent ID had been fixed for four of his edits on the respective wikis. Also, since User talk:Xqbot (enwiki) and Usuario discusión:Xqbot (eswiki) redirect to User talk:Xqt (enwiki) and Usuario discusión:Xqt (eswiki) respectively, Xqt would receive messages that say that the parent ID had been fixed for one of Xqbot's edits on the respective wikis. The size diff number would then change from a "heavy" red negative number (-12,550) relative to the old parent ID to a "light" green positive number (+24) relative to the new parent ID.

Summary:
First, all rows in the archive table that have an associated page ID (ar_page_id) but no parent ID (ar_parent_id) will have their parent IDs populated. After that, all page titles that were imported, deleted in February 2012 or earlier and also have at least one page undeletion log entry, or undeleted in January 2018 or earlier will automatically be added to a new table. Targets of merge and move log entries with sources in the table will also be (recursively) added to the table. One can still request for a page that was not automatically added to the table to be added manually if needed. Such requests are needed, for example, when one has a page with suppressed edits that were migrated from the Oversight extension.

If one finds a page from the table that has at least one revision that needs to have its parent ID fixed, then one should go to Special:FixParentIDs/Page title ("Page title" should be replaced with the page's actual title) and start implementing the required parent ID changes. After the changes have been saved, the authors of the affected revisions will then be notified of the changes. A log entry notifying of the change will also be created.

Discussion

  • Sample usage (from my comment on T223342):
The following example shows that for imported revisions, rev_parent_id might require fixing on both the source and the target wikis.
The subpage User:Calliopejen1/Bronces de Benín on the English Wikipedia was imported from the page Bronces de Benín on the Spanish Wikipedia. There is also a move comment that says "fusión de historiales", which, of course, is the Spanish translation of "merging histories". This together with the negative size diff for the edit at 23:23, 21 July 2010 by Xqbot suggests that we should fix rev_parent_id for some of the edits between the 2 moves by Millars. The strategy is to separate the revisions containing <nowiki>'ed categories from the ones that contain live categories until the Xqbot edit.
The following fixes should therefore be done:
  • Revision ID 526466627 on enwiki and revision ID 38879757 on eswiki should both be fixed to have rev_parent_id 0 to make them show as page creations on Millars' contributions on the respective wikis.
  • Revision ID 526466630 on enwiki and revision ID 38881146 on eswiki should be fixed to have rev_parent_id 526466626 and 38879182 respectively.
  • Revision ID 526466631 on enwiki and revision ID 38884289 on eswiki should be fixed to have rev_parent_id 526466629 and 38881051 respectively.
  • Revision ID 526466643 on enwiki and revision ID 38956953 on eswiki should be fixed to have rev_parent_id 526466630 and 38881146 respectively.
  • Revision ID 526466644 on enwiki and revision ID 38975104 on eswiki should be fixed to have rev_parent_id 526466641 and 38922039 respectively.
  • Finally, the rest of the revisions' parent IDs should be left unchanged on both wikis.

— The preceding unsigned comment was added by GeoffreyT2000 (talk)

As just a side note, for anyone not technical, this is going to be incredibly difficult to understand and probably should be rewritten for that audience if it is going to get any votes. --Rschen7754 19:23, 16 November 2020 (UTC)[reply]
I honestly think this is going to get basically 0 votes. It's an exceedingly technical change that impacts <1% of all editors. --Izno (talk) 21:48, 16 November 2020 (UTC)[reply]
This is very difficult to understand even for someone who does understand technical stuff. As far as I can tell, there's two things being proposed here. One of them is "fix T38976", which is a reasonable proposal, and the other is "support pages containing parallel histories", which doesn't seem to solve a real problem (just don't do that, and you can fix any pre-existing pages with parallel histories using selective undeletion without any new code being written). Also note that w:User:Calliopejen1/Bronces de Benín no longer exists. * Pppery * it has begun 00:14, 17 November 2020 (UTC)[reply]
@Pppery parallel histories would be incredibly useful for a whole host of things. See T113004 for some. But it's a massive undertaking that would probably be outside the wishlist's scope. Tgr (talk) 03:37, 13 December 2020 (UTC)[reply]

Voting

Watchlist of users

  • Problem: Vandals are hard to follow, as are new users.
  • Who would benefit: Administrators and patrollers, through better control of new users and vandals.
  • Proposed solution: At the very least, for administrators, it would be necessary to have a "user watchlist".
  • More comments: When we are reviewing articles, we often see a user who has done something wrong, or has entered wrong data. We write a message on their talk page, and we watch the edits they make for a while, to see if their attitude has improved. However, when you have written to many users, it is almost impossible to follow them all, so a watchlist of each user's edits would be useful, at least for administrators.
We could have a star on users, as we do with articles, and a "user watchlist" where a list of edits appears, separated by user, to see what each one has done. When we consider that a user is doing their job well, we can remove the star, just as we do with articles. Thanks.
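
Mechanically, such a user watchlist is just a merge of several contribution feeds into one chronological list. A sketch in plain Python (the data shape is hypothetical; a real tool would fetch contributions from the MediaWiki API):

```python
from heapq import merge

def user_watchlist(contribs_by_user, limit=50):
    """contribs_by_user: {username: [(timestamp, page, summary), ...]}
    with each user's list sorted newest-first. Returns one combined
    newest-first feed across all watched users."""
    feeds = (
        [(ts, user, page, summary) for ts, page, summary in edits]
        for user, edits in contribs_by_user.items()
    )
    # merge() preserves each feed's order; reverse=True keeps newest first.
    return list(merge(*feeds, reverse=True))[:limit]
```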

Discussion

  • This was previously declined (5 whole years ago) as it was seen to be a major tool in 'harassment'. Basically, it would be trivial to write a script that does just this (User:MER-C just did that): collect the last ### edits of a set of users, sort them out, and display them in a watchlist-like-manner (that could be a user-script in my on-wiki .js, it could be on my own computer and no-one would see that I would 'script' except maybe CheckUsers). For N00bs: I can do the same things with a folder of bookmarks of user-contributions for these users and just every morning check them (as I do with my watchlist). Yes, having this might be a nice tool to harass users ('what have my favourite victims been up to') but that can be done anyway in undetectable ways.
Having this tool has some big benefits in 'following problematic users' (suspect spammers, socks, COI editors, etc.). To mitigate harassment one could make the user-watchlist-lists visible to users with advanced rights (e.g. that admins can see who others are following, though probably better only CheckUsers/Stewards can see it).
I think it is time that the community re-assesses this, and possibly we have a community discussion on solving possible concerns. --Dirk Beetstra T C (en: U, T) 06:29, 25 November 2020 (UTC)[reply]
@Samwilson: and @Beetstra: Thanks a lot for your feedback. Have a nice day ;) --Vanbasten 23 (talk) 09:20, 25 November 2020 (UTC)[reply]
@Vanbasten 23 and Beetstra: I've moved this back to its category, so it can be voted on. (I'm using my staff account now, but am the same person as User:Samwilson above.) —SWilson (WMF) (talk) 22:25, 1 December 2020 (UTC)[reply]
The right to use this feature could be limited to admins (and editors), or it could be withdrawn from users who use it abusively. -- Aspiriniks (talk) 21:24, 8 December 2020 (UTC)[reply]
  • This ought only be implemented if for administrators only, and only usable on accounts with under a certain number of edits, as Bilorv suggests. Though I do often think of such a tool for keeping an eye on vandals, it could so easily be turned into a harassment tool. CaptainEek Edits Ho Cap'n! 03:32, 10 December 2020 (UTC)[reply]
    • @CaptainEek: I strongly doubt whether this would become a harassment tool if you give visibility to the watchlist (e.g., admins can plainly see who you are watching, or have access to a Special:WhoWatchesWho which is bidirectional, or even a Special:WhoWatchesMe .. meh, even add an 'approve' for named accounts if someone wants to follow them - then in case of (group-)mentoring they will likely approve, but for regulars that will not happen, or they can later 'unapprove' if they at first did not have a problem with it). I currently have a watchlist of Wikipedia pages of my interest, which contains some filters and user contribs. This tool is literally nothing else than that. Whether I have 3 users that I 'follow around' through three clicks on bookmarks, or one tool where I have all of that on one page, is all the same. I really see no difference between harassing an editor by looking at their Special:Contribs every hour and doing so with this tool every hour. Can someone please explain to me how this is (becoming) a harassment tool (pinging oppose !votes, I'd like to hear your reasoning: @MarioSuperstar77, Keepcalmandchill, Putnik, and NMaia:)? --Dirk Beetstra T C (en: U, T) 05:47, 10 December 2020 (UTC)[reply]
      • I certainly think some checks and balances would be necessary; I like your suggestion of needing approval, or making the user watchlists public/visible to admins. My fear is that both our vandals and our veterans could use this tool for ill. Vandals could throw together a list of the admins they hate, and then use it to follow and harass them, or keep an eye on who is active so as to evade them. Much harder to do with 50 contribs pages, but easy if it's in one place. For veteran users, I can imagine it worsening feuds and disagreements and encouraging edit stalking. When two users don't quite like each other, and are on each other's watchlist, they would be more likely to follow the other around, and I can see a lot more IBans being handed out. I'm not fully against this, I too would like to be able to put vandals and hooligans on a sort of watchlist, but I do think a lot of preparation/rules are needed to ensure it doesn't become a tool of evil. CaptainEek Edits Ho Cap'n! 18:57, 10 December 2020 (UTC)[reply]
  • To further alleviate the concern of harassment: make it a right, ‘canwatchusers’, that is not standard given out to anyone. People who can show a use for it on a wiki can then be given this ability (mentors, spam-fighters), or wikis can decide never to give it out. The right can be temporary for mentors, who can only use it for the time of an event. —Dirk Beetstra T C (en: U, T) 04:29, 20 December 2020 (UTC)[reply]
  • Needs to be re-thought - for mentors/education/etc., allow users to specify who can "watch" them and for how long. For vandalism, make it a user right, perhaps one that is NOT bundled into the "admin/sysop" bit. I have no problem with Stewards, Oversighters, ARBCOM, and others at that level being able to watchlist editors and make decisions about who else can watchlist people for vandalism control. On projects with many sysops/admins like en-wiki, I would be reluctant to include this in the "admin" bucket of rights. On projects with fewer admins, this might make more sense. There should also be a "global-watcher" user right that is included in steward and some other highly trusted global user groups. I can also see value in having a "bot" account have this user right to prepare off-wiki reports for use by highly trusted abuse-fighting editors. Davidwr/talk 15:29, 21 December 2020 (UTC)[reply]

Voting

UNDO (rollback) by UserName

  • Problem: Sometimes a user makes destructive changes to many articles in one day (like a bot), and admins (patrollers) must undo the user's actions article by article (typically 5-10 articles, but sometimes up to 30 per day); it is an expensive waste of time.
  • Who would benefit: Admins and patrollers
  • Proposed solution: In the "User contributions" list, provide a special UNDO button for admins (patrollers). Perhaps allow choosing to undo by a day's actions of the user, or by size of change (for example: from 300 bytes to 325 bytes, with diffs of at most 1 KB).
  • More comments: It needs consensus by 2 admins or 3 patrollers (for example, this means that 2 admins must press the special UNDO button for the user's actions).
  • Phabricator tickets:
  • Proposer: Всевидяче Око (talk) 22:15, 17 November 2020 (UTC)[reply]
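
The selection step the proposal describes (pick a user's edits by day and by size of change) could be sketched as a plain filter over contribution records; the record format is invented for illustration:

```python
def select_for_mass_undo(contribs, day, min_bytes=None, max_bytes=None):
    """contribs: list of dicts with 'date' (YYYY-MM-DD), 'page', and
    'size_diff' (bytes added, negative for removals). Returns the
    revisions an admin would mass-undo: same day, absolute size
    change inside the chosen window."""
    selected = []
    for edit in contribs:
        if edit['date'] != day:
            continue
        size = abs(edit['size_diff'])
        if min_bytes is not None and size < min_bytes:
            continue
        if max_bytes is not None and size > max_bytes:
            continue
        selected.append(edit)
    return selected
```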

Hi! I read Manual:Combating vandalism Всевидяче Око (talk) 10:55, 18 November 2020 (UTC) (added later)[reply]

Discussion

Hi, Nihiltres! Extension:Nuke comes close, but it deletes all articles in a given range; we need to undo (or roll back) the same kind of changes. Still, that code could be a useful starting point. Thx)
With respect -Всевидяче Око (talk) 10:10, 18 November 2020 (UTC)[reply]
Hi, Camouflaged Mirage! In Manual:rollbackEdits.php or mass rollback.js I don't see a range option (size of change, date of change; only user or IP). Both scripts revert all changes(. We need a slightly more flexible script) with partial reversion.
With respect -Всевидяче Око (talk) 10:55, 18 November 2020 (UTC)[reply]
Hello, if it's partial deletion (or reversion) of an user / IP contributions, I will suggest manually as if we need to quantify a specific range, it's more likely we have to look into the edits, so why not do it by hand. Best, Camouflaged Mirage (talk) 16:14, 18 November 2020 (UTC)[reply]
Hi! "..destructive changes... 30 per day... must undo user actions article by article - it's an expensive waste of time". If you do this from time to time, you know what it is like. ) With respect -Всевидяче Око (talk) 01:30, 21 November 2020 (UTC)[reply]
If you read above: "It needs consensus by 2 admins or 3 patrollers..." - that is one possible variant; you can propose yours too. With respect -Всевидяче Око (talk) 14:28, 13 December 2020 (UTC)[reply]
Yes, it is partially possible - but not entirely. For example, when 1 administrator or 1 patroller has seen a revision and acted on it, other administrators or patrollers could see it as a notification message (or as something like what you suggest). And we need to remember to act ethically. With respect -Всевидяче Око (talk) 11:11, 19 December 2020 (UTC)[reply]

[Translated from Spanish:] Sorry for asking this, but I would like to help you control the accounts, etc., the way your bots do. Could you tell me if that is possible? Thanks.

Voting

Automatically detect edit-warring

  • Problem: Bringing edit-warring to the admins' attention currently requires editors that have the time to go through a fairly bureaucratic process.
  • Who would benefit: Editors who wish edit-warring to be more strictly enforced against, and administrators/other editors who wish to enforce it more strictly or just arbitrate conflicts.
  • Proposed solution: Add a function that automatically compares every new revision against all those of the last 24 hours, and then flags to the administrators or other editors if it detects the same text being added or removed more than three times (as per the three revert rule).
  • More comments: There could still be a separate process for making formal complaints, and this could simply be a way to draw attention to a brewing conflict, preventing things from getting out of hand earlier. Further comment: it seems that the current process for edit-warring complaints is extremely inefficient, with complaints being archived without any attention having been given to them. Therefore, perhaps this function should automatically give short blocks to first-time offenders (24 hrs) and longer ones to repeat offenders (72 hrs and more). I think having complaints not addressed at all is extremely disappointing, and perhaps a reason for editors to stop contributing.
  • Phabricator tickets:
  • Proposer: Keepcalmandchill (talk) 04:19, 17 November 2020 (UTC)[reply]
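
The proposed check (flag a page when the same text is added or removed more than three times within 24 hours) could be sketched with Python's difflib; the revision format here is a deliberate simplification, not a real MediaWiki data structure:

```python
from collections import Counter
from difflib import unified_diff

def detect_edit_war(revisions, window_hours=24, threshold=3):
    """revisions: list of (timestamp_hours, text) for one page, oldest
    first. Flags the page when the same diff line is added or removed
    more than `threshold` times within the trailing window."""
    changes = Counter()
    cutoff = revisions[-1][0] - window_hours
    for (t0, old), (t1, new) in zip(revisions, revisions[1:]):
        if t1 < cutoff:
            continue
        for line in unified_diff(old.splitlines(), new.splitlines(),
                                 lineterm=''):
            # Count added/removed lines, skipping the '---'/'+++' headers.
            if line.startswith(('+', '-')) and not line.startswith(('+++', '---')):
                changes[line[1:]] += 1
    return any(count > threshold for count in changes.values())
```

Adds and removes of the same line are counted together, since a revert war toggles the same text back and forth.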

Discussion

  • This seems like something that could be easily automated and would make it a lot simpler to detect edit-warring and hopefully keep it from festering. Jessamyn (talk) 04:36, 17 November 2020 (UTC)[reply]
  • This should be a lot easier to build now thanks to the reverted edit tag. You might even be able to write a Quarry query for it. MusikAnimal talk 05:36, 17 November 2020 (UTC)[reply]
  • I've seen some research on this topic that is interesting (unfortunately the results of which are predominantly not used by wiki editors). --Izno (talk) 19:51, 17 November 2020 (UTC)[reply]
  • This shouldn't be too hard, but I do think you'd need to take some steps to avoid major false positives. I don't know how general the auto-tagging system is, but with that en-centric bias noted, you'd want to have edits with "potential vandalism/blanking/etc" tags discounted, so edits reverting them wouldn't count; such pages are already being handled, and pulling further attention to them is unneeded. Nosebagbear (talk) 15:03, 18 November 2020 (UTC)[reply]
    • There are various other issues with this, as well. For an enwiki example, reverting against consensus doesn't really need to be reported to ANEW. Bots wouldn't really need to report for 3RRNO exceptions either. As such, I think it's best this be done on a per-project level, with a bot, if needed. ProcrastinatingReader (talk) 06:46, 1 December 2020 (UTC)[reply]
  • I'm going to move this to the Admins & patrollers category, since the proposal is more about moderation than it is about improvements to editing. Hope this is okay, MusikAnimal (WMF) (talk) 20:22, 24 November 2020 (UTC)[reply]
  • This should probably be implemented at the community/bot level, at the discretion of each local community. Now that we have the reverted edit tag, each local community can action automatically-detected edit warring as it sees fit without further WMF technical effort. Best, KevinL (aka L235 · t) 06:39, 1 December 2020 (UTC)[reply]
  • Support with reservation. I agree with the problem, but I am not sure about the solution. It's not difficult to create automatic detection, but once it's implemented, it's also very easy to bypass and circumvent. It motivates warriors to change their behavior and makes them harder to detect. Xinbenlv (talk) 22:24, 8 December 2020 (UTC)[reply]

Voting

Ability for sorting Special:UnreviewedPages by time

  • Problem: It is a long-standing request from patrollers/reviewers that items on Special:UnreviewedPages should be sortable by creation date, so that the very old items can be handled with priority, campaigns can be organized for them, etc. The list is sorted alphabetically now, which is not very useful.
  • Who would benefit: reviewers/patrollers, admins
  • Proposed solution: Offer an option to sort the list by time
  • More comments:
  • Phabricator tickets: T45857
  • Proposer: Samat (talk) 14:10, 29 November 2020 (UTC)[reply]

Discussion

Voting

Discourage External Spam that Exploits Wikipedia

  • Problem: Many spamming scammers use Wikipedia references to add credibility to their scams (usually 419s).
  • Who would benefit: Wikipedia would have less spurious traffic reflecting the interests of scammers and potential victims would benefit by being warned against scams. Wikipedia's reputation would be enhanced by taking a clear position against criminal use of Wikipedia.
  • Proposed solution: A flag option to create a banner saying: "This article is currently being referenced by scam email. The link to Wikipedia should not be taken as evidence of the truth or accuracy of the email which references this article." The basic implementation would allow someone to report the spam email to Wikipedia and select the kind of crime it represents. Most of the examples I have seen appear to be 419s, so in that case the warning message should include a link to the article about 419s, but others are phishing scams or various other crimes. The warning should persist for a while to allow for delays in people reading their email, but the basic idea is that spammers would stop abusing Wikipedia because their scams would be quickly negated at the Wikipedia end. The person reporting the spam email could trigger the warning, or it could go through a review process to confirm its spamminess.
  • More comments: This kind of abuse has been going on for many years and "Live and let spam" is obviously great for the scammers. If they weren't making money by exploiting Wikipedia's reputation then they would stop doing it.
  • Phabricator tickets:
  • Proposer: Shanen (talk) 04:06, 25 November 2020 (UTC)[reply]

Discussion

  • @Shanen: This sounds like it could be a template that's placed on such articles (such as those used when articles are the subject of active news events etc.), and not something that requires anything technical or software development. Am I understanding it correctly? —Sam Wilson 06:07, 25 November 2020 (UTC)[reply]
@Samwilson: [Is that the correct way to flag it to your attention?] Sorry, but even after many years of making minor contributions to Wikipedia, I do not understand Wikipedia well enough to be sure that I understand your terminology accurately enough to answer correctly. I feel the answer is "Sort of yes", but the "template" would need a couple of slots for filling in the blanks. As I would imagine a possible implementation, the person reporting the spam would submit the email, preferably with the headers, and the Wikipedia-side server would parse it for Wikipedia links or references. Then it would probably ask the person who is submitting the report for help regarding the category of crime, unless the report was passed to someone else, perhaps a member of the Wikipedia abuse team. After that, the warning notification would be added to the front of the article in question (for the appropriate period of time, perhaps two weeks in the case of an email-based scam). Not sure if this is the best way to reply to you. Maybe I should try to include a copy in your personal talk page? Shanen (talk) 16:31, 26 November 2020 (UTC)[reply]
@Shanen: Yep, the {{re}} template is a great way to get people's attention; no need to separately mention on their talk page. :-) Thanks for clarifying. This sounds like there are some technical development tasks here, so I think it's fine to proceed to voting (the week after next). —Sam Wilson 02:43, 27 November 2020 (UTC)[reply]
  • I feel that this is likely to cause individuals to distrust it to a fairly high degree, even though it's by no means the article's fault. It also needs someone to review the emails, which would drench OTRS (speaking as an agent, I at least wouldn't want this added), a standard process for timed removal, and so on. I would be against its addition. Nosebagbear (talk) 15:46, 2 December 2020 (UTC)[reply]
  • This is an encyclopedia. It might be similar to spoiler warnings, which have since been discontinued. Firestar464 (talk) 05:01, 9 December 2020 (UTC)[reply]

Voting

Enable the ability to rollback to specific version

  • Problem: The current rollback function cannot revert to a specified version (one that wasn't deleted or oversighted). Extensions and user scripts are required to do so.
  • Who would benefit: Rollbackers
  • Proposed solution: Not specified; presumably a "(rollback to version)" button or something similar.
  • More comments:
  • Phabricator tickets:
  • Proposer: Yining Chen (Talk) 10:34, 17 November 2020 (UTC)[reply]
  • Translator: 1233 T / C 13:54, 22 November 2020 (UTC)[reply]

  • Problem: The current rollback function cannot roll a specified page back to a known version that has not been deleted or suppressed.
  • Who would benefit: Users with rollback rights
  • Proposed solution:
  • More comments: It could be a useful feature. Also, I am not sure whether this proposal is within the scope of this survey; if not, I apologize for the disturbance. My understanding of Community_Tech#Scope is not very good.
  • Phabricator tickets:
  • Proposer: --Yining Chen (Talk) 10:34, 17 November 2020 (UTC)[reply]

Discussion

Voting

Mark all consecutive edits by same user as patrolled

  • Problem: Users (especially unregistered and new ones) often make a lot of consecutive edits that can be rolled back or checked in a single history diff with one click, but they will remain unverified unless every single edit is marked as such one by one, which is tedious. As a result, no patroller uses this feature on my wiki.
  • Who would benefit: patrollers, mostly.
  • Proposed solution: "mark N edits as verified" link or something like that, in recent changes, history and such, just how it's available for rollbacks.
  • More comments:
  • Phabricator tickets:
  • Proposer: Wedhro (talk) 11:02, 22 November 2020 (UTC)[reply]
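The grouping this asks for is simple to express. As a minimal sketch (the function name and input format are invented for the example), here is how a page history collapses into runs of consecutive edits by the same user, each of which could then be patrolled with one action:

```python
def group_consecutive_edits(history):
    """Group a page history (oldest-first list of (user, rev_id)
    tuples) into runs of consecutive edits by the same user, so a
    patroller could mark a whole run as patrolled at once."""
    groups = []
    for user, rev_id in history:
        if groups and groups[-1][0] == user:
            # Same user as the previous edit: extend the current run.
            groups[-1][1].append(rev_id)
        else:
            # Different user: start a new run.
            groups.append([user, [rev_id]])
    return [(user, revs) for user, revs in groups]
```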

Discussion

Voting

Improve display of multiple IPv6 contributions by a single editor

  • Problem: A single IP editor using an IPv6 connection can have multiple IP addresses, which can change daily, usually without them knowing. However, all those assigned IP addresses fall within "the /64 range", and so all of these contributions belong to just that one user. Unfortunately, Special:Contributions will only display edits from just one of those IP addresses, unless specifically commanded to show them all. The full picture of an editor's contributions is therefore often missing. Unless they know that one can simply add /64 to the end of an IPv6 url at Special:Contributions, admins, vandalism patrollers and any editor interested in the full range of a user's edits are unable to see them, and probably unaware they can be found. The picture of editing is therefore often seriously incomplete. Warnings for bad faith edits can easily be scattered across multiple IP addresses, and remain hidden from view unless one knows the /64 trick. Even if one IPv6 address gets blocked by an admin, the user can still edit on a more recently assigned address unless the full range of addresses is blocked (explained in plain English here).
A further problem arises when admins or patrollers work from a mobile phone. From a PC one has to click and manually add '/64' to the end of an IPv6 url in a browser window every time one needs to check an IPv6 editor's full contributions. But on a mobile phone in Desktop View (which most serious editors use) it is not feasible on at least some phones. On an iPhone (and probably many others), even in landscape mode, the url runs off the screen, making insertion of '/64' onto the end of that url quite impossible. Anti-vandalism patrolling of IPv6 users on a mobile phone is therefore severely restricted or impossible to achieve fully, because there is no way to display the full contributions of an IPv6 user across the whole /64 range.
Even if IP anonymisation is implemented, there will still be a need to view all of one user's past contributions and talk page warnings related to multiple, changing IPv6 addresses. Knowing their actual IP addresses is not essential to this proposal; it's about seeing them all. Nick Moyes (talk) 15:33, 17 November 2020 (UTC)[reply]
  • Who would benefit: All admins, anti-vandalism patrollers, and any editor interested in understanding the range of editing contributions made by the huge number of users who have changing IPv6 addresses.
  • Proposed solution: Provide a clear and very visible button on the Special:Contributions page to permit the easy displaying of all IPv6 edits on the /64 range by that one IP user. e.g. a big, easy-to-click button labelled: Display full range of IP edits by this user which appears whenever IPv6 contributions are shown.
  • More comments: Related to this issue is the inability to see any previous warnings or discussion messages left for a user at an active IPv6 talk page address within the /64 range because it is no longer currently in use. A lot of separate IPv6 talk pages need to be individually opened up to see all those messages. Equally, the IPv6 editor won't be able to see earlier warnings or notices left for them. This closely-related proposal attempts to address that issue.
  • Phabricator tickets:
  • Proposer: Nick Moyes (talk) 12:08, 17 November 2020 (UTC)[reply]
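The "/64 trick" described above can be expressed with Python's standard ipaddress module; a button on Special:Contributions would essentially perform this normalization behind the scenes. The helper names here are hypothetical, chosen for illustration:

```python
import ipaddress

def ipv6_contribs_range(addr, prefix=64):
    """Return the /64 network covering a given IPv6 address, i.e.
    the range a 'display full range of IP edits' button would query."""
    # strict=False zeroes out the host bits below the prefix.
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network)

def same_editor(addr_a, addr_b, prefix=64):
    """True if two IPv6 addresses fall in the same /64 allocation,
    and thus (per the proposal) likely belong to the same editor."""
    return ipv6_contribs_range(addr_a, prefix) == ipv6_contribs_range(addr_b, prefix)
```

For example, `same_editor` shows why two superficially different addresses can be a single person whose warnings end up scattered across several talk pages.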

Discussion

Voting

New on-wiki block appeal process

  • Problem: Currently there are several on-wiki and off-wiki block appeal processes. Each has deficiencies:
  1. Talk page
    • This proposal will not replace the talk page as the primary venue for appeals in the foreseeable future. It is dedicated to situations where the talk page is unusable or inappropriate.
    • Globally locked users cannot log in and use this venue without evading blocks.
  2. EmailUser
    • This may only make a specific admin (usually the blocking admin) aware, so the appeal may not receive attention from unbiased admins. In addition, this admin may be inactive.
    • Some wikis have a mailing list for appeals, but a mailing list is less usable than a request-tracker system.
    • EmailUser is not publicly logged.
    • There is no case list available.
  3. UTRS
    • Only exists on a few wikis, such as the English Wikipedia.
    • Because it is hosted in WMF Labs, UTRS cannot automatically fetch the block status of the user or IP involved (the user must provide it). There is no integration with the CheckUser extension either.
    • There is no on-wiki way to query past appeals, so admins must leave a note on the user talk page (such as this).
    • There is no integration with the notification system (i.e. admins do not get a notice if someone they blocked has submitted an appeal).
    • The case list is updated by a bot and is not real-time.
  4. IRC
    • This requires some admins to be online and actively watching IRC messages.
    • Appeals are not publicly logged.
  5. For global locks, the primary route of appeal is the steward OTRS queue.
  • Who would benefit: Blocked users who are not suitable to use user talk pages to appeal, especially:
  1. Users with both talk page and email access revoked. (Users may be banned from this process, but I propose that local admins, as opposed to stewards and ArbCom members, can only issue such bans for up to one year.)
  2. Users blocked in a wiki with few active admins. (This may be handled by stewards and global sysops.)
  • Proposed solution: We introduce a new special page Special:BlockAppeals and allow users to appeal through this page. This page works similarly to UTRS (appeals and comments are private), though the log of status changes on individual tickets is public (i.e. admins may provide a public summary when closing an appeal). See also the proposed workflow.
  • More comments:
  • Phabricator tickets:
  • Proposer: GZWDer (talk) 00:04, 18 November 2020 (UTC)[reply]

Discussion

  • With respect, there are a few misconceptions you have about UTRS. This may be because a lot of things have changed in the switch from Version 1 to Version 2. That said, I feel a complete replacement is a waste of volunteer time and foundation money. First, UTRS was designed initially for only English Wikipedia. Since then, I've only received requests from stewards and ptwikipedia to have systems setup for them. Because of different types of communities, the requirements for the software are different. We are currently working on internationalization and it would give us the ability to scale to any wiki that wishes to have it given that they can show a consensus that their community wants it.
UTRS does automatically obtain the block information based on user input. We can't automatically call blocks for IPv6 because the Cloud Services has refused us IPv6 for years despite multiple asks, but may now be giving it to us. If this is the case, we'd be happy to integrate the use of that into the tool. Even then, you can't use OAuth to gain information on blocked users because OAuth refuses to authenticate blocked users even for identification purposes only.
While UTRS does not integrate with CU, it has the same functionality as CU. Any checkuser can sign in with their OAuth credentials and look at the details of any appeal where data has not expired. If you think there should be a generic interface to search it from, that can be done with a feature request. Either way, external software is never meant to integrate into the database of another software. I also have yet to see a proper use case for that functionality. Typically most CheckUsers don't care about specific actions taken, as the data is usually sufficient already and CU isn't meant to be magic pixie dust.
There is no on-wiki way to query past appeals, you are correct, but that is partially because the English Wikipedia community outright rejected making appeals publicly visible; when I did make most of them visible with version 2, it forced a complete code reversion. I'd be happy to make an on-wiki master list of sorts if that would help alleviate the concerns. We also still plan to return to talk-page posting about appeals; it's just something that still has to be coded.
There is no integration with the notification system to notify administrators of their blocks because no administrator has asked for it. It's also never been implemented even in the form of a bot pinging an admin. I know personally I would never want such pings, as policy is we can't review to decline ourselves, we can only unblock. This would also allow socks to perform targeted harassment of admins with each and every sock they create.
As for the appeals list not being in real time: again, no one has ever asked for a real-time version, and the one that handles the on-wiki appeals has never been real-time either.
I also would like to note that for the majority of the time, we have only ever worked with two developers at most. We are willing to onboard others who are willing to step up, instead of forcing the WMF to pay for staff to rewrite. I also started to consider a grant for this year, but the process for getting one ground to a halt in June and there is no indication the WMF intends to return to it. So a fair chunk of the deficiencies that exist are fallout from WMF administrative decisions, and I don't see how the WMF replacing the tool would fix that.
Overall, a lot of the complaints about UTRS listed above are not things that have been requested or expected of the tool, and replacing a system before fixing it has even been tried is frankly a slap in the face of volunteer time and effort, and even of the WMF money for the travel to hackathons that created this tool in the first place back in 2012. -- Amanda (aka DQ) 08:24, 18 November 2020 (UTC)[reply]
  • UTRS is imperfect, but it is adequate. It does not need replacement. Any developer effort spent replacing it would be better spent refining it. Please see AmandaNP's response for fuller details. Emailing the blocking admin needs to be deprecated. Email requests cannot be scrutinized as we are discouraged from revealing email contents. UTRS can be scrutinized if only by a group that has been vetted, which is actually a good thing. I don't like IRC and I never will. Secret and secretive sub rosa back channel. I prefer requests be in the open. There is a final avenue of appeal apart from TPA and UTRS. Appellants can email the ArbCom. --Deepfriedokra (talk) 08:43, 18 November 2020 (UTC)[reply]
As UTRS affects mostly the English Wikipedia, and as it seems evident that UTRS is adequate to English Wikipedia's needs, perhaps we can close this discussion as moot and start a new discussion for those other projects that have unmet needs. As to transparency, it is easy enough to post a decline note and link on the user's talk page (as I do). Posting their appeal to their talk page might defeat the purposes of maintaining confidentiality and preventing disruption. I will go on to reiterate that developers that might create a replacement for UTRS would be better utilized optimizing it.--Deepfriedokra (talk) 01:32, 19 November 2020 (UTC)[reply]
  • I concur with AmandaNP and Deepfriedokra. We don't need something new. Agree that emailing the blocking admin should be deprecated. 331dot (talk) 10:08, 18 November 2020 (UTC)[reply]
    • At least since 2014 I have had some concerns about UTRS: not those listed above, but a lack of transparency (everyone should be able to see a list of current and past appeals, and for declined appeals a summary of why they were declined). If the English Wikipedia does not want to replace UTRS, let's focus on smaller wikis with inadequate appeal processes (e.g. Commons does not have one).--GZWDer (talk) 11:05, 18 November 2020 (UTC)[reply]
  • In general, I would say that the policies of the different projects and languages of each wikiproject are rather different. Not to mention the different attitudes of each individual sysop everywhere. For users who end up blocked, I think most of them end up leaving the project for a while, at least. Especially vandals who might be young children, and then come back to the project later in life. I am not sure how many of these people want to be associated with a formerly blocked account. The unblock process is a little weird, to say the least, but this is due to the decentralized nature of wikis in general. It could use some positive changes, but I don't see much of a way of doing so without hurting the distributed power necessary for a decentralized project. Cocoaguytalkcontribs 16:43, 18 November 2020 (UTC)[reply]
  • I don't think creating a different version of UTRS, when UTRS is actively maintained, is a good idea. If wikis want a system like this, UTRS seems good enough. If UTRS could be integrated into the wikis more, like letting OAuth work for blocked accounts only for UTRS, in my opinion that would be a better use of WMF money and developer time. Perhaps something like this is needed for smaller wikis, but the proposed idea talks about a special page on the wiki. I would have thought that for global sysops a central place would be better (like a global version of that special page). I would have thought global sysops won't want to be checking many small wikis for unblock requests, so having a centralised system might be easier for them. Perhaps UTRS can be this centralised system, which is enabled for wikis that want it. Dreamy Jazz talk to me | enwiki 23:41, 19 November 2020 (UTC)[reply]
Perhaps UTRS can be extended with configuration options which allow wikis to opt in to having appeals and their responses public. Also perhaps UTRS can be merged into the WMF sphere, and WMF money + developers could be used to work on UTRS? Dreamy Jazz talk to me | enwiki 23:45, 19 November 2020 (UTC)[reply]

Voting

Allow global user rights to expire automatically

  • Problem: Unlike local user rights, which gained automatic expiry after the 2016 Community Wishlist, global rights don't expire automatically, creating a burden of keeping track of the expiry dates for the rights that need to be removed.
  • Who would benefit: Stewards, mostly
  • Proposed solution: When granting global user rights, add a new option to set how long the rights will last (maybe a drop-down menu?).
  • More comments: There is already a patch pending review on phab/gerrit. It'd need someone to adopt it and push it forward.
  • Phabricator tickets: phab:T153815
  • Proposer: —Thanks for the fish! talkcontribs 17:20, 28 November 2020 (UTC)[reply]

Discussion

Voting

Diffs to deleted content

Discussion

Not opposed to this, but a better tool to this end would be to have a bot look at all articles created at previously deleted titles and note when recreated content substantially duplicates the previously deleted content. BD2412 T 19:43, 8 December 2020 (UTC)[reply]

Voting

Provide user info in recentchange stream on EventStream platform

  • Problem: Patrollers are unable to get information about the user behind an edit without additional API requests.
  • Who would benefit: Patrollers
  • Proposed solution: Provide information about the user (registration date, edit count, groups) in the recentchange stream on the EventStream platform. This is very important for global counter-vandalism tools (SWViewer, etc.). It would greatly increase working speed and reduce load on the API. For example, if someone wants to patrol recent changes on multiple wikis, filtering edits currently requires sending an API request for every edit, which means hundreds of requests per minute! This overloads the API, the user's network, and the user's device (sometimes a mobile device), and ultimately slows things down. This has already been implemented in the links stream (see the Performer array), but not in the recentchange stream.
  • More comments:
  • Phabricator tickets: T218063
  • Proposer: Iluvatar (talk) 21:30, 18 November 2020 (UTC)[reply]
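To illustrate the saving, here is a sketch of the client-side filter a tool like SWViewer could run if recentchange events carried a performer object, as the links stream already does. The field names (user_groups, user_edit_count) mirror the links stream's Performer array, but their presence in recentchange is exactly what this proposal requests, so treat the event shape as an assumption:

```python
def should_review(event, max_edits=50, trusted_groups=("sysop", "patroller")):
    """Decide whether a recentchange event needs patroller attention,
    using user info embedded in the event itself - today this would
    require an extra API call per edit."""
    performer = event.get("performer", {})
    # Edits by members of trusted groups are skipped outright.
    if set(performer.get("user_groups", [])) & set(trusted_groups):
        return False
    # Otherwise, flag low-edit-count (or unknown) accounts for review.
    return performer.get("user_edit_count", 0) < max_edits
```

With this, hundreds of per-edit API lookups per minute collapse into a local dictionary check.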

Discussion

This seems like a good idea, it will make it much easier for tools and their users to filter recent changes. Reducing the load on the API is helpful too. the wub "?!" 23:41, 29 November 2020 (UTC)[reply]

Voting

Overhaul AbuseFilter

Discussion

  • Suggested addition: have an option to show a captcha as the result of a hit; with that, both for IPs and logged-in editors, you can check whether an account is genuine and not a spambot. It may be annoying for the regular editor who is acting in good faith, but it should stop most of the bot edits and could be used to thwart vandals. --Dirk Beetstra T C (en: U, T) 06:17, 17 November 2020 (UTC)[reply]
  • @Beetstra: Grants:Project/Daimona Eaytoy/AbuseFilter overhaul has started, which should greatly improve the situation. phab:T186960 for instance could be considered as making it more "modular", perhaps. The captcha task you mention is not on the Overhaul-2020 board, however. Would you like to refocus your proposal on the captcha bit? MusikAnimal (WMF) (talk) 06:34, 17 November 2020 (UTC)[reply]
    @MusikAnimal (WMF): maybe this request as it is gives a bit of an extra push (if it receives much support). The Captcha-task could be as a separate phab ticket linked to either. --Dirk Beetstra T C (en: U, T) 06:57, 17 November 2020 (UTC)[reply]
    @Beetstra: My concern is the proposal as written is very broad without a clear problem statement. It needs to be more definitive so we know what we're voting for. The CAPTCHA idea I think is great as a standalone proposal. One thing I can assure you is that AbuseFilter is no longer unmaintained. It's been under heavy development since Kaldari made that statement two years ago. MusikAnimal (WMF) (talk) 23:17, 17 November 2020 (UTC)[reply]
    As a user I find captchas extremely annoying. And usually there is discrimination involved. You're using Linux? That's suspicious etc. --Shoeper (talk) 14:52, 23 November 2020 (UTC)[reply]
    @Shoeper: in a way, the 'being extremely annoying' is true, but that is also part of the function. Spambots are a reality where the use of captchas does way more good than the bit of damage from catching good edits (we shouldn't be enabling this on all external links, just external links additions that follow a certain pattern). Basically you have to have a filter that results in >99% positive hits of a certain type before you enable the captcha. --Dirk Beetstra T C (en: U, T) 06:01, 24 November 2020 (UTC)[reply]
    @Beetstra: Some may believe they can achieve ">99% positive hits". In reality there are always faults, and regular users are annoyed. Not even Cloudflare, nor Google (who if not Google?), is able to reliably detect bots WITHOUT annoying users. One point in Wikipedia's favour is that it would only apply to edits, but I want to encourage extreme caution. If it is too annoying, users won't edit pages. And editor numbers are already a problem. How many non-technical editors does Wikipedia have, and how many women are editing Wikipedia at all? The last time I looked, these figures were worrying. Some people may just quit the page once a captcha occurs. On the other hand, professional attackers will find a way around the captchas. Basically, captchas are like DRM. You try to not allow someone to watch a video while the purpose of the video is to be watched. Regular users now have to live with the disadvantages and are annoyed, while copycats still find ways around and publish accessible versions on the web. The bottom line is that paying customers get a bad experience while it is still possible for professionals to copy the content. It will be the same on Wikipedia. Regular users are annoyed and professional spammers are going to bypass the captcha.--Shoeper (talk) 18:13, 28 November 2020 (UTC)[reply]
    @Shoeper: what I mean is that the underlying filter can reach high levels of positive hits on spam material. You then have the choice, outright block them by setting the filter to disallow (which is going to annoy everyone and most will walk away because they don't know how to get 'around it'), or you throw a warning in front of them (which is going to annoy all people, but certainly all spammers/bots will just 'click it' away and is hence useless), or you feed a captcha to them, which will (hopefully) annoy ALL bots, massively slow down spammers (so their accounts/IPs can be (b)locked), and yes, annoy quite some of the genuine editors. It is then a choice between a completely annoying filter, a totally useless filter, or one that will seriously slow down spammers (if the captcha is good enough) and annoy a part of the remaining editors (and a part will just shrug like I do when I get a captcha) in a way similar to the 'click here to continue'-box. And it beats massively the situation where ALL IPs and ALL unconfirmed editors have to solve a captcha for any 'new' link they add. --Dirk Beetstra T C (en: U, T) 11:45, 29 November 2020 (UTC)[reply]
  • I would support having captcha added as a result of a hit. Would be useful to prevent spambots. Dreamy Jazz talk to me | enwiki 23:46, 19 November 2020 (UTC)[reply]
  • As pointed out by MusikAnimal (whom I'd like to thank), this idea of AbuseFilter being unmaintained should really stop. Yes, the codebase is legacy; yes, it has limitations; and yes, it used to be buggy. But things have changed a lot in the past couple of years, and the current codebase is actually much better than those of several other deployed extensions. I also agree that the current proposal is too broad, and integrating a "SpamBlacklist module" inside AbuseFilter is not going to happen. These tools are different, and even if their scopes overlap, it doesn't mean one should be merged into the other. OTOH, I would strongly support the addition of captchas as an AbuseFilter action. The current AbuseFilter overhaul should hopefully simplify (and unbreak) the process for adding custom actions, so this should be definitely doable once the overhaul is over. Should this proposal be selected, I'd certainly be happy to help the team with the implementation. --Daimona Eaytoy (talk) 16:22, 20 November 2020 (UTC)[reply]
    @Daimona Eaytoy: there has for long been talk to make the AbuseFilter more modular (which, in my view, would mean that you have an AbuseFilter that has several modules that you can choose). Whether a 'spamblacklist' module should be part of that, or that that should be done differently (see Community_Wishlist_Survey_2021/Admins_and_patrollers/Overhaul_spam-blacklist) is up to them (though I insist that many of the AbuseFilter options would be great to have on the spam-blacklist, but I agree that the 'module' may be too different - but maybe a joint effort would be good). --Dirk Beetstra T C (en: U, T) 06:14, 23 November 2020 (UTC)[reply]

Voting

Mass page protection functionality in the checkuser tool

  • Problem: When checkusers find sockpuppets of blocked users, they have a list of tasks to do: (1) block the sockpuppet, (2) place a note on the sockpuppet's user page, (3) redirect the sockpuppet's talk page to the user page, (4) protect the sockpuppet's user page and talk page so that they can no longer be written to. Often a large number of sockpuppets is involved.
Tasks 1–3 can easily be done en masse using features integrated into the checkuser tool, but task 4 has to be done separately for each page involved.
  • Who would benefit: checkusers
  • Proposed solution: add an option to the checkuser tool for protecting the user and talk page of users when blocking them.
  • Further comments:
  • Phabricator tasks: -
  • Proposer: Pallertitalk 19:35, 25 November 2020 (UTC)[reply]

Discussion

  • As an enwiki CU, I don't know about huwiki practices but on enwiki (a) we don't generally protect user/talk pages for sockpuppetry and (b) we generally use user scripts to supplement these kinds of functions. (It's rare for me to use the built-in blocking tool from Special:Checkuser.) Perhaps a script would help more than a MediaWiki change? KevinL (aka L235 · t) 00:58, 11 December 2020 (UTC)[reply]

Voting

Having read both the Hungarian original and the English version, I don't think this is a machine translation. It'll be a long time before you can get a machine translation of this quality on such an arcane topic. But it's neither here nor there: the proposer has reworded the English version to make it easier to understand. --Malatinszky (talk) 18:41, 10 December 2020 (UTC)[reply]
That hurt, MusikAnimal :) --Tgr (talk) 03:48, 13 December 2020 (UTC)[reply]
@Tgr Hehe sorry!! I for one understood it perfectly, as did the rest of the team :) My comment above was to defend non-English speakers, in general. I just assumed it was machine translation here since that is what we used for most non-English proposals. I should have checked the revision history. MusikAnimal (WMF) (talk) 04:12, 13 December 2020 (UTC)[reply]

CentralAuth lock should trigger global autoblocks

  • Problem: When stewards lock vandal accounts, their IP addresses are not globally blocked, meaning the vandals can continue vandalizing from their IPs. Currently stewards need to go to loginwiki to check an IP and block it. It would save a lot of time if we didn't need to do that check afterwards.
  • Who would benefit: Stewards
  • Proposed solution: Automatically block the IPs of a locked account
  • More comments:
  • Phabricator tickets: T19929
  • Proposer: Stryn (talk) 11:07, 17 November 2020 (UTC)[reply]
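
The requested behaviour, including the opt-out raised in the discussion below, can be sketched as follows. This is an illustration of the desired logic, not actual CentralAuth code; the function and action names are invented.

```python
# Hypothetical sketch: locking an account optionally triggers global
# blocks on its known IPs, with an opt-out for cases such as
# departing-staff locks where the person remains a welcome volunteer.
def lock_account(username, known_ips, autoblock=True):
    """Return the list of actions performed; IPs are blocked only if
    autoblock is left enabled."""
    actions = [("lock", username)]
    if autoblock:
        actions += [("globalblock", ip) for ip in known_ips]
    return actions

print(lock_account("Spambot123", ["203.0.113.7"]))
# [('lock', 'Spambot123'), ('globalblock', '203.0.113.7')]
print(lock_account("Ex-staffer", ["198.51.100.4"], autoblock=False))
# [('lock', 'Ex-staffer')]
```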

Discussion

  • Good idea. This proposal is something I would have liked to raise as well. Camouflaged Mirage (talk) 11:08, 17 November 2020 (UTC)[reply]
  • I would want to see the ability to enable autoblocks when locking. Dreamy Jazz talk to me | enwiki 23:54, 19 November 2020 (UTC)[reply]
  • Your idea would save perhaps 80% of the stewards' work on this, but I don't fully agree with implementing "automatically block an IP of a locked account". Instead, we could have a checkbox "Block the IP globally" during global (un)lock; if any of those accounts need to be unlocked, the checkbox would be cleared. Why do I propose this? Some Wikimedians work for the WMF for a contract period, and when their contract ends their account is also locked as "no longer with WMF". If the IP of a locked account were blocked automatically, those good users would have to find exceptional workarounds for their volunteer accounts. Regards, ZI Jony (Talk) 17:13, 21 November 2020 (UTC)[reply]
  • This is a good idea provided that, similarly to local blocks, there is an option to disable autoblocks. This shouldn't hamper the overall request though. Best, KevinL (aka L235 · t) 06:41, 1 December 2020 (UTC)[reply]
  • Caution needed: a single IP address is often used by multiple people. An obvious example is when a device is used by multiple people in the same family, university, business etc. Another example is when multiple machines in the same institution (or with the same ISP) use the same gateway. I understand the reasons for wanting autoblocking, but in the past I have seen entire networks of affiliated schools using the same gateway being blocked from editing as a result of an unknown user (probably some unknown student/ex-student from an affiliated school) getting the school's/schools' IP address blocked (possibly years ago), causing havoc for teachers. I believe this problem still continues to happen (and needs to be fixed). Autoblocking, without some form of safeguard to prevent this, will compound the problem. Can anybody comment on safeguards?--Fh1 (talk) 22:38, 15 December 2020 (UTC)[reply]

Voting

Improve anti-spam mechanisms

  • Problem: "Wikimedia's captchas are fundamentally broken: they keep users away but allow robots in" (T158909). This was sadly true in 2017, and so it is in 2020 (T241921). While a proposal to enable better ones exists (T141490), its implementation is being delayed due to lack of testing/metrics. Year after year, stewards and other volunteers spend much of their time blocking spambots and cleaning up after them. Every month, thousands of spambots that should never have been allowed to register in the first place have to be manually blocked and cleaned up after by stewards and administrators. Moreover, this abusive spambot registration occurs mostly on small and scarcely watched wikis. While the global SpamBlacklist and AbuseFilter are enormously helpful when it comes to preventing spam edits, we could do better than that and prevent the registrations in the first place. We need a long-term strategy that spares volunteers this continuous hindrance. Existing proposals (in addition to those mentioned above): Revamp anti-spamming strategies and improve UX (2015), Automatically detect spambot registration using machine learning (2017) (#aicaptcha), enable MediaWiki extension StopForumSpam (Phabricator workboard · Beta Cluster deployment request (2017)). CheckUser shows that most spambots we detect register and edit using IPs or ranges blacklisted in one or more anti-spam sites such as StopForumSpam and analogous DNSBL sites. Filtering out traffic originating from those would also help address this.
  • Who would benefit: All users.
  • Proposed solution: I guess it depends on how Community Tech would like to address this issue. My informal proposal (which may not be the path that the developers have in mind) would be as follows: (a) short term: Deploy improved FancyCaptcha, (b) medium term: enable MediaWiki extension StopForumSpam (passive mode: do not send data about our users, just receive the data they have about toxic IPs/networks), and (c) long term: AICaptcha.
  • More comments: —
  • Phabricator tickets: See above, but T125132 contains an accurate summary of the issue. Of interest: T125132 (restricted), T212667, T230304 (restricted).
  • Proposer: —MarcoAurelio (talk) 19:05, 16 November 2020 (UTC)[reply]
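
The DNSBL technique mentioned in the problem statement works by reversing an IPv4 address's octets and resolving the result as a subdomain of the blocklist zone; an NXDOMAIN answer means "not listed". The sketch below is illustrative only (the zone name is a placeholder, and this is not how StopForumSpam's own HTTP API is queried):

```python
# Illustrative DNSBL lookup: a listed IP resolves as a subdomain of the
# blocklist zone; NXDOMAIN means the IP is not listed.
import socket

def dnsbl_query_name(ip, zone="dnsbl.example.org"):
    """Build the hostname to resolve for a DNSBL lookup of an IPv4 address."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("IPv4 address expected")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone="dnsbl.example.org"):
    """True if the DNSBL has a record for this IP (i.e. the name resolves)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN and similar failures
        return False

print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.dnsbl.example.org
```

A registration gate could consult such a lookup before showing the signup form, which is roughly what the StopForumSpam extension's passive mode would do server-side.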

Discussion

  • I believe the Captcha work is already underway at Phab:T250227. Samwalton9 (talk) 00:01, 17 November 2020 (UTC)[reply]
    hCaptcha might be another possibility, but I am not sure how many people would agree to use a third-party system. In any case, if it is decided that hCaptcha is the way to go, Community Tech could still get involved. —MarcoAurelio (talk) 18:49, 17 November 2020 (UTC)[reply]
  • I would probably not use Google's reCAPTCHA and instead use a simple in-house developed "type the letters you see" captcha. If the network the user is on blocks Google but not Wikipedia, they may not be able to edit. Félix An (talk) 02:29, 17 November 2020 (UTC)[reply]
    Indeed. I am not proposing to use reCAPTCHA or other third-party system. User privacy is important to me. —MarcoAurelio (talk) 18:49, 17 November 2020 (UTC)[reply]
  • Query: how will this be usable by those who do not have "latinized" keyboards? (Examples: users from many Asian and Middle Eastern countries) Not inherently opposed, just not sure how this will work when many of the projects that would benefit the most are languages for which "latin" letters are not standard. I can see how producing a captcha of some sort that uses the same alphabet as the project may reduce spam, which is often in a different language than the project. Risker (talk) 18:46, 20 November 2020 (UTC)[reply]
    I guess this needs to be analyzed so we can come to a solution that is as inclusive as possible regardless of cultural background. Instead of typing random words as happens currently, users could be offered a mosaic of pictures from Commons and asked to click on the ones that are cats/dogs/cars/rivers/etc., for example? Or maybe solve an easy math question (e.g. how much is 3 + 7?). I feel that, ideally, the solution would be the AICaptcha work started some time ago, where, without the need for captchas, the system is able to identify and exclude non-humans from registering. That, I guess, can take some time; but maybe we can take this opportunity to restart that work and end this situation of cross-wiki volunteers having to deal with hundreds of spambots every day. I think that doing nothing about this is no longer an option. —MarcoAurelio (talk) 19:00, 28 November 2020 (UTC)[reply]
  • Please, be very careful. For me as a user, captchas are increasingly annoying. As a normal user (which includes not using Windows), one should NEVER see a captcha. --Shoeper (talk) 15:02, 23 November 2020 (UTC)[reply]
    • I may be missing something here, Shoeper, but I don't see any way that you could never see a captcha unless you were being tracked across the internet. If you come to a new site which does not have any information about you, and they need to be sure you aren't a bot, a captcha is how they do it. Natural-language questions are also useful, but have to be changed regularly and are hard to make non-culturally-biased. I'd suggest that offering the user some simple editing tasks, suggested-edits style, might be an effective way to test. Pre-Google, reCAPTCHA asked users to digitize a couple of words from scanned public-domain books, using overlap between users to validate. I understand this is too easy for modern bots anyway, and Google now asks everyone to train their proprietary driverless-car algorithms (using the same consensus-of-humans method to verify). But we have no shortage of bot-undoable tasks on the wikis. HLHJ (talk) 01:22, 24 November 2020 (UTC)[reply]
      • (People could be used to improve Wikidata, but it would probably require the user to research something, leading to a bad user experience.) The increasing use of captchas on the internet is worrying. Non-technical users and women are underrepresented on Wikipedia. I fear adding a captcha is going to worsen that situation further. But tbh, although I'd like to improve Wikipedia and try it from time to time, it never was a real pleasure. Looks like it is getting even worse. --Shoeper (talk) 18:13, 28 November 2020 (UTC)[reply]
        • Wikimedia already uses Captchas, but they're broken so bots easily get in, and some people struggle with them. Real people being blocked by broken captchas is certainly a concern for me and I'd like to find a solution that is both effective and inclusive. I think AICaptcha is the solution to this as it'd use no captcha at all. In the meanwhile if you are having problems with the Wikimedia captchas, you can ask for a global captcha-exemption permission. See details at this page. —MarcoAurelio (talk) 19:00, 28 November 2020 (UTC)[reply]
        • Basically, the choice is between an unlimited/unrestricted flow of rubbish coming in (which drives away regulars due to the amount of work), restricting everything that looks like rubbish (which stops spambots but also keeps out new genuine editors), or a 'click here' OK-box, which is a nuisance (an extra click, though most people won't mind too much) but still admits an unlimited/unrestricted flow of rubbish. A captcha is a middle path: a good captcha is rather restrictive on spambots (except for the really intelligent ones, which cost the spammer money), and a nuisance for genuine editors (I myself do not care about the occasional captcha, though it should be reasonable; I agree that some (new) editors will be genuinely annoyed, but far fewer than if you fully restrict, or make them click away the OK-box every single time). --Dirk Beetstra T C (en: U, T) 10:47, 30 November 2020 (UTC)[reply]
  • It is important to note that this is, probably, not a matter of shifting a tradeoff between being better at keeping bots out and being better at allowing humans in - it is very likely that both can be improved at the same time. The core capability that we are missing here is some kind of analytics to evaluate captcha changes - there are easy options to tweak the captcha algorithm in a way that probably improves all the parameters, but we cannot actually measure those parameters currently, so we'd have to fly blind. That has kept those changes from being made for a long time. --Tgr (talk) 03:28, 13 December 2020 (UTC)[reply]
  • The only concern I have here is accessibility for blind users. As if Google's reCAPTCHA was blind-friendly in the first place anyway (their audio feature is broken)... But I wonder how the three suggested implementations would handle that. Pandakekok9 (talk) 03:04, 15 December 2020 (UTC)[reply]

Voting

Flagged Revisions randomly fails to load user permissions

  • Problem: User permissions and rights are not loaded properly from FlaggedRevs' configuration file in random cases. Some edits made by trusted users/admins etc. are marked as unreviewed or, the other way around (!), some of these editors get permission-denied messages on pages they should normally be able to access, along with other user-rights-related problems.
  • Who would benefit: reviewers, patrollers, admins
  • Proposed solution: fix the bug so that the configuration file is loaded properly
  • More comments: This is a critical bug which has remained unsolved for more than a year now.
  • Phabricator tickets: T233561, T234743
  • Proposer: Samat (talk) 14:01, 29 November 2020 (UTC)[reply]

Discussion

Generally speaking, the Flagged Revisions extension is an essential part of MediaWiki used on many wikis, and it has had no maintainer or developer, even for critical and security bugs, for years now (see T185664). This should change, in my opinion. In addition, failure to load configuration files, or misapplication of permissions on a wiki, is a critical bug which should be handled with priority and solved in days, not years. Samat (talk) 14:01, 29 November 2020 (UTC)[reply]

I think it is possible that this was fixed in September 2020, when Matmarex fixed phab:T237191 (exact change: [3]). It should still be confirmed, though, but I think the root cause of both bugs was the same. --Zache (talk) 21:25, 30 November 2020 (UTC)[reply]
And an answer to myself: it is likely not fixed yet, as the configuration was separated into configs that must be set before FlaggedRevsHooks::onExtensionFunctions and configs that must be set after it, and the latter, which contains the user permissions, is still the same as before and thus likely still broken. (Thanks to Tgr for noticing this.) --Zache (talk) 23:20, 30 November 2020 (UTC)[reply]

This is relatively easy to work around for Flagged Revisions specifically (just configure those permissions elsewhere). The fact that our configuration system, which determines who can access various functionality, including highly sensitive ones, is broken and we have no idea why, is deeply scary. --Tgr (talk) 03:19, 13 December 2020 (UTC)[reply]

Voting

(Un)delete associated talk page

  • Problem: Add a checkbox to the (un)delete forms to (un)delete the associated talk page simultaneously with the content page. Add a corresponding parameter to the (un)delete actions in the Action API.
  • Who would benefit: Administrators
  • Proposed solution: Self-explanatory.
  • More comments: There is a fairly recent WIP patch for some of this. However, the sooner this is delivered the better. This may sound trivial, but a lot of pages get deleted, so this will save quite a bit of time.
  • Phabricator tickets: T27471, T263211, T263209
  • Proposer: MER-C (talk) 17:20, 21 November 2020 (UTC)[reply]

Discussion

  • Thank you, MER-C. The only thing demotivating me from completing that patch is really the lack of commitment to review. In my understanding, PET is not interested in reviewing a patch for this feature, and there's really no one else to do it. If anyone, or any team, commits to reviewing, I can complete it insofar as I am available. And by the way, I just abandoned that patch because so many things have changed that it would be easier to rewrite it from scratch. – Ammarpad (talk) 09:05, 25 November 2020 (UTC)[reply]
  • @MER-C: This would certainly be useful, but don't forget about the API. (the proposal only mentions "checkbox") When tools and gadgets (like Restore-a-lot and DelReqHandler) can do this, it would save loads of work. And creating a language-independent workaround within the gadget is surprisingly difficult because the gadget doesn't "know" in what namespace a page is and even if it did, can't guess the name of the associated talk namespace, and even if it could, performance is already not great and would be roughly halved. — Alexis Jazz (talk or ping me) 11:42, 25 November 2020 (UTC)[reply]
    Good point, added. MER-C (talk) 14:04, 25 November 2020 (UTC)[reply]
    It only takes two simple API calls to get the name of a page's associated talk page: one to get the pageid (example) and one to get the title from the pageid (example). That shouldn't be hard for a tool or gadget to do. --Dipsacus fullonum (talk) 14:30, 25 November 2020 (UTC)[reply]
    @Dipsacus fullonum: Thanks! That's easier than what I was thinking of. (a lookup table or something) Performance is still an issue though, and implementing this in gadgets (and having to implement and maintain it in every gadget separately) is likely more error-prone. — Alexis Jazz (talk or ping me) 15:01, 25 November 2020 (UTC)[reply]
    Actually your trick doesn't work for deleted talk pages. So this won't work for undeletion. — Alexis Jazz (talk or ping me) 15:08, 25 November 2020 (UTC)[reply]
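
For reference, the namespace arithmetic underlying all of this is simple: in MediaWiki, subject namespaces have even IDs and the associated talk namespace is ID + 1. The hard part Alexis Jazz describes is knowing the localized namespace names, which is what the API calls (or siteinfo data) provide. In the sketch below, a tiny hand-written map stands in for real siteinfo, so it is a simplification, not gadget-ready code:

```python
# Simplified sketch: map a page title to its associated talk-page title.
# Subject namespaces have even IDs; the talk namespace is ID + 1.
NAMESPACES = {0: "", 1: "Talk", 2: "User", 3: "User talk"}  # id -> local name

def talk_title(title, ns_id, namespaces=NAMESPACES):
    """Return the title of the talk page associated with (title, ns_id)."""
    if ns_id % 2 == 1:
        return title  # already a talk page
    # Strip the subject-namespace prefix (mainspace titles have none).
    base = title.split(":", 1)[1] if ns_id != 0 else title
    prefix = namespaces[ns_id + 1]
    return f"{prefix}:{base}"

print(talk_title("Example", 0))        # Talk:Example
print(talk_title("User:Example", 2))   # User talk:Example
```

This also illustrates why a server-side checkbox is the better fix: the server already knows both the namespace ID and the localized names, with none of the per-gadget duplication or extra API round-trips discussed above.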

Voting

Implement deferred changes

  • Problem: Aside from edits blocked by the edit filter, vandalism and other damaging edits (particularly those on BLPs) can still be viewed on pages for a short amount of time before they are reverted. According to a 2012 study by the Signpost, around 10% of damaging edits are seen by more than 100 readers, affecting the credibility of Wikipedia. The persistence of vandalism and BLP violations on low traffic biographies of living people is a lingering problem. Despite anti-vandalism bots and semi-automated tools, a substantial proportion of those damaging edits is not identified and reverted in a timely manner. (w:Wikipedia:Deferred changes).
  • Who would benefit: Readers, as they are less likely to view vandalized pages. Vandal patrollers, who will have more time to revert edits. Wikipedia's mission and reputation, with less BLP vandalism.
  • Proposed solution: Implement w:Wikipedia:Deferred changes, which basically means that edits, rather than just pages, can be sent in for review. Classification of suspicious edits can be done with edit filters, m:ORES and by bots (eg ClueBot NG's classification system). Since bots revert at a relatively high threshold to avoid false positives, this will give them more slack (as they can use lower limits just to flag it as deferred) and hopefully catch more vandalism. Similarly, looser edit filters can be used to flag for defer. This would open quite a few new doors for anti-vandalism work.
  • More comments: This project has been previously developed mainly by w:User:Cenarium, and has gained near unanimous support in a 2016 RfC on enwiki. Cenarium is now inactive, but it seems like they finished the bulk of the code needed to make this happen even in the unfinished commits, so this shouldn't be too much effort to pick up where they left off and complete.

Commits at w:Wikipedia:Deferred_changes/Implementation (however, some of these commits are no longer necessary, since an equivalent has been merged in since 2020, e.g. by Ostrzyciel's work on reverted edits and our work on Echo notifications in gerrit:608884, so slightly less work is needed to make this happen).
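
The two-threshold idea at the heart of the proposal can be expressed in a few lines. The threshold values below are made up for illustration; the point is the middle band, where a classifier (an edit filter, ORES, or a bot like ClueBot NG) is not confident enough to revert but confident enough to defer the edit for human review instead of letting it pass unexamined.

```python
# Sketch of two-threshold triage for deferred changes.
# Threshold values are illustrative, not proposed defaults.
REVERT_THRESHOLD = 0.95  # bots only revert when very confident
DEFER_THRESHOLD = 0.70   # looser limit: flag for review, don't revert

def triage(damaging_score):
    """Map an ORES-style 'damaging' probability to an action."""
    if damaging_score >= REVERT_THRESHOLD:
        return "revert"
    if damaging_score >= DEFER_THRESHOLD:
        return "defer"
    return "accept"

print(triage(0.97))  # revert
print(triage(0.80))  # defer
print(triage(0.10))  # accept
```

This is why the proposal says deferral gives bots "more slack": the revert threshold can stay high to avoid false positives, while the new defer band catches vandalism that currently slips through.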

Discussion

Haven't had much time to edit Wikipedia recently. Thank you for reproposing. Darylgolden (talk) 08:23, 26 November 2020 (UTC)[reply]

  • @ProcrastinatingReader:, I can't recall if it was you I chatted about this with on en-wiki some time in 2020, but as then, I think this is unwise. Primarily because of the same reasons that pending changes has issues - the system doesn't handle stacked edits well at all. This could generate a major bulk-up in reviewable edits (even if far less than putting the pages on PC). A single deferred edit would stack up edits behind it, which is particularly problematic if some of them are good and others not. Reviewing these blocks is a massive pain, so they get avoided, so they get worse. I could only support this if there was a simultaneous fix to make handling of pending changes better and smoother (which was proposed 2 years ago, but didn't get the votes). Nosebagbear (talk) 15:52, 2 December 2020 (UTC)[reply]
    It could've been; I've kept an interest in this since around June. I can sympathise with your points. Improved edit-conflict handling, like 'branches' of edits and 'merging' them (in a git-like manner), would also be an improvement, but I feel the crux of what you describe is mostly an enwiki culture issue. Often we don't want to press accept or decline ourselves in edge cases, since someone gets on one's case if a mistake was made, but if a bot does it for "staleness" that's all okay. I think we can work around these issues in a policy/culture way; we don't really have a shortage of editors able to accept. Mainly, we're just shifting the order from (in the ideal case) a Huggle user reverting to a person approving the edit. If that fails, abuse filter editors can be 'conservative' about which filters are appropriate to send for review. ProcrastinatingReader (talk) 01:01, 11 December 2020 (UTC)[reply]
    @Nosebagbear: Also note this 'disadvantage' only applies for active deferring. Passive deferring will still allow the change to show up as normal, so people keep editing while it's added to the queue. If backlogs grow, we can adjust our config parameters or have a bot automatically remove 'stale' changes. ProcrastinatingReader (talk) 11:09, 11 December 2020 (UTC)[reply]
  • I can see benefits but there is a risk that anyone's queued edit could effectively lock the article until a suitably privileged reviewer deals with it. That problem might even be exploited as a denial of service vector. Certes (talk) 01:08, 11 December 2020 (UTC)[reply]
    "Passive deferring" is also possible with this change, rather than "active deferring", which simply queues the edit for review but still allows people to view the edit live in the meantime. And filter tweaks can be used to limit attack vectors. Also see Mz7's comment in the RfC close which addresses this concern. ProcrastinatingReader (talk) 01:16, 11 December 2020 (UTC)[reply]
  • As a few people have asked me this now, perhaps this information will be helpful: "Pending changes" allows admins to manually send in all edits to a particular page for review. This change ("Deferred changes") allows automated tools to "flag" an edit, and make it so that edit is put in for review (either before it shows, or let it show while still allowing it to be sent in for review). ProcrastinatingReader (talk) 11:06, 11 December 2020 (UTC)[reply]
  • Are you basically trying to reïnvent FlaggedRevision's stabilisation feature? --Base (talk) 18:51, 17 December 2020 (UTC)[reply]

Voting

Easier flagging

  • Problem: We need better tools that allow editors sifting through articles for mistakes to flag things for citation more quickly and easily.
  • Who would benefit: Mainly editors who dedicate specific editing time. Not everyone can dedicate time to fixing every mistake they encounter, and not everyone searches an article for mistakes every time they view it. This would help those people.
  • Proposed solution: Insert an option next to each section header in visual editor mode that reads "flag". For the plaintext editor, perhaps there could be a splitting of the templates into writing and correction templates? Anyway, a click on this would bring up a dropdown menu of the most common mistakes that need to be flagged:

  • Citation needed
  • Style correction
  • Written like an advertisement
  • etc.

This would allow for quicker identification and flagging of mistakes.
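
Under the hood, such a dropdown would amount to mapping each menu entry to the wikitext of an existing maintenance template. The sketch below is hypothetical; the template names are illustrative English Wikipedia-style names and vary by wiki.

```python
# Hypothetical sketch: dropdown menu entries mapped to the wikitext of
# maintenance templates (names illustrative; they differ between wikis).
FLAG_MENU = {
    "Citation needed": "{{Citation needed|date=December 2020}}",
    "Style correction": "{{Copy edit|date=December 2020}}",
    "Written like an advertisement": "{{Advert|date=December 2020}}",
}

def flag_sentence(sentence, issue):
    """Append the chosen maintenance tag to the flagged text."""
    return sentence + FLAG_MENU[issue]

print(flag_sentence("The product is the best ever made.", "Citation needed"))
```

The UI work (surfacing this in the visual editor next to each section header) is the real substance of the proposal; the mapping itself is trivial.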

Discussion

Tracked in Phabricator:
Task T209797
  • Regicide1, I'd suggest broadening the scope of the "problem" sentence to include all inline tags, like the rest of the proposal, and making it more succinct. I think you are massively understating your case on "who will benefit". There is evidence[4] that constructive criticism, such as inline tagging, causes new editors to learn, fix their own edits, and become long-term editors. Currently, reverting is much easier than tagging. Reverting, unsurprisingly, makes both vandals and potential good editors give up. Wikipedia has life-threatening problems with recruiting new editors; our editor numbers are slowly declining. If we don't fix this, Wikipedia will die. Messily, as paid propagandists and shills take over. For more details and citations of the research literature, see en:WP:Encourage the newcomers (an essay to which I have contributed substantially, so not an independent opinion). On the "proposed solution", it might be better to be fairly flexible about the details of the UI and just describe the functionality we want; working prototypes tend to cause major design changes. We won't really know what method would work best until it's been built. Does the Community Tech Team have any advice on what level of design detail would be optimal? HLHJ (talk) 19:44, 23 November 2020 (UTC)[reply]
  • I am moving this to the "Admins and patrollers" category since it seems to be more about patrolling. Hope this is okay, MusikAnimal (WMF) (talk) 19:35, 24 November 2020 (UTC)[reply]
  • We already have this. It's Twinkle. DGG (talk) 10:07, 29 November 2020 (UTC)[reply]
  • Users ought to be able to work out their own flexible ways of patrolling, e.g. look at a page on your watchlist every day or every week. Spinney Hill (talk) 13:46, 16 December 2020 (UTC)[reply]
  • "next to each section header in visual editor mode" Meaning you have to click the Edit button first and then you can flag things? Maybe just a flag on the main article view that pops up when you hover over it would be better. — Omegatron (talk) 15:03, 20 December 2020 (UTC)[reply]

Voting

Reverse partial blocks

  • Problem: Recently, on Meta, we have had a couple of blocked users who will be undergoing global ban discussions. This means they need to participate on the talk page of the global ban discussion, which then requires people to transclude their comments onto the main page. This is also true for other communities, where the discussion of a blocked user's unblock or unban takes place either on ArbCom pages or at the admin noticeboard. They will either be given a chance to create an account to be used exclusively on that board, or their comments will be copied from the talk page. This isn't ideal: discussion continuity is affected via the talk-page route, and with the new-account route there is no means of preventing the account from editing other pages.
  • Who would benefit: Admins
  • Proposed solution: Since we have the partial block tool, why not allow us admins to configure the settings to block all editing except XXX pages (like an inverse checkbox), rather than the current "only block XXX pages"? I.e. an inverse partial block, setting exemptions to a sitewide block rather than specifying the pages to block. This would be an added feature on top of the partial block tool.
  • More comments:
  • Phabricator tickets:
  • Proposer: Camouflaged Mirage (talk) 18:23, 21 November 2020 (UTC)[reply]

Discussion

Is this the item discussed at phab:T27400? Jo-Jo Eumerus (talk, contributions) 08:52, 23 November 2020 (UTC)[reply]
@Jo-Jo Eumerus Didn't know there is such a discussion, yes, I am thinking along those lines. Thanks for pointing out the phab ticket. Camouflaged Mirage (talk) 09:07, 23 November 2020 (UTC)[reply]

Voting

Abusive usernames in the block log

  • Problem: Usernames are always saved in the block log and other logs, but some usernames should not be visible there. Currently, the log entries must all be deleted individually.
  • Who would benefit: Sysops
  • Proposed solution: On the page Special:Block, add the possibility to hide (or, for oversighters, suppress) the username in all logs.
  • More comments:
  • Phabricator tickets:
  • Proposer: 𝐖𝐢𝐤𝐢𝐁𝐚𝐲𝐞𝐫 👤💬 19:45, 16 November 2020 (UTC)[reply]

Discussion

  • There are multiple variants that could be explored here, such as suppressing the username in rollbacks, log entries, etc. That being said, I hope this would continue to remain an oversight-only function, since hideuser is generally restricted to oversighters. --Rschen7754 01:18, 17 November 2020 (UTC)[reply]
By hide, I mean the same effect as log deletion. And suppression should of course continue to be carried out by oversighters. --𝐖𝐢𝐤𝐢𝐁𝐚𝐲𝐞𝐫 👤💬 08:21, 17 November 2020 (UTC)[reply]

Voting

Add new functions to the abuse filter

  • Problem: The abuse filter could benefit from more functions that work in conjunction with timestamp.
  • Who would benefit: People who want to use time-based rules to prevent vandalism.
  • Proposed solution: Add new functions such as last_edit, which would return the timestamp at which a given article was last edited (to be compared with the wiki's current timestamp), or timezone, which could detect where someone lives and apply a timestamp accordingly. Those are examples, of course.
  • More comments: Other function ideas, per the comment in the discussion: first_edit (or when_created), account_creation (for users), wiki_birthdate, and last_active (for users). I would be most interested in timestamp-related functions. Those are all the ideas that come to mind right now.
  • Phabricator tickets:
  • Proposer: MarioSuperstar77 (talk) 22:59, 24 November 2020 (UTC)[reply]
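
For illustration, a rule using the proposed variables might read as follows in AbuseFilter's rule syntax. Note that last_edit and account_creation are the proposal's hypothetical names and do not exist in AbuseFilter today; only timestamp does.

```
/* Hypothetical filter: catch edits by accounts less than a day old
   to articles that have not been edited for over a year.
   last_edit and account_creation are proposed, not existing, variables. */
timestamp - last_edit > 31536000 &
timestamp - account_creation < 86400
```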

Discussion

@MarioSuperstar77: For personal reference, can you give some other examples of functions that you might also suggest adding? Best, KevinL (aka L235 · t) 06:38, 1 December 2020 (UTC)[reply]

Voting