Requests for comment/Global AbuseFilter

The following request for comments is closed. Global abuse filters are now deployed and apply to all small and medium-sized wikis by default, plus other wikis not in those sets per explicit request. Stewards are allowed to manage those filters.


AbuseFilter is a MediaWiki extension used to prevent vandalism and spam. Administrators and other privileged users create filters that analyze actions on the wiki (such as making an edit or creating an account) and detect specific likely abusive behavior patterns (for example, edits that introduce only repeated letters). AbuseFilter managers can mark individual filters as public or private. Public filters allow any user to view the filter's source code and the log of its related hits (times when the filter was activated). For private filters, both the source code and the hits are restricted to AbuseFilter managers.

AbuseFilter filters can match edits, page moves, file uploads, account creations, and more. AbuseFilter creators can choose among the following actions to be triggered when a matching edit is found: tag the edit, prevent the edit, warn the user with a custom interface message, or block the user. Whenever an edit trips the filter, it is logged. Full documentation for the AbuseFilter extension is available on mediawiki.org.
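For illustration, a filter's conditions are written in AbuseFilter's own rule language. The following sketch is a hypothetical rule (not one of the actual deployed global filters) showing how a filter might match the repeated-letter vandalism mentioned above:

```
/* Hypothetical example: match edits by very new accounts
   whose added text contains a run of ten or more copies
   of the same character, e.g. "aaaaaaaaaa" */
action == "edit" &
user_editcount < 10 &
added_lines rlike "(.)\1{9,}"
```

A filter like this would typically be set only to tag or warn at first, so that false positives can be reviewed in the abuse log before any preventive action is enabled.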

The technical work to create a global (across a set of Wikimedia wikis) AbuseFilter started in 2009. In 2012, a proposal to create a Global AbuseFilter was made on Meta-Wiki. A vote was held on the Meta-Wiki talk page, with a majority in support, but the discussion was not well advertised and had little participation. For almost two years, the proposal languished, but technical development slowly continued. In 2013, Global AbuseFilters were enabled on a limited set of wikis (metawiki, specieswiki, incubatorwiki, testwiki, test2wiki, mediawikiwiki, and, in 2014, "small wikis"). This was the beginning of the trial phase. Global filters were recently enabled on "medium-sized wikis" as well. Some rules about global filters were proposed but never completely accepted. There is an ongoing request for comment on policy governing the use of the global AbuseFilters. In the meantime, specific wikis can opt out of using the global AbuseFilter by simply adding a request to Global AbuseFilter/Opt-out wikis.

PiRSquared17, Glaisher, MZMcBride — 08 November 2014

Should cross-wiki filters exist at all?


We think so. It helps with dealing with cross-wiki abuse (spam in particular), and makes local admins' jobs easier.

Who should be allowed to create/edit/view global abuse filters and the abuse logs?


Currently stewards and abuse filter editors can create, edit, and view global filters, and Meta admins can also view global filters. It is also possible for stewards to globally enable filters created by local Meta admins, because Global AbuseFilter is controlled from Meta-Wiki. Should there be a process to request abuse filter editor rights? Should global sysops be granted this right by default? Should local admins be allowed to view filters and disable them locally on a case-by-case basis, or at least view the abuse log?

  • Stewards should be able to edit the global filters as they already do. Also I'd like others to be able to edit the global filters after going through an SRGP request (that is, for the abusefilter editor group). Global sysops shouldn't be granted the right by default as many wikis which have GAF enabled are not GS wikis. Local admins are already able to view the abuse log, afaik. They should also be able to disable the filters locally, but this is not yet technically possible bugzilla:43761. --Glaisher (talk) 05:35, 5 November 2014 (UTC)[reply]
  • I agree with Glaisher + I think that local admins should have the rights to view global filters. --Stryn (talk) 14:22, 9 November 2014 (UTC)[reply]
  • I am seeing these global filters in the local abuse log but, speaking as one such local admin,[1] functionality of the log appears to be limited. At least I am not aware of the "Filter ID" to use for non-local filters, which could be used to view a list showing how often and by whom a particular filter was tripped.

    This would be useful for deciding how to reconfigure local filters when overlaps and inconsistencies between filters arise (as exemplified in this instance today, where a single edit attempt triggered six different filters, only one of which was authored by a local admin). Being able to view neither the global filters' specifications nor their logs, I am somewhat at a loss to manage the local filters with which they coexist. ~ Ningauble (talk) 14:19, 10 November 2014 (UTC)[reply]

  • I would support formalizing the process for global abuse filter editor, including not less than a week of discussion before granting the right.--Jasper Deng (talk) 17:56, 14 November 2014 (UTC)[reply]
  • I am very concerned about the use of these filters for outright censorship, which I see as more of a threat than the abuse such filters are meant to stop (I will explain below). One of several safeguards I would call for is that any global filter that results in an action (such as preventing the edit, let alone blocking the user) be fully open to public examination. Because a global filter affects small wikis whose users don't have special permissions, it is not reasonable for them to be subject to such restrictions without their knowledge or participation. Wnt (talk) 13:49, 19 November 2014 (UTC)[reply]
  • Only stewards should be able to edit the global filters. --Steinsplitter (talk) 12:06, 22 November 2014 (UTC)[reply]
  • Stewards plus a new global abuse filter manager group requested via a 2-week approval process, which should have some prerequisites including holding admin rights on a local project (or abuse filter edit rights) so that the candidates' competence can be assessed. QuiteUnusual (talk) 10:27, 8 January 2015 (UTC)[reply]
  • In my opinion only stewards should be able to create, edit or modify global abuse filters. -- M\A 10:36, 8 January 2015 (UTC)[reply]
  • Only stewards if big wikis join in. Otherwise I'm fine with an ad-hoc usergroup. Elfix 17:55, 10 September 2015 (UTC)[reply]
  • Only stewards; indeed, the global AbuseFilter is too powerful a tool for other groups.--Gratus (talk) 17:49, 14 September 2015 (UTC)[reply]

What should they be used for?


Currently: Cross-wiki spam and vandalism. For abuse that only affects one or two wikis, local filters should be used. Global filters should be used for spammers or vandals that actually hit multiple wikis. Anything else?

  • I think it is very important for people to specifically exclude the censorship of information as a use. I was not pleased when the en.wikipedia "oversighters" started using a hidden filter ( [2], which I have not been able to access myself) to help them delete any mention of the name of David Cawthorne Haines from the encyclopedia when he was being held hostage by ISIS; they claimed there were safety reasons yet nearly all of the world media, except a few British outlets, were happily printing the name. Indeed, I am worried that so long as a "filter" mechanism exists, even if policy is against using it for censorship, those citing legal, moral, or public safety motivations will find it to be impermissible and even criminal not to use it to suppress anything from the AACS encryption key to Wikileaks to a site hosting ISIS propaganda videos. Unless Wikimedia can persuade people that it is capable of resisting the abuse of the abuse filter for purposes of censorship, it is better that the feature be entirely disabled and written out of the Wiki code than to risk it becoming mandatory to use it for such purposes. Wnt (talk) 13:58, 19 November 2014 (UTC)[reply]
  • Like with CheckUser, no censorship goes without saying. The filter must only be for vandalism and spam, particularly since that was the sole use SWMT identified for it.--Jasper Deng (talk) 11:08, 9 January 2015 (UTC)[reply]
  • Only to target specific crosswiki spams/vandals. Other more generic filters will inevitably lead to false positives, and to unfriendly error messages to the editors who happen to stumble upon them. Elfix 18:50, 10 September 2015 (UTC)[reply]

What actions should be allowed?


Previously proposed: Filters which do anything other than tagging should have the non-tagging actions removed after 1 week of no valid (non-false-positive) hits of the global filter.

The blocking function is available because Meta decided to use it locally and global filters depend on the AbuseFilter interface of Meta. It was previously proposed that global filters should not get the block function set until a real policy about using it has been created, but this proposed guideline has been mostly ignored, and there is currently a blocking global filter (although there don't seem to be any false positives yet). Some possibilities: no blocking, blocking but only after warning, allow blocking in specific cases, always allow blocking. Should local admins be able to control which actions of the global filter apply to their wiki?

  • I think we can live without local blocking of accounts/IPs on any global filter. --Glaisher (talk) 05:50, 5 November 2014 (UTC)[reply]
  • I prefer to retain the blocking function in extreme cases such as a cross-wiki LTA that is actively vandalizing at the time.--Jasper Deng (talk) 17:55, 14 November 2014 (UTC)[reply]
  • If used at all, the quality of the filters should be much better than it is now on some of the projects. The way the filters are configured on some projects blocks such things as adding references to articles. That means the abuse filter degrades contributions, quite the opposite of the intended behavior. I don't think this can be easily resolved; it must be possible to identify whether the linked resource is in fact a site with relevance. Otherwise a blocking action must be taken according to user history, or a captcha used, but the latter is not a viable long-term solution and only stops bots. — Jeblad 03:53, 16 November 2014 (UTC)[reply]
  • Blocking filters should be used in extremely obvious and rare cases only, without any language-specific rules. It's really easy to miscalculate some language or wiki specifics. E.g. six consecutive consonants may look very fishy in many languages, but in some languages there are such words. A website URL blacklisted on one wiki is almost a must on another.--Xelgen (talk) 06:23, 16 November 2014 (UTC)[reply]
  • No blocking should be allowed, enabling an additional blocking filter should be up to the local administrators. Vogone (talk) 18:37, 16 November 2014 (UTC)[reply]
  • No blocking should be allowed, because that is a decision to be left to local wiki administrators. To have a global filter block someone on any project for adding the same link implies a desire to globally block that contributor, but a global block should never be done by a machine. If someone is really being that much of an uberspammer they deserve actual human attention. Wnt (talk) 14:01, 19 November 2014 (UTC)[reply]
  • Blocking needs to be left to local admins. The only thing the filter can do is report to the equivalent of WP:AN that an account might need to be blocked. Not even Stewards can block anyone on a local wiki if there is a local admin available. Why would a filter have blocking capability? It would not make any sense at all. Even on a local wiki I would not want to see a filter ever being used to block accounts for any reason - you have to be able to look at the block log to see who blocked an account and why, and you would not have that option if it was just a filter that triggered the block. Too much chance of false positives. Apteva (talk) 20:18, 19 November 2014 (UTC)[reply]
  • No blocking should be allowed. --Steinsplitter (talk) 12:04, 22 November 2014 (UTC)[reply]
  • No blocking at all — Arkanosis 00:31, 12 September 2015 (UTC)[reply]
  • No blocking: global blocking already exists, though I guess it should not, as there is no way to check the reasons for and rightness of the block. Therefore I am strongly against global blocking by filters. --La femme de menage (talk) 08:54, 12 September 2015 (UTC)[reply]

Review of existing filters


Perhaps we should review the last 200 hits of each active global abuse filter (as of this RfC), and count the number of false positives.

In the future, maybe we should disable filters with no hits after a certain time, or disable non-tagging actions (see previous section).

Here is a quote from the current version of the proposed policy:

After no more than a week without any successful actions, filters should be disabled to prevent false positives. An exception to this policy are filters designed to just tag edits and not take further action. These filters can be left up indefinitely, since their purpose is to filter edits for further review and they do not take any automatic actions.
  • I oppose disabling filters with no hits after a certain time. Filters should be disabled only in the event of false positives or when they are so inefficient they exceed the condition limit too often. If the filters are doing their job, even with not many hits, we should keep them.--Jasper Deng (talk) 17:52, 14 November 2014 (UTC)[reply]
  • The disabling of filters with no uses is a weak but useful safeguard against their use for censorship; it implies that an abuse filter should target actual abuse rather than being deployed as an anti-personnel mine against people who might link to an unwanted fact at any time in the future. Wnt (talk) 14:04, 19 November 2014 (UTC)[reply]

What wikis should it apply to by default?


Proposed: Maybe small and medium-sized wikis by default, but allow opting out or in. Should the scope be more specific?

Other comments and questions


Just a side note: I have started a local discussion to opt out of the global abusefilter on kowiki (here). — revimsg 03:45, 10 November 2014 (UTC)[reply]

  • I would propose a bag limit for global filters, after which they should automatically self-disable pending further examination. By this, I mean that there are only so many editors in good standing that a filter should be allowed to block, only so many it should be able to block posts of, and indeed (to avoid spamming your notifications) even a limit on how many should have their edits logged before the filter shuts itself off. For blocking, that should be a very low number. The rationale for this is that there can be a very fine and subjective distinction between "spamming" and reporting of facts. People have put sites like Encyclopedia Dramatica into the global blacklist (and I would not be surprised if also the abuse filter, though I'm not allowed to really know) because they host private/personal information/doxxing, yet other times (as with the owner of some sports team in Los Angeles who was recorded saying racist things to his young girlfriend) such leaked recordings become international news. Who decides which is which? I would say, we do not leave this to the personal discretion of those behind the secret layer of filters; we agree that once several longstanding editors independently decide to add the data, that is prima facie evidence that the data is not spam, no matter how sordid it may be or what the motivations of those adding it are. The filters are all about doing administration automatically - so make them audit and stop themselves automatically! Wnt (talk) 14:32, 19 November 2014 (UTC)[reply]
  • Imho it should be possible to disable global filters locally. --Steinsplitter (talk) 11:59, 22 November 2014 (UTC)[reply]
    I agree with this. Elfix 16:26, 12 September 2015 (UTC)[reply]
  • One cannot make abuse filters valid for the entire planet. All countries, and even states, provinces and municipalities, have different rules about what counts as allowed, abuse, or excessive violence, and about attitudes towards women (how they're dressed [or not at all]; e.g. wearing a burqa is not allowed in many civilized countries and considered a huge lack of freedom, while in others it's an obligation). Making advertisements for cigarettes (even smoking in a TV film) is not allowed or strongly discouraged. Filters should only work on a certain wiki, and even then we're crossing at least municipalities, provinces etc. Strong oppose because too bureaucratic. 'nuff said?  Klaas `Z4␟` V 08:59, 27 November 2014 (UTC)[reply]
    The definition for bureaucratic is "overly concerned with procedure at the expense of efficiency or common sense". You are telling me that instead of being able to create one filter and prevent a type of spam across all of Wikimedia, we should need to individually create filters on 700 wikis to accomplish the same goal? Except we (people fighting global spam) can't, since we would need to go through individual wiki processes for every project with local admins. You have it backwards. Ajraddatz (talk) 16:31, 27 November 2014 (UTC)[reply]
    @Patio: In the past 24 hours alone, and counting only one filter (#104, which is publicly viewable) has prevented over 500 automated spam attempts. That's not to say the filter is perfect, but it is only set to "warn", not "disallow", so anyone with legitimate edits can re-submit and ignore the warning. In fact, since I started writing this comment, there have already been two more spam hits on this one filter. If you think it's better to deal with this all manually or on a wiki-by-wiki basis (especially when the wikis are tiny), then I must disagree. PiRSquared17 (talk) 06:13, 30 November 2014 (UTC)[reply]
  • I wonder if the whole idea of global rules for the AbuseFilter is broken by design. There are some core concepts that we know hold for abusive behavior, but we don't know how dependable they are. For example, we know that IP addresses outside some country are indicative of abusive behavior, but we don't know the address range. We could fix that by learning the range, but AbuseFilter can't learn as it is now. We also know that for some languages certain characters should not be strung together, but that too differs from language to language. Again, we can learn such behavior. Perhaps we could create rules that don't have blocking behavior, but are fed into some sort of machine learning system. Then we could have a lot of such feature extractors and let the users on the site supply the goals; that is, we use continuous supervised learning. When it is likely that the edit is unwanted, a question is posted to the user asking if they really want to make the edit, and if so, let them make the edit but tag it accordingly. — Jeblad 20:43, 4 January 2015 (UTC)[reply]
    However, I don't think any of these "patterns" you mentioned are what a global abuse filter would be used for really. There are specific patterns belonging to cross-wiki long-term abusers as well as patterns indicative of spam. In any case, false positives in a filter set to tag would in no way undermine its value as a detection tool. Snowolf How can I help? 15:55, 13 September 2015 (UTC)[reply]
  • If it is decided that wikis can opt in or out of the GAF, then please create an ad-hoc WikiSet so this can be properly followed by Stewards. Elfix 18:32, 11 September 2015 (UTC)[reply]