Research:Spambot detection system to support stewards
Stewards are the group of users with the highest levels of rights and permissions across Wikimedia projects, including global blocks, global locks and global bans, which are indispensable for combatting cross-wiki abuse. Despite their critical role in governance and content moderation, stewards' workflows barely rely on advanced tools. To increase the efficiency of stewards in spambot detection, this research project will investigate the potential of computational approaches.
While much of the research on Wikipedia governance and content moderation has focused on local admins of specific Wikipedia projects, the workflows and challenges of stewards have remained understudied. Stewards are responsible for tasks as crucial as granting newly appointed functionaries their permissions, or applying global locks/bans (accounts) and global blocks (IPs), which are indispensable for combatting cross-wiki abuse. Although both global locks and bans prevent accounts from editing, there are relevant differences between these two processes, shown in the following table.
| | Global lock | Global ban |
| --- | --- | --- |
| Target | Account (though if an account is globally locked, other accounts by that editor are likely to also be locked) | Editor (who could manage one or several accounts) |
| Consequence | Inability to log in and invalidation of current login sessions. | Revocation of some or all privileges at all Wikimedia projects. This could focus on editing privileges, although bans are usually enforced with a lock. |
| Procedure | A formal report is submitted via the Steward Requests/Global noticeboard or the IRC channel. A steward then examines whether the behavior of the suspected account matches established criteria such as cross-wiki vandalism, spam, or proven long-term abuse. If the investigation shows evidence of such behaviors, they apply a global lock. | A Requests for Comment process for a global ban is initiated on Meta by a valid nominator. If all criteria for a global ban are met and consensus to ban is reached after discussion (including the reported editor), the RfC is closed and a steward applies the global ban. |
In the case of global locks, matching patterns are sometimes made explicit through AbuseFilter filters, as these speed up the investigation and represent an effective way to formalize the experience and judgment of stewards. Also, as spambots usually add links promoting a product or hosting malware, the SpamBlacklist extension can be used to prevent future edits containing unwanted keywords or URLs from specific domains.
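As an illustration, a keyword/URL filter of this kind can be sketched in a few lines of Python. The blacklist entries, URL regex, and `find_blacklisted_links` helper below are invented for this sketch and do not reflect the actual SpamBlacklist implementation:

```python
import re

# Hypothetical blacklist entries; the real SpamBlacklist extension matches
# regex fragments against URLs added in an edit (these domains are invented).
BLACKLIST = [
    r"cheap-pills\.example",
    r"malware-host\.example",
]
BLACKLIST_RE = re.compile("|".join(BLACKLIST), re.IGNORECASE)

def find_blacklisted_links(wikitext: str) -> list[str]:
    """Return external URLs in the edit text that match a blacklisted pattern."""
    urls = re.findall(r'https?://[^\s\]|}<>"]+', wikitext)
    return [u for u in urls if BLACKLIST_RE.search(u)]

edit = "Buy now at https://cheap-pills.example/offer and http://ok.example/page"
print(find_blacklisted_links(edit))  # → ['https://cheap-pills.example/offer']
```

A production filter would additionally normalize URLs and respect per-wiki whitelists, but the core mechanism is a regex match over newly added links.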
Another key characteristic of spambots is that they often create a large number of accounts. As a consequence, stewards need to examine spambot patterns in other accounts associated with the IPs of the suspected one. The comparison of the behavior of the reported account and the associated accounts is performed qualitatively using the CheckUser extension. This has been found to be the most complex and time-consuming task for stewards because:
- they need to temporarily grant themselves CheckUser permissions wiki by wiki, and
- there is no systematized tool for comparing spambot patterns.
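To illustrate what a systematized comparison could look like, the following sketch scores the overlap between accounts' behavioral features (for example, domains added in edits plus user-agent strings retrieved via CheckUser) with Jaccard similarity. All account names and feature sets are invented, and this is only one of many possible ways to compare patterns:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two feature sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented per-account feature sets: linked domains and user-agent traits.
accounts = {
    "ReportedAcct": {"cheap-pills.example", "UA:Chrome/118", "UA:Windows"},
    "SameIpAcct1":  {"cheap-pills.example", "UA:Chrome/118", "UA:Windows"},
    "SameIpAcct2":  {"photo-blog.example", "UA:Firefox/119", "UA:Linux"},
}

suspect = accounts["ReportedAcct"]
for name, feats in accounts.items():
    if name != "ReportedAcct":
        print(f"{name}: {jaccard(suspect, feats):.2f}")
# → SameIpAcct1: 1.00
# → SameIpAcct2: 0.00
```

A tool along these lines would not replace the qualitative judgment of stewards; it would only surface which associated accounts deserve a closer look first.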
To support stewards in spambot detection, this project will examine computational approaches to speed up this critical content moderation process.
In the research project on sockpuppet detection in Wikimedia projects, the following core principles were defined:
- Simplicity and interpretability
- Minimize language-specific features
- Machine-in-the-loop (support existing human processes)
- Balance risks (there are legitimate scenarios in which a single individual maintains multiple accounts and keeps this information broadly private -- generally around privacy or harassment concerns)
These principles will inspire the approach of this research project. As also proposed in that research project, the output of the system will not provide a probability prediction of malicious behaviour (sockpuppets there, spambots here), because false positives are harmful to community health.
The need to run CheckUser repeatedly on each local wiki to compare account behaviours could be seen as a motivation to implement a Global CheckUser (see open ticket T212779) that would allow stewards to retrieve IP/UA information of editors from multiple wikis in a central repository. However, the current procedure based on multiple local checks is the one defined in the CheckUser policy:
> If local CheckUsers exist in a project, checks should generally be handled by those. In emergencies, or for multi-project CheckUser checks (as in the case of cross-wiki vandalism), stewards may perform local checks. Stewards should remove their own CheckUser access on the projects upon completion of the checks and notify the local CheckUsers or the CheckUser e-mail list.
Therefore, this research project will assume a fictitious scenario in which Global CheckUser exists. This assumption will help assess the value of implementing such an extension, which is currently non-trivial from both policy and technical standpoints.
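Under this fictitious scenario, a Global CheckUser lookup might be sketched as a query over a central log of per-wiki CheckUser records. The schema, function name, and all records below are invented for illustration and do not correspond to any existing extension API:

```python
from collections import defaultdict

# Hypothetical central CheckUser log: (wiki, account, ip, user_agent) records
# aggregated across projects (all values are invented).
RECORDS = [
    ("enwiki", "SpamAcct1", "198.51.100.7", "Chrome/118"),
    ("frwiki", "SpamAcct2", "198.51.100.7", "Chrome/118"),
    ("dewiki", "GoodFaith", "203.0.113.9", "Firefox/119"),
]

def accounts_sharing_ip(ip: str) -> dict[str, list[str]]:
    """Group accounts seen on the given IP by wiki, as a Global CheckUser might."""
    by_wiki: dict[str, list[str]] = defaultdict(list)
    for wiki, account, rec_ip, _ua in RECORDS:
        if rec_ip == ip:
            by_wiki[wiki].append(account)
    return dict(by_wiki)

print(accounts_sharing_ip("198.51.100.7"))
# → {'enwiki': ['SpamAcct1'], 'frwiki': ['SpamAcct2']}
```

A single query of this kind would replace the current loop of self-granting CheckUser rights and running local checks wiki by wiki, which is precisely the cost this project aims to measure.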
The research gap that motivates this project stems from a previous investigation of the spambot block workflow by Claudia Lo.