Research:Sockpuppet detection in Wikimedia projects
This page describes ongoing work to build models aimed at helping checkusers identify sockpuppet accounts. Sockpuppetry covers a very wide range of behaviors and is therefore far from straightforward to detect. There are various types of sockpuppet accounts (see en:Sockpuppetry for more background) and reasons why they are problematic, but broadly there are at least two ways in which tools that simplify detection of sockpuppet accounts are useful:
- Detecting long-term abusers who switch accounts or edit anonymously because, e.g., their other accounts have been blocked for damaging behavior. Being able to link accounts together more quickly helps checkusers understand the severity and extent of a block that might be needed.
- Identifying groups of accounts that are working together to introduce bias or misinformation, or to leave the impression of discussion or consensus where there is none. Often none of the edits made by these accounts warrants blocking on its own, so such rings of accounts can be very difficult to detect; helping checkusers gather evidence of this behavior could therefore be very valuable.
Background
There have been a number of past research projects focused on sockpuppet detection and adjacent modeling challenges. The direct precursor of this work developed supervised classification approaches to identifying whether a given user account is a sockpuppet or not. The tool discussed here instead looks for accounts that are similar to a given user based on language similarities in their edits. See the literature review there for additional past work, and this report on patrolling on Wikipedia for more general context.
This work differs from most past research projects in three important ways:
- Class imbalance: most research projects have used balanced samples -- i.e., building models to identify sockpuppets from a sample of Wikipedia users that is half sockpuppet accounts and half valid editor accounts (controls). The vast majority of user accounts, however, are not sockpuppets, and in reality a model will likely operate in a setting where most of the accounts it makes predictions about are not sockpuppets. Using balanced datasets can therefore lead to misleadingly good performance, because the same model will produce many more false positives than true positives in practice (see the worked example after this list).
- Machine-in-the-loop: most research projects have focused on building standalone systems where a machine learning model makes the judgment of whether an account is a sockpuppet or not. This approach tends to lead towards supervised classification, which can be evaluated in a straightforward manner. The models in the current iteration of this research, however, are being developed to support existing human processes. Specifically, they are being designed with checkusers in mind (example sockpuppet investigation), which means that we can, e.g., allow for lower precision in exchange for greater interpretability because we know that the model outputs will be used to support an existing process and will be evaluated by people with experience in identifying sockpuppet accounts. We can also move away from a simple supervised classification paradigm to one of candidate generation -- e.g., lists of similar accounts.
- Actionable: most models have focused on identifying sockpuppets in historical data, which has the benefit of being available in dumps or on the cluster and of being labeled as a result of past sockpuppet investigations. These models, however, will need to inform current investigations, which means they will need up-to-date information from edit streams or APIs. Computationally expensive features like words added or removed based on edit diffs are acceptable when training a model but might introduce unacceptable latency when making predictions (especially because sockpuppet accounts may generate high volumes of edits to achieve certain user levels -- e.g., this example on French Wikipedia).
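To make the class-imbalance point concrete, a small worked example follows. The true-positive rate, false-positive rate, and prevalence used here are made-up numbers chosen for illustration, not measurements of any actual model:

```python
# Hypothetical numbers for illustration only.
tpr = 0.99  # true-positive rate: fraction of sockpuppets the model flags
fpr = 0.01  # false-positive rate: fraction of ordinary accounts it flags

def precision(prevalence, tpr, fpr):
    """Fraction of flagged accounts that are actually sockpuppets."""
    true_positives = prevalence * tpr
    false_positives = (1 - prevalence) * fpr
    return true_positives / (true_positives + false_positives)

# Balanced evaluation set (half sockpuppets, half controls): looks excellent.
print(precision(0.5, tpr, fpr))    # ~0.99

# More realistic prevalence of, say, 1 sockpuppet per 1,000 accounts:
print(precision(0.001, tpr, fpr))  # ~0.09 -- roughly ten false positives per true positive
```

The same classifier that looks near-perfect on a balanced evaluation set flags roughly ten ordinary accounts for every sockpuppet once sockpuppets are rare, which is why evaluation at realistic prevalence matters.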
Existing Tools
There are currently tools that support sockpuppet investigations by gathering and visualizing relevant data such as:
- User compare reports
- User interaction timeline
- Editor interaction utility
- Socksfinder (closest to what is described here)
Core Principles
There are many choices to make when determining how to build sockpuppet detection models. Some of these choices are laid out below. To help in making these decisions, this work is guided by the following principles:
- Simplicity and interpretability: we aim for more straightforward, interpretable approaches to identifying potential sockpuppet accounts. This generally means that we still use machine learning models but structure our approach so that we can explain why the models believe two users are similar. We may sacrifice a small amount of performance, since we are less likely to capture some of the more complex reasons why two accounts might be the same, but currently we view this as an acceptable trade-off.
- Minimize language-specific features: ideally, if these models prove useful, they will be made available to support sockpuppet detection in language communities beyond English Wikipedia. To reduce maintenance and complexity, and to keep the models largely comparable across wikis, features that are language-specific (e.g., that depend on text or policies specific to a wiki) should be carefully justified.
- Machine-in-the-loop: these models will be developed to support existing human processes (as opposed to being used for purely automated decision-making).
- Balance risks: there are legitimate scenarios in which a single individual maintains multiple accounts and keeps this information broadly private -- generally because of privacy or harassment concerns. We do not want these tools to be used to uncover such accounts. The guidance given to these users is to avoid any perception of sockpuppetry by not editing the same articles or topic spaces. We will be purposeful in our design: the focus is on identifying sockpuppet accounts while minimizing the likelihood that these tools turn up legitimate alternative accounts.
Modeling Approaches
Model Architecture
There are at least two different types of models that one can imagine for sockpuppet detection:
- A model that, for a given user, estimates the probability that the user is a sockpuppet account. Many past approaches have taken this route. While there may be characteristic behavioral patterns that a model could learn about sockpuppet accounts, this is in general very difficult to model effectively, and the approach likely skews towards identifying the abusive accounts that also happen to be sockpuppets. Given the likelihood of false positives, this type of model might be used most effectively to generate a shortlist of likely sockpuppet candidates for checkusers to consider. We are currently leaning away from this modeling approach given these challenges.
- A model that, for a given user, provides a list of similar users. This approach is less direct -- i.e., it does not predict whether any given user is a sockpuppet -- but it shows promise and can be used directly within sockpuppet investigations to help a checkuser identify potential accounts to evaluate (see the sketch after this list). This type of model can also serve other purposes -- e.g., finding similar editors to join a given WikiProject or to provide input on a discussion. Our intent is to build this type of model, since it seems both more likely to be effective and more readily usable as a valuable input into existing processes.
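As a concrete illustration of the candidate-generation framing, the sketch below ranks accounts by the similarity of their page-edit histories and returns the most similar accounts for a given user. The account names, the toy data, and the choice of cosine similarity over page-edit counts are hypothetical simplifications, not the implementation behind any deployed tool:

```python
import math
from collections import Counter

# Hypothetical toy data: per-account counts of edits to each page title.
edit_counts = {
    "UserA": Counter({"Page1": 4, "Page2": 1}),
    "UserB": Counter({"Page1": 3, "Page3": 2}),
    "UserC": Counter({"Page4": 5}),
}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse page-edit count vectors."""
    dot = sum(a[page] * b[page] for page in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def similar_users(target, k=10):
    """Rank all other accounts by how similar their edit histories are to the target's."""
    scores = [
        (other, cosine_similarity(edit_counts[target], vector))
        for other, vector in edit_counts.items()
        if other != target
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]

print(similar_users("UserA"))  # UserB ranks first (shared edits to Page1); UserC scores 0.0
```

Because each score can be traced back to the specific pages two accounts have in common, this kind of ranking is straightforward to explain to a checkuser, in line with the interpretability principle above.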
Model Data
There are at least three different types of data that a model might use for making predictions:
- Edit text: which words an editor typically adds or removes from a page when they make edits. This approach can be computationally intensive because it requires computing diffs, but it captures some of the stylistic or topical similarities that might be characteristic of a group of sockpuppet accounts.
- User-page edit patterns: which articles a given account has edited. Similar editors will have edited the same or similar articles.
- Edit(or) metadata: for example, common time of day for editing, namespaces being edited, existence of a user page, average size of edit, etc. (a sketch combining the latter two data types follows this list).
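As a rough illustration of how the latter two data types could be assembled from public data, the sketch below pulls one account's recent contributions from the MediaWiki API's usercontribs list and turns them into simple features. The request parameters reflect the API as we understand it and should be checked against the API documentation; the username and the specific features (page counts, namespace counts, hour-of-day histogram, mean size change) are illustrative assumptions rather than the feature set of any production model:

```python
import requests
from collections import Counter

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_contribs(username, limit=500):
    """Fetch one account's recent public contributions (single request, no paging)."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "uclimit": limit,
        "ucprop": "title|timestamp|sizediff",
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    return response.json()["query"]["usercontribs"]

def user_features(contribs):
    """Turn a contribution list into page-edit patterns plus simple metadata features."""
    pages = Counter(c["title"] for c in contribs)                  # user-page edit patterns
    namespaces = Counter(c["ns"] for c in contribs)                # which namespaces are edited
    hours = Counter(int(c["timestamp"][11:13]) for c in contribs)  # time-of-day profile
    size_diffs = [c.get("sizediff", 0) for c in contribs]
    mean_size = sum(size_diffs) / len(size_diffs) if size_diffs else 0.0
    return {"pages": pages, "namespaces": namespaces, "hours": hours, "mean_sizediff": mean_size}

features = user_features(fetch_contribs("ExampleUser"))  # "ExampleUser" is a placeholder account
```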
Furthermore, for any of these categories, there are many other choices that must be made:
- Public vs. private: in its current form, the model only considers public data. Most saliently, private data such as IP addresses or user-agent information are features currently used by checkusers and could later be (carefully) incorporated into the models.
- Namespaces: is data gathered from all namespaces or just certain namespaces? For instance, some types of sockpuppets are used primarily in talk or project namespaces while others focus solely on content.
- How many edits are modeled for a given user: is the data from all edits included or, e.g., just the first 5 or last 5?
- Within- or cross-wiki: are accounts modeled in isolation within a single wiki or across all wikis? The former is simpler and far less computationally intensive, but the latter might catch users who are blocked on one wiki and move to another under a new account (a configuration sketch for these choices follows this list).
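The choices above are essentially configuration for the data-gathering stage. Below is a minimal sketch of how they might be captured in code; every field name and default here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SockDataConfig:
    public_data_only: bool = True    # exclude private signals such as IPs and user agents
    namespaces: tuple = (0, 1, 118)  # which namespaces to gather edits from
    max_edits_per_user: int = 500    # e.g., only the most recent N edits per account
    cross_wiki: bool = False         # model accounts within one wiki or across all wikis

# e.g., article namespace only, at most 5 edits per account
config = SockDataConfig(namespaces=(0,), max_edits_per_user=5)
```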
Descriptive Analyses
When making choices about what type of model to build, it helps to understand the distribution of types of sockpuppet accounts and how different data sources do or do not tie these accounts together. These analyses are largely based on documented sockpuppet accounts in the category Wikipedia sockpuppets on English Wikipedia, and specifically those sockpuppet accounts that have edited since 2020. The category was parsed in the following way:
Parse Wikipedia Sockpuppets Category on enwiki:
-- MariaDB query that produces a list of socks where e.g.
-- https://en.wikipedia.org/wiki/Category:Wikipedia_sockpuppets_of_John_Smooth has in it:
-- * User:Couple_Inches
-- * User:DontShoot123
-- * User:Iamthebestthereis
-- ...
-- which becomes:
-- sock | puppet
-- John_Smooth | Couple_Inches
-- John_Smooth | DontShoot123
-- John_Smooth | Iamthebestthereis
-- ...
SELECT
SUBSTRING(cl_to, 26) AS sock, -- remove the 'Wikipedia_sockpuppets_of_' text from the category title
page_title AS puppet
FROM categorylinks c
INNER JOIN page p
ON (c.cl_from = p.page_id)
WHERE
cl_to LIKE "Wikipedia_sockpuppets_of_%"
In September 2020, this query captured 18491 sockpuppet groups and 162423 unique accounts. Of the 996 sockpuppet groups (and 3766 unique users) that overlapped with edit data from 2020, the following can be said (based on data from namespaces 0, 1, and 118):
- 399 of the 996 sockpuppet groups (40%) had at least one account whose edits did not overlap with those of any other account in the group. So if Users A, B, C, and D were a sockpuppet group, perhaps A edited the same page as B and B edited the same page as C, but D never edited the same page as A, B, or C.
- 588 of the 996 sockpuppet groups (59%) were fully connected -- i.e., starting from any account in the group, you could reach the whole network just by following edit overlaps.
- 9 of the 996 (1%) contained two or more separate sub-networks -- e.g., User A and User B co-edited, and User C and User D co-edited, but neither A nor B co-edited with C or D.
This indicates that high-level information about which pages were edited by which users can, on its own, help identify about 60% of known sockpuppet groups on English Wikipedia. The other 40% of groups would require other data sources in addition to co-edit data to identify fully (though the co-edit data should still identify at least part of each network). A sketch of the underlying connectivity computation follows.
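The statistics above reduce to a connected-components computation over a co-edit graph: accounts in a group are nodes, and two accounts are linked if they edited at least one page in common. Below is a minimal sketch of that computation with toy data; it is not the exact code used to produce the numbers above:

```python
from collections import defaultdict

# Toy input: for one documented sockpuppet group, the set of pages each account edited.
group_edits = {
    "UserA": {"Page1", "Page2"},
    "UserB": {"Page2"},
    "UserC": {"Page2", "Page3"},
    "UserD": {"Page9"},  # no page in common with the rest of the group
}

# Build the co-edit graph: an edge between two accounts that share any edited page.
graph = defaultdict(set)
accounts = list(group_edits)
for i, u in enumerate(accounts):
    for v in accounts[i + 1:]:
        if group_edits[u] & group_edits[v]:
            graph[u].add(v)
            graph[v].add(u)

def components(nodes, graph):
    """Return the connected components via a simple depth-first traversal."""
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        comps.append(component)
    return comps

print(components(accounts, graph))
# [{'UserA', 'UserB', 'UserC'}, {'UserD'}] -- UserD would only surface via other signals
```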
Outcomes
This model has been prototyped as Extension:SimilarEditors, but deployment is paused as of July 2023. The prototype made use only of public user-page edit patterns and editor metadata.