Research:Newcomer desirability modeling
In order to better support Snuggle users, I'd like to let them view first the newcomers who are most likely to be editing in good faith, to save them time. To do this, I'll need to build a model of newcomer activity that assigns a likelihood of good faith. This model will have to make useful judgments about newcomers who have made few edits and, preferably, refine those judgments based on new information. In other words, I need to be able to produce a useful rating for users that I know little about and refine that rating as more information arrives.
One simple and naive approach would sort editors by the proportion of their edits that have been reverted. One problem with this approach is that it, in some ways, defeats the intended use of Snuggle. If only the least-reverted newcomers are determined to be working in good faith, Snuggle won't be a very useful tool for newcomers who run into the dark side of Wikipedia's quality control system. Certainly, there's potential for a massive difference in quality between newcomers who submit racial slurs and those who simply ran afoul of some of Wikipedia's more complicated policies and guidelines.
To differentiate between these two types of reverts, I'll take advantage of models already used to assess newcomer behavior in Wikipedia: anti-vandal bots. Many of these bots publish an API that allows an external service (like Snuggle) to request scores and other metadata that the bots' machine classifiers use to determine when an edit is vandalism. See [STiki's API readme] for an example.
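To sketch how a service like Snuggle might consume such an API, here is a minimal client in Python. The endpoint URL and the response shape are hypothetical placeholders, not STiki's actual interface; the real parameters are documented in the readme linked above.

```python
import json
import urllib.request


def fetch_edit_score(rev_id, api_url="https://example.org/antivandal/api"):
    """Request the vandalism score for a single revision from a
    score-publishing service.

    Both `api_url` and the JSON response shape ({"score": <float>}) are
    assumptions standing in for a real bot's API.
    """
    with urllib.request.urlopen("{0}?rev_id={1}".format(api_url, rev_id)) as response:
        payload = json.load(response)
    # Assumed: the score is a vandalism probability in [0, 1].
    return payload["score"]
```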
To train a model of desirable newcomers, I'll need a training set containing pre-labeled newcomers. Luckily, I built such a dataset in my previous work with the Wikimedia Foundation.
As a signal, I'll be using scores assigned to individual edits by STiki, a statistical vandalism detection system running on the English Wikipedia. Sadly, the hand-coded dataset only overlaps with the vandalism scores for the last year or so. This means that I only have 124 observations to work with. This may be close to enough for me to move forward.
Although there is a substantial amount of overlap between the two distributions, the center of density for desirable newcomers is much lower than for undesirables. So, if these values are randomly distributed but bounded by 0 and 1 (which seems evident), we should be able to fit a pair of en:beta distributions.
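Fitting the two distributions is straightforward with SciPy. The score samples below are synthetic placeholders (the real observations come from joining the hand-coded dataset against STiki's per-edit scores); the fitting step itself is what's being illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder score samples standing in for the real observations.
# Like STiki's scores, they are bounded by 0 and 1; desirable newcomers'
# scores concentrate low, undesirables' scores concentrate higher.
desirable_scores = rng.beta(2, 8, size=80)
undesirable_scores = rng.beta(6, 3, size=44)

# Fit a beta distribution to each class, holding the support fixed
# at [0, 1] so only the two shape parameters are estimated.
d_a, d_b, _, _ = stats.beta.fit(desirable_scores, floc=0, fscale=1)
u_a, u_b, _, _ = stats.beta.fit(undesirable_scores, floc=0, fscale=1)
```

The fitted shape parameters `(d_a, d_b)` and `(u_a, u_b)` define the two theoretical distributions used as priors in the next step.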
Using these theoretical distributions as a prior for all new editors in Wikipedia, I can use the STiki scores of their new edits to assign a confidence for which model best fits the newcomer's scores. Since the two potential models represent a dichotomy (desirable/undesirable), I can represent the probability that a newcomer's edits fit into the desirable distribution as a desirability ratio.
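One natural formulation of such a ratio, which I'll assume here, is the likelihood of a newcomer's scores under the desirable beta model divided by their likelihood under the undesirable model. The shape parameters in the example call are hypothetical.

```python
import numpy as np
from scipy import stats


def desirability_ratio(scores, desirable_params, undesirable_params):
    """Ratio of the likelihood of a newcomer's edit scores under the
    'desirable' beta model to their likelihood under the 'undesirable'
    model. Working in log space keeps the product of densities
    numerically stable as the number of edits grows.
    """
    d_a, d_b = desirable_params
    u_a, u_b = undesirable_params
    log_desirable = stats.beta.logpdf(scores, d_a, d_b).sum()
    log_undesirable = stats.beta.logpdf(scores, u_a, u_b).sum()
    return np.exp(log_desirable - log_undesirable)


# A newcomer with consistently low vandalism scores fits the desirable
# model far better, so the ratio comes out well above 1.
ratio = desirability_ratio([0.05, 0.1, 0.2], (2, 8), (6, 3))
```

A ratio above 1 favors the desirable model; each additional scored edit multiplies in more evidence, which is how the judgment refines with new information.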
|This is not a proper model test. I did not withhold any of my training data for testing. This is generally considered to be bad practice. However, due to the highly supervised nature of this model and my desire to build something that simply works better than nothing at all, I'll save better model checks for when I have more hand-coded newcomers to assess.|
To ensure that this model was doing its job, I re-applied it to the original training set to see how well it could re-classify my labeled newcomers. The figure below captures the desirability ratio scores (converted back to a probability) for each quality class.
The figure above suggests that this model will be effective at assigning large desirability ratios to desirable newcomers.
This document describes an approach for predicting the desirability of new editors that has the following characteristics:
- Useful predictions can be made with very little data
- Predictions will gain accuracy with more data
- Prediction output value is trivially sortable (for presentation in en:WP:Snuggle)
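To illustrate the last point: the ratio converts back to a probability via p = r / (1 + r), which sorts directly. The newcomer records below are placeholders.

```python
def ratio_to_probability(ratio):
    """Convert a desirability ratio r into the probability r / (1 + r)."""
    return ratio / (1.0 + ratio)


# Hypothetical (user, desirability ratio) pairs standing in for real output.
newcomers = [("UserA", 0.2), ("UserB", 9.0), ("UserC", 1.0)]

# Sort newcomers so the most likely desirable ones are presented first.
ranked = sorted(
    ((user, ratio_to_probability(r)) for user, r in newcomers),
    key=lambda pair: pair[1],
    reverse=True,
)
```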