Community health initiative/Measuring the effectiveness of blocks

Blocks are the technical method to prohibit editing on a wiki. The Wikimedia Foundation's Anti-Harassment Tools team is currently developing software to allow for Partial Blocks, or the ability for an administrator to prohibit a user account or IP address from editing certain pages, namespaces, or from uploading files.

This page documents a feature the Wikimedia Foundation's Anti-Harassment Tools team has committed to build.

🗣   We invite you to join the discussion!

As part of this work, we are looking into ways to measure the effectiveness of blocks. Please join us by reading the Proposed measurements below and participating in discussions on our talk page! 💫

Proposed measurements

Sitewide blocks’ effect on the affected users

Currently, little is known about how tools like sitewide blocks affect users. This makes a comparison with the effects of the partial block tool difficult. To provide all of us with some insight, the Anti-Harassment Tools team would like to examine historical data to establish a baseline.

As with the partial blocks discussed below, measuring a sitewide block’s effect on users is challenging because it is not always clear who the involved parties are. In the case of a sitewide block it might not be clear why a user was blocked, which can make it impossible to establish how that block affects other users. We therefore propose to establish a baseline for a temporary (non-indefinite) sitewide block’s effect on the blocked user, and leave measurements of other involved parties as future work.

1. Blocked user does not have their block expanded or reinstated.
  • Expanded by having its expiry date delayed, or reinstated within k days after expiry.
2. Blocked user returns and makes constructive edits.
  • User edits within some number of days after the expiry of the block.
  • The edits they make are not reverted, and/or are not predicted to be reverted by ORES.

When measuring events occurring after other events, such as the possibility of having a block reinstated, it is common to define a certain time period (e.g. “within k days” above) that this can occur within to make sure that all blocks are treated equally. Otherwise, blocks that occur early in a study period would have more time to potentially be reinstated compared to blocks occurring later. What the value of k should be is something we might infer from historical data on blocks, and it is also something we value feedback from the community on.
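The k-day window described above can be sketched as a small helper. This is a minimal illustration, not production code: the function name, the datetime inputs, and the default value of k are all assumptions for the example, since the actual value of k is still to be determined.

```python
from datetime import datetime, timedelta

def reinstated_within_k_days(block_expiry, later_block_starts, k=30):
    """Return True if any subsequent block on the same target starts
    within k days of the original block's expiry.

    block_expiry:       datetime when the original block expired
    later_block_starts: datetimes when subsequent blocks were set
    k:                  window length in days (a parameter to tune from data)
    """
    window_end = block_expiry + timedelta(days=k)
    return any(block_expiry <= start <= window_end
               for start in later_block_starts)

# A block expiring January 1 followed by a new block on January 20
# counts as reinstated within a k=30 window; one on March 1 does not.
expiry = datetime(2018, 1, 1)
print(reinstated_within_k_days(expiry, [datetime(2018, 1, 20)], k=30))  # True
print(reinstated_within_k_days(expiry, [datetime(2018, 3, 1)], k=30))   # False
```

Using a fixed window for every block keeps early and late blocks in a study period comparable, as described above.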

Understanding the blocked user’s editing behavior before and after the block can also be challenging. Prior to the block, they might have made a lot of edits that contributed to them being blocked. This makes it difficult to establish a baseline of prior behavior. Similarly, they might return only to post a series of inflammatory statements on other users’ talk pages, or vandalize articles. We would like to be able to determine the quality of their post-block contributions, and thus propose to measure both what proportion of their edits after the block get reverted, and also use ORES’ predictions to understand if they were likely to be reverted.
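The two proposed quality signals (proportion of post-block edits reverted, and proportion ORES predicts to be damaging) could be summarized as follows. This is a hedged sketch: the `reverted` and `damaging` fields are assumed inputs, with revert status derived from revision history and the `damaging` probability fetched separately from the ORES scoring service, and the 0.5 threshold is only a placeholder.

```python
def post_block_edit_quality(edits, damaging_threshold=0.5):
    """Summarize a blocked user's post-block contributions.

    edits: list of dicts, one per post-block edit, with keys
      'reverted' - bool, whether the edit was later reverted
      'damaging' - float, ORES 'damaging' model probability (fetched separately)
    Returns the share of edits reverted and the share predicted damaging.
    """
    if not edits:
        return {"reverted_share": 0.0, "predicted_damaging_share": 0.0}
    n = len(edits)
    reverted = sum(1 for e in edits if e["reverted"]) / n
    predicted = sum(1 for e in edits if e["damaging"] >= damaging_threshold) / n
    return {"reverted_share": reverted, "predicted_damaging_share": predicted}
```

Combining the observed (reverted) and predicted (ORES) signals helps distinguish a constructive return from one that merely avoided reverts.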

We will attempt to categorize sitewide blocks by ‘reason’ in order to perform separate analyses, as Open Proxy blocks are very common and not comparable to user conduct blocks.

Partial block’s effect on the affected users

We are particularly interested in how partial blocks affect the community, but are also concerned about the cost of measuring this with surveys. This leads us to seek targeted measurements that can tell us how partial blocks affect individual users, which in turn should indicate how they affect the community. In the short term this provides us with some answers, while we can also pursue other approaches (e.g. surveys) in the longer term.

Most of our measurements concern the blocked user. This is partly because it is (relatively) easy to identify said user, as they appear in the block log; it is more cumbersome to identify other involved parties at scale. We therefore propose to measure the effect of a partial block on the blocked user, leading to the following measurements:

1. Blocked user makes constructive edits elsewhere while being blocked.
  • This means they make edits, said edits are unlikely to be reverted, and ORES does not label them as damaging or bad faith.
2. Blocked user does not have their block expanded or reinstated.
  • Expanded by having its expiry date delayed, by having a sitewide block instated, or by having the block reinstated within k days after expiry.
  • Receiving a sitewide block within k days after the partial block expires also qualifies as having the block reinstated.

Partial block’s success as a tool

The Anti-Harassment Tools team is particularly concerned with whether partial blocks are successful as a tool. The feature is designed to give administrators more fine-grained ways to counter problematic situations. Because of this, we want to know if and how much it is used after release, paying particular attention to whether it augments or replaces existing blocking tools such as sitewide blocks and page protection. We propose the following measurements and hypotheses:

1. Partial blocks will lead to a reduction in usage of sitewide blocks.
a. If usage of sitewide blocks is slowly increasing over time, it will flatten after PBs are introduced.
b. If usage of sitewide blocks is constant over time, it will decrease over time after PBs are introduced.
2. Partial blocks will lead to a reduction in usage of short-term full page protections.
a. The proportion of protected pages under short-term full protection will be reduced after the introduction of PBs.
b. PBs will be used to allow non-offending editors to continue editing pages that would previously have been protected, while stopping offending editors.
3. Partial blocks will retain more constructive contributors than sitewide blocks.
a. The percentage of users who keep making edits that ORES does not label as damaging will be larger for users who receive a PB than for those who receive a sitewide block.
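Hypotheses 1a and 1b above amount to comparing the trend in sitewide block usage before and after partial blocks are introduced. A minimal way to do this is an ordinary least-squares slope on weekly block counts, as sketched below; the function names and the weekly aggregation are assumptions for illustration, and a real analysis would likely use a more robust method.

```python
def slope(ys):
    """Ordinary least-squares slope of evenly spaced counts (x = 0, 1, 2, ...)."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def trend_change(weekly_counts, introduction_week):
    """Compare block-usage trends before and after partial blocks launch.

    weekly_counts:     sitewide blocks set per week, oldest first
    introduction_week: index of the week partial blocks were introduced
    """
    before = weekly_counts[:introduction_week]
    after = weekly_counts[introduction_week:]
    return slope(before), slope(after)

# Rising usage that flattens after introduction (hypothesis 1a):
# the pre-introduction slope is positive, the post-introduction slope near zero.
print(trend_change([10, 12, 14, 16, 18, 18, 18, 18], introduction_week=5))
```

A flattening or decreasing post-introduction slope, relative to the pre-introduction slope, would be consistent with the hypotheses above.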

One additional instrument we might want to use is to survey the administrators when they are setting a partial block, asking them what they would do if it was not available (with options like use a sitewide block, protect the page, seek to add an edit filter (or ‘AbuseFilter’), seek to enforce a social block, and perhaps “other”). This might be a quick way to get specific feedback on whether PBs replace other tools, rather than infer it from data.

Short term gains

In our list of measurements below, we propose many longer-term ones, e.g. surveys. These are important and should be considered for later implementation, because they can provide insight that is otherwise hidden. Analysis of log data can tell us what is happening, but not why, which is where surveys and interviews are useful.

However, the first part of this research will mainly focus on short-term gains in understanding the utility of partial blocks, in order to learn whether the feature appears to be working and whether changes appear to be needed. The measurements proposed will provide us with insight quickly.

What do you think?

What do you think of our 11 proposed measurements? Join the talk page discussions!

All possible measurements

Below are two tables containing information about ideas for measuring the impact of the partial block tool. The content is split across two tables to make it somewhat easier to navigate as it would otherwise result in a very wide table (this was originally a single spreadsheet).

Block scenarios

Our first table describes block scenarios, mainly separating them into successful and unsuccessful ones. At the end are also some comparison scenarios, comparing partial blocks with other potential interventions (e.g. site-wide blocks or page protection). These are all found in the leftmost column in the table.

Then, in the second column from the left is the effectiveness of the block, which to some extent defines the scope of what we aim to measure (e.g. an individual user). The third column is the specific event that would occur, e.g. "the blocked user returns and makes constructive contributions". Lastly, the fourth column describes a potential way for us to measure that event. Because some events can be measured in multiple different ways, some events (in the third column) will be repeated, but with different approaches listed in the fourth column.

Some of the comparative scenarios do not have specific measurements proposed, but will instead have some comments added as footnotes.

Block scenario Effectiveness Event How do we measure that?
Successful on individual user(s) directly involved in an incident No recurring similar harm from the blocked user[note 1] Ask the other user(s)
No recurring similar harm from the blocked user Harm-detection software[note 2]
Other user(s) are retained users User retention after block start
Blocked user returns as a constructive user Blocked user edits again after block ends
Blocked user returns as a constructive user Post-block edits are not reverted
Blocked user returns as a constructive user Post-block edits are good faith/not damaging
Blocked user returns as a constructive user Post-block edits are not harmful[note 3]
Blocked user returns as a constructive user Other users judge their contributions to be constructive[note 4]
changing the culture to be more welcoming Uninvolved users feel that the incident was properly handled[note 5] Ask uninvolved users
Involved user(s) feel that the incident was properly handled[note 6] Ask the involved user(s)
Increase in new users to articles and article talk pages (or an equivalent increase in other namespaces) Identify new users contributing to articles/talk pages/relevant namespaces over a specific period of time, compare before/after block.
Return of previously uninvolved users to articles and article talk pages (or an equivalent return in other namespaces)[note 7] Identify previously uninvolved users, compare where they made their contributions prior to, and during/after the block.
Perception of how harassment is handled changes Ask community members (e.g. Community Insights)
as a set of tools for admins to use The tools are understandable to any administrator without reading documentation Observe users during use
The tools are understandable to any administrator without reading documentation They do not ask questions about how to do something
The tools are understandable to any administrator without reading documentation "Help" tool tips are not used often, and admins do not follow links to documentation.
Administrators do not require a large amount of time to set a block Observe users during use
Administrators do not require a large amount of time to set a block Log data from the interface
Administrators indicate that they are satisfied with the tool Ask the administrators
The tools have no frustrating defects Ask the administrators, or check bug reports on Phabricator
Unsuccessful Blocked user receives a sitewide block while partially blocked Block log data
Blocked user receives an expanded partial block while partially blocked Block log data
Blocked user receives a sitewide block after they have been unblocked[note 8] Block log data
Blocked user's partial block is reinstated Block log data
Blocked user evades a block. Block evasion is reported by another user
Blocked user evades a block. Identify one or more users as the same as the user who is blocked[note 9]
Blocked user is reported to wiki leadership (ANI or elsewhere) for any misconduct during a partial block User is reported to wiki leadership (ANI or elsewhere) for any misconduct during a partial block
Blocked user is reported to wiki leadership (ANI or elsewhere) after a sitewide block has expired User is reported to wiki leadership (ANI or elsewhere) after a sitewide block has expired[note 10]
Blocked user stops editing after receiving any form of block (except an indefinite sitewide block) User retention after block
Successful Blocked user continues to edit after receiving a partial block Contributions after block[note 11]
Blocked user's partial block is not expanded[note 12] Partial block is not altered during its tenure
Blocked user's partial block is reduced[note 13] Partial block is altered during its tenure, reducing its duration/pages/namespaces
Blocked user returns to edit after receiving a sitewide block, and their block is not reinstated Period of time after return during which the block is not reinstated
User who was harmed by a blocked user continues to edit User retention after block[note 14]
Comparisons Better or worse than a ban that is not enforced through technology?[note 15] Gather users' perceptions about these types of interventions.
Identify involved parties, define a measurement of success.
Better or worse than no block at all?
Better or worse than a sitewide block?
Do partial blocks replace site-wide blocks? Compare trends in block usage after the introduction of partial blocks[note 16]
Check if existing sitewide blocks are modified to become partial blocks
Check if policies are updated to recommend a different type of block
Do partial blocks replace page protection? Compare trends in usage of blocks and page protection after the introduction of partial blocks
Check for overlap between pages being unprotected and partial blocks affecting the same pages
How do different topic areas react to these types of blocks?[note 17]

Potential measurements

This next table goes into more detail about specific measurements for the scenarios described above. The leftmost column repeats the proposed measurement from the first table, but there are fewer rows because some repeat measurements have been collapsed into a single row. The second column from the left aims to capture a potential source for data for that measurement, or a type of approach we could use to make this measurement available. For example, "log data" refers to data that is logged by MediaWiki and is available in its database tables, while "machine learning" refers to a more complex process of training and applying a machine learning classifier.

The third column seeks to describe the potential cost, or effort, needed to use a specific measurement, using a simple Low/Medium/High schema. An example of "Low" cost would be measurements using log data where the data is readily available, whereas "High" cost means requiring substantial amounts of time and/or labour, such as running surveys in a large number of languages.

The fourth column describes benefits of a specific measure, and these would typically be benefits that come in addition to allowing us to understand the effects of partial blocks. Lastly, the fifth column describes who or what we are measuring. This column says something about the scope of the measure, for example whether we are measuring an individual user (which for some measurements will mean we average across many to understand overall effects), or are measuring a larger group of users.

How do we measure that? Type/Source Cost Benefit Who/what are we measuring?
Ask the other user(s) Survey Medium Can learn more about how behaviour is judged, not just a "yes/no" answer. Harmed user(s)
Harm-detection software Machine learning High Machine learning models for harm detection Blocked user
User retention after block start Survey[note 18] Medium Can learn more about what users think "retained" means. Harmed user(s)
User retention after block start Log data Low Harmed user(s)
Blocked user edits again after block ends Log data Low Blocked user(s)
Post-block edits are not reverted Log data Low Blocked user(s)
Post-block edits are good faith/not damaging ORES Low Blocked user(s)
Post-block edits are not harmful Machine learning High Machine learning models for harm detection Blocked user(s)
Other users judge their contributions to be constructive Survey High[note 19] Can learn more about what "constructive" means (i.e. it might not mean "not reverted"). Other users
Ask uninvolved users Survey Medium/High Enables us to understand more about how uninvolved users learn about these situations and how they are handled. Other users
Ask the involved user(s) Survey Medium Allows us to identify pain points in incident responses. Harmed user(s)
Identify new users contributing to articles/talk pages/relevant namespaces over a specific period of time, compare before/after block. User and revision log data Medium[note 20] Pages affected by a block
Identify previously uninvolved users, compare where they made their contributions prior to, and during/after the block. User and revision log data High Pages affected by a block
Ask community members (e.g. Community Insights) Surveys High[note 21] Key measurement of long-term project success? (for all meanings of "project") Community
Observe users during use Lab study High Deeper insight than relying on surveys or self-reports. Administrators
They do not ask questions about how to do things Survey of help desk and Phabricator? Medium? Administrators
"Help" tool tips are not used often, and admins do not follow links to documentation. EventBus/EventLogging Medium Administrators
Log data from the interface EventBus/EventLogging Medium Administrators
Ask the administrators Survey Medium? Administrators
Ask the administrators, or check bug reports on Phabricator Survey and/or Phabricator data Medium? Administrators
Block log data Log data Low Blocked user
Block evasion is reported by another user Medium[note 22] Blocked user
Identify one or more users as the same as the user who is blocked High Blocked user
User is reported to wiki leadership (ANI or elsewhere) for any misconduct during a partial block Medium Blocked user
User is reported to wiki leadership (ANI or elsewhere) after a site-wide block has expired Medium Blocked user
User retention after block Log data Low Blocked user
Contributions after block Log data Low Blocked user
Partial block is not altered during its tenure Log data Low Blocked user
Partial block is altered during its tenure, reducing its duration/pages/namespaces Log data Medium[note 23] Blocked user
Period of time after return during which the block is not reinstated[note 24] Log data Low Blocked user
User retention after block Log data Medium/High Harmed user(s)
Gather users' perceptions about these types of interventions. Surveys High Community
Identify involved parties, define a measurement of success. Measurement-dependent. High Harmed user(s)
Compare trends in block usage after the introduction of partial blocks Log data Medium/High[note 25] Log data for all blocks
Check if existing sitewide blocks are modified to become partial blocks Log data Medium/High[note 26] Log data for all blocks
Check if policies are updated to recommend a different type of block Manual check of policy pages High-ish[note 27] Community policies
Compare trends in usage of blocks and page protection after the introduction of partial blocks Log data Medium/High[note 28] Log data for all blocks
Check for overlap between pages being unprotected and partial blocks affecting the same pages Log data Low Log data for all blocks and page protections

Known data

Block data

In Phabricator task #T190328 our team generated some simple statistics on blocks. This was a single snapshot from early 2018. In Phabricator task #T206021 we want to implement automated weekly reports so we have a better understanding of trends and patterns in how often blocks are set.

On May 7, 2018, there were a total of 3,482,751 active blocks against Wikimedia usernames, IP addresses, and IP ranges across all wikis. From April 30 to May 7, 2018:

  • 62,467 blocks were set
    • 64.1% of all blocks set were on Russian Wikipedia, with 15.9% on English Wikipedia, 6.5% on Polish Wikipedia, 4.5% on Dutch Wikipedia, and 2.5% on French Wikipedia.
    • Distribution of block expirations:
      • 3.3% — shorter than 24 hours
      • 6.7% — between 24 hours and 7 days
      • 32.3% — between 8 days and 6 months
      • 41.5% — exactly 6 months (mostly from Russian Wikipedia)
      • 4.01% — longer than 6 months but not indefinite
      • 12.2% — no expiration date (aka indefinite).
  • 14,330 blocks were modified, 96.6% of which occurred on Russian Wikipedia
  • 3,225 blocks were manually lifted, 95.4% of which occurred on Dutch Wikipedia because open proxies[note 29] became closed.
  • On Russian Wikipedia, the most common identifiable block reasons were:
    • Vandalism 58.93%
    • Open Proxy violation 28.45%[note 30]
    • Username violation 4.47%
    • Block evasion 1.65%
  • On English Wikipedia, the most common identifiable block reasons were:
    • Open Proxy violation 65.14%
    • Vandalism 9.34%
    • Spam 7.02%
    • Sockpuppet 5.30%
    • School 4.69%[note 31]

Page protection data

In Phabricator task #T195202 our team generated some simple statistics on page protections on the five largest wikis. This was a single snapshot from the week of October 11 to 18, 2018 for edit protections (not including create protections) for the main namespace (NS:0) only. In the future we would like to run this for all types of protections for all wikis for all namespaces.

Page edit protections set during October 11 to October 18, 2018
Wiki Full protection

(only admins can edit)

Semi protection

(autoconfirmed can edit)

Extended protection

(enwiki only, 500+ edits can edit)

dewiki 12 101 0
enwiki 22 417 18
eswiki 1 70 0
frwiki 1 49 0
itwiki 1 31 0

We also calculated histograms for page protection expirations during this time frame. All five wikis displayed a similar pattern of expiration for page protections: most protections (69.6%) expired within one month, 11.6% expired between 1 and 5 months, and expirations were clustered at 6 months (4.5%) and 12 months (5.8%). Only 5.4% of protections set were of indefinite length. Below are the charts for English and German Wikipedias, chosen because they have the most data. Charts for Italian, French, and Spanish Wikipedias can be found here.

We also generated a snapshot of how many pages in the main namespace were edit protected as of October 18, 2018, regardless of when the protection was set. Only 0.25% of pages on English, 0.11% on German, 0.07% on Spanish, 0.03% on French, and 0.02% on Italian Wikipedias are edit protected at some level.

Total active page edit protections on October 18, 2018
Wiki Full Semi Extended
dewiki 129 2,257 0
enwiki 2,012 11,305 1,194
eswiki 133 910 0
frwiki 70 626 0
itwiki 27 233 0

Notes

  1. Is this "harm" in the sense of "harassing another user", or do we also consider unconstructive editing? This event and its sibling below, which uses machine learning, are both related to the question of the blocked user being reported to ANI (or similar venues) while blocked, as that would also signal "recurring similar harm".
  2. Isn't this what Jigsaw is studying?
  3. See above note about whether Jigsaw is looking into this.
  4. By "other contributors" we mean both those who were involved in the situation as well as uninvolved/unaware contributors.
  5. The potential audience is contributors who are somewhat competent with wiki policies and customs. "Properly handled" is a synonym for "fair" and covers the process, the people managing the process, and the outcome.
  6. Many of the same notes as for the uninvolved users apply here. We might also ask the blocked user if they found that the matter was properly handled.
  7. What does "previously uninvolved" mean? Should this be broader or narrower?
  8. Is there a difference between "unblocked" and "expired" when it comes to blocks?
  9. Research exists on identification of sockpuppet accounts as well as clustering similar accounts (e.g. identify a new account as a copy of an old, banned account). The cost of implementing this research is unknown.
  10. A variant of "user's partial block is reinstated", except this one is for a sitewide block, and the former is about the consequence, not the report.
  11. Not sure if "continues to edit" needs to also refer to "useful contributions", e.g. user is not making edits that get reverted.
  12. May or may not be connected to "user continues to edit after receiving a partial block."
  13. Can we identify whether the reduction happens because they made useful contributions, or whether it's due to arguing for a reduced sentence? Does the reason for the reduction matter?
  14. How do we identify the harmed user(s)?
  15. This ends up having a high cost in the second table because we first need to define "better" and "worse", or perhaps what "success" and "failure" mean. The same goes for the two similar comparisons below it.
  16. This can also be measured with snapshots of the state of blocks on a wiki, if we find that we are not concerned about trends.
  17. This question requires further refinement, particularly with regards to how we can define a topic area in a way that is language-agnostic. Secondly, what types of reactions would we be looking for?
  18. This measurement proposes surveys because retention through log data is usually edit-based. If a contributor is lurking, they are not defined as retained. A survey might allow us to learn that they're still around, or editing under a different account. At the same time, it's worth noting that the event asks for them to be retained as a contributor.
  19. Cost is "High" because there is no existing infrastructure to gather this kind of data, and because running multilingual surveys requires translations.
  20. This is easier than the one below because "new user" is fairly straightforward to define.
  21. One of the reasons for the "High" cost estimate is the time frame involved here, the unit is "years".
  22. Higher cost than log data as this data might not be machine readable (meaning cost might be "High"). It might be possible to discover usernames submitted to sock puppet reporting pages. If an account is identified to be a sock puppet and the wiki has templates for labelling those, we should be able to discover them.
  23. Cost defined as "Medium" because diffing to figure out what the change was is slightly more complicated than just grabbing and visualizing data.
  24. How long after their return do we measure? That goes for both this one and retention measurements, what is the window?
  25. This gets a cost of "Medium/High" because the trend analysis related to this can be complicated. While we potentially have years of log data for sitewide blocks, it is uncertain what the trends for partial blocks will be.
  26. See previous note about trend analysis.
  27. Hard to guess how much work this is, and it also depends on how many wikis we are studying.
  28. See previous note about trend analysis for site-wide blocks, the same issue might apply here.
  29. It is Wikimedia policy that users are not allowed to edit from open proxies as a means to limit content vandalism. Different wikis use different tactics to enforce this rule. Some wikis proactively seek and block open proxies, while some block the proxies after they have made edits.
  30. This percentage is likely higher; our data is truncated due to how ru.wp annotates their block log.
  31. On English Wikipedia, when an IP address or range is identified to be owned by a school or library, the School block warning is displayed to inform readers they can still create an account to participate.