Peer review in progress now through July 17
The Learning & Evaluation team from the WMF will be actively reading and responding to comments and questions until July 17. Please submit comments, questions, or suggestions on the talk page!
This report presents data on a year of edit-a-thons in order to increase community knowledge about edit-a-thons and their goals, inputs, outputs, and outcomes. The report can also be used for designing and planning future edit-a-thons, exploring edit-a-thon effectiveness, and celebrating edit-a-thon successes. It presents reported and mined data, and the authors invite further exploration and analysis using these data. The report uses metrics shared across a broad spectrum of programs so that goals and outcomes can be discussed across different types of programs. The authors recommend caution in drawing conclusions about an individual program's success or failure based solely on the data presented here, since there is insufficient information about each edit-a-thon's unique context and goals.
The report includes data from 121 edit-a-thons held in 19 countries between September 2013 and December 2014. The events involved a total of 2,328 participants, who added over 5 million characters to over 5,000 articles.
Edit-a-thons activate editing communities and can create significant content around a specific topic. The average edit-a-thon had 14 participants, 20,812 characters added or removed, and 7 articles created or improved.
Retention rates appear to be higher for edit-a-thons than average retention rates for new and existing editors across all projects. About 32% of new users made at least one edit one month after their event, but the percentage still editing dropped to 15% by the sixth month after their event. Retention rates for existing editors held steady at around 70% over the analysis period.
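For readers who want to reproduce or extend this analysis with the mined data, the sketch below shows one way a retention figure like this could be computed. It is a minimal illustration only: it assumes "retained at month N" means a participant made at least one edit during the Nth 30-day window after their event, and the participant data structure is hypothetical; the exact definition used by the Learning & Evaluation team may differ.

```python
from datetime import date, timedelta

def retention_rate(participants, month):
    """Share of participants with at least one edit during the Nth
    30-day window after their event (hypothetical definition)."""
    retained = 0
    for event_date, edit_dates in participants:
        window_start = event_date + timedelta(days=30 * (month - 1))
        window_end = event_date + timedelta(days=30 * month)
        if any(window_start < d <= window_end for d in edit_dates):
            retained += 1
    return retained / len(participants) if participants else 0.0

# Toy example: two participants, one of whom edits again a few weeks later.
participants = [
    (date(2014, 3, 1), [date(2014, 3, 1), date(2014, 3, 20)]),
    (date(2014, 3, 1), [date(2014, 3, 1)]),
]
print(retention_rate(participants, month=1))  # 0.5
```

Using a strict inequality at the window start means edits made on the day of the event itself do not count toward one-month retention.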
Program leaders have generated a wealth of resources on how to run an edit-a-thon and on different event styles. Every one of the 38 events reporting this metric was run by an experienced program leader who can help others run a similar edit-a-thon. (See the data tables to find which program leaders ran which events.)
We still need more data in order to draw stronger conclusions about inputs (e.g. dollars or volunteer hours), outputs (e.g. bytes added), or outcomes (e.g. retention). This also means that we are limited in determining the following:
Actual costs of edit-a-thons. Key monetary costs for edit-a-thons may include renting space, purchasing food, or renting equipment, but other costs may exist depending on local context, and we were not able to obtain many reports of these costs. In addition to dollar costs, we need more data on other resources, such as volunteer and staff time.
Associating costs with outcomes. Without more data, the report cannot draw clear conclusions about how inputs influence outputs and outcomes.
Other measures and outcomes. This report is limited to certain measures and outcomes of edit-a-thon achievements. Growth in volunteers' ability to run programs and increases in collaborative editing are likely outcomes that are not captured here, among others.
Use the report data for planning future edit-a-thons. The report summarizes each input, output, and outcome metric. For planning, program leaders and funders can use the range of data to see what is generally a high or low value for each metric. Tables with data from each edit-a-thon are provided to give readers access to local context information and contacts.
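As an illustration only, the following sketch shows how a program leader might summarize that range from the per-event data tables. It assumes the tables have been exported to a hypothetical CSV file named editathon_data.csv with columns such as participants, characters_added, and articles_improved; these names are chosen for illustration and are not taken from the report.

```python
import pandas as pd

# Hypothetical export of the report's per-event data tables.
events = pd.read_csv("editathon_data.csv")

# What counts as a low, typical, or high value for each metric?
metrics = ["participants", "characters_added", "articles_improved"]
summary = events[metrics].describe(percentiles=[0.25, 0.5, 0.75])
print(summary.loc[["min", "25%", "50%", "75%", "max"]])
```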
Increase shared learning about edit-a-thons through data. Having more data and more measures is key to a deeper understanding of programs and their outcomes. This includes online data (e.g. articles created) as well as offline data (e.g. budget, volunteer hours, motivation). To successfully translate data into shared learning, we need both more data and more ways to share and interpret it. Two ways to increase data are to build capacity around data collection and to improve data collection tools. Two ways to increase the sharing and interpretation of data are to connect program leaders and to make existing resources more visible.