Peer Review in Process now through July 3
The Learning & Evaluation team at the WMF will be actively reading and responding to comments and questions until July 3. Please submit comments, questions, or suggestions on the talk page!
In terms of participation and content production alone, writing contests tend to engage existing users, with some indication that they also engage new users. Further measures would be needed to understand in what other ways, beyond working together to produce content, writing contests build and engage communities.
The total number of participants engaged in the 39 contests included in this report was 1,027; 909 (89%) were existing users. The proportion of existing active editors to total existing editors increases from 66% to 89% between 30 days before the contest and 30 days after the start date. Much of this increase (about 60%) can be attributed to WikiCup participants.
Last year's report about on-wiki writing contests suggested that writing contests "aim at engaging existing editors". This report suggests otherwise. Thirteen of 39 programs involved new users, ranging from 2.8% to 64.5% new. Across these contests, 57 (50.0%) of the 114 new users made at least five edits in the first month following the start of the event, and 20 (17.5%) made at least one edit between the second and third month following the event. While these data seem promising, more data are needed to make stronger statements and to learn how effective writing contests are at engaging new users.
"In one competition, university departments donated prize money and space for an awards ceremony, and judges were able to volunteer as little as 30 minutes of their time rating articles in their area of expertise and providing comments on talk pages so that articles can be improved in the future, and students were able to compete for a sizeable prize, but also a participation certificate that they could reference in their CVs."
Kacie Harold, WMF, summarizing the Physiwiki Contest on Hebrew Wikipedia
Writing contests might increase awareness in a few ways; however, the metrics in this report do little to measure how much writing contests increase awareness of Wikimedia projects.
None of the metrics in this report specifically measures awareness of Wikimedia projects. It may be useful to explore with program leaders how they think they are increasing awareness and to include these measures in the data collection phase of the report.
Some metrics may measure awareness indirectly. For example, the metric for number of new editors could say something about how new users are engaging with the projects. We found several contests that recruited new editors. In the 30 contests with data on numbers of new users, 114 (18%) of 627 participants were newly registered users. In the future, it might prove useful to explore what strategies program leaders are using to engage new users in writing contests.
Program leaders are producing online and printed materials, which may lead to increased awareness of writing contests. From the 39 contests, seven reported producing shared learning resources. All seven reported writing blogs or other online content and three reported producing brochures or printed materials.
Contests are implemented in many Wikipedia language communities, which suggests they engage many different language speakers. More data and measures would be needed to better understand diversity of information coverage.
From the 39 programs reporting, at least eleven Wikipedia language projects were represented. This figure does not include the 4 contests that were "interwiki" contests, or contests that spanned more than one language within a Wikimedia project. While contests are conducted in multiple languages, more measures could be used to examine other aspects of diversity, such as diversity of content or diversity of contributors.
While some contests in this report engage new editors for a period of time, more data from contests that engage new editors is needed to understand to what extent new editors are retained.
The 39 contests included in the report had a total of 1,027 participants; 114 (11%) were known to be newly registered users two weeks before the events. Four contests had more than 40% new users. In examining retention over time of all new users, we found that while new users participated in the contests, they may not continue editing for much longer. For example, of the programs that were 3 months long or less, only 3 (3%) of new users remained active 6 months after the start date of the event.
Previously, writing contests were believed to only engage existing users. But in this round of data collection, two programs emerged that included over 50% new users, and four with over 40%. While it is great to see contests being designed to engage new users, more program implementation data would be needed to understand how and to what extent contests are able to recruit and retain new users.
"We try to make sure that judges in the first stage have just one page to review, because that limits the amount of time that they must commit to working. By the way, this judging system is another thing that gets a lot of people involved in WP. We have 40 scientists throughout Israel that participate by reading and commenting on just one article. It is not a lot of work because they are reading something from their field that may only take 30 minutes of time. Usually these judges haven't contributed to Wikipedia before. This is very helpful because afterwards, if people want to improve a page they have specific feedback from a subject area expert."
Physiwiki, Hebrew Wikipedia
How this information can apply to program planning
Use the information to help you in planning for program inputs and outputs.
While most of the contests in this report engage existing (and possibly experienced) editors, some contests are successfully engaging new editors. This suggests that it is important to keep in mind who the target audience of a contest will be and to design the contest for that audience. Writing contests may often attract experienced users because competing to write articles requires that participants already have significant expertise editing Wikipedia. In examining the few contests in this report that engage new editors, we find that writing contests may be successful at engaging new users for a short period of time. Contests engaging new editors may need a different model for recruiting participants. For example, one contest limited eligibility to participants with at most 100 lifetime edits, to encourage newer editors to participate.
The data from different contests can help you determine the right mix of participants for your contribution goals.
In terms of content production, writing contests show a positive relationship: contests with more participants tend to affect more articles and produce more pages of text. Additionally, participants in 40-week contests produce more content than those in one-week contests. If you know how many participants you will have for your contest, use the data from other implementations in this report to see what goals and targets may be reasonable to set for total content produced for the Wikipedia project you are working with. You can also use the table below to help guide you in setting targets.
Example: You plan to run a writing contest that lasts 4 weeks, and it is the first time you are running it. Set a lower target: aim for 12 participants who produce 52,800 characters (4 weeks x 1,100 characters x 12 participants) and create or improve 24 articles (4 weeks x 0.5 articles x 12 participants). If you think the contest will be popular, set higher targets.
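The target arithmetic above can be sketched as a small helper. This is a minimal sketch, assuming the report's per-participant weekly rates (1,100 characters and 0.5 articles created or improved) hold for your contest; the function name is illustrative, not from the report.

```python
# Per-participant weekly rates taken from the report's worked example.
CHARS_PER_PARTICIPANT_WEEK = 1_100
ARTICLES_PER_PARTICIPANT_WEEK = 0.5

def contest_targets(weeks: int, participants: int) -> tuple[int, int]:
    """Return (character target, article target) for a planned contest."""
    chars = weeks * CHARS_PER_PARTICIPANT_WEEK * participants
    articles = int(weeks * ARTICLES_PER_PARTICIPANT_WEEK * participants)
    return chars, articles

# The 4-week, 12-participant example from the text.
print(contest_targets(4, 12))  # (52800, 24)
```

For a first-time contest, treat these as lower targets and adjust upward if you expect strong interest.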
Examine the resources that have gone into implementing different contests.
This report includes too few budgets to be used as a reliable source for planning a budget. The 10 budgets included range from $23 USD to $4,000 USD, which demonstrates the wide variety of contest budgets.
Contests leverage donated resources, which shows there are approaches to resourcing beyond monetary funds. Of the 14 program leaders who reported on donated resources, 13 (93%) received at least one donation, and 13 (93%) received donated contest prizes. This suggests that program leaders running contests may do well at obtaining donations, especially contest prizes.
Reach out and connect to other contest leaders.
Among all the benefits of connecting with fellow program leaders, you can also take the opportunity to ask about how they plan their budget. When using budgets presented here for planning purposes, try to find an event in a location with a similar economy to your area and consider reaching out to a successful program leader to discuss potential resource needs (including possible budget or donated resources). Alternatively, you can find an event based on the same model in a different location and talk to the program leader about the costs before translating those expenses into local prices.
Use the distribution statistics as guardrails against costly plans that may not produce scaled results.
The boxplots illustrating cost per participant and cost per text page or articles created/improved can also be helpful references for comparing the cost of your event with how much content is produced. As with overall budget information, the boxplots should be taken in the context of each event. If planning a new program, you might expect the costs to fall within the middle 50% of costs per output reported (i.e., within the green bar on the boxplot). As programs move down the boxplot, they create better outcomes with fewer inputs. We hope, as we continue to evaluate programs and feed the results back into program design, that we can learn more from the programs achieving the most impact using the fewest resources.
Writing contests differ in goals, length, subject area, and scope, yet they are organized successfully within and across many Wikipedia language communities, and in many contexts to meet diverse goals. In some ways, this speaks to their ability to be replicated. In planning and implementing a writing contest, recognition is typically offered through awards or prizes based on the quality or quantity of content produced. Judging who receives recognition or prizes may require tracking or evaluating content contributions. Program leaders use many different methods to track contributions, such as event pages, bots, and wiki-based tools, in order to judge contributions or contestants. Use the data tables in the report to find program leaders who are successful in tracking submissions. You can also reach out to program leaders who helped create the toolkit.
We are currently in the process of developing a Program Toolkit for writing contests. We will reach out to writing contest program leaders, learn about best practices and challenges, and collect the information to make a resource. We will have stories, resources and advice on how to plan, run, and evaluate writing contests.
How does the cost of the program compare to its outcomes?
Our very rough cost-benefit analysis includes data from 30 implementations with non-zero budgets and outputs that ended between May 2013 and September 2014. First, we examine the total contributions these contests made to Wikimedia projects, in terms of content and participation. We then divide the cost by the amount of content produced.
Pages of text: 15,144 (or 22.7 million characters)
Articles created/improved: 15,156
Projects: At least 11 different language Wikipedias
Existing editors, 3 months, 1 edit or more: 520 (82.9%)
New editors, 3 months, 1 edit or more: 20 (17.5%)
Median cost per text page: $1.14 USD
Median cost per article created or improved: $0.59 USD
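The per-output cost figures above come from dividing each contest's budget by its output and taking the median across contests. A minimal sketch of that calculation, using hypothetical per-contest figures rather than the report's actual data:

```python
import statistics

# Hypothetical (budget in USD, pages of text produced) pairs for five
# contests; the report's real per-contest figures are in its data tables.
contests = [
    (500, 400),
    (1200, 900),
    (23, 30),
    (4000, 3500),
    (300, 310),
]

# Cost per page for each contest, then the median across contests.
cost_per_page = [budget / pages for budget, pages in contests]
median_cost = round(statistics.median(cost_per_page), 2)
print(median_cost)  # 1.14
```

Using the median rather than the mean keeps one unusually expensive or cheap contest from dominating the summary figure.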
These numbers show that contests produce a substantial amount of content, and engage many participants. We cannot make strong comparisons or judgements about writing contests because we need more quantitative and qualitative data. We need more information about the context in which the contests exist as well as additional outcomes that are not included in this analysis. Also, many writing contests are conducted entirely by volunteers without a budget and are not included in these cost comparisons. It would be great to one day learn how much content writing contests produce across all the Wikimedia projects, and learn more about how writing contests motivate and build our lively online communities.
Join the conversation! Visit the report talk page to share and discuss:
Questions about Evaluation and Impact
What, if any, ideas do you have about other ways we should evaluate programs?
What questions around program impact or evaluation do you have after reading the reports?
What further data investigations would you like to see (or do!) for these programs?
Questions about Measures
What, if any, measures have you used that are missing from these reports?
What, if any, tools/bots/programs/strategies do you use to measure the outcomes of your programs?
Questions about Sharing
If you ran a program that delivered excellently against goals, speak up! Consider writing a blog or how-to guide highlighting your ideas on why your program was so successful.
If your program surmounted a particularly tricky problem in program design, consider writing a learning pattern!
If you have run a program and want to report key metrics to the Learning and Evaluation team, our collector is always open. Visit our reporting page to learn about the reporting form's contents and find the link to voluntary reporting.
Questions about Connecting
If you are considering running a new program or updating an existing one, consider reaching out to experienced program leaders who have organized a similar program.
Join the program leader mailing list for weekly updates about program evaluation, tools, and more.
↑ Here we examine all contests that reported together, but recognize that not all writing contests may share these as priorities. Contests are a diverse set of programs and reflect a diversity of goals across contexts; we encourage organizers of each event to consider the data in terms of what matters most to their priority goals.