On average, contest participants create or improve 131 pages, at an average cost of $1.67 USD per page.
Unlike our reporting on edit-a-thon outcomes, for this program evaluation we chose to focus on how many article pages were created/improved through on-wiki writing contests rather than on how many characters were added. Contest participants also produce characters across the Wikipedia article space outside of their contest work, so a character-based count would misrepresent what the contests themselves added.
Seven of the eight reported contests (88%) had data available on how many article pages were created/improved during their on-wiki editing contest. Page counts per contest ranged from 22 to 6,374. The average number of pages created/improved was 131.[2]
For the three programs that reported non-zero budgets, we were also able to learn how much each page cost based on budget input. Cost per page ranged from $0.40 USD to $3.44 USD. On average, the monetary cost per page was $1.67 USD.[3]
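As a minimal sketch of the arithmetic behind these figures, the snippet below divides each contest's budget by its number of pages created/improved and then averages those ratios across contests. The budget and page figures are invented placeholders, not the three reported contests' actual data, and averaging the per-contest ratios is one plausible reading of how such a figure could be computed.

```python
# Minimal sketch: cost-per-page arithmetic for on-wiki writing contests.
# The budgets and page counts below are invented placeholders, not the
# reported contests' data.
contests = [
    {"budget_usd": 500.0, "pages": 1250},
    {"budget_usd": 300.0, "pages": 150},
    {"budget_usd": 900.0, "pages": 400},
]

# Dollars spent per page created/improved, per contest.
cost_per_page = [c["budget_usd"] / c["pages"] for c in contests]

print("Dollars per page:", [round(x, 2) for x in cost_per_page])
print("Average dollars per page:",
      round(sum(cost_per_page) / len(cost_per_page), 2))
```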
Image uploads are not an intended outcome for on-wiki writing contests, but half of the contests in this report did upload images, with 33 on average per contest.
We were curious whether image uploading was a frequent occurrence in on-wiki writing contests. Four of the eight program leaders (50%) included image upload data. Upload counts ranged from one to 250, with an average of 33 uploads per contest.[4]
Graph 2: Dollars to pages, participation, and pages created/improved. This bubble graph plots participation against dollars per page created/improved, with bubble size depicting the total number of pages created/improved, for the three contests that reported both a budget and a page count. Although these few points appear to associate more pages created/improved with fewer dollars spent, there are too few observations to draw any such conclusion.
Graph 3: Dollars to pages created. This box plot shows the distribution of the three contests' costs in dollars spent per page created/improved. As illustrated by the long vertical line running from a low of $0.40 to a high of $3.44, results were highly variable for this small set of data. Importantly, this analysis does not measure how many pages of written text were actually produced by these contests.
On-wiki writing contests aim to improve the quality of Wikipedia articles, and they succeed in doing so. The longer the contest, the more quality articles it produces, but even the shortest contests succeed at improving the quality of Wikipedia articles.
We asked program leaders to report on the type of quality articles being produced through on-wiki writing contests, focusing on two article quality ratings found on Wikipedia: good articles and featured articles.
Six of the eight contests (75%) reported how many good articles were produced through their contests. Reports ranged from seven to 436 good articles, with an average of 28 produced (see Graph 4).[5] The same number of contests, six of eight, reported how many featured articles were created during their contests. Featured articles, the highest standard of article quality on Wikipedia, ranged from four to 42 per contest, with an average of 10 (see Graph 5).[6]
Based on the reported data, we found that the contests with the most articles rated as good or featured also lasted the longest (10 months), but shorter contests (3 months) also produced an impressive number of high-quality articles (see Graph 6 and Graph 7). We also learned that more participants generally meant more quality content, although participants in smaller contests with fewer participants tend to produce high amounts of quality content, too (see Graph 4 and Graph 5).
Graph 4: Participation, good articles per participant, and number of good articles. One of the key goals for contests is improving article quality. This bubble graph plots the number of participants against the number of good articles per participant, with bubble size and label showing the total number of good articles for each contest. As illustrated by the bubbles, the number of good articles per participant ranged from zero to three, and for four of the six contests for which data were available, at least one good article was produced per participant. Although the total number of good articles appears to increase with the number of participants, some of the highest rates of good articles per participant occur in the smaller contests.
Graph 5: Participation, featured articles per participant, and number of featured articles. This bubble graph examines the number of participants against the number of featured articles per participant, with bubble size and label showing the total number of featured articles for each contest. As with the rate of good articles per participant, as the number of participants increases, the number of featured articles per participant seems to decrease. The four bubbles to the right of the 0.5 mark on the x-axis indicate that those four contests produced at least one featured article for every two participants.
Graph 6: Good articles per participant, contest duration, and number of good articles. This bubble graph compares contest duration with good articles per participant, with bubble size showing the number of good articles produced in each contest. Although the contest that lasted the longest produced the most good articles, contests that lasted under four months also produced a considerable number of good articles.
Graph 7: Featured articles per participant, contest duration, and number of featured articles. As in Graph 6, this graph compares contest duration with the number of featured articles per participant, with bubble size depicting the number of featured articles produced. It tells a similar story, emphasizing that contests that lasted only one month were still able to produce a large number of featured articles.
References
Note: Although "content production" is a direct product of the program event itself, and therefore technically a program output rather than an outcome, most of the program leaders who participated in the logic modeling session felt this direct product was the target outcome for their programming. To honor this community perspective, we include it as an outcome along with quality improvement and retention of "active" editors.
Retention data were available for five of the eight reported contests (63%) for both three and six months after the events. We only had data for those five because the other three contests had not yet passed their three- and six-month post-event periods.
Three months after: Retention rates three months after the end of the contests ranged from 60% to 100%. Retention averaged 81%.[1]
Six months after: Retention rates six months after the end of the contests ranged from 60% to 86%. Retention averaged 76%.[2]
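As a rough illustration of how a retention rate like those above could be computed, the sketch below counts the share of contest participants who made at least one edit within a given window after the contest ended. The one-edit threshold, the window lengths, and the sample data are assumptions made for illustration; they are not the definition or the data behind the reported figures.

```python
from datetime import date, timedelta

# Rough sketch: post-contest retention as the share of participants with at
# least one edit in the window after the contest ended. The threshold and
# the sample data are assumptions for illustration only.
contest_end = date(2014, 3, 31)
edits_by_participant = {
    "EditorA": [date(2014, 4, 5), date(2014, 5, 10)],
    "EditorB": [date(2014, 4, 2)],
    "EditorC": [],                      # no edits after the contest ended
    "EditorD": [date(2014, 9, 15)],
}

def retention_rate(window_days):
    cutoff = contest_end + timedelta(days=window_days)
    retained = sum(
        1 for edit_dates in edits_by_participant.values()
        if any(contest_end < d <= cutoff for d in edit_dates)
    )
    return retained / len(edits_by_participant)

print("3-month retention:", round(retention_rate(90), 2))   # 0.5 with this sample
print("6-month retention:", round(retention_rate(180), 2))  # 0.75 with this sample
```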
We need to do additional research to learn why retention numbers decline slightly between three and six months after a contest (from an average of 81% to 76%), and to explore editor motivations: are contest participants more prone to edit during the contest itself, or do they return to "regular" editing habits after the contest ends?