Wikimedia Blog/Drafts/Help us fill in the gaps: Contribute to the on-going story about Wikimedia programs

POSTED JAN. 29 AT 6:30PM PT

Title ideas

  • Help fill in the gaps: Contribute to the story of Wikimedia programs

Alternatives:

  • Help fill in the gaps: Contribute to the evaluation of Wikimedia programs
  • Help us fill in the gaps: Contribute to the on-going story about Wikimedia programs
  • ...

Body

Examples of Wikimedia programs under evaluation this year. Can you help collect data for these programs?
Data collection graph by María Cruz, freely licensed under CC-BY-SA 4.0. Survey photo from the U.S. Bureau of Reclamation, in the public domain.

The Wikimedia Foundation's Learning and Evaluation team invites you to help collect data about Wikimedia programs, so we can evaluate their impact. Here's an overview of this initiative and how you can contribute. Your participation will help us understand how these programs support the goals of our movement.

Evaluating Wikimedia programs

Like field surveys, program evaluation requires extensive data collection.
Survey photo from the U.S. Bureau of Reclamation, in the public domain.

In November 2012, the Wikimedia Funds Dissemination Committee (FDC) proposed that we discuss “impact, outcomes and innovation, including how to measure, evaluate and learn” across the Wikimedia movement. This request raised some important questions, such as:

  • What will lead to a significant improvement in our overall community health?
  • How can we better learn from each other and adapt our programs?

To address these questions and help measure the impact of our programs, the Wikimedia Foundation's Program Evaluation and Design initiative started in April 2013, with a small team and a community call to discuss program evaluation. In its first few months, the team worked to identify the most popular Wikimedia programs and collaborated with a first set of program leaders to map those programs' goals and potential metrics. After this first step, the team invited a broader community of program leaders to share feedback about their capacity for evaluation through an online survey. We wanted to explore what programs were out there, what mattered to program leaders, and what they were measuring.

Survey results indicated that many program leaders were already tracking quite a bit of data about their programs. With these survey results in hand by August 2013, we launched the first Round of Data Collection in September 2013 and completed our first Evaluation Report. This high-level analysis began to answer many of the questions raised by movement leaders about key programs and their impact. The report was well received by our communities and generated many discussions about the focus of these programs, their diversity, and the data they collected. But it still left room for improvement: one recurring suggestion was that the Foundation collect more of the publicly available data from grant reports, event pages, and program registries.

The first reports included findings from 119 implementations of seven different types of programs across more than 30 countries, as reported by 23 program leaders. To learn about these implementations, we relied extensively on data submitted to us by community members.

Last year, based on community requests, we focused more on gathering data that had already been reported. To make the best use of staff and volunteer time, we first searched for programs and program data posted on project pages between September 2013 and September 2014, collecting as much relevant information as possible. Then we began reaching out to the community to help us fill in the gaps in this data. Some key information was hard to find, especially 'program inputs' such as staff or volunteer hours, budgets, and non-monetary donations. This input data, along with data on outputs and outcomes, helps us compare and contrast program implementations more accurately, to see which strategies work best to achieve program goals.
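To illustrate why these inputs matter, here is a minimal sketch in Python. The program names and numbers are entirely hypothetical, not actual evaluation data; the point is simply that once inputs such as volunteer hours and budget are known, outcomes can be normalized and compared across implementations of very different sizes.

    # Hypothetical figures for three program implementations:
    # (name, volunteer_hours, budget_usd, new_editors_retained)
    programs = [
        ("Edit-a-thon A", 40, 250, 6),
        ("Edit-a-thon B", 120, 900, 10),
        ("Photo contest", 300, 1500, 45),
    ]

    for name, hours, budget, retained in programs:
        # Raw outcomes alone (6 vs. 10 vs. 45) favor the biggest program;
        # input data lets us compare cost-effectiveness per implementation.
        print(f"{name}: {retained / hours:.2f} retained editors per volunteer hour, "
              f"${budget / retained:.0f} per retained editor")

Without the input columns, only the raw outcome counts could be compared, which would tell us little about which strategy made the best use of the time and money invested.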

This year's programs

Highlights of Data Collection Round 2.
By María Cruz, licensed under CC-BY-SA 4.0 International.

For this year's round, we have identified 733 program implementations, and data has been reported directly for 110 of them. We have also identified at least 98 different program leaders, and are still working to contact more leaders of programs reported by grantees. The global distribution of these programs is deeper and broader this year, spanning 59 countries: 74% are in countries where English is not the first language, and 21% are in the Global South.


Check out the list of programs under review!


To save time for program leaders, we now accept many different forms of documentation, and we are happy to find and pull the relevant data as needed. Data contributions are being examined more closely, including additional analyses to assess impact on content quality and participation. This data will be used to identify the most successful programs so that, together, we can take a closer look at what works and share the best strategies, as well as potential pitfalls, across our many Wikimedia communities. All data will be shared back with the community in the 2014 Program Evaluation Reports, to help us understand how programs develop, change, and contribute to larger movement goals.
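As a concrete, and again purely hypothetical, example of the kind of participation analysis mentioned above, this short Python sketch computes a simple retention rate: the share of an event's new editors who were still editing at least three months later. The editors and dates are invented for illustration only.

    from datetime import date

    # Hypothetical data: the event date, and each new editor's most
    # recent edit date (None means they never edited again).
    event_date = date(2014, 3, 1)
    last_edit = {
        "EditorA": date(2014, 8, 15),
        "EditorB": date(2014, 3, 2),
        "EditorC": None,
        "EditorD": date(2014, 7, 1),
    }

    # Count an editor as retained if they edited 90+ days after the event.
    retained = sum(
        1 for d in last_edit.values()
        if d is not None and (d - event_date).days >= 90
    )
    rate = retained / len(last_edit)
    print(f"Retention after 3 months: {retained}/{len(last_edit)} editors ({rate:.0%})")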

Get involved!

Programs reporting directly
By María Cruz, licensed under CC-BY-SA 4.0 International.

We welcome your participation as we continue to reach out to program leaders. If you coordinated one of the programs listed above, your help would be much appreciated: there are still many gaps in the data! If you would like to report on a program that is not listed, please send us an email at 'eval @ wikimedia.org'.

This new data will be included in our second round of evaluation reports, enabling program leaders to compare, contrast, and connect with one another across different locations, languages, and cultures. These reports will:

  • help guide choices of programs that impact shared movement goals
  • help find and share stories across different Wikimedia communities
  • help the WMF Grantmaking team identify promising practices, and develop program toolkits and learning patterns
  • inform communities about promising practices and potential new partnerships

Together, we can tell the stories of our Wikimedia movement programs in a more meaningful, informed and motivating way!


María Cruz, Community coordinator, Learning and Evaluation team


Summary

The Wikimedia Foundation's Learning and Evaluation team invites you to help collect data about Wikimedia programs, so we can evaluate their impact. Here's an overview of this initiative and how you can contribute. Your participation will help us understand how these programs support the goals of our movement. (...)

(This will appear next to the featured image on the blog's home page.)

Notes

Ideas for social media messages promoting the published post:

Twitter (@wikimedia/@wikipedia):

Help fill in the gaps: Contribute to the evaluation of Wikimedia programs
---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|------/

Facebook/Google+

  • The Wikimedia Foundation's Learning and Evaluation team invites you to help collect data about Wikimedia programs, so we can evaluate their impact. Here's an overview of this initiative and how you can contribute. Your participation will help us understand how these programs support the goals of our movement.
  • Categories: Wikimedia Learning
  • Tags: program evaluation, data


Extra Images


Mind the [data] gaps! Help us tell the story of Wikimedia programs!
Photo by CGP Grey, licensed under CC-BY 2.0.
Program Leaders reporting directly
By María Cruz, licensed under CC-BY-SA 4.0 International.


