Wikimedia Blog/Drafts/First Program Evaluation & Design Workshop held in Budapest

This was a draft for a blog post that has since been published at https://blog.wikimedia.org/2013/07/11/finding-out-what-works-first-workshop-program-evaluation/

Improving program performance: first evaluation workshop in Budapest

Participants from 15 countries attended the first Program Evaluation & Design Workshop

In the Wikimedia movement, there are many organized activities seeking to contribute to the Wikimedia vision and strategic goals. But how do you determine which of these programs work and which don't? And how can you further improve the performance of programs? To tackle these difficult questions, 26 international participants came together in June 2013 for the first Program Evaluation & Design Workshop in Budapest, Hungary. The event was held by the Wikimedia Foundation, in partnership with Wikimedia Magyarország, the local chapter.

Despite record high temperatures in Budapest, participants kept cool in the heart of the city, engaging in an intensive two-day workshop on the basics of Program Evaluation. The workshop focused on creating a shared understanding of what program evaluation is and why it is important, and on providing attendees with basic skills and a logic modeling tool for mapping out their programs, so that they can begin incorporating Program Evaluation into their program work.

The workshop brought together 21 Wikimedians from 15 countries. The participants – all with a track record of doing program work – represented five different program types.

Topics of the workshop

Day one opened with a welcome by Frank Schulenburg, Senior Director of Programs at the Wikimedia Foundation and a long-time Wikipedian. He gave a brief background on what Program Evaluation is and why Wikimedia is investing in it. Schulenburg stressed three points about the current evaluation initiative:

  • self-evaluation: program leaders evaluate their own programs
  • collaborative: we're all in this together and we will learn together
  • focused on capacity building: our goal is to equip program leaders in the movement with the necessary skills to use program evaluation and design practices

Dr. Jaime Anstee, Program Evaluation & Design Specialist for the Wikimedia Foundation, then led the group through the basics of Program Evaluation: the different types of evaluation and the roles of everyone involved in it. Anstee also emphasized that the current evaluation initiative aims to be empowering and participatory while maintaining a utilization focus. The morning ended with a visioning exercise exploring the positive and negative results the movement could experience with Program Evaluation, followed by lightning talks from participants about the programs they have carried out.

The GLAM Content Donation group's example of their logic model

The second half of day one focused on theory of change (a way to design and evaluate social change initiatives) and on logic modeling (a visual representation of the relationships among a program's resources, activities, outcomes, and so on) as a tool for mapping a program's theory of change. To get a better grasp of the terms and concepts being presented, participants broke out into groups based on program type and worked together to create logic models for their programs. This was a helpful exercise, as it showed participants how logic models can help them articulate their program's process and identify possible measurement points to include in an evaluation plan.

Day two started with participants sharing their takeaways and remaining questions from the presentations and group work of day one. Takeaways included gaining a shared vocabulary for Program Evaluation and a deeper understanding of theory of change concepts. Open questions concerned the tracking of qualitative data and time management – do program leaders have the capacity to undertake Program Evaluation? The day proceeded with more definitions and further exploration of evaluation design and logic models, more participant lightning talks, and a presentation about different data sources. That presentation covered existing, or secondary, program data such as workshop sign-in sheets, notes, and user logs, as well as primary data collection strategies such as interviews, focus groups, and surveys. It also gave an overview of the upcoming WikiMetrics tool (formerly known as the UserMetrics API) and how program leaders will be able to use it for evaluation.

Evaluating an evaluation workshop

Applying Program Evaluation techniques to the event itself, participants filled out pre- and post-workshop surveys. The results (see below) demonstrated success: most participants entered the workshop with little to moderate knowledge of Program Evaluation terms and concepts, and the majority left with a better understanding of them.

Participants judged their understanding of various Program Evaluation terms higher after the workshop (red) than before (blue)

Participants' rating of their comprehension of various Program Evaluation concepts after the workshop

Furthermore, the majority of the participants were highly satisfied with both the process of the break-out group sessions and the logic models those sessions generated. 63% reported feeling "mostly" (50%) or "very" (13%) prepared to implement next steps in evaluating their program. When asked which next steps they planned to implement within the next 45 days, the participants' most frequent responses were:

  • Develop measures for capturing outcomes (47%)
  • Conduct a visioning activity to articulate their specific program's impact goals and theory of change (42%)
  • Develop their own custom logic model to map their specific program's chain of outcomes (42%)

Participants also identified the kinds of assistance they need most: broader community involvement, quality tools for tracking data, survey strategies, materials to teach other program leaders, and an online portal for engagement. This survey has been critical in helping us learn what worked, what participants need during and after workshops, and what we need to improve in future workshops. All of the workshop participants look forward to further engagement about Program Evaluation on a movement-wide level!

Next steps

So what's next and how can you get involved? You can visit our temporary space on Meta. There you will find updates from the Program Evaluation & Design team and community, a share space where you can share program experiences and get help from other program leaders, resources on all things program evaluation, event information, and a FAQ to answer your questions. We will also blog regularly about program evaluation and design in the movement and the exciting activities related to it. We look forward to growing the movement's impact through Program Evaluation!

--Sarah Stierch, Program Evaluation & Design Community Coordinator