Budapest Pilot PRE-POST Survey Results
Twenty-six international participants came together in June 2013 for the pilot Program Evaluation & Design Workshop in Budapest, Hungary, among them 21 Wikimedians from 15 countries. The participants – all with a track record of program work – represented five different program types: Edit-a-thons/Editing Workshops, GLAM Content Donations, Photo Upload Contests (Wiki Loves Monuments, WikiExpeditions, WikiTakes), On-wiki Writing Competitions (contests such as the WikiCup), and the Wikipedia Education Program. Participants were asked to complete PRE and POST workshop surveys to assess the workshop's impact against its stated objectives:
- Participants will gain a basic shared understanding of program evaluation.
- Participants will work collaboratively to map and prioritize measurable outcomes, beginning with a focus on the most popular programmatic activities.
- Participants will gain increased fluency in the common language of evaluation (e.g. goals versus objectives, inputs & outputs versus outcomes & impact).
- Participants will learn about different data sources and how to extract data from the UserMetrics API (see the sketch after this list).
- Participants will commit to working as a community of evaluation leaders who will implement evaluation strategies in their programs and report back to the group.
- Participants will have a lot of fun and enjoy networking with other program leaders!
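For context on the UserMetrics API objective above, the sketch below shows the general shape of a cohort-metric request of the kind participants practiced: define a cohort of usernames touched by a program, then request an aggregate metric over the program period. It is a minimal illustration only; the endpoint path, metric name, query parameters, and the cohort name "budapest_pilot" are assumptions for demonstration rather than a verbatim reference, so consult the UserMetrics documentation for the actual interface.

```python
import json
import urllib.request

# Minimal sketch: request an aggregate metric for a named cohort from the
# UserMetrics API. The host, path pattern, and parameter names below are
# illustrative assumptions, not verbatim API documentation.
BASE_URL = "https://metrics.wikimedia.org/cohorts"

def fetch_cohort_metric(cohort, metric, start, end):
    """Fetch a metric (e.g. edit counts) for a cohort of usernames
    over a date range, returning the decoded JSON response."""
    url = f"{BASE_URL}/{cohort}/{metric}?start={start}&end={end}"
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    # Hypothetical cohort name and date range, for illustration only.
    result = fetch_cohort_metric("budapest_pilot", "edits",
                                 "2013-06-01", "2013-06-30")
    print(json.dumps(result, indent=2))
```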
The majority of the pilot workshop participants entered the workshop with no more than a basic understanding of eight of the ten program evaluation terms included in the survey; only the terms “program,” “qualitative,” and “quantitative” were well known to the group at the beginning of the workshop. By the end, the majority left with an applied or expert understanding of nearly all the key terms included on the survey. Importantly, the core concept terms “theory of change” and “logic model,” while still less understood than the other terms, demonstrated highly significant gains along a similar trajectory to the other selected terms that were less well known at PRE survey time.
Specifically, reported understanding of each of the selected terms changed as follows from PRE to POST:
- Cohort: Understanding grew from 19% reporting applied or expert understanding at PRE to 78% at POST
- Inputs: Understanding grew from 38% reporting applied or expert understanding at PRE to 100% at POST
- Logic Model: Understanding grew from 25% reporting applied or expert understanding at PRE to 47% at POST
- Outcomes: Understanding grew from 40% reporting applied or expert understanding at PRE to 84% at POST
- Outputs: Understanding grew from 30% reporting applied or expert understanding at PRE to 95% at POST
- Metrics: Understanding grew from 50% reporting applied or expert understanding at PRE to 63% at POST
- Program: Demonstrated a growth trend from 63% reporting applied or expert understanding at PRE to 74% at POST
- Qualitative: Understanding was maintained, with 75% reporting applied or expert understanding at PRE and 74% at POST
- Quantitative: Demonstrated a growth trend from 75% reporting applied or expert understanding at PRE to 84% at POST
- Theory of Change: Understanding grew from 12% reporting applied or expert understanding at PRE to 53% at POST
In addition to this measurable change in understanding of a new, shared vocabulary, participants also demonstrated a high level of success in grasping several core learning concepts that were presented and modeled throughout the workshop. At POST survey time, participants rated their level of understanding of six key learning concepts from the workshop presentations, all of which they rated highly.
Furthermore, the majority of the participants were highly satisfied with both the process of the break-out group sessions and the logic models those sessions generated. At both PRE and POST survey time, participants shared one word or phrase that best represented their feelings about evaluation. At PRE survey time the responses, while somewhat “curious,” also revealed feelings of being pressured to participate; at POST survey time there was much more excitement expressed, along with a fair bit of feeling overwhelmed. When asked what next steps they planned to implement in the next 45 days, the participants’ most frequent responses were:
- Develop measures for capturing outcomes (47%)
- Conduct a visioning activity to articulate their specific program’s impact goals and theory of change (42%)
- Develop their own custom logic model to map their specific program’s chain of outcomes (42%)
Although most participants offered specific ways that the workshop could be improved, the majority felt confident in their ability to implement next steps in evaluating their programs. They also shared the ways the Program Evaluation and Design Team could best support them in those next steps (i.e., broader community involvement, quality tools for tracking data, survey strategies, materials to teach other program leaders, and an online portal for engagement), toward which the team continues to direct its efforts.
Complete responses are summarized in the Results Summary (see below).