Let's Talk! - Program Evaluation and Metrics
Summary
The focus of the Program Evaluation and Design team is peer learning, practice sharing, and the adoption of a shared language and approach to program evaluation across the movement, in order to support the discoverability and innovation of high-impact programs. Importantly, the team is charged with developing learning resources and tools that support program leaders in self-evaluating their program efforts and impacts.
The first year
This past year, the Program Evaluation and Design team initiated a handful of strategies designed to help program leaders evaluate the impact of their work and share best practices. These program evaluation capacity-building efforts are part of an initiative intended to support program leaders in designing and running effective programs, as well as grantmakers in making sound decisions about how and where resources may be invested to achieve impact.
In brief, goals for program leader self-evaluation and sharing have included:
(a) Supporting program leaders in learning to use evaluation to improve their program design through:
- Working with grantmaking programs to align evaluation language and clarify expectations
- Direct training and capacity building for program leaders on program evaluation (e.g., in-person and virtual meet-ups)
- Engagement in a growing community of program leaders, evaluators, and decision-makers (e.g., the evaluation portal, Facebook group, IRC, and Wikimedia conference events)
- Technical assistance and support for program leader self-evaluation (e.g., guidance, learning resources, consultation, and template and tool development)
(b) Working in collaboration with program leaders to develop and share a high-level understanding of which programs and activities achieve different outcomes and levels of impact through:
- Logic Model/Theory of Change mapping
- Measurement strategies and program metrics standardization
- High-level data coordination and reporting, as demonstrated by the first set of evaluations of seven programs: Evaluation reports (beta)
The year ahead
In the second year of the program evaluation capacity-building initiative, the team aims to:
- Continue to work with grants officers to ensure continuous learning opportunities across the movement, including periodic hangouts, IRC sessions, one-on-one conversations, and face-to-face capacity-building workshops at larger Wikimedia events.
- Develop and provide the most needed evaluation resources (data, tools, and information) to program leaders and community organizers, using baseline information gathered through meet-ups, dialogues, surveys, and reporting.
- Revisit and report back on the current year’s seven program reports, and expand the depth and number of programs reviewed and reported on by at least three new program areas.
- Work with successful program leaders throughout the movement to develop “program toolkits”: blueprints for program components and processes that have proven to support the achievement of impact, built through case studies and through collaboration on pilot evaluation strategies with program leaders interested in pursuing deeper inquiry.
Key questions
Program evaluation
- How should the program evaluation team evaluate and report on the program evaluation and design capacity-building initiative? What measures of success would you find most useful for assessing progress toward these team goals? (Discuss here)
- Examining the evaluation learning opportunities and resources made available so far, (a) what have you found most useful in supporting program evaluation and design, and (b) in what resource areas do you feel least supported? (Discuss here)
- Examining the Evaluation reports (beta), please share your thoughts on the (a) strengths and (b) weaknesses of the metrics piloted. (Discuss here)
WMF grantmaking overall
- How should WMF grantmaking evaluate its various grantmaking initiatives to groups and organizations (e.g., annual plan grants, project and event grants)? What strategies and/or metrics would you recommend for assessing grantmaking to groups and organizations? (Discuss here)
- How should WMF grantmaking evaluate its various grantmaking initiatives to individuals (e.g., travel scholarships, individual engagement grants)? What strategies and/or metrics would you recommend for assessing grantmaking to individuals? (Discuss here)
PARTICIPATE!
If you are interested in providing thoughts and suggestions, please contribute! Each of the above questions has a space on the dialogue page in the evaluation portal, and other topics will inevitably be proposed and hashed out there as well. Sub-pages compiling suggestions and recommendations are also encouraged. :)
See also
Interested in reading the evaluation reports (beta)? Access the overview and all seven reports here.
Interested in sharing specific measures and metrics? Participate in the metrics brainstorm!