Grants talk:APG/Proposals/2017-2018 round 1/Wiki Education Foundation/Staff proposal assessment
Questions
Hi Delphine (WMF) and the rest of the WMF FDC staff,
Thanks for your work on the assessment. We're hoping you can answer some follow-up questions to help us understand, reflect, and act on it.
- In answer P2, is there a way for organizations with a limited geographic focus to get a higher rating than “neither” for potential for impact at scale? How have other geographically focused organizations received higher ratings? I’m also not sure what you mean by “the knowledge divide” — can you explain more? I feel we do have impact at scale right now (bringing 16,000 new editors each year seems to fit that bill to me), so I’m surprised by this assessment and would like to understand what we could do differently to raise this rating beyond a “neither”. Are you suggesting we should expand our scope to other countries beyond the U.S. and Canada?
- I honestly think targeting a population (U.S. and Canadian higher education students) that has 60% women is a strategic strength of our Classroom Program with regard to diversity (question P4), so I also don't understand this assessment. Can you explain more why you think we aren't addressing diversity issues in Wikipedia?
- Among many of these organizational effectiveness metrics, I’m curious what we would need to do differently to get a “major strength” rating — there seem to be no criticisms or suggestions for improvement in either the narrative or the description part of your assessment, as they’re all overwhelmingly positive about our effectiveness. What else would we need to do to improve to a "major strength" rating?
- Regarding B1, could you please describe how you rate organizations in this category?
- Regarding B2, you say that our amount requested is “too high with respect to what this organization is aiming to achieve”. What achievements would you expect from a $750,000 request? In other words, what impact numbers would we need to hit for this to be a reasonable request, according to your assessment criteria?
Thank you in advance for your additional explanations — this will really help us turn your feedback into actionable steps for us! :) --LiAnna (Wiki Ed) (talk) 00:04, 15 November 2017 (UTC)
- This has been a fascinating grant request to watch, and this follow-up asks straightforward questions about surprising answers. So far as I know, Wiki Ed has done more to get high counts in the WMF's own Learning and Evaluation/Global metrics than all other funded programs put together, so I would have expected maximal scores, since the outreach here is off the charts by any measuring tool of which I am aware.
- My stake in this is that I live in the United States and watch grant requests based here. I do not follow everything that is said here on this report page or the other grant request pages. Many of the statements here from Wiki Ed and the WMF suggest a long history of off-wiki conversation, which makes this evaluation report difficult to understand and makes the process seem more subjective than the published grading system suggests. That's fine; I encourage more conversation among the community, but it is surprising to me to see such low marks for what I thought was the best funded, most organized, most documented, and highest impact single Wikimedia project in the world. It seems natural to expect that relatively well funded programs like Wiki Ed should always have higher impact grades than other programs, because with more time and money more will be accomplished. I appreciate the public nature of this and all the work that the FDC volunteers did to evaluate this. I will get the word around for other people to check this out. Blue Rasberry (talk) 19:55, 15 November 2017 (UTC)
Further thoughts
Dear members of the FDC and WMF staff,
We understand that it will take time to answer our questions, especially given you’re traveling. Yet, we’re anxious about the outcome of this funding round as the financial support requested in this grant proposal is vital for our organization. That’s why we’d like to provide you with some thoughts on the current assessment.
- Diversity. From its very first days, our organization has been heavily engaged in countering systemic bias on Wikipedia. We have brought thousands of new women editors to Wikipedia, worked hand in hand with academic associations like the National Women’s Studies Association (NWSA) on narrowing Wikipedia's gender content gap, launched initiatives to increase the number of articles on women in science, actively recruited courses from African-American Studies departments and at historically black colleges and universities (HBCUs), and participated in movement events like the Wikimedia Diversity Conference. Board members like long-time Wikipedian Carwil Bjork-James have bolstered our efforts by raising awareness of the fact that marginalized and minority communities are still underrepresented or misrepresented on Wikipedia (e.g., during a presentation at the American Anthropological Association’s annual meeting in 2016). We agree that language diversity is important, and we support and applaud movement entities working in this area, as the staff assessments of other entities have acknowledged. But we think tackling gender, race, and content diversity and countering systemic bias on the Wikipedia language version that receives nearly as much traffic as all the other language versions combined is also critically important now. Wiki Education’s work in diversity has followed the model of past WMF grantmaking initiatives like the Inspire Campaign, so we remain surprised and perplexed by the staff assessment. From reading other staff assessments, it seems there has been a change in WMF grantmaking such that “diversity” is now defined as small language project development. That seems to be a major shift that needs to be announced — especially given that I just participated in the Wikimedia Diversity Conference, where we grappled with how to interpret and enact the strategic direction, and got no indication that this shift had happened.
- Community engagement. Over the past four years, we have built a trusting relationship with the Wikipedia community of long-time editors. Our staff – many of whom also edit Wikimedia projects outside of work – have engaged in a large variety of community events like local meetups, Wikimedia conferences, etc. When the community in the United States asked us for help, we supported WikiConference North America with significant funds and staff time. Our Visiting Scholars program provides much-needed access to sources for existing volunteer editors. In order to prevent others from reinventing the wheel, we make substantial efforts to share our learnings with the wider community (e.g., reports on Meta, regular updates on our blog, detailed monthly reports, an annual report full of learnings documentation, and presentations at Wikimedia conferences, including co-presenting with WMF staff at the last two Learning Days). We even published the draft of our current annual plan on Meta so we could get community feedback on our plans for the future. We also have supported Google Summer of Code and Outreachy interns to build the Wikimedia tech community. But the assessment framework doesn’t say "community" has to mean the Wikimedia community exclusively. For us, community engagement doesn't end with focusing on existing Wikimedians. Based on our commitment to inclusivity, we've been working with a growing community of academics who share our desire to provide the general public with factual and trustworthy information. Building bridges between the existing community and outside communities is an integral element of how we strive to bring value to Wikipedia.
- Capacity. The staff assessment acknowledges our past achievements and leadership, expertise, staff, and experience managing funds, which we certainly appreciate. But I don’t understand why we were downgraded for not involving volunteers more in our work, since that is neither part of the framework description for this criterion nor something that is holding us back from achieving impact. Our proposal acknowledges things that didn’t work for us with the Ambassador Program (part of our commitment to sharing our learnings, mentioned in the previous bullet point), but it also lays out how we went about solving that issue with an alternative approach. Isn’t the definition of capacity the ability to achieve the impact you set out to achieve? There seems to be agreement and acknowledgement that we have this ability (we are confident that with the Guided Editing system, we will have the capacity to double the numbers in our Classroom Program without adding staff), so I can’t reconcile the low mark here simply because we acknowledged in our proposal that we had in the past struggled with a capacity problem that we have now demonstrably fixed.
Thanks a lot for giving us the opportunity to participate in this process. We wish you all the best for your time in Madrid. --LiAnna (Wiki Ed) (talk) 21:23, 17 November 2017 (UTC)
- Hello LiAnna (Wiki Ed), a quick response to say that your questions and the answers and remarks you gave here were taken into account by the FDC at the moment of the deliberations. The staff assessment is one among many documents that the FDC takes into consideration to make their funding decision. On the scoring, I want to point out that "neither" is exactly that, i.e., it describes a state where we cannot evaluate whether something is a strength or a concern, either because there is no precise information on certain topics, or because we don't have enough history to identify a trend. It is neither good nor bad, but rather a place of observation, which gives us a baseline over time to see what organizations achieve. I will be reaching out to all organizations in the next few days to discuss the deliberations and answer any questions you might have about the process. Best, 01:14, 23 November 2017 (UTC)