Grants:IEG/Committee/Workroom/Reflections

In the pilot round of IEG (spring 2013), we used this page to reflect on what went well, and what we'd like to change in our process (on the committee or staff end) for future rounds.

Private wiki

I found myself wishing at a couple of points in the review that we had a private wiki. For example, some of the mailing list discussions/consensus building on each proposal felt like it would have been easier with a wiki instead of the mailing list or Google Docs. We could also have used it to collaboratively draft the scoring tables, and for co-drafting messages/announcements from the committee. Did others feel similarly, or do we think the issues with having a private wiki still outweigh the benefits? Siko (WMF) (talk) 17:44, 15 March 2013 (UTC)

Hmm, it did seem that we had quite a number of discussions in email threads that would have been easier to read (and respond to) on a wiki. I'd still be able to manage, I think, but I'd be interested to see what others have to say. Thehelpfulone 21:39, 10 April 2013 (UTC)
Agree, similar to how Arbcom members write proposals and sign them with comments like "first choice", "second choice", or "oppose". This would make keeping track of multiple topics easier. It might also decrease the number of email threads that require tracking. I support experimenting with this in the next round. --Pine 17:49, 11 April 2013 (UTC)
Yeah, I agree with the idea of having a private wiki. — ΛΧΣ21 06:58, 14 April 2013 (UTC)
OK, we'll work on getting one set up in time for round 2 then! (Thehelpfulone, I'm hoping you'll give me a hand there if you have the time :-) ) Siko (WMF) (talk) 17:32, 30 April 2013 (UTC)

Scoring

Got improvement suggestions for the scoring process? I'd like to look into using a better scoring tool (perhaps piggybacking on the Wikimania scholarships system, though that will take some custom development) in the future, as discussed before. But if we had to use Google Forms again next time...would that be horrible? For my part, it was REALLY handy having the scores all come out in spreadsheet form for easy compiling; I liked that. Siko (WMF) (talk) 22:51, 10 April 2013 (UTC)

  • I felt that the matrix criteria and descriptions were very well done.
  • I hope that in the next round you'll add a criterion for assessing project risk.
  • It would reduce the time and effort required to score proposals if there were a more user-friendly way to see the matrix, the Google doc or other scoring input tool, and the proposal all side by side. This simplification might also reduce the possibility of accidental scoring errors. If you have the technical resources to design a less labor-intensive UI, then please do. Could an intern work on this during this year's Google Summer of Code?
--Pine 23:14, 11 April 2013 (UTC)
Good to hear; agree on risk and on making user-friendly improvements! We've missed the boat on this year's Summer of Code, I think (which I hear is already short on mentors as well), but we can certainly look into better solutions for the future. And if we must use Google Forms one more time while building something better, I hope having a private wiki may alleviate some of the pains in the interim. Siko (WMF) (talk) 17:35, 30 April 2013 (UTC)
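On the "easy compiling" point: if we do end up on Google Forms again, the responses export as a spreadsheet/CSV, and averaging scores per proposal is only a few lines of scripting. Here is a minimal sketch under assumed column names (proposal, criterion, score); these are illustrative and not the actual round-1 form fields.

# Hypothetical sketch: average committee scores per proposal and criterion
# from a CSV export. The column names below are assumptions for illustration.
import csv
from collections import defaultdict

def average_scores(csv_path):
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["proposal"], row["criterion"])
            totals[key] += float(row["score"])
            counts[key] += 1
    # Mean score for each (proposal, criterion) pair across all reviewers
    return {key: totals[key] / counts[key] for key in totals}

# e.g. average_scores("ieg_scores.csv") might yield
# {("Proposal A", "impact"): 7.5, ("Proposal A", "feasibility"): 6.0, ...}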

Expensive cameras

I haven't double-checked this, but I think there were multiple requests for expensive cameras in this round that I felt were not financially justifiable. A good camera phone or a sub-$200 camera will do the job for basic photography. I consider it a red flag for a grant request to include a request for a $2500 camera that the grant proposer intends to use for simple group photography. I think we could mention in the guidance for applicants in future rounds that requests for expensive cameras are likely to get a cold reception unless there's strong justification. --Pine 04:03, 12 April 2013 (UTC)

I've been trying to think of a way to expand on this to put it into some more generally useful framework for applicants. Could we have a list of recommended dos and don'ts? --Pine 06:21, 14 April 2013 (UTC)
Indeed. Developing a don't-do-this-and-better-do-this list will facilitate our work. That way, proposals might be better prepared before they reach us. — ΛΧΣ21 07:04, 14 April 2013 (UTC)
Agree that now that we have some more specifics like this based on round 1, a short and simple guidelines page for applicants is a great idea. I'd like to add a mention that we appreciate cost-conscious requests and that the upper limit of 30k is more likely to be granted to big projects that need teams than to 1 individual (basically, if you don't NEED the full amount, asking for it probably won't help you get selected). I'd also like to add one about focus. If people want to use this thread to brainstorm more dos and don'ts, I'm happy to incorporate them into a page that Heather and I will set up as we do a redesign sprint for round 2. Siko (WMF) (talk) 23:30, 18 April 2013 (UTC)
Thanks for bringing this up. Even I think the amount should be justifiable. Sorry for this late response. As I was questioned about this, let me explain. When I first had to make the budget, I had no idea, so I asked a former consultant of Wikimedia. I shouldn't have taken his suggestion, I am sorry for that. And I thought/knew we could change the budget later, so didn't care much, sorry. Glad that we are having a discussion. -- ɑηsuмaη «Talk» 12:23, 8 August 2013 (UTC)

Sustainability and scalability of the IEG Committee

We've had some discussion about the workload for Committee volunteers and the scalability of the IEG program. What happens if we get 50 eligible proposals in the next round? I think we are considering making community support the initial factor that we use in deciding which eligible proposals should get further consideration by the Committee if we get lots of proposals. I think the current consensus is that the top 20 proposals with the highest community support percentage, as determined by (Support - Oppose)/Total, perhaps with a minimum of 2 or 3 votes, will be used as a cutoff by the Committee if we get more than 20 proposals in the next round. Would anyone like to comment or suggest changes to this proposal? --Pine 04:12, 12 April 2013 (UTC)
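To make the arithmetic concrete, here is a minimal sketch of that cutoff rule. Everything in it is illustrative: the function names, the treatment of neutral comments as part of the total, and the 20-proposal / 2-vote thresholds are assumptions for the example, not a committed implementation.

# Illustrative sketch of the proposed cutoff: score = (support - oppose) / total,
# applied only to proposals that meet a minimum participation threshold.
# Counting neutral comments in the total is an assumption for this example.

def support_score(support, oppose, neutral=0):
    """Return a net-support value between -1 and 1; None if nobody commented."""
    total = support + oppose + neutral
    if total == 0:
        return None
    return (support - oppose) / total

def shortlist(proposals, max_slots=20, min_votes=2):
    """Keep the top-scoring proposals, skipping ones with too few votes.

    proposals: list of (name, support, oppose, neutral) tuples.
    """
    scored = []
    for name, support, oppose, neutral in proposals:
        if support + oppose + neutral < min_votes:
            continue  # not enough community participation to judge
        scored.append((support_score(support, oppose, neutral), name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:max_slots]]

# Hypothetical example: 5 supports, 1 oppose, 0 neutral gives (5 - 1) / 6 = 0.67,
# i.e. roughly 67% net support; a proposal with only a single vote is skipped.
print(shortlist([("Proposal A", 5, 1, 0), ("Proposal B", 1, 0, 0)]))  # ['Proposal A']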

A larger Committee is a good idea. Another good idea may be a preliminary review round to make the job easier, so if something is obviously not feasible, it gets weeded out during the preliminary review round and doesn't clutter the review work. Gryllida 23:00, 18 April 2013 (UTC)
Agree, continuing to grow the committee is a good plan. Will be counting on you all to help recruit a new set of members for the next round! :-) The preliminary review idea sounds worth considering, though someone will still have to do that preliminary review work. I can see this being a role for WMF staff, and I don't mind doing it, though we'll hit some scalability issues there eventually too. However, I think I ruled out all proposals that were 100% not feasible in round 1, and you still had a lot to review. If I had marked more as infeasible, I wonder if there would have been more of a feeling that WMF was trying to overly control the process? Siko (WMF) (talk) 18:09, 30 April 2013 (UTC)
I'll post some more concrete thoughts/proposals around solving this issue here a bit later when I have time. Initially, though, I just want to say that I have concerns with the committee limiting itself to reviewing only a set number of proposals per round. Instead, I'd suggest we think about how the committee review process should be changed to make review of each proposal easier, so that we can handle more volume. We're going to need to think creatively about how to scale IEGs, because in round 2 I'd like to cap grantees at 15 rather than 8, and in the future we'll want to look at how we scale beyond even this. So capping the number of proposals for review at 20 is not going to be a great long-term solution. I've also got some concerns about the idea of only reviewing proposals with the highest community endorsement. I can see this resulting in more English Wikipedia projects winning out over projects from small communities that don't tend to come to meta to participate in discussions (for the same reason that English Wikipedians are more likely to get elected to the WMF board than small-wiki Wikimedians - because it is largely English Wikipedians who vote). Or I can see this resulting in less risky projects being funded, because high endorsement is more likely to favor things that are safe; genius and innovation aren't always recognized by the crowd until someone takes the risk to prove they work. I do want to help solve the problem you're identifying, I'm just not sure this solution is the best one. I'll come back with other suggestions when I have more time to write! Siko (WMF) (talk) 23:42, 18 April 2013 (UTC)
Hmm.
  • You seem optimistic that we'll have 15 or more proposals that seem worthy of support. I suppose we'll see when we get there.
I am eternally optimistic, guilty as charged! But that's the dream we're working towards, so I hope you'll come along for the ride (it's been a good ride so far, hasn't it?) :-) Siko (WMF) (talk) 18:09, 30 April 2013 (UTC)
  • I'm very interested to hear how you think the Committee can juggle more discussion and tradeoffs during the deliberation phase after scoring. The discussion was very time-consuming with just eight slots available, so I think we will need significant changes to the review process for the Committee to have the capacity to handle more slots and more complexity.
I hear you. I'll post an alternate proposal below, will be curious to see what you think. Siko (WMF) (talk) 18:09, 30 April 2013 (UTC)
  • On the subject of using community support as a basis for a cutoff, please look again at the math formula I proposed. The formula outputs a number between -1 and 1, which could also be read as a percentage. Large communities will have little advantage beyond having an easier time getting the first two or three votes to meet the minimum participation requirement.
I see your point. I'm nervous, still, given that I've seen over and over again that smaller wikis have a harder time getting ANYONE to come comment on meta. I do not believe this is because there is more consensus against a project than there is on larger wikis; I believe it is generally related to global participation patterns, language barriers, wikibarriers, and other human engagement patterns. What would you think about deferring this decision as follows? We'll make changes to the submission process and community input period to more strongly emphasize the importance of endorsement and demonstrating local support for an idea. Before formal review begins, we'll see if this has boosted endorsement or discussion on projects. If there appear to be more eligible projects than we can possibly review, we'll try applying your formula as a strategy for narrowing things down for consideration. I guess what I'm saying is that I'm not yet willing to commit to this method, but if it turns out to be necessary, I'm very willing to experiment with it as a strategy. Does that seem reasonable? Siko (WMF) (talk) 18:09, 30 April 2013 (UTC)
  • On the subject of innovation, I feel that the IEG process should try to respect the decisions of local wikis if there is clear consensus against a proposal, but we may consider proposals where there is either a consensus of support or no clear consensus for or against a proposal. I am very reluctant to override a consensus from a local wiki, and I believe that WMF tries to do this only if there are legal reasons that support an "office action" intervention. I think there are ways to use pilot projects and scaling to support innovation while managing risk.
--Pine 22:11, 19 April 2013 (UTC)
I also feel the IEG process should try to respect the decisions of local wikis, and agree we should not fund something that there is consensus against - that would be absolute madness. However, the most common case I've seen for small-wiki proposals (in fellowships, IEG, etc.) is not consensus against a proposal, it is absolute silence. I think that is the first problem we should be trying to solve for here, because we've already got a decent buffer against the anti-consensus case (we encourage endorsement/discussion, and one of the criteria for review is community support). I've spent some time working with folks on smaller wikis over this past year, starting with Tanvir's fellowship, and the pattern I've seen so far is that small wikis often have just a few active participants, and those folks tend to keep quiet on wikis outside of their own or take a very long time to come together with a broader statement in favor of trying something. Change and action on those wikis often operates based on 1 person doing something, and then others joining in later. We need to work on boosting meta participation globally (and I have a few ideas in mind for that), but that will take time. In the meantime, I don't think we can consider silence = consensus against (or for, really). Regardless of this long rant, I think we're generally on the same page here, Pine :-) Siko (WMF) (talk) 18:09, 30 April 2013 (UTC)

Alternate proposal to address sustainability without sacrificing scalability

Based on the discussions here, and survey results, here is an alternate proposal (I like to think of it as an ecosystem, not just one silver bullet, which is why there are so many points to this proposal):

  1. increase emphasis on endorsement and community discussion in the submission process - that helps get community members asking some of the questions that the committee will otherwise need to ask.
  2. increase the community feedback period - same as above.
  3. emphasize that the committee's review work begins in IdeaLab and in engaging with proposals while they are being drafted - frontloading the conversations and feedback helps the committee get involved in shaping good projects early on in a distributed fashion, and may make the scoring go more quickly since you'll already be familiar with the projects.
  4. keep growing the committee - recruiting more members each round means people can take more breaks when needed and when we divide and conquer on proposals we'll have more people to divide amongst.
  5. divide into subject matter expert groups and divvy proposals by topic - that could allow us to minimize the number of proposals that any 1 member has to review, without capping the scale of the program and total number of proposals we're able to review.
  6. develop guidelines with standard questions for each project type - this allows each expert group to focus on their area, and helps proposers package their ideas in ways that should quicken the review.
  7. keep working on refining review tools - simplify scoring where possible, use a private wiki where mailing lists get annoying, and consider having expert groups gather on Skype/Hangout/Webex when 1 conversation can help forge consensus more quickly than hours of writing.
  8. if need be, implement more staff pre-filtering of proposals - staff should stay on top of eligibility review, but if the eligible proposals list is still too long and we've done all we can do with the above points, we might select a short list to carefully review based on Pine's community support score, feasibility, etc.

Does this seem worth trying for round 2? Other proposals or suggested changes to this one? Siko (WMF) (talk) 19:03, 30 April 2013 (UTC)

  • I like that group of suggestions, but what I'm missing is how these will help us after the initial scoring phase when the Committee discusses which projects to fund. Even if we have a larger committee and if we break into subcommittees when we do the initial evaluations, and even if we have initial discussions in subcommittees which make recommendations to the full Committee, we will still need to do budget reconciliation as a whole group. Reconciliation in Round 1 was a complicated task with just seven slots and some early agreements about what proposals we were interested in funding, and I'm not seeing how these procedural suggestions will improve our capacity or efficiency during that phase in Round 2 so that it's realistic to think that we can do a thorough job with fifteen slots and more committee members. More slots and more members will both add complexity and additional phases to the scoring process. Adding subcommittee phases makes sense for expanding our output to fifteen slots, but it doesn't reduce the workload burden on the Committee. If we're going to expand to fifteen slots, which I have reservations about doing, then I think we need to move away from lengthy email discussions and toward having live meetings on IRC or WebEx, but that creates its own set of scheduling issues. Unless there is such a compelling reason to expand to fifteen slots that we as a Committee are willing to take on more complexity and we agree that we have enough volunteer time for us to handle all phases of a more complicated and time-consuming selection process with consistently high quality, I think we should stay small to keep our quality high and our workload manageable. There are advantages to staying small. --Pine 18:32, 2 May 2013 (UTC)
Small is beautiful, but we're a global, scaled movement and (I'm just going to come out and say it) programs that can't scale to fit this are most likely to get cut from WMF in the long run. I'd like to keep IEG going and growing. We need this to scale, and I believe we'd be doing the movement a disservice if we didn't provide as many projects the opportunity to be funded as we possibly can. I rather think we've got the opportunity here to do some big things to support the editing community at large, so we should rethink the committee processes creatively when/where they get in the way of doing things at some kind of reasonable scale for the movement. Suggest that for the coming round, an internal wiki is likely to facilitate some of the final deliberations better than the mailing list - think RfC, that scales. Webex is also a good option when we need real-time discussion, and I schedule and facilitate international calls regularly, so a few more don't feel that daunting to sort out (couple of calls, couple of time-zone options, divide and conquer). We might try designating a sub-committee to balance the short list from each expert working group. Or, I can take the subcommittee recommendations as I did last time from the individual scoring, compile them in terms of what appears to be floating to the top, and do some more of the budget-balancing work at this end to come up with the final result, or some proposed version of a final result for committee members to confirm support of on-wiki or on a call - volunteers don't need to do everything, we can continue to staff where needed, though I do want to be sure we've got ample community input. I'm committed to figuring out how we keep volunteer workload manageable, Pine, and I do want to find a solution that will work for all. It doesn't have to get more complicated at scale, and we don't have to solve for everything at once; we can keep iterating round by round and only grow as/when we're able to. But I cannot work with a solution that is just "we won't grow." Siko (WMF) (talk) 00:14, 3 May 2013 (UTC)
OK, if we're going to scale up then I think we should move briskly to implement some of the process changes that we're discussing. How about having subcommittee scoring and Webex meetings, followed by your compiling the subcommittee priorities and recommendations into a combined recommendation to the whole Committee based on the top few recommendations from each subcommittee, and then another Webex for the whole Committee to do reconciliation? --Pine 00:38, 3 May 2013 (UTC)
Additionally, I suggest that we increase the length of time for scoring and have a midpoint Webex meeting in our subcommittees during the scoring phase. During the meeting the Committee members can discuss questions and thoughts that we have about proposals, and adjust our scoring based on our discussions. --Pine 20:08, 6 May 2013 (UTC)
That all sounds reasonable to me! Once others have had a chance to weigh in on this page, I'll make sure we incorporate the outcomes of these discussions into updated process pages (planning to start sprinting on those later this month) so that we'll be organized in plenty of time before round 2 begins. Thanks for continually being so thoughtful about all of these moving pieces, Pine. :-) Siko (WMF) (talk) 20:16, 6 May 2013 (UTC)

More community feedback

A longer community feedback period seems sensible at two points: after people write up their applications, before the Committee starts scoring them; and after we make preliminary decisions, before the Committee finalises them. That was lacking this time (the first one was too short and the second one simply didn't exist). Gryllida 22:55, 18 April 2013 (UTC)

  • Agree 100% that the first feedback period should be made longer and we can do more to elicit more discussion during that period. Will incorporate this change for round 2!
  • In terms of adding a second period, I'm worried that this will make the committee's review take longer (some rejected proposers already felt like it took a long time), and increase the workload on the committee - you'll need to do a second review of each proposal, basically, if you have a second feedback/discussion period. I don't see any way of getting around that if we go this route. What I might propose trying first is having the committee do more preliminary engagement with proposals as they are forming. Spend more time discussing them with the community at the same time that the community is discussing them. Give your feedback and input early. Then, do 1 formal review to give scores and feedback as a group. That way, the decision may feel like less of a surprise, people will have seen it building from the early discussions, and they will have more of the input they need to reapply in the future. What do you think? If you still believe we should have 2 rounds of feedback and the committee thus wants to do a double pass on reviews, I'd love to hear ideas for how we keep this from turning into a long cycle of committee work, discussion, and objections, then more committee work. I worry about losing some of the BE BOLD decisiveness that should come with a nimble, risk-taking individual grants program, I guess. Siko (WMF) (talk) 18:20, 30 April 2013 (UTC)

List of (a bit more specialized) questions

I think questions and communication are essential for better understanding the proposals. Would it make sense to have lists of questions prepared beforehand? I think, even before actually getting proposals, we can ask questions a bit more concrete than the general matters of "purpose, plan and budget". For example, for most tech proposals, I would ask "what software tools and resources will you use, and why will you select them among other options?", "are any parts of your proposed project (potentially) technically challenging, and if so how will you address them?", and "would you want to include any user testing for evaluating the software as an outcome of your project?". Sharing questions like these within the committee and with proposers beforehand might help streamline the process and reduce the communication cost and turnaround. --whym (talk) 21:43, 23 April 2013 (UTC)

I like these suggestions. Standardized questions for certain types of proposals could be developed alongside the list of recommended dos and don'ts that we discussed a few sections up on this page. --Pine 21:56, 23 April 2013 (UTC)
I, too, love this idea, let's do it! Siko (WMF) (talk) 18:21, 30 April 2013 (UTC)

Open source and free licensing

For the next round, I hope that we include in the eligibility criteria that all materials and software code produced with the support of IEG funds, and any software code on which the new code is dependent, with the exception of OS code, must be open source, published, and released under licenses that are compatible with Wikimedia content and MediaWiki licenses as applicable. I hope that WMF can also include the legalese to implement these requirements in any grant agreements that are signed by grantees. --Pine 22:10, 23 April 2013 (UTC)

Agree! Will incorporate into eligibility requirements for coming round. This legalese already exists in our standard agreements, as I recall, but I will double check. Siko (WMF) (talk) 18:22, 30 April 2013 (UTC)

Survey data from IEG participants, for discussion

Hi all! It is still in progress, but here is where I'm drafting the report on what we've learned from the survey you all took, as well as the one that IEG proposers took. I wanted to try to wrap my head around all the various perspectives from those surveys before coming back to this page of committee-focused discussions. Hope you'll find the report useful for continuing our work here! All input welcome, as always :-) Siko (WMF) (talk) 18:26, 30 April 2013 (UTC)