Grants talk:APG/Proposals/2015-2016 round1/Wikimedia Argentina/Staff proposal assessment


FDC staff assessment feedback

Thanks for evaluating Wikimedia Argentina’s proposal and for expressing your thoughts. We would like to comment on some points of your analysis and to provide additional information about them.

And thank you so much for taking the time and effort to respond here in detail, Anna, as well as the board and staff of WMAR. KLove (WMF) (talk) 07:26, 15 November 2015 (UTC)

Concerns raised in your evaluation

To make this easy to understand, not just for you but for the members of the FDC and for the community as a whole, we will address one by one all the concerns raised in your evaluation.

Evaluation Methods

WMAR's complicated and extremely detailed plan will be a challenge to track efficiently. We continue to struggle to understand WMAR's progress year over year, given the inconsistent approach to evaluation

2014 impact report: Yes, our evaluation method is different from the one presented in our current annual grant. The main reason is that the 2013 annual grant proposal wasn’t planned by the current team, and the results (reflected in the 2014 impact report) don’t necessarily relate to those planned for 2014 and 2015.

2014 annual plan: We started with an evaluation method quite similar to the one we are using right now. We work with per-activity evaluation forms that help us see where to focus and whether any change is needed during a project’s implementation. We believe that both short-term and long-term results are needed to build a more sustainable movement.

What’s the main problem with our evaluation method?

WMAR believes that our main problem was a misunderstanding around the 2015 progress report. Even though we had comprehensive metrics to show what we had achieved, we believed that using the (basic) global metrics and providing accompanying storytelling would be enough.

After this feedback, we understand our assumption may have been wrong. WMF staff encouraged us to look into other affiliates’ reports, to learn from others, and to add quality metrics and other quantitative metrics to our analysis. Similarly, we were told to point out, in our Annual Plan, anything we thought was worth highlighting, to help FDC members with our numbers and progress. That’s what we have done:

  • To plan and introduce new and better metrics
  • To highlight our numbers from 2014 and 2015 to help the FDC members understand our progress

On the other hand, the new metrics we introduced did not come out of nowhere: we looked into other affiliates’ reports, and also into the GLAM and Education Outreach pages (specifically these GLAM indicators and these ones for Education).

In terms of quality, we worked closely with other chapters to learn and to improve our performance, and we set up our first quality indicators for editing articles and for uploading photos (find them here for GLAM and here for Education). To set up these indicators, we also worked with our community in Argentina, including very active editors and very active Commons contributors, who reviewed them. We also read learning patterns (like this one) to design activities according to students’ and teachers’ interests, as a way to improve our results with students as editors in the classroom.

This past month we worked on creating the 2016 evaluation forms to track our activities. As they may be rough and difficult to understand, we have translated them into English; you can find them at the following links (please forgive our quick translation):

  • Education Program:

- Online course evaluation form
- Face to face training project evaluation form
- Wikiambassador evaluation form

  • GLAM Program:

- Contest and challenges evaluation form
- Edit-a-thon evaluation form
- GLAM online course evaluation form
- Digitizing project evaluation form

None of those evaluation forms is final yet, but you can take a look to see how we evaluate our activities at WMAR. No evaluation forms have been developed yet for our new program.

We want to emphasize that we evaluated our programs and decided to replace the Federalization program with the Community Support program, because the former required too much effort in terms of money, human resources and time, and yielded low results. In your staff assessment you say:

WMAR is documenting its learning well. We do not yet see strong evidence that WMAR is applying learning effectively. For example, we sometimes see WMAR continuing to pursue programs and approaches that have proven ineffective in the past (e.g. Federalization, overly complex approach to program design)

Let us explain this matter better:

We are not running any Federalization program in 2016. It may be confusing for you and for the FDC members that a cross-cutting line is called "scaling up-federalization", but as we explained in our progress report and in the current annual proposal, the focus and the meaning of what we understand as Federalization has changed completely. In the last annual proposal we conceived of the Federalization program as the one responsible for building and creating editors' communities around the country. After one year, WMAR understands Federalization as scaling up activities. In the current annual proposal we define it as “the way to the promotion and acquisition of projects put forward by WMAR including communities which did not belong to the Wikimedia community, but were however points of reference within their own contexts and supported us in positioning our projects as innovative opportunities in matters of culture and education". That means, for example, giving virtual support to a teacher outside Buenos Aires working on Wikimedia projects, and helping him or her achieve the expected results.

On the other hand, "learning" also means adapting international contests such as WLM to our local context by designing a new one suited to our Latin American context (you use our WikiTour example in a learning pattern), and adapting our editing challenges to our editors’ interests (here).

Complex approach to program design: all SMART objectives were reviewed by the FDC staff and the Learning and Evaluation team. Even so, beyond this assessment, we never received a question on our FDC discussion page about our complex planning; we would gladly have provided a simpler planning overview if needed.

Thank you for this additional information, Anna, as well as the example. I will be sure that the FDC reviews it. KLove (WMF) (talk) 07:26, 15 November 2015 (UTC)

Low results

We understand your concern. Our main objective is to improve our results at every level, and it should be pointed out that in some cases our numbers are getting better year by year. For instance, we have been working on our Education page on Outreach (find it here) and on our GLAM page on Meta (here), where you can find all the global metrics updated. There you can see that we continue to improve.

In your staff assessment you said:

Based on past performance, some of the proposed targets may not be realistic. For instance, Wikimedia Argentina has created or improved 617 articles so far in 2015, while their target for 2016 is 3,310. We realize that activities scheduled in the second half of the year may impact 2015 performance.

This must be a mistake or misunderstanding. So far this year, 3,117 articles have been improved or created through WMAR activities: 2,680 through GLAM activities and 437 through the Education Program. More than 8 million bytes have been added. To point out other metrics, more than 7,000 documents were released and more than 2,000 new users were registered.

Thank you for providing this updated info, Anna. We hadn't seen that number of articles improved/created. We reviewed the numbers you submitted in your progress report from mid-2015; we saw 457+160 listed in the table in that report. We use that to get a sense of where the organization is at the halfway point, fully recognizing that a lot can be done in the second half of the grant term.
Thanks Katy. Just to give you and the FDC members a better understanding of our metrics (they are also updated in the annual grant proposal):
GLAM
- Articles improved with an image (as of 30.10.2015): 813 (articles are not counted twice; if an article has been improved or created and an image has been added, we count the article in one category only)
- Articles created or improved (as of 30.10.2015): 1,870
Education
- Articles improved with an image (as of 30.10.2015): 36
- Articles created or improved (as of 15.11.2015): 401 (+40 from an ongoing editing week with teachers) = 441
Total improved with images: 849
Total created or improved: 2,311

--Anna Torres (WMAR) (talk) 12:50, 15 November 2015 (UTC)

Capacity

We are concerned about the feasibility of this complicated plan, particularly given the evaluation required to complete it satisfactorily.

WMAR has a professional staff. All the members involved are trained and have been working on setting up our evaluation method. We are capable of carrying out this plan, which was designed with input from our community and with the learnings and failures drawn from the movement and from other chapters’ experience. We also have a close and engaged community that has been, and will continue to be, trained in our evaluation method to properly report on all the activities they will be leading.

We strongly believe that the goals and metrics presented are achievable and, again, most of them have actually been proposed by the WMF or are in use by other chapters with strong performance. Along with the global metrics, we added other common metrics, such as the number of institutions involved, new partners, and media coverage, plus some new quality metrics, to help the FDC staff, FDC members and the community understand how a program can be improved by analyzing its qualitative aspects, which can also be transformed into learnings and movement-wide improvements.

Budget

Proposed results are not commensurate with funding. Number of staff is high relative to proposed impact.

WMAR is not increasing its staff this year. We are only four people: an ED, a Communications Manager, an Education Manager and a part-time Administration Manager.

It is understood WMAR is not increasing staff this year. Thanks for making that clear. KLove (WMF) (talk) 07:26, 15 November 2015 (UTC)

The distribution of our daily work is as follows:

  • ED: in charge of strategic planning, the GLAM program, the Community Support Program and the staff’s daily work.
  • Education Manager: in charge of the Education Program, from implementation to evaluation, under the ED’s supervision.
  • Communications Manager: in charge of the communication strategy, under the ED’s supervision; also assists with the GLAM and Community Support programs.
  • Administration Manager: part-time, home-based, with two days per week in the office; in charge of daily administration under the ED’s supervision.

We respect your opinion but we don’t agree with the assumption that we have more staff than necessary.

Thank you for giving us a good sense of the roles of each of the staff. KLove (WMF) (talk) 07:26, 15 November 2015 (UTC)

This can be better understood by explaining what we can’t do because of our limited human resources:

  • Federalization program (2015; replaced in our 2016 proposal): We have at least five large geographic communities in Argentina. As you may know, Argentina is one of the largest countries on Earth, and distances between points can easily be two or three thousand kilometers. All our trips must be made by plane, or we would spend, for example, 18 hours traveling to Misiones Province. We cannot reach our communities easily, and that is why we suspended the program and replaced it with a new one: we do not have the human and economic resources to sustain it. Also, after the site visit, the FDC staff advised us to focus our strategy on Buenos Aires, even though we expressed our worry that we do not want to disengage from our partners and communities in the rest of the country. That is why the scaling up-federalization strategic line is cross-cutting this year, as a way to respond differently to the demands and needs that may appear outside Buenos Aires without wearing ourselves out.
  • Education program: We can accommodate no more than 5-10% of the schools asking to work with us. We are forced to restrict our efforts to schools with a strong and stable internet connection for their students, which are the minority. But, again, we try to work with all education sectors, because the percentage of students with good internet access at school is not representative of Argentina’s education system. That is why we invest time in training students and teachers and propose offline school activities, where students can improve Wikipedia and other projects at home (where connectivity normally works better).
  • GLAM program: Due to the lack of human resources, we focus our strategy on Buenos Aires and invite online participants to improve content related to the documents and other material released, for instance, through our digitization project. We make our best efforts to cover topics that represent Argentine culture and history as a whole. We also invest in scholarships to bring volunteers to our activities in Buenos Aires, because we think it is not fair that only volunteers living in the city can enjoy WMAR activities or have proper access to the cultural sector. We work for inclusion, not exclusion.

We do our best with the resources we have in a large, complex, and unequal country in the so-called Global South. We can’t be in two places at the same time, and operationally we are just three people carrying out our activities. --Anna Torres (WMAR) (talk) 23:37, 9 November 2015 (UTC)

Thank you Anna, for all of this context. I am grateful and am sure the FDC will be as well. Cheers, KLove (WMF) (talk) 07:26, 15 November 2015 (UTC)

Other issues

  • We suggest that the FDC staff deliver their feedback on reports earlier. That way, we can know whether our results are as expected and whether our strategy is right or wrong. We felt that this year, when we got the feedback, we had no room to maneuver, as we were immersed in our annual planning and full of activities.
Hi Anna, I hear you that it would have been good to get our feedback on your progress report earlier. However, the reports were due on July 30, and I was out of the office for a few weeks in August. We arranged all of the Round 1 check-ins as soon as possible after our team was fully back in the office and able to review the reports in detail. In any case, I agree that it is important to get timely feedback and I'm sorry it didn't come early enough. For the impact reports, as I think we discussed, I am wondering if you or other orgs may want to submit them early (before March 30, three months after the grant closes) to get feedback earlier? If they are submitted in late Jan, for instance, we could review them and speak in February. But that means submitting earlier than required. Let's talk more about this one. KLove (WMF) (talk) 23:10, 14 November 2015 (UTC)
Hi Katy, we are referring mainly to the Impact Report. We know that the progress report was submitted after Wikimania and you gave us your feedback within a month, which was fine for us. But we thought that having the feedback on the Impact Report earlier, and knowing sooner whether our strategy was what was expected for that year, would give us the chance to compare what worked the year before and what did not. Some projects might be planned the same way and may need to be changed if, from your point of view, they do not give the expected results. Does it make sense?--Anna Torres (WMAR) (talk) 01:44, 15 November 2015 (UTC)
Ah, yes, Anna! That feedback from the impact report was indeed quite delayed. I'm sorry for that, and I hope I'll do better next time. That's exactly why I was thinking about the timing for the impact report--it comes in on March 30 and yet on April 1 Round 2 proposals are due, and FDC staff switch straight away into in-depth analysis, as you know! :) Let's think more about this timing for impact report. I would like to offer you something that works better for you! KLove (WMF) (talk) 07:09, 15 November 2015 (UTC)
Thanks Katy, now I understand better too :) I am sure that we will find a way, maybe by having more monthly contact, for example. Thanks again!--Anna Torres (WMAR) (talk) 12:53, 15 November 2015 (UTC)
  • We would like, if possible, to have detailed metrics by country. It is really hard to make informed decisions when we are missing crucial data.
Which detailed metrics are you looking for?
In last year's staff assessment we got metrics per country regarding the number of active editors, for example. I copy the table here:
Editors          Country         Wikipedia        October 1, 2012     October 1, 2013     October 1, 2014
All editors      {{{country}}}   {{{language}}}   {{{overall2012}}}   {{{overall2013}}}   {{{overall2014}}}
Active (5+/mo)                                    {{{active2012}}}    {{{active2013}}}    {{{active2014}}}
It would be great to have the chance, at least three or four times per year, to see these metrics per country. We can see that the Spanish Wikipedia is improving (as a general view) by following the statistics provided by the WMF, but we don't know how we are doing in Argentina or how our work impacts the number of editors, for example--Anna Torres (WMAR) (talk) 01:49, 15 November 2015 (UTC)
Got it, Anna. I'm looking into whether/how we can get this data for you and others. KLove (WMF) (talk) 07:09, 15 November 2015 (UTC)
That would be great!--Anna Torres (WMAR) (talk) 12:54, 15 November 2015 (UTC)
  • It would be great to have at least one call per month with the WMF evaluation and FDC support team as a way to follow our work and improve together.
I'd be very happy to consider this and I really appreciate your suggesting it. Let's discuss this in December, and I will also follow up with the evaluation team. KLove (WMF) (talk) 23:11, 14 November 2015 (UTC)
That would be great for us. We are really committed to improving our strategy, if needed, with your monthly advice and feedback.--Anna Torres (WMAR) (talk) 01:52, 15 November 2015 (UTC)

--Anna Torres (WMAR) (talk) 23:37, 9 November 2015 (UTC)

Other news

Yesterday, November 11th, after months of working with the Academia Argentina de las Letras, 65 books were digitized through our digitizing project. That means more than 9,000 pages. The institution is already working on 20 more. We will upload the books as soon as possible. --Anna Torres (WMAR) (talk) 03:39, 12 November 2015 (UTC)

Great news, Anna Torres (WMAR)! Thanks for the update. KLove (WMF) (talk) 19:19, 13 November 2015 (UTC)