Talk:Learning and Evaluation/Evaluation reports/2013/Edit-a-thons

Your feedback is welcome, and can you do better?

Hello everyone!

As many of you may remember, the Program Evaluation and Design team invited program leaders, such as yourselves, to voluntarily participate in our first Data Collection Survey. Thank you for your participation! As a pilot survey, it has provided us insight into what data you are tracking in the programs you implement, and how we can better support you in evaluating the impact your programs are making.

We have been looking at that data for our current focus programs: edit-a-thons, Wikipedia editing workshops, GLAM content donations, on-wiki writing contests, and Wiki Loves Monuments and other photography initiatives. When the survey closed, we had responses from 23 program leaders from around the world, who reported on a total of 64 programmatic activities. Our team also collected data on 54 additional programmatic activities, to fill in gaps in the reporting.

Your feedback is important - assume good faith and let's evaluate together!

We're excited to announce that our first draft report is ready for you to read. It includes data reported and collected about edit-a-thons:

https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Library/Edit-a-thons

We evaluated the impact edit-a-thons are having based on their goals, which often include the creation of content on wiki, and the recruitment and retention of new editors. This report is not final and we would like to improve it over time. Since this is our first time doing a report like this, we want your feedback.

After you take the time to read the report, we would like your feedback on the talk page about:

  • Whether the report is understandable and written in a way that makes it useful to you
  • What kind of information you would like to see more of / less of
  • What we could do to improve the collection of data (e.g. would you rather send us your cohort data and let us do the analysis? This might help reduce your workload)
  • Any other feedback you have.

Secondly, we want to know if program leaders have more impactful or higher-quality data than is reported here. We received a lot of great data, but with only 26 edit-a-thons reported, we also know there are many more of these events happening out there. So please tell us: can you do better than what is reported here? This is a chance to brag and celebrate the work you've been doing that you perhaps didn't get a chance to report during this first survey. We can't wait to hear back from you!

So, if you have run an edit-a-thon that you think was more successful than any edit-a-thon already included in our report, please send me a quick email and I will follow up with you on next steps.

Over the next couple of weeks we'll be publishing additional reports that incorporate your feedback. Thank you so much for your time and contributions. If you need anything, feel free to email me or contact me on wiki. SarahStierch (talk) 19:20, 21 November 2013 (UTC)

Hoxne Challenge

It is misleading to call this a "contest" (I participated). Liam Wyatt announced that the stub article was going to be raised to FA status on the day, which obviously wasn't going to happen even with a room full of editors. It was an aspiration, is all.

General thoughts. I believe we have to think very carefully at present about the dividing line between "editathon" and "workshop". Editathons started life, I believe, as more like meetups with a content theme. Workshops are most naturally thought of as training events, with a spine of informative content about the projects to complement the acquisition of some facility with wikitext.

Editathons are routinely described afterwards as "very successful", by criteria that I don't entirely get. They obviously derive more from the meetup end of the scale (i.e. the social dimension), and certainly less from the tracking of content created and participation. Now, in the context of evangelism I don't discount "soft factors" at all. But we seem to be working with an axis where at one end is "good time had by all", and at the other "training with real transfer of skills".

The Hoxne example, where there is a specific task, is really not representative. If you like, it was an attempt to reproduce in meatspace the activity of a WikiProject, speeded up and with interaction with topic experts. I travelled to London, added three tightly-worded paras with a British Museum curator, and went home. All this is nowhere on the axis I mentioned. No training element (it assumes very competent encyclopedist editors in the first place). No doing your own thing or charming the newcomers. Charles Matthews (talk) 21:26, 21 November 2013 (UTC)

Hi Charles! Thanks for your feedback. To me, there is a great distinction between a workshop and an edit-a-thon (though on occasion, edit-a-thons have some "workshop qualities"). Did we fail to establish that in the history section about edit-a-thons? Just curious.
Hopefully you were able to read through our conclusions: our surveying showed that program leaders had specific types of goals they were generally focusing on (retention of new editors, adding content to Wikipedia, etc.), and we're unable to track "soft factors" without surveying participants. With surveys before and after events, or perhaps even quizzes or fun activities, we'd be able to learn (without making a dreaded assumption) why participants attended the event, what skills they brought with them, what they left with, and whether they think they'd edit more. Without directly tracked conversation or surveying of participants, we really can't make assumptions - though I always like to assume people have fun :)
Regarding Hoxne: it was mentioned here when we asked the community to provide feedback. While it's not a traditional edit-a-thon, I think it is a valid member of the edit-a-thon tree, as it brought people together to focus on a specific subject (or article) for improvement. But if you think having Hoxne included invalidates the idea of an edit-a-thon, I'd like to learn more so we (you, me, whoever) can work on the history section. SarahStierch (talk) 22:06, 21 November 2013 (UTC)

Well, I applaud the aims of what you are doing here, but I feel a great need to be analytical about what you call the "tree". As I said, I think of it more as a spectrum.

During the past year or so, I participated in half-a-dozen WMUK events, typically at the "workshop" end, and a couple being edit-a-thons of a recognisable type (I'll hyphenate if you like). I also had good reason to think over the use of training and edit-a-thon events in relation to several Wikipedian in Residence positions. The Hoxne event and a World War I event at the British Library have, here in the UK, attained a sort of "exemplar" status; in other words they are taken as points of reference for the edit-a-thon concept. The same is happening to the Ada Lovelace events, it seems.

Basically, I would be happier unpacking all of these models and looking at them as I would in planning a workshop program: you take the allotted time and roughly allocate it to different objectives. Then at the next workshop you reconsider whether the different segments achieved their aims, and tweak accordingly. It is much harder to do that with any event planned on a "drop-in" basis, clearly. And if "outreach" of an undifferentiated kind is the stated aim, there is a harder job in store in asking what was achieved.

We (the Education Committee of WMUK) did have a look at some of the evaluation issues, prompted by our Board, and there is a preliminary page up at https://wikimedia.org.uk/wiki/Training_Metrics. It's already a bit complicated, and I don't want to claim too much for it as it stands. My point is not that we have got it all right, and I'm certainly not a champion of any "metric" approach that is too blinkered.

It comes down to saying this, perhaps: the analytical approach, to me, suggests that some sort of "log" of edit-a-thon/workshop events is what is required. Then we could talk rather more easily about comparisons. Charles Matthews (talk) 06:14, 22 November 2013 (UTC)

Spurious accuracy

While a statement like "standard deviation was $239.98" is technically true, it also implies a level of accuracy that doesn't exist, given the relatively few data points involved. More to the point, anything beyond two or three significant digits (anywhere in the report) is probably not only meaningless but distracting. I suggest that this and other similar figures be revised by rounding. For example, in this case:

"standard deviation was $240."

John Broughton (talk) 21:57, 21 November 2013 (UTC)

Hi @User:John Broughton! It's so great to hear from you! This was something the team deliberated about a lot. Do we show detail or stay more high-level? How meaningful or meaningless should the means be (ha!)? Right now, I believe Yuan is updating the edit-a-thon page, and we'll update the other pages (not on wiki yet) soon. This feedback was really helpful. Thanks! SarahStierch (talk) 23:16, 21 November 2013 (UTC)
Hi @User:John Broughton, thank you so much for your suggestion. It is absolutely not necessary to have two decimals for all the means and standard deviations. We have now kept two decimals only for the mean and standard deviation of budgets, and rounded those figures for participation numbers. We kept one decimal for hours, because we think 0.1 hour (6 minutes) is a level of precision worth presenting. Thanks again for your opinion. YLi (WMF) (talk) 00:20, 22 November 2013 (UTC)
There are principles of data precision, whereby you round calculations to reflect the accuracy of the base measured quantities. You need to reflect on how accurate your measurements are, and then let the significant figures flow through and round all calculations to suit. Fancier would be confidence intervals for smaller data sets. (You appear to have an accuracy of 2 digits, so you should round to that; please don't say 2.00 new users - this number abuse is picayune, but it makes the math majors cranky.) Slowking4 (talk) 03:13, 23 November 2013 (UTC)
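
A minimal sketch of the rounding rule suggested above (rounding to a fixed number of significant figures), assuming the report's summary statistics are available as plain floats; the helper name is hypothetical:

```python
import math

def round_sig(x, sig=2):
    """Round x to `sig` significant figures, e.g. 239.98 -> 240.0 at sig=2."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

print(round_sig(239.98))      # 240.0 - the "$240" suggested above
print(round_sig(0.1234, 2))   # 0.12
print(round_sig(2.00, 2))     # 2.0 - avoids reporting "2.00 new users"
```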

A just-the-numbers section

Excellent work on the report; I found it to be useful and informative. I notice that various interesting pieces of statistical information are presented throughout the report. I appreciate that the data is presented in context, but would it be possible to have a section at the end that recaps the different statistics reported? This way, if I need to recall them later I won't have to dig for them throughout the report. Thank you! harej (talk) 14:42, 23 November 2013 (UTC)

Thank you for the feedback and ... the summary table will be appearing soon =) JAnstee (WMF) (talk) 18:09, 26 November 2013 (UTC)

Text organisation

First of all, thank you for the work and love you put into this report - it is great to see thorough analysis of the effectiveness of such a popular programme.

Reading this page as a very tired non-native speaker, I have some suggestions regarding text organisation for future reports. The report is not easy to scan, i.e. to find the important information quickly, and I believe the reason is that it follows a very narrative structure ("we did x, y, z and then we found results $, Ł, and ¤") that describes what you did and what the result was in chronological order. However, my goal as a reader is to find the results first and then, if I am interested, to learn how you got to them.

I would suggest starting sections and paragraphs with topic sentences that contain the conclusion, and using the following sentences or paragraphs to explain the arguments and process behind those claims or results. This would make it much easier to get the gist of the report fast, for non-native speakers as well. Imagine that you only read the first sentence of each paragraph: would you still get the meaning? Now imagine that you only got to read the first sentence of each section (maybe you are low on time; maybe you get tired reading English texts after a while; maybe the person who translated the report into your language gave up after finishing the first sentences): would you still get the meaning?

At least for the summary section, or a new "executive summary", I would recommend the above exercise of starting with the most important thesis and then going into details in the following parts and not the other way around.

Hope you find the above a useful suggestion, --Bence (talk) 20:59, 26 November 2013 (UTC)

@Bence, this is great feedback, thank you. Frank will probably weigh in regarding the summary section, as that was his main area of work for the report. I have one question - do you think using a template like Template:Nutshell would be helpful in each section? Perhaps with brief bullet points summarizing highlights from each section? SarahStierch (talk) 21:27, 26 November 2013 (UTC)
It might be a matter of personal preference, but multiple nutshell templates would only increase visual clutter. I believe the main text in this case (a text that wants to convey information, not one that tells a story or aims for some artistic goal) should serve the goal of giving the information as efficiently as possible without outside aids - if the text works without aids, added aids can improve it, but I am not sure it works as well the other way around.
Nevertheless, the first two sentences of the page, for example, do not carry any "meaningful" information; that would be prime real estate to provide a summary and highlight all the findings, whether with or without a nutshell template (one template per page should be fine, but nuts have small shells :). –Bence (talk) 21:50, 26 November 2013 (UTC)
OK. I actually like nutshell templates, and I like the idea of having section summaries in easy-to-read boxes instead of another summary section. I do agree about providing a summary in the lead. Perhaps that would be better - key learnings - rather than filling in additional text in the body. What do you think? (I still want to experiment with nutshells though.) SarahStierch (talk) 21:56, 26 November 2013 (UTC)
(Meaning I'd like to experiment with the nutshell section idea perhaps, as I tend to read things in boxes and bulleted lists over en masse text, but having some type of summary section up top in bullet points would probably be just fine too, without bombarding the body with more text.) SarahStierch (talk) 22:04, 26 November 2013 (UTC)
(edit conflict)
A summary in the lead would be a huge improvement. Boxes could work, but they seem like extra work (they are a popular method of getting the message through in textbooks; here on wiki they might be made collapsible and then require extra clicking, defeating some of the original purpose), and the extra text could potentially slow down the overall reading time of all readers (or at least those who do read the full text).
It is probably too late for this text, but I suggest that with proper organisation and self-discipline, you can edit the text of the other reports so that the topic sentence of each section is found in the first paragraph (ideally as the first sentence), followed by supporting information in the following paragraphs/sentences and optionally a concluding sentence. Then if you play my exercise with the imaginary reader who is lazy, tired, in a hurry, or just isn't able to read a lot of text, you get a result where she still gets the message even if she misses minor details or the way you got the results. –Bence (talk) 22:15, 26 November 2013 (UTC)
(Another way to look at putting topic sentences at the beginning is to think of them as hooks that entice the reader to read on. If you tell me "Edit-a-thons don't work" at the beginning [just an example] or that "We have found that edit-a-thons are usually done for free", I'll be damned if I don't read on; if you start the section by explaining that "We have looked into whether edit-a-thons work..." or "We have asked three questions about the budget...", I have to be very committed to read on and find out what answers you got.) –Bence (talk) 22:19, 26 November 2013 (UTC)

Very helpful indeed. I'm going to work on this while developing the publishing text for the workshop report. Also, I've posted the overview, to which I added some bolded summaries. Let me know what you think! SarahStierch (talk) 22:25, 26 November 2013 (UTC)

The bolded first sentences are working for me. (In general, if you can bold the first sentence as the most important one - as opposed to the second or any other - you have achieved the goal of putting the most important information where it will be found fastest. In this case, the page being long, the added visual emphasis also helps find the key message of the whole page - keep up the good work! Putting a summary at the beginning and bolding it effectively achieves the same goal in this case; but with new text, it might be faster to keep this order of information in mind from the start.) –Bence (talk)
Alright, I did quite a number on the edit-a-thon page. It's a lot of info, but now you can glance at the top and get the key takeaways. Let me know what you think! (And be bold and make edits, too.) SarahStierch (talk) 20:41, 27 November 2013 (UTC)
Thank you for your edits; I think the page has become much more readable.
Some questions and suggestions that came to mind with the new content (these might be answered in the body or on other pages already):
  • The report sets as a basis that edit-a-thons are somehow the most-reported programme, and talks about a group of programme leaders. It would be useful to repeat somewhere at the beginning who it was that got asked, and who replied, allowing you to say that edit-a-thons came up most frequently. (I know the background, but the page kind of leaves this question with the attentive reader.)
  • Sometimes the summaries focused on the data you were able to get ("programme leaders could report on budget") instead of the result that data tells you ("x percent of programmes are run for free; the others average $y"). This might be a clash between your goal as a researcher to explain your work, and my goal as a reader to learn about edit-a-thons. With all appreciation of the incredible work that went into the research, I would personally recommend putting the focus more on the learnings about edit-a-thons than on the learnings about reporting on edit-a-thons.
  • The bolded sentences about pages and characters added leave it unclear whether 3 pages = 24k characters, and whether this is 3 pages per user per edit-a-thon or in total. (Not sure you gathered the data, but the number of unique pages improved at edit-a-thons would also be interesting, btw.)
But all this is minor stuff; reading the summary, I already learned new things about edit-a-thons and what to expect/look out for in a few minutes, saving me or the potential new readers a lot of valuable time to get excited and read the rest or explore other programmes :) -Bence (talk) 21:10, 27 November 2013 (UTC)
Thanks again for the suggestions. In response:
  1. I do mention at the top how many program leaders responded and how many reports we additionally mined. I also gave a bit of vague information about who replied, but I can't provide much more detail, to keep the respondents and the edit-a-thons anonymous (as we promised we would). I did expand a tiny bit in that first bullet point... and Jaime and our intern team could probably provide more data about who was there (x percentage of chapter people, x percentage of individuals, etc.).
  2. I tried to keep the summaries short, or I'd just be repeating what is already in the often quite short paragraphs reporting specific details. I was told to tell a story, so I'm doing my best, based on the numbers given to me by the data side of our team. I think that if someone is interested in the bolded header, they'll be motivated to read on into the data summary. As a reader, if I weren't invested in this report as a staff member, I would not read this entire document; I'd only read what catches my eye and interests me. I'm also not a big numbers person - I like high level - so that's probably why I lean more towards it in my writing. But at this point I'm rather burnt out staring at this document. :)
  3. I worked on editing the two bolded areas (in the lead and in the body) that talk about the 3 pages / 24k characters figures. I was feeling uneasy about that too. Thanks for the second opinion!

SarahStierch (talk) 21:40, 27 November 2013 (UTC)

Thanks for all the extra effort you put into this - I realize that after a point there are diminishing returns in further editing, and you also need to focus on the other reports. (Just shortly: 1) perhaps what I was looking for was the total number of people who responded, but the explanation that you asked a mix of people already helps, as "programme leader" is really a new term that the PED team just introduced into the Wikimedia vocabulary; 2) sure, a narrative conclusion also works, it doesn't have to be numbers [this point referred to only a minority of the summaries, like the one on staff hours]; the summaries for the other sections were great where you had both the data and the results, e.g. the one on retention; 3) thanks, it is clearer now.) –Bence (talk) 21:55, 27 November 2013 (UTC)

What is an edit-a-thon?

Hi,

What is the scope for an event to be called an edit-a-thon? 26 edit-a-thons seems low to me. In French, there have been 13 "edit-a-thons" (the last was last weekend, and I'm not counting the monthly workshops in Rennes): fr:Wikipédia:Journées contributives. And 46 more according to en:Wikipedia:How to run an edit-a-thon. Why weren't they all counted? (I guess there is a threshold based on the data available.)

I participated in the 7 in France, so I can give some details if needed (budget, donations, hours, articles, etc.; Benoit Rochon could probably give details for the 6 in Canada).

Cdlt, VIGNERON * discut. 21:34, 26 November 2013 (UTC)

Hi there! The actual total was 46 edit-a-thons: 26 were reported by program leaders (speaking many languages, but we anonymized all of the data, so I don't know whether French edit-a-thon program leaders reported data). The additional 20 were mined by our team for additional information, and those were all English edit-a-thons (a random sample of edit-a-thons run in 2012 on English-language projects). Due to time constraints, we were not able to gather data on every single edit-a-thon around the world that took place in 2012. We rely primarily on self-reporting by program leaders, and we would not have been able to get this pilot report out in a reasonable time if we hadn't stopped at our data mining :)
However, if you do have data to report about edit-a-thons you produced or attended, please let me know! Jaime and I would be happy to work with you to bring this data into our updated reporting. This is a pilot report, and we hope it will motivate program leaders in many languages to participate. Also - we did reach out to program leaders at the Wikimedia France and Wikimedia Canada chapters, so they were asked to take the survey and report on their programs (including edit-a-thons). Again, due to the anonymizing of the data, I'm not able to say who reported. SarahStierch (talk) 21:44, 26 November 2013 (UTC)
I understand. Thanks already for the report; it's very motivating.
So, I'll check with my "program leaders" whether data were sent, and I'll try to improve the metrics for the next edit-a-thon ;)
Cdlt, VIGNERON * discut. 22:26, 26 November 2013 (UTC)
Great! For the survey we created this spreadsheet, which features the data we were requesting from program leaders; it can be a handy way to help track a few things. We'll also have better tracking tools and devices in the future :) SarahStierch (talk) 01:56, 27 November 2013 (UTC)

Other Metrics

I'm not convinced that editathons are a good way to recruit new editors, and I think these figures bear that out, but that shouldn't concern us unduly. There are much better ways to measure editathons than in terms of new editors successfully recruited. We may be able to improve that pretty low percentage a bit by targeting our editathons at people who have the combination of altruism, spare time and IT skills that makes for a possible Wikipedian, though of course not every institution is going to be keen to let us try to recruit "their" volunteers to become our E-volunteers.

I saw GLAM originally as being about expert engagement in order to achieve quality improvement. Increasing the quantity of our articles may be a side effect of that, but better quality is not always longer, so we need to be careful about measuring results by bytes added on the day. It would be useful if we could find metrics to measure quality improvement. It would also be useful if we could find ways to measure cross-training of existing editors, an important spinoff of editathons - in my view a more important and realistic one than new editor recruitment.

I've started to look at reactivation - trying to reactivate inactive editors by inviting them to nearby events - and this will be an important part of my testing in 2014; hopefully by Wikimania 2014 I will have enough results to know whether this works. But to my mind the biggest success of editathons is likely to be in retention of our existing editors. Of the ten established Wikimedians who took part in the Hoxne challenge in 2010, all ten were still active three years later. Of course that is too small a sample to take as definitive, and I would appreciate it if others could look at their own past programs from 2011 and before to see if their experience matches London's. But it is a very promising result. Another thing that I would really like to test is whether editathons succeed in deepening editors' involvement with the movement; my suspicion is that they do, but measuring this is a non-trivial task. WereSpielChequers (talk) 00:18, 27 November 2013 (UTC)

@User:WereSpielChequers Thanks for your opinion. We do have data and metrics to measure the retention of existing users, and we have also produced graphs to present it; these may appear later if our team decides to show them. As for quality measurements, we have the bar graph comparing the numbers of good articles and featured articles; see Graph 8 (Edit-a-thons bar graph: increasing quality). You can also see the appendix summative data table on the number of pages created or improved and dollars per page created, which are key parameters for measuring quality improvement as well. YLi (WMF) (talk) 01:15, 27 November 2013 (UTC)
Hi YLi, I'm aware of some of those metrics. But I didn't think we'd cracked some of the measurement problems, such as FA standards differing between language versions of Wikipedia. WereSpielChequers (talk) 13:13, 28 November 2013 (UTC)
Hey! WSC! Thanks for stopping by. Just wanted to mention - we've also been doing research into the retention and productivity of experienced editors. Do they edit more during the time period of the edit-a-thon than they do on a daily average in the 30 days prior to the event? Are they more motivated to edit in the 30 days after the event ended? We are seeing some interesting leanings that might just show that. So yes, we're absolutely investigating it. But new editor recruitment and retention was selected as one of the most important priority goals by program leaders in the surveys, while "retaining existing editors/contributors" was not. Once time allows, we will investigate this further - perhaps the theory of change will need to change for (some) edit-a-thons? So yeah, it's surely something our team wants to explore more - we just need more data to do it (and we've been pretty heavily engrossed in timelines for this initial report). SarahStierch (talk) 01:54, 27 November 2013 (UTC)
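
A minimal sketch of the before/after comparison described above, assuming each participant's edit timestamps are available as `datetime` objects (the helper name and sample data are hypothetical):

```python
from datetime import datetime, timedelta

def daily_edit_rates(edit_times, event_day, window_days=30):
    """Average edits per day in the windows before and after an event day."""
    window = timedelta(days=window_days)
    before = sum(1 for t in edit_times if event_day - window <= t < event_day)
    after = sum(1 for t in edit_times if event_day < t <= event_day + window)
    return before / window_days, after / window_days

# Example: one participant's edits around a 21 November 2013 edit-a-thon
edits = [datetime(2013, 11, d) for d in (1, 5, 22, 23, 28)]
pre, post = daily_edit_rates(edits, datetime(2013, 11, 21))
print(f"{pre:.2f} edits/day before vs {post:.2f} edits/day after")
```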
Thanks Sarah, great to hear about your research. It would be phenomenal if it turned out that GLAM involvement increased longstanding Wikimedians' short-term activity and also long-term retention; though for short-term uplift I do wonder whether GLAM events attract the currently active rather than increase their short-term activity. For example, we usually promote events to existing editors most effectively by watchlist notices, and pretty much by definition that is more likely to attract those who are currently active. Taking the proverbial oilrig worker, romantic or service person who alternates between months when they are with us and months when they are working or with a new partner, I suspect they are much more likely to notice and attend an event when they are active and checking their watchlist.

As for new editor recruitment, I think we need to differentiate between the practical and the desirable here. If someone can crack this and make GLAM an effective way to recruit new editors, that would be great, and it is certainly worth testing. But we also need to focus on the things that GLAM is good at. I appreciate that people are being ambitious in trying to turn GLAM into an editor recruitment program, but I'm not seeing lots of success in that, and there is a risk that the people who are happy for us to train their members or volunteers to edit Wikipedia are not the GLAMs that have the most to offer in other ways, such as content releases. WereSpielChequers (talk) 13:13, 28 November 2013 (UTC)

Follow up and embedding

Thanks for doing this research - I know how hard people worked to herd cats on this. I would like to offer a reflection, coming at it from a background in education. At no level of teaching - and I have taught three-year-olds through to adults - would you expect to do one session, however good, and then send the learner off to do it by themselves. The lessons need reinforcing and the learners need encouragement. I have experienced this myself: I'd get sent on a course to learn a new computer programme, get taught well, get all enthusiastic, but then not use that programme for months and so forget all the learning. To my mind an editathon can be like this. We need to follow up on the learners with emails, or better, personal calls, to encourage them to keep editing. We could do it again after a few months and offer refresher events. This won't get 2% to 100%, but I think it could make a big difference. Has anyone done any follow-ups and had experiences to share? Jon Davies (WMUK) (talk) 08:44, 27 November 2013 (UTC)

I agree, and we have seen this primarily with workshops, where the users who were retained seem to have come from events that were the "longest" by hours (we will be posting that report for review in the upcoming week(s); it's a holiday in the US, so things are rather slow around here). This has led us to ask the question: can a series of workshops (or, in this case, edit-a-thons) aimed at a set group of people retain new editors? We don't know until we experiment (we being the community). I do know that I get a lot of requests for frequent events, and one of the biggest comments from newbies who attend edit-a-thons about why they don't edit afterwards is that it falls off their radar. Perhaps if we had consistent events and re-invited those specific people (not just telling them "Every week we have an event, so come," but making a specific outreach invitation that is personalized, not just on wiki or some passing reminder), perhaps we'd see retention. We'd of course love to see people experiment with this.
In our next steps section in the overview, we suggest that we (us and the community) need to be bold about experimenting with different program design strategies. I think that an audience-specific series with personalized invitations is one of those opportunities. SarahStierch (talk) 15:55, 27 November 2013 (UTC)
Hi Sarah, we have one UK test that is exploring the long-term impact: I've now run three evening sessions at Conway Hall, though not all the participants have been to two, let alone all three, sessions. But the feedback, especially from some of the older editors, is that it takes more than one session to get comfortable and confident. WereSpielChequers (talk) 12:50, 28 November 2013 (UTC)

GLAM relationships

I was a little surprised that the aims of editathons didn't include relationships with GLAMs (or other "host" organisations). A lot of editathons appear to take place on GLAM premises (and I suspect that donated resources such as the venue are often provided by the GLAM) and often relate to themes reflecting the GLAM's collection. Sometimes they are organized by a Wikimedian-in-Residence. So I would have thought that creating/strengthening the GLAM relationship was a goal. GLAMs are often expected to be more "digital", and some even have explicit KPIs to achieve, or at least KPIs to report on. For GLAMs, editathons based around their collections are a means of being more "digital", and they often publicize the event for that reason (to be seen to be doing more digitally). So I suspect that a number of aspects of a GLAM-hosted editathon are likely to be decided with an intent to further develop GLAM relationships. Similar remarks apply to other host organisations, e.g. universities. Kerry Raymond (talk) 15:28, 27 November 2013 (UTC)

Thank you

As someone who has participated in edit-a-thons, and as someone who has often wondered how much edit-a-thons are really contributing to new editor acquisition/retention, this report is very useful. Kudos, Steven Walling (WMF) • talk 22:11, 19 December 2013 (UTC)

More history needed

Hello, Jimbo did not propose the first editathon. Nor, I think, was the first use of the term in an event name as recent as 2011 (depending on what counts as an event). They've been discussed since at least early 2004: WikiProject Fictional Series proposed one in early 2004, months before Jimbo's post on the theme. SJ talk  00:27, 5 January 2014 (UTC)

Hi SJ. We asked the community - and posted to many mailing lists - to share their stories about the history of edit-a-thons. I encourage anyone who has a story to share to expand the history section on the page. Sadly, I wasn't able to find an actual *history* of edit-a-thons anywhere online. You can see the discussion that the community was involved in here. SarahStierch (talk) 00:49, 5 January 2014 (UTC)
Thanks, Sarah. I don't have a thorough history myself :) and I know you did ask many people. I'm just noting that the current text reads as a definitive history, rather than a compilation of the stories gathered so far. I'll add the above context to that section. SJ talk  17:07, 5 January 2014 (UTC)

Can you share the names of the editathons included?

Thank you for putting this data together and analysing it. I understand some of the reasons to anonymize data. However, it seems only a few of the editathons included are named (in the introduction). Is it possible to share the full list of editathons, now that the rest of the data has been merged and clustered beyond individual recognition?

Regards, SJ talk  00:27, 5 January 2014 (UTC)

Hi SJ! The data reported at the bottom actually have unique "Report ID" numbers that can be matched across the last three tables, so you can regenerate the dataset missing only event names and dates (see the Appendix heading "More Data" for the complete input, output, and outcome data used in the report). However, I must restate the need for caution: at this early stage in the reporting, with such small numbers, we are aware that the data do not represent all programming, and that the data are too variable to draw statistical comparisons between programs. I think we could perhaps share a list of the edit-a-thons we mined, as an example of the representativeness of the current data set, since those data are publicly available; however, the program leaders who self-reported were assured that their participation and reporting would be kept confidential, so those must remain anonymous. Still, we have only mined events from the English Wikipedia, and as we have not hit any critical mass, we do not expect full representativeness in the data at this stage.

Edit-a-thons mined:

  • Ada UK 2012: Women
  • Blinded 2012: Women scientists
  • Boston WLL:
  • Cambridge Ada 2012: Women
  • Gangs NY 2012: New York City
  • India: Women's history
  • IU WLL: Indiana University Presidents
  • Maryland WLL:
  • Movies 1: Movies
  • Portland WLL: Oregon
  • Princeton: Princeton history, culture & women
  • Princeton Sports: Princeton sports
  • Princeton women: Princeton women
  • Seattle WLL: Seattle
  • Smithsonian WLL: Smithsonian
  • Finland: LGBTQ
  • Umass: Biographies of minorities/women
  • Wikipedia Loves West Hollywood 1: West Hollywood
  • WWHM 1: Women's history
  • WWHM 2: Women's history

JAnstee (WMF) (talk) 19:38, 6 January 2014 (UTC)
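
A minimal sketch of the Report ID matching JAnstee describes above, assuming the three appendix tables were exported as CSV files sharing a "Report ID" column (the file names are hypothetical):

```python
import pandas as pd

# Hypothetical exports of the three appendix tables
inputs = pd.read_csv("editathon_inputs.csv")      # Report ID, budget, hours, ...
outputs = pd.read_csv("editathon_outputs.csv")    # Report ID, pages, bytes, ...
outcomes = pd.read_csv("editathon_outcomes.csv")  # Report ID, retention, ...

# Joining on the shared Report ID regenerates the anonymized dataset,
# missing only event names and dates
dataset = inputs.merge(outputs, on="Report ID").merge(outcomes, on="Report ID")
print(dataset.head())
```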

Priority Goals

In the text you say you offered 18 priority goals plus an "other" option, but I could not find these by following the link. Is a more direct link possible? Also, will these be used again for the 2014 evaluation report, or is there a discussion about how they can be developed? Fabian Tompsett (WMUK) (talk) 14:44, 19 February 2015 (UTC)

Hello Fabian! The goals referenced are found in the overview of the reporting. They have been targeted in data collection this year as well, and will be reported on again in our upcoming program evaluation reports. You can read more about where we are in the process in this recent blog post. We aim to have those reports published to Meta again in a staggered rollout from late March through June. As for development of the list, please make suggestions on the talk page of that overview report so that we can do so. JAnstee (WMF) (talk) 23:30, 20 February 2015 (UTC)