Talk:Community Tech/Event Metrics

For older discussions, see Talk:Community Tech/Tools for program and event organizers/Archive 1

Comments re: Revised project plan and priorities (Feb. 20, 2019)

Leave your thoughts here about the Feb. 20th post showing a new project schedule.

Comments re: ‘Details on metrics planned for two new reports (Feb. 12, 2019)’

Leave your thoughts here about the metrics planned for our first two Event Metrics reports, as defined on this page, Event Metrics/Definition of metrics.

Comments about: ‘Metrics changed, delayed or dropped’ (Oct. 26)

This section was created to gather your responses to the post "Metrics changed, delayed or dropped".

Comments about 'Metrics changed or limited'

Leave your comments here about the changes to these metrics described here.

  • ‘Pageviews to files uploaded’
  • ‘Namespace’
  • ‘Still exists?’ & ‘New page survival rate’
  • ‘% women (estimated)’
Pageviews to files uploaded. I don't quite understand why there is a problem, because there are existing tools that do this, e.g. BaGLAMa, and I know the State Library of Queensland loves watching their stats. So are we trying to do something different to BaGLAMa here? Or is what's being proposed what BaGLAMa is already doing and I don't realize it? Kerry Raymond (talk) 00:37, 27 October 2018 (UTC)
Thanks for the question Kerry Raymond. I'll have to look at BaGLAMa more closely. In the meantime, I'm pinging our lead engineer, Mooeypoo, who will want to look at BaGLAMa as well and will be able to give you a better answer than I can. —JMatazzoni (WMF) (talk) 02:23, 29 October 2018 (UTC)
Hi Kerry Raymond, thanks for the question! From a cursory glance at BaGLAMa, it seems it's actually giving us a different figure; the API the tool is using can only give us page views for the articles, not the images themselves. What this tool (and others) seems to be doing is collecting the page views from the articles where the image is embedded, for all time, since the article was created. That is, if you inserted an image yesterday, the information you get from that API is the pageviews the article has had forever. That's inaccurate, to say the least. I don't know how the tool works without looking at the code, but knowing how the API works, the information is different from strict views to the actual images.
There is, theoretically, a way to only look at page views from the moment the image was inserted, but that requires parsing every revision of the page, backwards in time, to find the first insertion of [[File:...]] so we can count pageviews only from when the file was embedded; that is a huge amount of technical work, and also a significant increase in the general load on the system. For the moment, it is technically unreasonable.
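For illustration only, the revision scan described above could be sketched like this (a hypothetical helper, not Event Metrics code; it scans oldest-first and assumes the revision wikitext has already been fetched, which is precisely the expensive part being avoided):

```python
import re

def first_embedding_timestamp(revisions, filename):
    """Return the timestamp of the earliest revision embedding a file,
    or None if it never appears.

    `revisions` is a list of (timestamp, wikitext) tuples in
    chronological order.
    """
    # Treat spaces and underscores in the file name as interchangeable,
    # as MediaWiki does, and accept both [[File:...]] and [[Image:...]].
    name = '[ _]'.join(re.escape(p) for p in re.split(r'[ _]+', filename))
    pattern = re.compile(r'\[\[\s*(?:File|Image)\s*:\s*' + name, re.IGNORECASE)
    for timestamp, wikitext in revisions:
        if pattern.search(wikitext):
            return timestamp
    return None
```

Even with the scan itself being simple, fetching every revision's wikitext for every page an image touches is what makes this unreasonable at scale.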
Looking at the available data, then, we realized that we can give you data that is helpful in gauging the impact of embedding the images -- average pageviews on the articles the images are embedded in -- without showing numbers that might be very inaccurate, especially for events that are smaller or happened recently.
I hope this helps explain things, MSchottlender-WMF (talk) 18:33, 30 October 2018 (UTC)
Maybe I'm missing something here. I thought we *were* trying to track the pageviews of WP articles containing the image. If we are talking about direct access on Commons or via other non-WP projects, I think the numbers from these are probably trivial (judging from my own photos on Commons). I may be mistaken but, since BaGLAMa reports only in whole months, I think BaGLAMa runs once a month and does exactly what you are proposing here: for each image, it gets the articles that link to it, and grabs the pageviews for those articles for the last month. So I think it is approximate given that. It will overcount pageviews for an article that had the image added mid-month and undercount when the image is removed mid-month. Swings and roundabouts. Since BaGLAMa2 is mostly working with large image donations from GLAMs, the inaccuracies hardly matter, as it is mostly reporting big numbers anyway. Now I think (again I might be mistaken) that BaGLAMa2 is storing each month's data and building on it month by month, so it doesn't need to know when the file is added to or removed from the article, just whether or not the image is in the article at each run. Something I have mentioned earlier, in relation to the computational cost over time of these metrics, is to understand how we use them. When you are in an event or campaign, the demand is for near real-time. When participants are doing edits, they want to see them showing up quickly in the metrics; it is motivational. Frequent updates are needed when the event/campaign is active. But when it becomes inactive or has only low levels of residual activity (which may occur long before the stated event end time, which an organiser is motivated to make as late as possible in order to easily get subsequent tracking of participant activity), it doesn't need such frequent updating, as it is mainly used for reporting. And allowing the update frequency to decay over time, reflecting activity level, makes a lot of sense.
Do it hourly when the event is active (meaning change in # edits, # files uploaded), drop it back to daily, then weekly, then monthly, potentially even to manual "update on demand". If you work with which articles contain the image at the point of update but the update frequency changes with activity, then the stats will be quite accurate when the numbers are small (as the event is active then and the update of the cumulative stats is frequent) but can become coarse (e.g. monthly) once the event is inactive, when the overall numbers will be larger so the accuracy matters less. The goal isn't necessarily to have perfect stats. They have to be accurate enough for the purpose they are needed for, and there are two purposes and they differ. My method avoids the need to work out when an image is added to or removed from an article, and also seeks to manage the overall computational demands of all the events cumulative over time (since page views keep on increasing forever and the number of events increases forever) while maintaining an accuracy level acceptable in the two scenarios that the metrics are used for (active and inactive). I am assuming in all of this that we will be storing the cumulative statistics and adding to them, and not that we will be generating the lifetime statistics on-the-fly each time? Kerry Raymond (talk) 23:28, 30 October 2018 (UTC)
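A decaying refresh schedule like the one described above could be sketched as follows (the function name and the exact thresholds are illustrative assumptions, not anything agreed in this discussion):

```python
from datetime import timedelta

def next_update_interval(edits_since_last_update, days_since_event_end):
    """Pick a metrics refresh interval: frequent while the event is
    active, decaying toward monthly as activity dies down.
    """
    if edits_since_last_update > 0 and days_since_event_end <= 0:
        return timedelta(hours=1)   # event still running and active: hourly
    if days_since_event_end <= 7:
        return timedelta(days=1)    # recently ended: daily
    if days_since_event_end <= 30:
        return timedelta(weeks=1)   # tapering off: weekly
    return timedelta(days=30)       # dormant: monthly, or update on demand
```

The point of the sketch is only that the schedule is a pure function of observed activity, so stats stay accurate while numbers are small and get coarser only once totals are large enough that the error hardly matters.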
Yes, Kerry Raymond, views to the articles images are placed on is the metric we're planning. Thanks for the info about BaGLAMa; you may well be right that it's keeping an ongoing tally. And you've put your finger on the issue when you say "I am assuming in all of this that we will be storing the cumulative statistics and adding to them and not that we will be generating the lifetime statistics on-the-fly each time?" Grant Metrics, the program we're transforming into Event Metrics, currently calculates everything on the fly when the user requests an Update. We're planning to add the ability to store page ids for articles edited and created during the event, to speed up queries (ticket here). But we had not thought about keeping running tallies of various figures. It's an interesting idea that would solve a variety of problems. I'll discuss it with the team, though my guess is it would be a pretty big change. —JMatazzoni (WMF) (talk) 19:07, 1 November 2018 (UTC)
I think cumulative aggregation of the stats is the only way to go, for a couple of reasons. The first is to reduce the growing computational burden of increasing events and the increasing passage of time: O(E), not O(E×T). The second is to mitigate the risk of changes in APIs, databases, etc. They change, and the metrics tool changes to match, but you don't have to worry about extracting legacy data from legacy systems (people who build new versions of things often don't take the trouble to roll forward the legacy data -- I recall this happened with the pageview data: a new interface, and all the old data became unavailable). Kerry Raymond (talk) 05:53, 2 November 2018 (UTC)
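A minimal sketch of the cumulative-aggregation idea: keep running totals per event and add only the delta at each refresh, so each update costs one fetch of recent activity rather than a full lifetime recount (class and field names here are hypothetical, not Event Metrics code):

```python
class CumulativeTally:
    """Running per-event totals, updated incrementally so each refresh
    only needs the delta since the last run: O(events) work per cycle,
    not O(events x lifetime).
    """
    def __init__(self):
        self.totals = {}           # metric name -> running total
        self.last_updated = None   # timestamp of the last refresh

    def add_delta(self, deltas, as_of):
        """Fold a dict of per-metric deltas into the stored totals."""
        for metric, value in deltas.items():
            self.totals[metric] = self.totals.get(metric, 0) + value
        self.last_updated = as_of
```

Storing totals this way also insulates old events from API changes: once a month's figures are folded in, the source that produced them can disappear without losing the history.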
Namespace. On the principle of "count and reward" desired behaviour, I'm a little jaundiced against edit counting of any kind, because it isn't really counting something we care about. It measures your personal enthusiasm for Saving your work regularly, is sensitive to your choice of editor tool (you generally have a higher edit count with the source editor than the Visual Editor), and is very gameable, as we see in the regular editing community where our top contributors are generally doing mass trivial edits. I suspect we do edit counts in the existing Dashboard because it's easy to do and it's what we "always" do, but if you think about what you really want from events, it's the metric of bytes/words added. In the regular editing community, there is a positive purpose for bytes/words *removed*, as this is generally associated with removing/fixing problematic contributions by others, and hence we would not want to count/reward only positive growth in articles for a regular editing community metric. But events/programs/campaigns do not generally remove content. Removing content by newbies won't be done according to policy, because they don't know policy, so their removal of content is generally because "I don't like it" and is to be avoided. Indeed, I usually tell new users that deleting someone else's content does not make you friends (by which I mean, newbies face enough barriers without upsetting other people likely to be page watchers on that article) and that they should focus on additions rather than changes until more experienced. Again, in terms of rewarding desired behaviour, I would be perfectly happy counting only mainspace edits, because at the end of the day we want content in mainspace. Personally I avoid Draft space, because Articles for Creation is best avoided for events. AfC basically exists to filter out the flood of promotional articles through a process of exhaustion.
Events usually have both a "good faith" theme and "good faith" contributors, so letting these folks and their new articles anywhere near AfC is a big mistake (it rarely ends well). If people are keen to draft, I generally point them at User space (usually the Sandbox) and then move the content from there. So if we want to spend our finite computational resources on counting Draft/User/etc edits, by all means do it, but let's report it separately from mainspace edits (and not as a single total). Personally I think our finite computational cycles could be better spent elsewhere :-) Kerry Raymond (talk) 00:37, 27 October 2018 (UTC)
Thanks for these insights Kerry Raymond. On the "Words added" metric, we would love to bring this, but it's out of reach. We will do "Bytes added", which is actually net bytes changed, but it's probably close enough since, as you say, event participants don't do a lot of deleting. On namespaces, these are my takeaways: 1) if we do include Draft, we need to keep the figures separate from Main. Good point. 2) You do not agree that we need to include Draft. Believe me, we have enough to do without adding this second namespace, which we do predict would increase wait times. So if others agree, this is something we'd be very happy to skip. Anyone else? Is counting Draft edits important to you? —JMatazzoni (WMF) (talk) 17:43, 30 October 2018 (UTC)
"Still exists" and survivor rate. Personally I would not show this in the public display. These are "bad news" statistics and not motivating to the participants (or their employers). Maybe limit this info to the organisers by default. If it can't be done with categories, so be it. I think category-limited events/programs/campaigns are more likely to involve regular editors (which is why we need the limiting capability, to eliminate their other activity), and this group is less likely to have issues with survival. Kerry Raymond (talk) 00:37, 27 October 2018 (UTC)
Kerry Raymond very interesting points. Event Metrics requires a login, so is generally for organizers and their colleagues only—though making an employer a co-organizer of your event is a valid way of sharing data, so in that case they would see the onscreen metrics (see mockup). I'm not including the "Still exists" figure in the Wikitext reports, which are meant to be public (ticket here). This figure is in the downloadable CSV report. Are you saying we should eliminate this "bad news" metric from the on-screen display—reserving it only for the downloadable spreadsheets? Do others agree? —JMatazzoni (WMF) (talk) 17:56, 30 October 2018 (UTC)
"% of women, Baptists, vegetarians, men who play poker". Again, I think this should be an organiser-only metric, as there is a danger of outing in small events, and the risk of sending the signal to the participants without that characteristic that they weren't really welcome (if you track women, does that mean you didn't really want men to come?). But I see no issue with allowing the organiser to give a name to a characteristic that matters to the success of the event from their perspective, personally ticking a box to indicate whether or not a participant has that characteristic, and then getting metrics about that subgroup for their own statistical purposes. Maybe they want to see how men vs women perform in the event, or Catholics vs Protestants, or whether eating meat makes you a better Wikipedian. I don't think the rest of us need to know or care what they are tracking and why, but event organisers and researchers might find it useful to have arbitrarily-chosen characteristics available to them for statistical purposes. Kerry Raymond (talk) 00:49, 27 October 2018 (UTC)
Kerry Raymond, just to be clear, the %women/men/other figure is not associated with individual users; it's an aggregate figure for the event overall. (So there'd be no ability to sort statistics by gender, as in your example.) That said, I agree this figure is likely to draw undue attention from people, so it makes perfect sense to reserve it for the downloadable CSV only. I'll take it off the on-screen display. Unless others disagree? —JMatazzoni (WMF) (talk) 18:14, 30 October 2018 (UTC)
With small numbers of participants, percentages can be outing. Imagine if the event has 10% transsexuals and, as a participant from an organisation, I know many of the other participants as long-term colleagues, so my ability to figure out that the newcomer to the organisation is the transsexual is pretty good. That's why I think the organiser needs to decide if the percentage of a characteristic should be made available to the participants/public. There are times when it is sensitive or may be demotivating to those who do or don't possess that characteristic ("I didn't realise they really only wanted vegetarians to come"), and there are times when it isn't. I think if we allow the organiser to characterise the participants in some way, it follows that the organisers might like to see the breakdown of the statistics by that characteristic. Why else are they bothering to characterise them? If 20% of the participants are librarians and they are doing 60% of the edits, it's telling me something as an organiser about librarians vs non-librarians that may inform future events. Maybe the non-librarians are less IT-savvy and I need to present the "how to edit" more slowly. But so long as the characteristic value is in the downloaded spreadsheet, the organiser can get the stats they want about the various groups by filtering, so this capability doesn't need to be hardwired into the metrics (from my perspective). However, I cannot say if all our organisers are generally spreadsheet-savvy. Kerry Raymond (talk) 09:20, 31 October 2018 (UTC)


  • ‘Pageviews to files uploaded’ Would it be easier to measure this from the date of the event, maybe at 1-month and 6-month intervals? For me this is more motivational than an exact computation, so users can get a general idea of their impact.
Hi Avery Jensen. Remember we're measuring views to the article pages the image gets placed on. So if we measure from the event (meaning, I assume, if we measure only the articles the image is placed on during an event), we would significantly under-count views to many images, which can get placed on more and more articles over time. This metric is designed more for organizers of image drives than for those who create events focused on articles, where a few images may be uploaded for the purpose of illustrating specific content. Does that make sense? —JMatazzoni (WMF) (talk) 18:29, 30 October 2018 (UTC)
  • ‘Namespace’ I can't tell you the meaning of namespace, although I am familiar with draft (but not main). Is there a more common term? Event planners may not be technical geeks. Kerry is right, friends don't let friends use "draft", especially new users. It is possible to create something on a user page then move it out, but the last time I tried that, there was no visual editor on the newbie's page. We ended up looking for someone to enable their autoconfirm.
Another vote against including Draft space. Thanks Avery Jensen. As I say to Kerry above, I will be very happy not to have to include Draft, since doing so is a lot of work and will increase processing times significantly. I'm going to remove this from the requirements unless I hear from pro-Draft folks. (Namespaces, FYI, are a way of classifying content and pages into areas like Talk vs. User, etc. Here's a list of all of them.) —JMatazzoni (WMF) (talk) 18:38, 30 October 2018 (UTC)
  • ‘Still exists?’ & ‘New page survival rate’ Can't think of a reason for this, unless you want to track internally. If there are experienced Wikipedians at an event, the page usually sticks, even if it is nominated for deletion.
Interesting. The thought here is that groups who are primarily interested in creating new content want to know if any of their new pages have been deleted. I assume this does happen with groups that a) include newbies and b) let them create new pages (not always the best idea, I know). If people are not keen on this figure then that is good to know. —JMatazzoni (WMF) (talk) 18:51, 30 October 2018 (UTC)
  • ‘% women (estimated)’ This should not be public, at least on an event-by-event basis, maybe for a year-end report if anything. Events about women attract more women, events about African Americans attract more African Americans, events about LGBT attract more LGBT, this is already understood, but it is probably better to track this internally or for grant purposes. Estimates can be made from photographs, but not everyone wants to be photographed so this is not necessarily accurate. People would probably rather be welcomed at an event for their skills or specialized knowledge than for their membership in an underrepresented group, although the WMF may have legitimate reasons for tracking this to measure the effectiveness of their programming and make funding decisions. Just the act of asking this question of everyone routinely will help make people more conscious of the demographics of their events. Avery Jensen (talk) 02:47, 28 October 2018 (UTC)
Thanks Avery Jensen. See my answer to Kerry above, but yes, I agree, we will include this figure only in the downloadable spreadsheet, where users who want it can easily find it. —JMatazzoni (WMF) (talk) 21:27, 30 October 2018 (UTC)

Comments about 'Metrics dropped or delayed'

Leave your comments here about our plans to delay or drop these metrics.

  • ‘Words added’
  • ‘Description’
  • ‘Wikidata claims added’
  • Bytes, edits, words ‘changed subsequently’
See my rant above about why it's better to count bytes/words added than edit counts. In English (I can't speak for other languages), word count is a very traditional metric for writing. University students get asked to write 1000-word essays, etc. So I am in favour of newbies in particular (and their employers where relevant) getting that metric, because it's something they understand and may be comfortable including in their own reporting (particularly institutions). Byte count is something we can always do easily but is less meaningful to people. I think it's OK just to report byte count for languages where word count isn't meaningful or is expensive computationally. I don't think any word count needs to be perfect (we are all well trained by Microsoft Word to accept without question whatever number of words it claims to be in a document), so I would be happy with using an average number of bytes per word in that language as a quick-and-dirty metric, which is trivial to calculate if it would be too computationally expensive to do it "properly".
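The quick-and-dirty estimate suggested above might look something like the following sketch (the per-language averages and names here are illustrative assumptions, not calibrated figures):

```python
# Illustrative averages only; real values would be calibrated per
# language from a corpus (English prose runs very roughly 6 bytes/word).
AVG_BYTES_PER_WORD = {"en": 6.0, "de": 7.5}

def estimated_words_added(bytes_added, lang="en", default=6.0):
    """Estimate words added from net bytes added by dividing by an
    average word length for the language. Deliberately imprecise, in
    the Microsoft-Word-word-count spirit of "close enough".
    """
    if bytes_added <= 0:
        return 0
    return round(bytes_added / AVG_BYTES_PER_WORD.get(lang, default))
```

For languages where word count isn't meaningful, the tool could simply fall back to reporting raw bytes, as the comment suggests.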
Thanks for this Kerry Raymond. I've added your comments to the ticket investigating a "Words added" metric. —JMatazzoni (WMF) (talk) 19:59, 30 October 2018 (UTC)
Description. Happy to leave this as dropped/delayed. Generally the theme of the event will be in the description of the event, and it can be reasonably assumed the articles edited are within that theme. So long as the article title is a link, you can click on the link and read the lede para and find out what it is about. I don't see this as urgent. Kerry Raymond (talk) 01:15, 27 October 2018 (UTC)
Wikidata. I don't have a strong view here because I don't run Wikidata events. I don't really think newbies are doing Wikidata events (are they?) so I would think we would be dealing with beyond-newbie editors. Unlike Wikipedia, on Wikidata, I think plain old edit counts are a perfectly reasonable metric for activity and these are easily obtainable. Just do new item counts and use edit counts to measure other activity. Kerry Raymond (talk) 01:15, 27 October 2018 (UTC)
Subsequent activity. I think we do need to tease out the motivation here. What is it that we really want to know? Then, what's the best way to measure it? Subsequent activity might be about the person or the content. Does the person continue to edit? If so, do we care whether they are doing it within the confines of the event theme (e.g. category-limited), or are we just interested in whether they survive and flourish as contributors? Or are we wanting to know if the content they created (presumably new articles) continues to be developed? By the original creator? By others in the event/program? By the community at large? I can see the obvious value of knowing if our newbie event participants go on to continue to contribute, but simple edit counts show us that. If we care about what they are editing, we have category limits to assist with that edit counting (although I realise not all event themes neatly fit a category tree). I am less sure what we learn from the further development of the content. What's good or bad here? If an article is perfect, it doesn't need further editing. If an article is about a controversial topic, it may get a lot of editing, but that's a statement about the topic, not the article. The amount of subsequent editing of the content doesn't seem to be a measure of anything interesting to an event organiser. I think pageviews might be a more significant measure of interest in the content but, as we know, interest in an article is usually based on the topic, not the article. OK, if we create an article at an event and it goes on to get thousands of hits a day, it was well worth creating, but if a topic is notable, does it matter if it is a less popular topic? We often run events to fill content gaps, and those topics may be less popular, but it doesn't make the activity less worthwhile. Kerry Raymond (talk) 01:35, 27 October 2018 (UTC)
Thanks Kerry Raymond. Good input. I'm very happy to leave this in the "dropped" column. —JMatazzoni (WMF) (talk) 21:34, 30 October 2018 (UTC)


  • ‘Words added’ Back before computers had the capability to count every word in a particular text, it was perfectly acceptable, for example if you were writing a 500-word paper, to estimate the number of words per line and multiply by the number of lines. It would be more useful to have a standard measurement that could be compared across events than to have an exact measure.
  • 'Description' Not important to me, but my audience is the person who made the edits, and they already know the content.
  • 'Wikidata claims added' Some kind of ballpark figure would be more interesting than nothing at all. While our emphasis is on learning to edit, and not on Wikidata, it is sometimes interesting for people to see they have an edit count on Wikidata without editing it directly, just from connecting articles on language wikis. While I am not a big fan of 'edit-count-itis' either, an edit count or something similar might give some minimal information.
Thanks Avery Jensen. I've added your comments to the ticket investigating a "Words added" metric. —JMatazzoni (WMF) (talk) 21:43, 30 October 2018 (UTC)
  • Bytes, edits, words ‘changed subsequently’ Don't see any reason for this. If someone wants to monitor a particular article, they have other ways to do so. A lot of the articles are on the long tail anyway; popularity is not as much a consideration as filling knowledge gaps. Avery Jensen (talk) 03:24, 28 October 2018 (UTC)

Changing the name of this page to 'Event Metrics'

Some time this week I plan to change the name of this page to "Event Metrics". The current page name, "Tools for program and event organizers," has always been a mouthful—and was always meant as a placeholder until we could figure out just what we were going to do. Last month, I proposed the product direction that we're now pursuing: to extend the Grant Metrics tool with new features and rename it "Event Metrics." The new page name not only describes that project, it's something you can say to someone in a sentence, as in "you'll find the details on the Event Metrics page, on Meta." Once we get the new page name in place, we'll get to work renaming the Phabricator tag (currently "Event Tools"). Renaming the Grant Metrics tool itself may take a little longer. Please let me know if you have thoughts or questions about this change. —JMatazzoni (WMF) (talk) 22:03, 23 October 2018 (UTC)

"Event metrics" works. I prefer "program metrics" because this tool works to track long-term programs including classes over a term or programs like en:Wikipedia:Wikipedia Asian Month where groups of people all get tracked throughout a month.
Maybe a program is an event, but usually I think of an event as a type of program. Blue Rasberry (talk) 12:56, 24 October 2018 (UTC)
Thanks Blue Rasberry. We thought about Program Metrics, but "program" is organizer/funder jargon to some extent; many people don't know what it denotes (especially since "program" has a different meaning in the software world). We thought about Program & Event Metrics, but feared it would get confused with the Dashboard. In the end, the simplicity of Event Metrics won out. Thanks for your blessing on that (misgivings notwithstanding). —JMatazzoni (WMF) (talk) 21:03, 25 October 2018 (UTC)
It probably depends on whether you want the option of hosting anything other than metrics in the future. The outreach dashboard hosts training modules. Avery Jensen (talk) 01:30, 28 October 2018 (UTC)
Thanks Avery Jensen. I would love to add more training and event-management features. But for now, Event Metrics is a metrics tool. When the time comes that we can add more features, we will be happy to rename the program again. —JMatazzoni (WMF) (talk) 21:46, 30 October 2018 (UTC)

Comments on ‘Which new features should come first?’

Comments on the proposed timeline

Your comments

Comments on the wireframe for the ‘Event Summary’ data screen (v1)

  • Do the layout and organization of the Event Summary screen wireframe seem helpful? Are the metrics shown the right ones? What do you like? What is missing?

Your comments

Please comment on the proposed product plan for Event Tool

  • Does the proposed product plan seem like the right direction overall?
  • What is missing that would make it work better for you?
  • Are there any features here you think we could drop or put on our low-priority list?
Your comments here
Hi JMatazzoni (WMF), before I offer any feedback can I ask you to clarify your post? You write: "Organizers expressed support for many of those ideas—particularly automated wiki page creation, event signup, and assistance with wiki account creation. But as the team investigated these event-management ideas, a few things became clear: accommodating the wide variety of organizers’ needs would make these tools more complex than they at first appeared. [...] Such complexities made event management features a big job. [...] Which is not to say that the event-management tools explored will never be built. [...] Those future investments may well take up some of the Event Tool work discussed in these pages." This seems to imply, but doesn’t outright state, that these event-management aspects of the project will not be included in the software you are writing. Can you please clarify? --Theredproject (talk) 01:26, 22 September 2018 (UTC)
Thanks for the question Theredproject; sorry my post wasn’t clear. Your conclusion is correct: the proposal I’ve made does not include event-management features. The Wishlist wish that created this project is focused on metrics, which makes improvements there a given. And the proposed program of metrics features (detailed here and here) is already about as ambitious as a Wishlist project can get. I know that some of the other features discussed are important to organizers, and my hope (and belief) is that future projects will be able to address them. JMatazzoni (WMF) (talk) 01:11, 25 September 2018 (UTC)
@JMatazzoni (WMF): as you have described it, the tool you are describing neither preserves existing features that have become part of our workflow, nor does it solve the key problem of discoverability. As such we worry that it will be a time and money sink that will lead to a tool with a new interface to learn and a reduced utility.
As you know, because we spoke about it in our video conversation, Dashboard already does account creation at sign-in. So this is one of several reasons why A+F is concerned and frustrated: we are afraid that you are putting a lot of labor into reinventing the wheel here, and putting a few bells and whistles on it, but essentially the same wheel. Except it will have a different interface that our 500 organizers will each have to spend several hours getting retrained on. That is 1500 hours of community labor.
The key problem with Dashboard as we see it is discoverability and Wikipedia integration. The tool was not designed to be used as something to find events with. That means that you either have to create a Wikipedia meetup page, which our research and practice has shown is largely ineffective for outreach beyond existing very active users, and very difficult to maintain at scale; or you have to create some other form of outreach post, such as Eventbrite, meetup.com, FB, etc. We have had to create our own interface & system (http://www.artandfeminism.org/find-an-event/ built on Wordpress) to centralize all of those outreach posts, though this means that the event does not appear on wiki.
I sincerely apologize for sounding harsh here, but I want to be honest. I have conferred with my Art+Feminism collaborators and speak on behalf of them. --Theredproject (talk) 00:58, 26 September 2018 (UTC)
Hi Theredproject. Your honesty is appreciated, as always. I’d like to know more about your ideas for improving event discoverability. What is the goal? It sounds like you’re imagining something that would be on the wikis…. You’d want to target people by location, clearly. How else? E.g., by category? Readers or editors? Etc. The WMF’s decision to discontinue work on the Dashboard hasn’t changed, but I’m happy to investigate ideas for helping you connect with your audience. —JMatazzoni (WMF) (talk) 18:00, 27 September 2018 (UTC)

Please comment on proposed metrics features and reports edit

Please give us your reactions to the Sept. 6 posts, Proposed metrics features—what do you think? and New data and reports in detail. I’ve offered the sections and questions below to help structure the discussion.

Comments on metrics proposals overall edit

  • What do you think? Will these reports and features work for you (at least to start with)?
Your comments:
Apart from the annotations that I have made in some sections, in general the metrics proposal is very complete, with a series of features that, at least in our case, will be very useful. Thank you for all the updates. --Rodelar (talk) 21:55, 10 September 2018 (UTC)Reply
Looks good overall. Hopefully this will be a one-stop shop for metrics that will make life easier in terms of reporting as I'm pulling stats from a range of online and offline sources at present. Specifically for the kind of longer-term stats for things like page views, image views, bytes added, editors retained etc. Stinglehammer (talk) 14:19, 13 September 2018 (UTC)Reply
  • Overall if the team proceeds as they described then I will be happy with the outcome. This is great for this iteration. I have some comments below which are all details. If the WMF team could not implement or respond to any of my comments then I still support the direction of this development because I realize there will be ongoing discussion and iterations of this in the future. What is already proposed is a great help and improvements that are valuable to me and which I believe will be valuable to others. Blue Rasberry (talk) 18:47, 19 September 2018 (UTC)Reply

Comments about ‘Lots of new data and ways to get it’ edit

  • Are the proposed reports the right ones?
  • Is the assumption that wikitext tables should provide only key figures instead of the whole dataset correct?
Your comments:
This all looks pretty good. Just one thing that I notice is the nomenclature. In organisation-speak, input metrics measure what goes into an activity, while output/impact metrics measure the desired benefits of the activity. So for events like we run, there are two perspectives on this: the point of view of the Wikipedian/chapter/WMF in terms of their inputs (the cost and effort to organise the event) and that of an organisational partner who may be providing the event participants. So metrics like “articles created” might be seen as impact from the Wikipedia perspective, but from the participants (and their organisations, e.g. GLAMs) these are input metrics, while “page views since edit” are impact metrics. We are not measuring Wikipedia input metrics at all here (how much grant/chapter money, how much staff time, how much volunteer time). I’m not saying that we should be doing so, but it is important that when dealing with partners, we understand the metrics from their perspective because we need their buy-in. Kerry Raymond (talk) 01:15, 7 September 2018 (UTC)Reply
Personally I am ok with simpler tables (or other forms of visual representation) online, so long as complete spreadsheets are available for download for whatever detailed analysis is desired. One thing that might be useful then would be the diffs of the individual edits, so that people who do want to analyse what occurred in more detail can do so, e.g. was there an edit summary, was a citation added, etc. Not asking for this analysis to be done, but being able to get to the before and after text of the diff would facilitate some interesting backend tools. We might also want metrics on stuff like use of source vs VE edits, desktop vs mobile, etc (not sure if we can get these from a diff or need to get them in other ways). Kerry Raymond (talk) 01:23, 7 September 2018 (UTC)Reply
  • Yes all of this is correct. Yes it is a real challenge to limit categories and Wikimedia projects and yes this proposal expresses correct understanding. Blue Rasberry (talk) 18:48, 19 September 2018 (UTC)Reply
  • It is great to have different kinds of reports, for specialist and non-specialist, and different formats too. Impact and cumulative metrics are fundamental! I think that the wikitext table assumption is correct; if you choose to give a report in a wiki it doesn't have to be too detailed but it needs to be quickly understandable. --M&A (talk) 10:24, 20 September 2018 (UTC)Reply

Comments about the ‘New filters for customizing reports’ edit

  • Which of these filters is the most important for you (i.e., what should we develop first)?
  • What is missing?
  • Are you interested in namespaces beyond Main, or is that just an extra complication?
Your comments:
Sometimes new articles are created or new participants are included that were not initially planned. Will it be possible to include them in the report? That is, is the list for the report closed, or can it be modified? --Rodelar (talk) 11:46, 10 September 2018 (UTC)Reply
Hi Rodelar. What you describe should be fine. Which is to say, I don't believe there is any problem if the organizer wants to add articles to the Worklist or usernames to the participant list and then trigger a new Update to the metrics. JMatazzoni (WMF) (talk) 18:04, 10 September 2018 (UTC)Reply
  • Super report behind the scenes This scheme of reporting makes no mention of advanced analytics for superclasses of users. There are several classes of superusers. One major superuser class is the WMF community engagement team, which currently puts too much pressure on casual Wikimedia community members to interpret metrics and report them back to the WMF. The skill set for organizing and hosting an awesome event is unrelated to the skill set for interpreting the data of the outcomes of the events. In planning for these reports, anticipate that 99% of the userbase wants a very simple report and never wants to see the more complicated report. The WMF should never even ask the users to generate a report to give to the WMF; instead, when casual users generate their simple report for their own use, that action should automatically send the more complicated report to the WMF or whatever other backend research repository might want it. I want the WMF to get its reports; I think that casual users should not be a middleman to the WMF's access.
There are various metrics listed here and that is great. This is a first iteration, this is a start, and any attempt is good enough. The immediate problem is that most of the information it exposes does not match typical audience needs. When users get overwhelmed with information, that either discourages use or creates cultural problems.
Another way to say this is to imagine how Google and Facebook and Twitter do things. They have some entry-level reports for typical users. They also have mid-level reports for media professionals who are engaged full time in assisting institutions to engage on those platforms. Behind the scenes, Facebook etc. have very deep needs that go beyond their users’ interests, and they collect all the data they want internally and do not attempt to get their user communities to manually report it back to the mothership. Some parts of this WMF reporting scheme make me imagine a WMF desire that every community of users engage with every part of the data, and even manually interpret it back to the WMF in reports that humans choose to send back. Blue Rasberry (talk) 18:42, 19 September 2018 (UTC)Reply
Hi Blue Rasberry. I’ll try to address your points in turn:
  • You write that “most of the information [in reports] does not match with typical audience needs.” I’d love to simplify these reports if that’s so. The proposed metrics listed were arrived at by compiling organizer requests and looking at the reports they’re currently compiling and posting. If you or anyone else has nominations for data we can drop, please speak up.
  • Regarding your larger point of having classes of users, I’ve tried to address this idea in a minor way by differentiating between the amount of data in the spreadsheets versus the wiki table downloads. The spreadsheets will include the data superset, on the principle that users can more easily edit and sort them. The wiki tables will include just the highlights that I’m imagining organizers will want to post publicly.  Once again, if the mix on those is wrong, please suggest changes. (When the time comes we’ll post mockups of these, which might make analyzing them easier.) Articulating different roles in the system is a good direction to keep in mind for the future.
  • I agree completely in principle that the WMF should not need to make organizers submit data to us that we already have in this system. And various groups in the organization definitely see how that data could be very helpful as we try to better understand organizers’ needs and role, so that we can support them more effectively. But the WMF is not Facebook or Google—in a good way, in this case. We observe a higher standard when it comes to data privacy. Issues of what the WMF is and isn’t allowed to do with the data in this system are liable to be complex, and will have to be worked out over time. I’ll be very interested to hear from any organizers who have thoughts about how protective they feel about the data in this system. —JMatazzoni (WMF) (talk) 23:43, 20 September 2018 (UTC)Reply
  • IMHO the more important filters are Participants and Worklist. I don't think we need to have all the namespaces, but maybe some of them can be useful, such as User: or User Talk: or Help: but it is not a priority. --M&A (talk) 10:42, 20 September 2018 (UTC)Reply
That's useful feedback M&A. Thanks. My sense from my research is that other organizers will agree with you about Participants and Worklist, and we'll take that into account when prioritizing these features. If others are mostly interested only in Main Namespace, that would be great to know, since it would spare us a lot of trouble. Anyone? —JMatazzoni (WMF) (talk) 17:31, 21 September 2018 (UTC)Reply
@JMatazzoni (WMF): I say drop everything related to new account registrations or retention because those are not metrics that either community organizers or institutions want. Instead of new accounts there could be a marker for "newer user" versus "experienced user", but lots of new users have accounts which are years old. Community organizers and institutions might need to know who needs extra support, but that is not the same as "newly registered" or "retained". These new user metrics are a major influence on institutional investment. They are guiding the professional direction of everyone employed to advance the wiki space and no one ever established that these metrics actually matter to outreach. Blue Rasberry (talk) 15:12, 25 September 2018 (UTC)Reply
Thanks for the suggestion about retention figures BlueRasberry. Event organizers are a varied lot with differing needs. Retention figures are of interest to Wikimedia chapters, for example; and the Foundation requires these numbers from grantees, for reasons that are understandable. Still, I see your point and think it makes perfect sense to omit these from the downloadable Wikitext reports, which are designed to be posted publicly. I don't see any harm in including such figures in the downloadable csv reports, which are accessible only to the organizer of the event. In terms of the onscreen report, I'm not sure. These are mostly private but may be shared with others; for people who are interested, having them onscreen may be a convenience. What do people think? JMatazzoni (WMF) (talk) 20:57, 27 September 2018 (UTC)Reply
@JMatazzoni (WMF): There is no need to settle this issue but I want to say enough to be understood, not just for this discussion but for the future development of things.
Putting "new user" metrics in back-end reports is fine. The problem with putting them in user-facing reports is that the WMF does not have a good understanding of how the metrics it requests are driving individuals to make career decisions and institutions to invest resources. The practice of tracking new users was lightly set at Learning and Evaluation/Global metrics. The decision to emphasize new user recruitment never went through a community consultation process, or even ever had much discussion.
Chapters only like "new user" metrics because the WMF requires them as a condition of funding, not because of a conscious choice of outreach strategy. If you ask chapters what they want, they will tell you they want more content in important articles and more regular experienced editors. For as long as metrics tools guide organizers to increase their count of new users, chapters will invest fewer resources in supporting experienced editors in developing popular articles and more resources in encouraging totally new users to make new articles, typically on fringe topics. Tools greatly increase the scale of this behavior.
Whatever you do is not a neutral decision. If you promote these metrics you guide how partner organizations invest millions of dollars. If you leave out these metrics you guide those organizations to invest millions of dollars in another direction. So far as I know, the WMF has never published any consideration of the difference between its goals and partner goals, or of just how much money partners have to divert to satisfy WMF demands at the opportunity cost of their other options. More than forcing the decision one way or the other, I wish we could transition this from a topic dictated by the WMF on a whim to a topic which has had some serious reflection. The endless drive for new users makes for a grind of recruiting new users then dropping them for more new users, without appreciating follow-ups and ongoing development. It is possible to make new users into regular users, but the current reporting and evaluation paradigm is not incentivizing grant recipients to seek more desirable content or editor retention.
Please do seek other opinions and thanks for giving me space for feedback. Blue Rasberry (talk) 01:07, 28 September 2018 (UTC)Reply
Hello @Bluerasberry:. Thank you for this feedback. First I want to point you to Grants:Metrics, which has been the basic framework for reporting metrics since 2016, and more importantly to the fact that this was a revamp of Learning and Evaluation/Global metrics based on a community consultation. With this in mind, I would nuance the idea that this is a "topic dictated by WMF on a whim". I subscribe to your general comment that choosing (a set of) metrics is not neutral and that it drives the way people will design their events or activities. This means that yes, metrics need to be chosen carefully. In the work I do as a program officer, I often question metrics, even the agreed upon ones, to make sure that people using them are using them as their own, rather than just to please WMF, as I recognize that this has been a problem in the past (as an example, I feel that using "bytes" as a metric has rarely been fully explored as a tool to drive change and adaptation of programs, but rather treated as "just a metric" with no further meaning behind it). Please bear with us, as the Wikimedia movement, as a fairly new grantor organization, is also learning to use evaluation in the best possible way, and to adapt our expectations and the way we use metrics altogether to support change based on local context.
Second, I do not subscribe to your blanket statement that "If you ask chapters what they want, they will tell you they want more content in important articles and more regular experienced editors." Having worked in chapters many years, and with chapters for even longer, I do not think that this is an accurate picture of reality. Some chapters and organized groups may indeed find that work with experienced editors is more important, but others do find it interesting and important to focus on new editors. Indeed, the retention techniques they might have to deploy to make new users into regular editors are equally important in their work. Work in emerging communities especially, which brings a lot of new editors to our projects, is especially interesting in that regard. I agree with you that "new users" should definitely not be the end of our thought process and that we as a movement should definitely work on how to support those users beyond them being "new". In older communities, having new users is also sometimes paramount to keeping a project alive or healthy, so that metric is not harmful but beneficial to understanding how a community evolves. Your context might be one that does not need to look at new users, and I respect that, but my experience is that this is not true across the board.
Finally, while as stated above I agree that using some metrics or others is not neutral, in our work as grants program officers we try to make sure that the metrics that people will report on are the metrics that make sense for their project and activities, in their context. For example, if the focus is "adding content to bridge a content gap" in an editathon, then the number of articles or content added will be given more weight than the number of new users in evaluating the success of a project. On the contrary, if the focus is "bring more women to edit", then the "new users" metric will of course be given more weight. The reason why there are also "grantee defined metrics" available is to make sure that we can refine as much as possible the goals of an activity to match the intended impact and change. In short, if you have experienced the contrary, i.e. a push for metrics that you deem not relevant, I urge you to ask for a conversation with the program officer in charge of looking at/funding your projects so you can bring your concerns about the relevance of metrics to what you are trying to achieve. As an example, I have worked on FDC applications from organizations that simply didn't use some of the Grants metrics for some of their programs, because they were not relevant at all to a given program. And I don't see a reason to push anyone to use metrics that they have no use for, especially when a good rationale was given.
I think that context is key, and it seems to me that your experience differs greatly from mine, so I don't think we can assume that some metrics are simply irrelevant, or "dangerous". What may be "irrelevant" or "dangerous" is, I will agree with this, the use made of them. But that can be changed with a good conversation. Best. Delphine (WMF) (talk) 08:52, 1 October 2018 (UTC)Reply
@JMatazzoni (WMF): I want to push back on you about talk of data privacy. I know it is not your intent, but when you say that the WMF will accept reports voluntarily given by community members, then you are shifting responsibility for understanding data privacy from the WMF to the Wiki community, and also shifting blame for when something goes wrong from the WMF to the person who submitted the report. The position which I would like these tools to take is that they collect public information which does not trigger data privacy concerns beyond the norm of typical wiki editing.
Please try to avoid developing software which raises new and undiscussed ethical concerns unless you also intend to either (1) organize community consultation or (2) take responsibility. In commercial projects the community of users is a reservoir for outsourcing legal liability and blame. In the Wiki community our user base gives their trust in exchange for an expectation of a safe environment, so software development has to happen differently here. I recommend at this pilot stage to avoid the creation of new workflows which create data privacy concerns which did not exist previously.
If the WMF has an expectation of seeing already public data, then please publicly observe that data in the way you have in the past without having a new consent process to have community members consider whether individuals need to take on responsibility in this space. Blue Rasberry (talk) 16:07, 25 September 2018 (UTC)Reply

Comments about ‘Assumptions, limitations, ideas for future release' edit

  • How important would auto-updating (on demand) of on-wiki metrics tables be to you?
  • How important is mobile for you? If we optimize pages for mobile, is reporting or setup more important? Or both?
Your comments:
Being able to auto-update matters, as the impact metrics are the gift that keeps on giving over time, but please let the participants trigger it, not just organisers, so our partner orgs can keep getting the metrics even if the Wikipedia organiser is no longer interested. Kerry Raymond (talk) 01:26, 7 September 2018 (UTC)Reply
Thanks for the feedback Kerry Raymond. I talked to the engineers and having readers trigger the updates is a tough one. Here are two alternatives—one that will work out of the gate and a second that would probably come as a phase-two improvement. In the near term, if you include your partner as an "organizer" in your program, then (as long as they have a wiki account) they will be able to see metrics and trigger updates for all your events. I imagine that solution will work better with some partners than others. In the longer term, instead of changing the permissions scheme we would probably switch Grant Metrics over to a system where updates happen automatically on a schedule, every month or whatever. It's a lot of processing but it could be done. Your thoughts? JMatazzoni (WMF) (talk) 21:42, 13 September 2018 (UTC)Reply
Yes, I do generally add the contact point within the partner org as one of the facilitators. But people move on and, as you would know, we don't allow "role" accounts on Wikipedia (and even if we did, the likelihood the departing person remembered to hand it over would be low). But the need for metrics varies over time. During the event and in the immediate aftermath, people are very keen to see the up-to-date metrics (and want it real-time, as that's always the largest number, which is what people want to show off). But by a few months later (after the immediate post-event reporting), the need becomes less frequent, e.g. someone needs to write an annual report. I think a monthly auto-update would suffice for that purpose, so long as the date of the update is visible so you can write "As at 17 March 2020, ...". So the auto-update works as a long-term solution. Kerry Raymond (talk) 02:52, 16 September 2018 (UTC)Reply
Agree with Kerry Raymond, auto-update matters. As mentioned, I think being able to paste in a worklist of articles that were created/improved using article titles only would be useful (as happens on the outreach dashboard). Saving should auto-generate the urls. Plus number of citations added, gender breakdown of editathon participants and number of plays of videos and audio through the mediaview extension would be good to pull in as important metrics. Stinglehammer (talk) 14:25, 13 September 2018 (UTC)Reply
Hi Stinglehammer. You've listed a number of ideas here. I'll address them one by one:
  • Create a Worklist of redlinked articles based on copying and pasting a list of titles. Yes, this will work.
  • Report the # of plays to uploaded audio and video files. Yes, we can get this count. I've added this metric to the specs for all the relevant reports in the project-page listings.
  • Report gender breakdown for events. Yes, a lot of people are interested in this and we can include it. I've added it to the specs for the relevant reports. But Mediawiki provides no reliable way to determine users' gender, so until Grant Metrics includes a user signup facility (not planned for the near term) this metric will have to be manually entered by organizers—based on their own head count or a sign-in sheet, etc. (It will be reported as a summary number for the event, not as a designation for individual participants.)
  • Count # of citations added. I spoke again with our engineers to explore other ways of doing this. But even if we're willing to accept approximate numbers, identifying citations is still a heavy lift. If anyone has ideas for this that don't require us to parse all the wikitext, we're listening. — JMatazzoni (WMF) (talk) 22:35, 13 September 2018 (UTC)Reply
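To illustrate why citation counting means parsing wikitext, here is a naive sketch (a hypothetical helper, not part of Grant Metrics) that compares the number of <ref> openings before and after an edit. Even this crude approach requires fetching and scanning full revision text, and it misses named-ref reuse, citation templates like {{sfn}}, and much else, which is part of why the engineers call this a heavy lift:

```python
import re

def ref_delta(old_wikitext, new_wikitext):
    """Naive citation-count change for one edit: the difference in
    the number of <ref ...> openings between two revisions.
    Hypothetical helper for illustration only."""
    count = lambda text: len(re.findall(r"<ref[\s>]", text))
    return count(new_wikitext) - count(old_wikitext)

old = "Some claim.<ref>Source A</ref>"
new = "Some claim.<ref>Source A</ref> More.<ref name=b>Source B</ref>"
print(ref_delta(old, new))  # 1
```

Even accepting approximations like this, every revision touched by an event would need its full wikitext retrieved and scanned, which is where the performance cost lies.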

Ah yes... I’ll move the question later. Thanks :) Nattes à chat (talk) 23:24, 31 January 2020 (UTC)Reply

Comments about ‘New data and reports in detail' edit

  • Are these the right metrics? What could we drop? What is missing?
  • Do the Wikitext tables show the right short list of figures?
Your comments:
Stinglehammer made a request that we track the # of plays to uploaded video and audio files. I'm happy to report that this is something we should be able to do, and I've added it to the appropriate reports in the post Sept. 6, 2018 (continued): New data and reports in detail. —JMatazzoni (WMF) (talk) 19:53, 12 September 2018 (UTC)Reply
  • Commons data Any metrics are an improvement here. I understand that generating a metrics report will be iterative, so if you cannot provide the following then fine. What the WMF is currently proposing here is "# of views to all images uploaded (cumulative, since upload)". This sounds correct, but I want to clarify that the views desired here are the views to the Wikipedia articles where the images are posted, and not to the Commons files themselves. It is unclear to me whether the intent is to report views of files on Commons or views of files in context in Wikipedia articles and elsewhere. A Wikipedia pageview count is most useful; then, if you like, supplement that with other view metrics. Wikipedia pageviews should be 100x more than any other metric when they exist at all. Same with videos and audio files above - plays are a great metric but mere pageview impressions matter also. If I could request even more detail, distinguish pageviews to media by text language. The most commonly requested divide will be views on English language Wikipedia articles versus all views collectively in non-English contexts.
Another issue here, sort of unrelated, is reporting entire Commons categories. This entire reporting system imagines that program organizers want to collect reports of what their userbase did, but this is a WMF desire with the bias of measuring the efficacy of WMF grant funded outreach programs. I advocate for the institutional need which is different from and conflicts with WMF desires. Institutions typically would want to combine entire Commons categories of files to get reports, and not just see reports of certain files uploaded. For example, a museum would want a Commons report for all items in the category of the museum, including fan uploads unrelated to their outreach, and not only their own programs. From the institutions' perspectives they want to measure total impact of Wikipedia, and not only impact of a particular outreach program. The per-upload metrics are for WMF benefit more than user benefit. Blue Rasberry (talk) 18:39, 19 September 2018 (UTC)Reply
Thanks Blue Rasberry for your reminder that organizer priorities may be different from the Foundation’s, and for your interesting questions. I’ll answer them one by one:
  • Re. your question about whether page views will be views to the articles where the image has been placed: Below, in my answer to Sadads, I say yes, that’s what we’re planning. And that figure should be possible to get. But I just spoke to one of our engineers who expressed a concern about how long such a query might take—especially if made for an entire program instead of just an event (which is a feature I’d like to provide). It is easy to imagine how deep and sprawling such a search could become, so performance is something we’ll have to evaluate as we work on this feature.
  • Re. your request to “distinguish pageviews to media by text language”: We’re planning to give you the ability to filter results by wiki. So if what you want is to distinguish English vs. non-English, that should be possible (by getting the sum total then subtracting the English subtotal). What we probably can’t do is provide a breakdown of pageviews for each wiki a file might be placed on, since a popular file could be on hundreds of wikis and a report might encompass hundreds of files. Such a report could become epic in scale and simply impractical.
  • Re. your request to get a report on all the images in a category or group of categories regardless of whether they were uploaded as part of an event: I’m not sure about that. We were planning to let users turn most of the filters on or off. But the idea that organizers would want numbers about files created or edits made outside the time period of an event or program had not occurred to us until now. It’s something to think about, though there’s a fear, again, that if the scope of searches gets too big, system performance will bog down. (Have you tried TreeViews?) —JMatazzoni (WMF) (talk) 22:29, 20 September 2018 (UTC)Reply
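For what it's worth, the English-vs.-non-English split discussed above (sum total minus the English subtotal) is simple arithmetic once per-wiki totals exist; this sketch uses hypothetical sample data, not output from any real Event Metrics report:

```python
def english_vs_other(views_by_wiki):
    """Split a {wiki: pageviews} mapping into English Wikipedia views
    vs. views on all other wikis. views_by_wiki is hypothetical data;
    a real report would aggregate per-article counts for each wiki."""
    english = views_by_wiki.get("en.wikipedia", 0)
    total = sum(views_by_wiki.values())
    return english, total - english

views_by_wiki = {"en.wikipedia": 12000, "fr.wikipedia": 3000, "de.wikipedia": 1500}
en, other = english_vs_other(views_by_wiki)
print(en, other)  # 12000 4500
```

The expensive part, as noted above, is not this subtraction but collecting per-wiki totals for potentially hundreds of files across hundreds of wikis.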
Update for Blue Rasberry: I tried using TreeViews, and it has some neat features but I couldn't get it to work (or find any help documentation). So that confirmed my view that this was hard. Then I tried GLAMorgan, which seems to do a pretty neat job of detailing all file usage, with breakdowns per wiki (though it lacks some features one might want, like the ability to exclude subcategories or specify a date range beyond a month). Here's a link to a report I made. So that left me of two minds. On one hand, it looks like this is possible. On the other, it appears there is already a specialized tool for it, and if it really is limited to one month that suggests that performance is an issue. Also, it does seem like this type of detailed breakdown is more information than most organizers will need—please correct me if I'm wrong. —JMatazzoni (WMF) (talk) 16:49, 21 September 2018 (UTC)Reply
I want to +1 the concern about "just the pageviews of the file" -- when working with GLAMs it's important that we have a good snapshot of the data for where files are embedded (most likely in a way similar to how GLAMorgan captures these reports). The problem with this data is that it isn't always accurate: the last time I checked, we couldn't query the historical placement of files on pages. Sadads (talk) 11:50, 20 September 2018 (UTC)Reply
Hi Sadads. To confirm on the question about pageviews you and Blue Rasberry have, the answer is yes: the metric for views to uploaded images will be, in Bluerasberry's words, "views to the Wikipedia articles where the images are posted." So we're good there. I'm not sure I understand your question about "historical" placement of files. Can you please describe more fully what the concern is? JMatazzoni (WMF) (talk) 17:40, 20 September 2018 (UTC)Reply
@JMatazzoni (WMF): He and I both want traffic measurements which date from the placement of a media file in an article. If anyone posted a photo from a museum into an article, and 5 years later the museum has a wiki collaboration, then the museum wants the data from the time of the insertion of the image, and not from the start of their engagement.
The bias to avoid here is that the WMF has a goal of community engagement and wants reports from the time when organized wiki community engagement happened. The partner institutions, in contrast, want reports from the time of audience impact. From the institution's perspective, they do not care who posted the media or why; they only care about the impact on the audience and want those metrics. The GLAMOROUS tool provides reports like this but it is closed for general use. Blue Rasberry (talk) 16:12, 25 September 2018 (UTC)Reply
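The distinction drawn here (views counted from the date a file was placed in an article, not from the start of an outreach engagement) can be sketched as a simple filter over a daily pageview series; the dates and counts below are hypothetical sample data:

```python
from datetime import date

def views_since_placement(daily_views, placed_on):
    """Sum pageviews on or after the date the file was embedded.
    daily_views: {date: pageview count} for the hosting article
    (hypothetical data); placed_on: date the media file was inserted."""
    return sum(v for d, v in daily_views.items() if d >= placed_on)

daily_views = {
    date(2018, 9, 1): 40,
    date(2018, 9, 2): 55,
    date(2018, 9, 3): 60,
}
print(views_since_placement(daily_views, date(2018, 9, 2)))  # 115
```

The hard part, as Sadads notes above, is that the placement date itself is difficult to recover, since historical placement of files on pages could not be queried the last time he checked.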
Thanks for clarifying BlueRasberry.  The request, then, is to be able to get impact metrics broadly, without reference to a particular event—for a Commons category, in the example. I see the appeal. Broadening the definition of the tool in this way is liable to make it work for more types of people. So in that sense, I like the idea. The limiting factor here will be performance. If we don’t constrain queries somehow (e.g., to a time period, a participants list, etc.), there’s a distinct danger searches will just time out. So what I can say is that I hear the request. We’ll try to preserve as much flexibility in setting system filters as we can while keeping an eye on performance—and see where we end up. We may also be able to experiment with different types of constraints, like (for this example) a low limit on the number of categories the query can include. (You may wish to subscribe to this ticket, where issues related to this are being discussed.)  JMatazzoni (WMF) (talk) 22:50, 27 September 2018 (UTC)Reply
Hi, regarding pageviews of pages in a specific Commons category, how would this work differently from GLAMorous? GLAMorous can provide overall pageviews of a specific Commons category in a specified time period. It can also show graphs of overall views separated by language Wikipedia. GLAMorgan can show the specific breakdowns of views of individual images within the various language Wikipedias. It also can count views of an image that happened before an image category was added. For example, on the Ebensee concentration camp page, there's an image we have in one of our collections which was added by another user several years ago. I added the category for "Images from the Harold B. Lee Library" to the Commons page, which added those page views to previous months. I support consolidating tools into a single tool, since tool discoverability can be poor, but I hope you won't have to duplicate work. I don't know exactly how the tools work, but I'm preparing a short presentation on these two tools at WikiConference North America--you can see my notes if that would help. Rachel Helps (BYU) (talk) 16:36, 8 October 2018 (UTC)Reply
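Bluerasberry's request above (counting views only from the date an image was placed, rather than from article creation) can be expressed with the Wikimedia Pageviews REST API, whose per-article endpoint accepts an arbitrary start date. Below is a minimal sketch; the API endpoint is real, but the helper function names are illustrative and not part of any existing tool:

```python
from datetime import date

# Base of the real Wikimedia Pageviews REST API; the helpers below are
# illustrative only, not part of Event Metrics or GLAMorous.
PAGEVIEWS_API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def per_article_url(project, article, start, end):
    """Build a per-article pageviews request whose window starts on the
    day the image was inserted, not when the article was created."""
    fmt = "%Y%m%d"
    return (f"{PAGEVIEWS_API}/{project}/all-access/user/"
            f"{article.replace(' ', '_')}/daily/"
            f"{start.strftime(fmt)}/{end.strftime(fmt)}")

def total_views(api_response):
    """Sum the daily counts in a decoded pageviews API response."""
    return sum(item["views"] for item in api_response.get("items", []))

url = per_article_url("en.wikipedia.org", "Example article",
                      date(2018, 9, 1), date(2018, 9, 30))
```

A tool would still need to look up the date each file was first embedded in each article (e.g. from revision history) to choose the start date; that lookup, across many files and wikis, is the expensive part this thread worries about.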
  • Remember the audience. I want to continue to stress that the interests of the Wikimedia Foundation conflict with the interests of institutional partners which would contribute content to Wikimedia projects. The WMF has decided to heavily emphasize recruitment of new users, retention of those users, and the creation of new articles as measurements of success. Praising these things in the design of reports comes at the cost of undervaluing other metrics. The most important metric is audience reach, which is what media competitors to Wikipedia (Facebook, Google, etc.) sell. This metrics reporting scheme captures some of the audience need in pageviews, but we only have one audience-related metric, while in comparison we are slicing the WMF-requested metrics in several different ways to talk about new users making new articles. The problem with new articles is that they are usually on topics which are unlikely to get much traffic. My guess is that 70%+ of WMF-funded projects are to have new regular users develop new articles on topics which have no proven popularity with the audience, and there is much less of a WMF culture of respect for any project which, for example, might get experts to develop very popular existing articles as a short-term project, after which those experts might not be regular Wikipedia editors.
I do not want a culture to develop which makes institutions believe that if they do not want the WMF-suggested metrics, then wiki outreach is not a match for them. This collection of metrics has something for them among the options. Recruiting editors to have Wiki loyalty is the WMF sales pitch. Recruiting readers to engage with an institution's content is the better sales pitch for the mutual interest of the Wiki community, the WMF, and the institutional partner. Focusing on wiki editors versus focusing on wiki content are different goals, and even though the WMF loves editors, partner institutions love their content much more.
Another way to say this is that the ideal report will include a range of information, including the information of interest to the organizers or institutional partner, even information which has nothing whatsoever to do with what their program participants edited on Wikipedia. If a museum hosts an event, their report needs to have metrics on what their program participants did and put that in the context of all Wiki content which exists about that museum and its collections. Blue Rasberry (talk) 18:45, 19 September 2018 (UTC)Reply

Event Tool metrics—what do you want? edit

What type of data do event organizers most want—we need your input! To give you some ideas, I've posted a breakdown of the reports available now from both Grant Metrics and the Program and Events Dashboard. Our operating assumption is that the Event Tool will use Grant Metrics for reporting (in fact, it may ultimately be just a series of enhancements to Grant Metrics). So as you look at the lists linked to above, pay particular attention to Grant Metrics, especially to any key features you think it lacks.

What is the main purpose of the reports you want to make? edit

  • For whom are you producing metrics  (e.g., grantors, your managers…) and what are the main things you need to understand or demonstrate (e.g., type of content created, content impact, editor retention…)?
    • Answers:
We use metrics for grant reports, our managers (board) and in some cases, for reports sent to institutional partners. We are interested in content creation and new user engagement. Ariel Cetrone (WMDC) (talk) 14:00, 23 August 2018 (UTC)Reply
Well if WMF is going to want the event reported on (e.g. reporting on an existing grant or for applying for a new grant), then whatever the WMF wants has to be included. I find the participants (and in organisational contexts, the senior management) like to have both input metrics (# participants, # edits, # articles) but also want (and really love) the impact metrics like the # article views since the edit, which Dashboard provides, *and* how it increases over time (you get a snapshot each time you check but you can't get a graph over time).

What is missing on the Dashboard is that it gives you metrics on image uploads but not on the number of articles that incorporate the image and the # article views, which is a problem if the program/event is about photos rather than edits. If it's about photos being uploaded and added to articles, then the edit to add to the article puts that article on the impact metric, so the situation is OK. There are separate tools like BaGLAMa etc. which do report those metrics for images, but I don't think I can set one of those up myself (they mostly seem to be used for large image content donation organisations). I'd really prefer if Dashboard could be a "one-stop shop" to capture all impact metrics for both articles and images.

Having seen Dashboard, I know a number of WikiProjects and individual contributors have said "I'd like a dashboard that tracks all the article views arising from all of our articles (meaning ones tagged by the project) or all of the articles I have ever contributed to". So do be aware that the desire for impact metrics is bigger than just programs/events (which tend to target new users), as there are likely to be some real performance issues if Dashboard starts being used by active editors or WikiProjects.

Personally I dislike the way that Dashboard wants to display "recent edits", especially when I have no control over what recent means. Indeed, I am rarely that interested in recent edits; I want total edits.
In my ideal universe, I can send a link to the Dashboard to anyone (WMF grant manager, head librarian, etc.) and know that if they look at it today or next year, they will see the total edits. Of course, I also want the full spreadsheet of information too, so I can do whatever analysis I want, but rarely do senior staff want to be given a spreadsheet for them to analyze. They want the single Dashboard URL to do it all for them (everything they want on one screen). Kerry Raymond (talk) 22:59, 23 August 2018 (UTC)Reply
These are all really great points Kerry Raymond. Thanks! Just a note, in case you missed it: we'll most likely be using Grant Metrics for reporting from Event Tool. (I've posted a breakdown of the reports available now from Grant Metrics and Dashboard, in case you'd like to compare or look for gaps.) JMatazzoni (WMF) (talk) 21:26, 24 August 2018 (UTC)Reply
Editor retention metrics should be opt-in. I say this because editor retention statistics are usually very bad and you probably don't want anyone to see them. They are bad for two reasons. Reason 1: Many participants in programs/events do not see it as anything different from a Clean Up Australia weekend or a fun run for charity or attending an interesting seminar. It's something they will give a day to because they think it would be interesting to know how to contribute to Wikipedia or that we need more biographies of women artists on Wikipedia, but rarely do they come along with the intention of becoming an active editor and devoting every spare minute to it (like I do!). While many of them do have a secret desire to write an article about a usually non-notable topic, all our rules are a big turn-off to them. You really don't see a lot of people continue to contribute after an event of their own free will (obviously if done in an educational setting where there is course credit or please-the-teacher opportunities, things might be different, but that is not free will). I am inclined to believe that *active* Wikipedians are born, not made. Reason 2: Even when a person does continue contributing, they often do it under a different account name (because they forget their username or password and making a new account seems easier) or they decide they didn't make a good account name choice in the first place (too close to their real name, or too silly) and they don't know they can ask to change their account name. Or they just edit as an IP. I see loads of edits on my watchlist from "new" accounts/IPs that are not typical of a true newbie (e.g. using templates, detailed policy knowledge in discussions, etc.) -- as someone who trains genuine newbies, I know what their edits look like. So a lot of active editing does take place that way. So, even if you have retained people from a program/event, you probably can't track it.
But I know users sometimes tell me that I trained them or they met me at an event but their account history doesn't support that story, so I am well aware that retained contributors are likely to be using different user names over time and hence not able to be tracked. Kerry Raymond (talk) 22:59, 23 August 2018 (UTC)Reply
(copied from below) ...to allow us to report on which events were most useful / which trainers were most effective in editor retention Lirazelf
Metrics reports to judge effectiveness makes sense Lirazelf. The name of the trainer is not currently part of the event data in Grant Metrics. Would you need to have them listed, to make this type of comparison, or would you just know who did what? (I'm thinking that listing the trainer's names might make some people uneasy.) And is there usually one lead trainer? Or would a report need to be able to handle multiple names? JMatazzoni (WMF) (talk) 17:33, 23 August 2018 (UTC)Reply
  • On the topic of retention, I'm sharing a brief summary of the main points of feedback I've gotten from grantees when I've asked about a retention metric over the last few years. This feedback has come from multiple projects that have included 60+ grantees. I'm adding it here since it's not aggregated anywhere. In general, a retention metric has been a constant wish from many, many grantees over almost 5 years, but it's a metric that has to be calculated (mostly) by hand; the significant time investment to calculate has meant it's only viable for really small programs, but that doesn't diminish the usefulness to larger programs.
    • Benchmarks: As said above, retention numbers are usually low, and that can be demoralizing to some organizers. One thing that has helped fight that demoralizing effect is looking at the retention for the language project overall. So, say only 10% of a program's participants keep editing: if the overall retention of the language project is 5%, that 10% looks much better than before.
    • Multiple time intervals: Since retention isn't calculated by most metric tools today, there is no good baseline on what the "right" time interval should be for programmatic work. Almost every grantee I've spoken with has asked for multiple time intervals - 30 days, 60 days, 6 months, 1 year - so that they can see this change over time. Having a short and long time interval was especially necessary since the length of a program can vary widely. Here's a mock of what it could be.
    • Multiple namespaces: Right now, almost all metrics are calculated off contributions to the main namespace of a specific project. But this doesn't account for a reality where work in the Draft, User, or File namespaces is required or recommended (e.g. after ACTRIAL). Also, it doesn't account for a reality where people will edit whatever they want, and the organizer can't know what they've done and where (i.e. on what project or in what language). So having a retention definition that covers multiple namespaces and multiple language projects is a practical necessity.
Tracking edits to all namespaces makes sense Shouston (WMF). I can most easily imagine including all edits in a downloaded report, where we can just include a column labeling the namespace where the edits were made (so users can do a spreadsheet sort and isolate the namespaces they want). But I'm wondering about what I call the "summary" statistics we put on the main reporting screens—for things like total "Pages created" or "Pages improved". Do people want those figures to include talk, user, draft and other namespace edits? Or should those totals remain focused on mainspace only? Anyone? JMatazzoni (WMF) (talk) 18:04, 29 August 2018 (UTC)Reply
    • New editors: When WMF changed from Global Metrics to Grant Metrics, we asked whether or not to have a retention metric, and importantly, who counted as retained. Overwhelmingly, we heard that it should be new editor retention, not existing editor retention. Mostly this was because existing editors were editing before and will likely edit after the event/program; counting their "retention" as due to the program seemed untrue. This wasn't true for all programs, but was the majority of answers.
    • Editing retention vs. program retention: There are two different kinds of retention: Does someone edit Wikimedia projects again? vs. Does someone come back to program again? The first is the one we typically talk about. The second has become very useful for programs that run multiple events, or like someone noted above, only happen once a year. For example, if you only run an editathon on International Women's Day, but you've done that editathon for 6 years, you'd like to know how many "repeat" or "retained" people you have in your program (regardless of what editing they do the rest of the time). -- Shouston (WMF) (talk) 23:44, 27 August 2018 (UTC)Reply
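The multi-interval retention request above can be sketched as a small calculation. The definition used here (retained at N days means at least one edit made N or more days after the event) is only one possible choice, since, as the thread notes, there is no established baseline; the function names are illustrative:

```python
from datetime import date, timedelta

# Assumed definition: "retained at N days" means the participant made at
# least one edit N or more days after the event. The thread notes there
# is no standard interval, so treat this as one possible choice.
INTERVALS = (30, 60, 180, 365)

def retention(event_day, edit_days, intervals=INTERVALS):
    """Per-participant retention flags at each interval."""
    return {n: any(d >= event_day + timedelta(days=n) for d in edit_days)
            for n in intervals}

def cohort_rate(participant_flags, n):
    """Share of participants in a cohort retained at interval n."""
    return sum(f[n] for f in participant_flags) / len(participant_flags)

# One participant: event on Jan 1, edits on Feb 15 and Aug 1.
flags = retention(date(2018, 1, 1),
                  [date(2018, 2, 15), date(2018, 8, 1)])
```

Feeding in edits from all namespaces and all projects, as requested above, would only change what goes into `edit_days`; the calculation itself stays the same.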
We use metrics for grant reports but also for our partners; they are usually interested in knowing the results of the event. As for the data: type of content and editor retention. Also, I agree with Kerry Raymond, impact metrics would be nice. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply
Just the same, we use metrics both for internal purposes (to report to the board and members) and for external ones (for partners and also for communication, if they are good enough). I would like to have retention/medium-term metrics too, in addition to the ones strictly related to the event. --Marta Arosio (WMIT) (talk) 08:31, 3 September 2018 (UTC)Reply
  • What types of output formats are most important to you (e.g., spreadsheet download, automatic wikitext output, nicely designed onscreen displays in the tool…).
    • Answers:
A spreadsheet would be useful for internal purposes. It would also be nice to have an onscreen display of outcomes as well as a template of outcomes to share on the event page.Ariel Cetrone (WMDC) (talk) 14:18, 23 August 2018 (UTC)Reply
A spreadsheet is ok, but it would be nice if we could share the outcomes on the event page. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply
I agree; a clear onscreen display is useful for communicating results and engaging people. --Marta Arosio (WMIT) (talk) 08:31, 3 September 2018 (UTC)Reply

Metrics about event participants edit

  • In addition to basics like # of edits made, what data do you need about participants (e.g., editor retention metrics, gender or other demographics…)? Understand that some user data may be sensitive from a privacy viewpoint.
    • Answers:
I am particularly interested in motivating factors for new participants and participant demographics. -Ariel Cetrone (WMDC) (talk) 14:49, 23 August 2018 (UTC)Reply
Although I find the demographic data problematic, it would be interesting to know the number of women participating and of new users. Apart from that, editor retention. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply

Metrics about the work your event produced edit

I don't know if it's possible, but it would be very helpful to be able to track the number of articles, both new and improved, in different languages, not just in one Wikipedia. Also, the number of references and of pictures uploaded and used. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply
  • How important is it to demonstrate/track impact or quality of the work (via pageviews accrued, persistence of edits or articles, article “class” ratings….)
    • Answers:
Many of our participants are new editors, so it's not particularly important to demonstrate an immediate change to article "class". I find that participants and institutional partners appreciate knowing how many pageviews have accrued. It can also be a motivating factor for future events. -Ariel Cetrone (WMDC) (talk) 14:53, 23 August 2018 (UTC)Reply
For Wikipedians in Residence, pageviews and improvements in article class are important to explain the benefit of their long-term involvement with a partner organization. Sydney Poore/FloNight (talk) 01:21, 25 August 2018 (UTC)Reply
Thanks for your suggestion about tracking article-class improvements, Sydney Poore/FloNight. It appears that only five Wikipedias (plus English Wikivoyage) use article assessments, however. So while this is something we can track, I don't know whether it makes sense to build a tracking feature that will be relevant on only five wikis. What do you think? JMatazzoni (WMF) (talk) 17:42, 29 August 2018 (UTC)Reply
Good point. It's important, but maybe not an immediate priority if only 5 wikis do it now. If Article quality improvement rate ever happens, or something similar, it might be possible to include. Sydney Poore/FloNight (talk) 22:41, 29 August 2018 (UTC)Reply
Pageviews, the impact of an article in a given period allows us to assess the importance of what volunteers have done and it is also useful for partners. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply

Metrics about the event edit

  • Besides basic event info like the number of attendees and pages edited, what information do you need to show about events?
Who attended, how many new users, how many articles created, how many articles improved, did new users continue to edit after the event (the latter can be done through other means, but if there was a way... Lirazelf (talk) 21:11, 16 August 2018 (UTC)Reply
I second Lirazelf. Plus pre-event experience level, reasons for attending the event...-Ariel Cetrone (WMDC) (talk) 14:56, 23 August 2018 (UTC)Reply
Length of the event. Contests and themed events are often longer than one day. Sydney Poore/FloNight (talk) 01:25, 25 August 2018 (UTC)Reply
Thanks Sydney Poore/FloNight. I want to be sure I understand your request: Are you saying you need to see the length of the event in a report? Or are you saying you'd like to be able to set the time frame of the report separately from whatever the time frame of the event is? JMatazzoni (WMF) (talk) 00:54, 28 August 2018 (UTC)Reply
Well, both would be good! Showing the number of days of the event is critical, such as that it occurred from April 1 to April 15, 2018. Also, it would be very useful to filter by days of an event to help understand whether the number of days of an event (for example, running 4 Saturdays in April vs. running the first two weeks of April) matters, or whether a series of events is better than one-off events. Sydney Poore/FloNight (talk) 01:06, 28 August 2018 (UTC)Reply
Number of participants (users), number of new participants, articles created and improved, experience level before the event. Agree with FloNight. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply

What filters or limits do you need for reports? edit

  • If we can provide the ability to limit or filter reports, what limits would be most important, and for what types of reports? E.g. how important would it be to:
  • Limit reports by a “worklist” you've entered (of pre-identified articles to be improved or created), so that you'd show stats only for edits to those articles or for users who worked on them? Or similarly limit results to a wiki category?
  • Set a custom time frame for reports, to track statistics over a longer time?
  • Create aggregate reports on multiple events, based on a shared event parameter like partner, location or some other factor?
  • Answers:
* Most important: Limiting reports to the work list would be helpful. We try not to be too rigid with our work lists while still sticking to the theme for the day. We don't want participants to feel like we are setting the standard for what thematic articles should/shouldn't be edited. That said, some participants experiment by editing things completely unrelated to the theme and it skews the numbers when determining how many edits were made to relevant content. -Ariel Cetrone (WMDC) (talk) 15:11, 23 August 2018 (UTC)Reply
* Somewhat important: Sharing custom reports pulled over a long period of time may be useful for our partners. -Ariel Cetrone (WMDC) (talk) 15:11, 23 August 2018 (UTC)Reply
* Somewhat important: Aggregate reports would be useful to help determine success factors. i.e. attendance levels for two events in same location or why two similarly themed events had vastly different attendance. -Ariel Cetrone (WMDC) (talk) 15:11, 23 August 2018 (UTC)Reply
Worklist is important. In the events, we establish a period of time, usually one month, for the participants to continue editing online, therefore a limit could be a time frame. --Rodelar (talk) 14:54, 28 August 2018 (UTC)Reply

What else? edit

  • What other reporting features do you need or ideas do you have?
    • Answers:

Event tool feature ideas, August 2018 edit

These are the open questions for event organizers from the August 9th update, copied from the project page. We'd love to have your thoughts!

Step 1: Organizer creates an event in the system (and Event page on wiki) edit

What types of information would you like to be able to report on? edit

  • channels used to promote the event
  • where and when the event took place
  • goals of the event (learning to edit a Wikipedia article, creating templates…)
  • how many people showed interest before the event (comments on some channel…)
  • how many people came
  • what people actually asked for and what they were actually informed about
  • whether something felt lacking: materials, skills, connection problems…
  • after the event, did people seem to have assimilated the provided information

--Psychoslave (talk) 12:44, 11 August 2018 (UTC)Reply

Hi @Psychoslave:, I want to make sure I understand: You want to put these items into the event record so you can run reports later? E.g., would you want to look and see whether you got more participants from Facebook or Meetup promotion? Something like that? And when you say "where the event took place," do you literally mean the venue? Or are you thinking in terms of wanting to report back to different "Event Partners," as I might call them? —JMatazzoni (WMF) (talk) 00:17, 14 August 2018 (UTC)Reply
Yes, report them, and also being able to gather and analyze data to answer questions like "what communication channel seems the most efficient to attract people to a given kind of event in a given region?", so organizers have indicators of how to prioritize their communication efforts. It's also important to have a ratio of people that showed interest to people that actually came to the event. For example, we have already seen thousands of likes on a Facebook post regarding an event, and only two people came to it. I think it's good to have event organizers be aware of that kind of gap, both for logistical and morale reasons.
And yes, I literally meant the venue, as I expect that the same communication methods/channels won't work with the same efficiency in different local contexts. So it's good to have a good picture of what works elsewhere, especially to pick up whole new ideas or improve the way you are already communicating, but having specific local data showing that some communication methods just don't work well in a given context is also very valuable. --Psychoslave (talk) 03:54, 14 August 2018 (UTC)Reply
In case of using Facebook events, how many people were reached by the events. Maybe also how much effort was put into organizing the event; for example, how many trips to the venue, how many calls to the sponsors and so on.--Reem Al-Kashif (talk) 20:48, 11 August 2018 (UTC)Reply
@Reem Al-Kashif: How would the data about how many responses you had to your Facebook event get into our system? We might be able to look that up, based on you entering a link. What if you enter the data manually, from Facebook? JMatazzoni (WMF) (talk) 00:38, 23 August 2018 (UTC)Reply
It is premature at this stage to even think about reporting data. The unique data which is otherwise unattainable is a list of participants. Collect this here, and with this information, any other computation can come from public datasets later. I do not want to distract this project from collecting a user list, but in the longer term, an alternative input will be any Wikimedia list, like a list of Wikipedia articles, a category of photos, etc. The point here is to collect the list of items which will be the subject of computation in a later process. Blue Rasberry (talk) 13:29, 12 August 2018 (UTC)Reply
I think the basic requirements of an event page are: 1) information about the date, time, address, how to reach it (or how to connect if it is a remote event), the program, and what attendees need to do in advance or bring with them; 2) a list of participants; 3) a list of to-dos for the event, such as a list of articles to be written or reviewed. --Marta Arosio (WMIT) (talk) 08:42, 13 August 2018 (UTC)Reply
Who attended, how many new users, how many articles created, how many articles improved, did new users continue to edit after the event (the latter can be done through other means, but if there was a way... Lirazelf (talk) 21:11, 16 August 2018 (UTC)Reply
Thanks Lirazelf. I'm going to copy this to the post above, under Information about the Event. JMatazzoni (WMF) (talk) 00:50, 23 August 2018 (UTC)Reply
In my opinion, an event page should give the date, time, address (coordinates?), transport to get to it, what attendees have to take with them, the schedule, the list of participants, and the list of articles or to-do tasks. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply


We have a project page with a list of events, and links to a separate page for each event. The events are organized by word of mouth, and agreed on ahead of time, since you need a critical mass of experienced people to hold an event, and we have been working in two languages, so you need people who can translate. The time and location of the in-person part of the event is on the event page so the link can be emailed, but this is also very informal since we have been organizing with an online component across different time zones. Avery Jensen (talk) 07:17, 21 August 2018 (UTC)Reply

How important a feature is automatic wiki page creation and updating to you? edit

  • Are you willing to accept a new and perhaps more standardized page design to get auto page creation?
  • Any help to streamline the report process is welcome, as long as it actually reduces the time required to provide this feedback and doesn't limit expression. That is, leave a text field for free input. Actually, it would be great to even give an option for audio/video feedback. That would be in line with the integration of oral knowledge. If everybody in a room is fine with recording the event, it might also be great to record the whole event, as it would not only give access to its intrinsic pedagogic value, but also allow editathon organizers to learn from each other by watching them. --Psychoslave (talk) 12:44, 11 August 2018 (UTC)Reply
@Psychoslave Love that video feedback idea!--Reem Al-Kashif (talk) 20:51, 11 August 2018 (UTC)Reply
Hi Psychoslave. Re. your qualification that the auto creation and updating should not limit expression, I hear you. In terms of creation, the idea was that we might output a generic page, which you would then be free to customize. The exception would be certain sections of the page that change frequently—like a user list and worklist—that would be subject to auto-updating. Those sections would not be customizable. I hope that's more clear. In theory, would that work for you? Would it be valuable? JMatazzoni (WMF) (talk) 17:03, 23 August 2018 (UTC)Reply
Hi @JMatazzoni (WMF):, from what I understand, yes, it would work for me, although having some interface which is programmatically queryable would also be great and allow more flexibility to mix and match data. Psychoslave (talk) 13:55, 28 August 2018 (UTC)Reply
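One common way to implement the "auto-updated sections" JMatazzoni describes, while leaving the rest of the page free for customization, is to fence the bot-owned sections with comment markers and have the tool rewrite only the fenced text. A hypothetical sketch (the marker names and function are invented for illustration, not part of any planned implementation):

```python
import re

def replace_section(page_text, section, new_body):
    """Rewrite only the text between the BEGIN/END markers for `section`,
    leaving everything else on the page untouched."""
    pattern = re.compile(
        rf"(<!-- BEGIN {section} -->\n).*?(\n<!-- END {section} -->)",
        re.DOTALL)
    # A function replacement avoids backslash escaping issues in new_body.
    return pattern.sub(lambda m: m.group(1) + new_body + m.group(2), page_text)

page = ("== Participants ==\n"
        "<!-- BEGIN PARTICIPANTS -->\n"
        "* OldUser\n"
        "<!-- END PARTICIPANTS -->\n"
        "Free-form notes stay untouched.")
updated = replace_section(page, "PARTICIPANTS", "* OldUser\n* NewUser")
```

This pattern would also answer Psychoslave's point about flexibility: the same data the bot writes between the markers could be exposed through a query interface, with the wiki page as just one rendering of it.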
That could be really useful, and a huge time saver, although I'd want to be able to customise event blurb Lirazelf (talk) 21:12, 16 August 2018 (UTC)Reply
Yes, definitely. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply
  • What parts of the Event page change the most after creation so would benefit from automatic updating (e.g., the worklist?)? Is updating important to you?
  • I need a page but it does not need to look good. The user experience is similar to meetup.com - users need details on how to join the event but just as meetup.com is not the essential part of the experience of an event, the Wikimedia event page is not the essential part of a Wikimedia program or event. The point of the event page is only to communicate how to join the real event and also to collect the user account name in a signup process. Blue Rasberry (talk) 13:31, 12 August 2018 (UTC)Reply
  • The page needs to be updated after creation, both with the participants, who enroll gradually, and with the worklist, which often isn't complete from the beginning. --Marta Arosio (WMIT) (talk) 08:42, 13 August 2018 (UTC)Reply
The worklist, and it can be good to add any press / photos. Lirazelf (talk) 21:12, 16 August 2018 (UTC)Reply
Worklist and participants. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply
Sign-up list for participants and a worklist. It is also good to have a place to list outcomes so people can show their accomplishments at the end of the session. We also post a link to the Code of Conduct, we use the Technical COC because we don't want to write our own and that one has had the most eyes on it. Avery Jensen (talk) 07:23, 21 August 2018 (UTC)Reply

If participants’ usernames were in a database you could consult any time, would you still need to publish them on the Event page? Why? edit

Yes, to let people who were at the event contact each other through this list, without having to rely on a third party's availability. Of course, that is only in the case where people agree to appear in this public list. Psychoslave (talk) 12:44, 11 August 2018 (UTC)Reply
Yes, I think it encourages people to attend.--Reem Al-Kashif (talk) 20:53, 11 August 2018 (UTC)Reply
No, in lots of programs people do not even know they are part of a cohort. The point of outreach is to get people to engage in a program, and not to get them to join a group. There are many reports to run with remixes of usernames. If a user joins one event then probably they go into 5-10 computations, and it makes no sense to assume that there will be an event page for each computation. Blue Rasberry (talk) 13:34, 12 August 2018 (UTC)Reply
Well, I think both can be true. Maybe we need some different vocabulary to describe how connected the people are. Maybe "participant" for the disconnected, "cohort" for those who attend at the same place/time, and "team" where people perceive they are working together on the task (which might be as a cohort or not). For example, I would describe my 1lib1ref group as a "team" (they were working together to achieve a particular goal). I think "teams" are interesting because you have league tables for teams and a bit of friendly competition among teams can be good for getting a lot done :-) I don't say we need league tables in the first version but it's where I'd like to see this go longer term.
I think the main requirement is to have the participants' list somewhere; having it on the event page is not strictly necessary, but it would be better in order to keep all the information in the same place and to get people more committed. --Marta Arosio (WMIT) (talk) 08:42, 13 August 2018 (UTC)Reply
Not necessarily. I think it would actually be helpful to have the option to not publicize all participants, because some people do not consent for their participation in an event to be public (but they might be fine with including their edits in aggregate reporting). Rachel Helps (BYU) (talk) 17:49, 13 August 2018 (UTC)Reply
Agreed that having options here is nice, but the current Dashboard lists all the participants and I haven't had an issue with anyone being bothered by this, so I think that could be "version 1". Kerry Raymond (talk) 07:28, 14 August 2018 (UTC)Reply
Good to have the option, and to allow us to report on which events were most useful / which trainers were most effective in editor retention Lirazelf (talk) 21:13, 16 August 2018 (UTC)Reply
Interesting thought. I'm copying this to the section above, under "What is the main purpose of the reports you want to make?" —JMatazzoni (WMF) (talk) 17:29, 23 August 2018 (UTC)Reply
Yes, because I think it's useful for all the basic information about an event to be on the same page. Besides, it can help facilitate contact between the participants. For organizers who do not consider it necessary to publish the list, this could be made optional. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply

No, some people want to be anonymous or some join online. If they sign the event page with four tildes it is easy enough to get their user name later for the dashboard. If they don't sign the event page you can just ask their user name and add it to the dashboard yourself. I have had event coordinators have me sign in directly to the dashboard on their laptop. Avery Jensen (talk) 07:30, 21 August 2018 (UTC)Reply

Other comments or suggestions for Step 1? edit

Avoid acronym
A first tiny piece of feedback: if possible, let's avoid introducing yet another acronym. "Event tool(s)" is not that long, and it keeps the topic clear for anyone, which to my mind is good for staying as open and welcoming as we can. --Psychoslave (talk) 12:17, 11 August 2018 (UTC)Reply
Agreed. Acronyms also don't translate well in some languages (including Arabic).--Reem Al-Kashif (talk) 20:44, 11 August 2018 (UTC)Reply
Good point. The original working title for this was "Editathon Tool," which is hard both to write and to say! But Event Tool is easier in all ways. Thanks for the reminder. JMatazzoni (WMF) (talk) 23:04, 13 August 2018 (UTC)Reply
Huge massive agree on this one. Far too many acronyms already...
Treatment of personal data
Regarding For organizers only: we are legally required to inform people and obtain their consent if we keep any personal data. So, reading event participants will not see or use the ET, it's not clear whether this was taken into account: each participant should have the possibility to access data related to their own person, even though the whole gathered data set should not be accessible to every participant. Whatever the tools enable us to collect and cross-process, the processes and resulting publications should conform to laws like the General Data Protection Regulation. --Psychoslave (talk) 12:17, 11 August 2018 (UTC)Reply
Hi @Psychoslave:. In saying this tool is "for organizers only," I was trying to respond to input I've had from many organizers, who want to keep event participants focused on the wiki rather than on a separate tool. So it was more an expression of a design principle for the project. Sorry if the remark wasn't clear. You're right, though, to observe that there are many privacy issues we'll need to work out if we build everything described. —JMatazzoni (WMF) (talk) 23:04, 13 August 2018 (UTC)Reply
Better advertising about the event
Currently, we use Wikipedia:Geonotice to advertise upcoming events based on user locations. There are several drawbacks to this approach. 1) Not a lot of event organizers know that they can file a request to post a notice on this page. 2) It's a low-volume page that requires admin intervention to post (as it involves editing MediaWiki:Gadget-geonotice-list.js), and quite often admins don't see the request until after the event is over. 3) The event is only posted to the watchlist, which means it only reaches those who are already registered. If you're developing a tool, this feature should be made compatible such that notices can be posted site-wide to anyone viewing within the specified geographic range. OhanaUnitedTalk page 05:30, 18 August 2018 (UTC)Reply
Thanks for this idea OhanaUnitedTalk page. As you say, Geonotices can be targeted fairly precisely. And among editors, the Watchlist is a widely viewed page. If you're aiming at readers, do you have an idea for where a notice might appear? (As you probably know, Central Notice puts banners on article pages, but it can be targeted only at the country level, which seems too broad for in-person events. Making it more granular would be a difficult project, I'm told.) JMatazzoni (WMF) (talk) 18:02, 23 August 2018 (UTC)Reply
If you want to reach users who already use watchlist, then they are probably proficient enough to know how to edit. The goal of outreach events is aimed at reaching and recruiting new editors. As for Geonotice, we enter latitude and longitude coordinates and users within this box of coordinates will see it. Why can't central notice do the same? OhanaUnitedTalk page 00:22, 24 August 2018 (UTC)Reply
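The Geonotice targeting described above boils down to a point-in-bounding-box test on the reader's coordinates. A minimal sketch of that check (illustrative only; the function name is hypothetical, and the real gadget's logic differs):

```python
def in_geonotice_box(lat, lon, corner1, corner2):
    """Return True if (lat, lon) falls inside the box whose opposite
    corners are corner1 and corner2, each a (lat, lon) pair."""
    lat_min, lat_max = sorted((corner1[0], corner2[0]))
    lon_min, lon_max = sorted((corner1[1], corner2[1]))
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
```

Note that this naive version does not handle boxes that cross the 180° meridian, which a production implementation would need to consider.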

Step 2: Participant sign-up edit

If we offer only a standard signup form (at first), would that work for you? What would a standard form need to include? edit

  • As a result, the organiser should be able to know:
    • how many people intend to come
    • a point of contact for each interested person
  • Nice to have:
    • pointing people to relevant documentation about the event, for example specific instructions on what participants can prepare in order to get the most out of the event (e.g., creating a Wikipedia account before the event, if they wish)
    • specific questions attendees might have

Psychoslave (talk) 12:56, 11 August 2018 (UTC)Reply

Thanks Psychoslave. Including basic event info on the signup page seems workable. Just to make sure I understand: you'd like to do that because people might be coming to the signup page from Facebook or a Tweet or some other abbreviated notice? In other words, you would be sending people right to the signup page, instead of to the Event page (which would include a link to signup)?
Name, phone number (if possible), email, what questions people hope to get answered at the event, and how much they know about Wikipedia.--Reem Al-Kashif (talk) 20:58, 11 August 2018 (UTC)Reply
I need nothing except a sign-up button and a shortcut URL. The event is the attraction, not the event page. I want participants off the event page and engaged in the event as soon as possible. Blue Rasberry (talk) 13:35, 12 August 2018 (UTC)Reply
I would love to have a simple standard signup form. I would like to have the nickname/name, a contact (talk page is fine, as long as they have it, if not something else), the organization they work for (if it is GLAM event), maybe the level of knowledge of Wiki projects.
Speaking of participants, it would be nice to be able to flag whether they were actually present at the event or just registered and never showed up, and to have a way to monitor their activity after the event (one of the main questions about these events is whether they actually help engage people in the medium term). --Marta Arosio (WMIT) (talk) 08:58, 13 August 2018 (UTC)Reply
Definitely need name, email address, username (a prompt to create a Wikipedia account would be lovely) Lirazelf (talk) 21:15, 16 August 2018 (UTC)Reply
Name, username, email. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply
No, it's just another piece of paper to keep track of. I personally do not want the responsibility of handling personally identifying information, signing non-disclosure agreements, etc. Avery Jensen (talk) 07:35, 21 August 2018 (UTC)Reply
Hi @Avery Jensen:. Just to be clear, the signup form concept is for an online form, not a piece of paper. Though you are correct that there would likely be implications for privacy policy that we'd have to work out. JMatazzoni (WMF) (talk) 17:54, 21 August 2018 (UTC)Reply
Whether it's physical paperwork or virtual paperwork, IMO it's another barrier to entry if you try to engage someone by having them fill out a form. Ideally they should be able to walk in, be welcomed with a cup of coffee, and start editing. This is the educational mission, the metrics should be as unobtrusive as possible. Avery Jensen (talk) 20:22, 22 August 2018 (UTC)Reply
Ability to allow signing up for different groups.
Story: there is an event spanning a couple of days, with different sessions/activities in the morning and evening. People may attend one or more sessions, so a single user would sign up for one or more groups. Moreover, the number of slots in each group/room may be different.
(Yes, I'm afraid this makes things more complex, since now it's not just events and users, but also a third entity between them.)
Platonides (talk) 22:34, 22 August 2018 (UTC)Reply

If we can offer a tool that lets you select from a predefined set of signup form options, what would you need it to offer? edit

  • e.g., questions about gender or age, a requirement to approve a safe-space policy...?
  • Be selective: the simpler this tool is, the more likely it is to get built.
I would like to have questions about gender, age, level of education, the kind of organization they work in, level of knowledge of wiki projects, and where they live (I'm thinking about online events as well). All answers can be optional, in order not to scare people away. --Marta Arosio (WMIT) (talk) 08:58, 13 August 2018 (UTC)Reply
Thanks Marta Arosio (WMIT). You mention where they live. Our security/privacy policies are very restrictive about what we can know about people. Can you say more about your purpose in asking this? How precise do you need to be? Would zip code or city and state/province/etc. be good enough? JMatazzoni (WMF) (talk) 01:08, 23 August 2018 (UTC)Reply
I think that zip code, province or city would be great. I think it is important, especially if the present event is an online one, because if you know where people live you can call them for local events. --Marta Arosio (WMIT) (talk) 07:25, 3 September 2018 (UTC)Reply
Name, date of birth, gender, knowledge of wiki projects (could be good to reflect on after the event / to tailor training accordingly), twitter or other social media handle, telephone number, email address, preferred method of contact Lirazelf (talk) 21:17, 16 August 2018 (UTC)Reply
Gender, age (could be 0-20, 21-30, 31-40, etc., for example), level of knowledge of Wikimedia projects, how did they hear about the event. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply

In particular, should a checkbox for email-list opt in be standard or optional? edit

Standard! --Marta Arosio (WMIT) (talk) 08:58, 13 August 2018 (UTC)Reply
Standard! If you need to change any previously advertised arrangements ("we will be in room 405 instead of 306"), you need a way to communicate efficiently with those who may have signed up with the old information. You need a way to remind them a day or so in advance that the event is on, and to remind them what to bring, etc. Also, after the event, most newbies won't know how to use "Talk" or won't feel comfortable using it. They prefer email and other more familiar means of communication; plus, most of them forget their password, so the account needs to have an email address so they can do a password reset. Kerry Raymond (talk) 07:33, 14 August 2018 (UTC)Reply
Standard, please Lirazelf (talk) 21:18, 16 August 2018 (UTC)Reply
Standard! --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply
No, if I want someone to have my email I will give it to them. We should not be exposing new users, or for that matter old users, to potential bad actors, no matter how small the risk. Avery Jensen (talk) 08:04, 21 August 2018 (UTC)Reply
Hi @Avery Jensen:. It's possible there is a misunderstanding here, so just to clarify: this would give participants the ability to request to join the organizer's email list. Users wouldn't see each other's email addresses. I.e., not everything in the signup form will go to the Event page. Does that make sense? JMatazzoni (WMF) (talk) 17:43, 21 August 2018 (UTC)Reply
You are talking about creating a new email list on something like MailChimp and connecting everyone to it automatically? I'm assuming this would not be for one of the public Wikimedia mailing lists, or for linking people via the email function attached to their user name, assuming they have it enabled. There is a sign up form here in the sidebar, but I don't know if it's functional or what I would be signing up for. This may have been part of Wikiproject X. You may be looking at different types of organizers, a large established user group with paid employees or a paid Wikipedian-in-residence may have one type of needs, a librarian organizing an event at their local library for the first time may have different needs. Avery Jensen (talk) 19:37, 22 August 2018 (UTC)Reply
Hi Avery Jensen. To answer your question, no, we're not talking about creating a mailing-list system. The idea is this: in talking to people, it's clear many organizers are using facilities like Meetup or Event Brite to sign up participants. As part of that process, the participants supply email addresses that the organizers can download, but the organizers have no ability to ask whether the participant would like to opt in for future emails ("Would you like to hear about our future events?"). So I was imagining a signup system that would include such an opt-in for participants. The organizer could then download emails from the system and put them in whatever email system the organizer is using. It's something many people want. But to be honest, any talk about collecting and especially sharing emails raises all types of issues for us about security and privacy. In particular, my understanding is that organizers are often obligated to share participant information with partner organizations who host events—typically GLAMs, universities, etc—which raises a whole other set of issues. So in the end this may not be a realistic feature idea. Does that make sense, and what do you think? JMatazzoni (WMF) (talk) 04:23, 23 August 2018 (UTC)Reply
I'm on a few mailing lists but at this point I don't remember how I got on them, possibly through Eventbrite. IME the people who sign up for an event on Meetup are not the same people who attend. I don't use Meetup any more, it is too public. I will decide whether I am going at the last moment and take a chance on there being any coffee left when I get there. More and more people are keeping their names out of the public and prefer face to face communication. If an RSVP is needed usually it is done through Eventbrite. Maybe I would use a mailing list option if I had one and knew how to use it, and maybe it would even encourage me to try more events, but my impression is that these tools are meant for the larger scale or professional organizers who already have access to such things through their institutions. Avery Jensen (talk) 03:53, 26 August 2018 (UTC)Reply
Yes, we probably need a way to ensure we have the right to make contact in relation to future events, as this is probably a requirement of privacy legislation in many countries. In Australia (at least) you always have the right to contact someone about an existing "business relationship", so if someone signs up to an event, we have the right to contact them about that event using whatever contact details were collected, but not necessarily for a following event. Having said that, you would probably get away with contacting them about a subsequent event, so long as you immediately removed them from your list at that point if they complained. I note that there are practical problems with actually "removing" people from lists, as they may re-enter the list via other means and get contacted all over again, so it is often more a case of retaining their details with a big "DON'T CONTACT" attached in order to prevent a second record for that same person from being created. It is almost impossible to deal with DO NOT CONTACT requests where you have different contact details for the same person from different interactions (e.g. two different email addresses), as you simply can't tell whether these are the same person or not. It is ironic that in order to NOT CONTACT a person you actually need them to provide you with all their contact information, so you can identify them in the future as the person requesting DO NOT CONTACT. (I don't think the legislators of privacy laws ever fully thought this stuff through.) Kerry Raymond (talk) 00:55, 24 August 2018 (UTC)Reply
Again, privacy legislation may vary around the world, but it would probably be best if any tool used to sign up people (or otherwise collect their contact details) makes clear that the information may be shared with other organisations involved in "providing the service", which in our case is "organising the event" or whatever. In some events, there is building security to be considered -- the security staff must know who is attending in order to allow them into the building. Similarly, if you arrange parking, you may have to share some details with the parking people (e.g. names or registration details). If staff of an organisation are attending an event organised by their organisation or at the request of their organisation, then I think the organisation probably has a right to know which of their staff are registered to attend and which actually did attend. However, if an organisation hosts an event, I don't think they have a right to know of their non-staff participants beyond the information they strictly need for "providing a service" (e.g. security), but they probably do have a right to some aggregated statistics (how many came, what metrics for the group as a whole) if they are a partner in organising the event. I mostly do events in rooms provided to me for free by a library because they see the Wikipedia activity as being aligned with their organisational goals (either as a "knowledge" activity or as a free event to which their community is invited -- different libraries may have different goals for the use of their meeting rooms), so I see the library in that instance as a partner in the event. Whereas if I just rented a space on a commercial basis, I would not see them as a partner and would not think they had any right to know what took place, who attended, etc. (other than for "providing the service", e.g. security). 
I would not share aggregated metrics with a commercial venue provider (and I doubt they would want them in any case) but I would with a "partner" venue, to help justify their decision to give me a free venue in this instance and hopefully again in the future. A variation on the privacy problem is that I do encounter people who are uncomfortable that every edit they do on Wikipedia is recorded and is visible to anyone (this isn't just at events; organic newbie editors are somewhat surprised that I can see all their contributions). To counter this at events, I usually point out that they do not need to use a real name as a user name and they don't generally need to link their real name to their user name for the purposes of the event. I've occasionally had push-back saying that they don't choose to be recorded like that, to which I say "well, that's the way it is, don't contribute if you aren't comfortable". Nobody has ever actually walked away at that point, but I think you do have to make clear what is non-negotiable. I would tell someone to edit as an IP if that was a solution (they usually are not using an IP address that links directly back to the individual) but I've never needed to. Another variant of this is the paid editor policy requiring disclosure of the employer either on the user page, talk page or edit summary. And often programs/events have employees present during their normal working hours, or as part of their duties, or with an intention to edit something related to their employment. Obviously naming their employer is a big disclosure if they are sensitive about privacy, and I often have to manage concerns about this. I explain that paid editing disclosure is part of the Terms of Service and therefore isn't negotiable. If they don't like it, they should leave. Again nobody has actually left, but I do get questions like "well, what if I want to contribute on my own time, nothing to do with my job, later on? 
Do I have to leave that employer information on my user page? Can I create a different account?" etc. As I mention elsewhere about measuring editor retention, I strongly suspect that if these individuals desire to contribute after the event, they will almost certainly create new accounts without any paid-editing disclosure and will probably use that account even if they are contributing "on the job" or with a COI. People do not like the paid editing disclosure. There is no easy way to deal with the paid editor disclosure. Maybe we should put it up front in promoting the event, but I fear that would put people off coming. I tend to take the view that nobody really stays away from attending the event because of it, but of course it could happen. Kerry Raymond (talk) 01:45, 24 August 2018 (UTC)Reply
One of the advantages of these events is that even if someone has a COI, they will probably meet an experienced Wikipedian at the event who does *not* have a COI and will probably be willing to edit their organization's article without being asked. It is a two-way street, as some experienced Wikipedians with concerns about harassment will no longer edit except at events. Avery Jensen (talk) 03:59, 26 August 2018 (UTC)Reply

Other comments or suggestions for Step 2? edit

Step 3: Wiki account creation edit

What do you think about the Proposed solutions/feature ideas? edit

  • The issue here is that the wiki blocks more than 6 account registrations from one IP address in one day. All available pathways to address this have high failure rates. This is a devastating problem: when a group of people organize an event, with sponsors, and recruit a lot of people to be in the room, and wiki administration then prohibits account registration for the participants, the result is devastating. Having an on-wiki admin shut down an event, when they themselves do not run events, is demoralizing, wastes huge amounts of money, greatly disrupts the relationship with the sponsor, and gives new users a horrible experience: they come to a wiki event and cannot participate.
The horrible tension remains in place! There is no wiki-legal process for organizing events in the way that typical event organizers would find natural and workable! The current process in place requires the breaking of many wiki rules! There are multiple rules sets in place and they conflict with each other. The process either needs mediation or a tool.
There is a desired tool. It is discussed in en:Wikipedia:Event coordinator and elsewhere in the discussion archives. The takeaway is that instead of granting an advanced userright to a trusted user to act as "event coordinator", there could be a bot on an event page which creates accounts for new users. The current process is that new users at events get their wiki accounts from someone with a userright. The proposed process is that no human gets an advanced userright; instead, the new users who register on an event page get publicly logged as part of a cohort and are easy to observe, since the event page publicly reports who they are. Also, their user account registration should log them as being part of a program or event, and the program itself should have a record of the organizer in charge who can take the blame. Blue Rasberry (talk) 13:48, 12 August 2018 (UTC)Reply
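For readers unfamiliar with the limit being discussed: it is essentially a per-IP counter over a daily window. A minimal sketch of that behaviour (illustrative only; all names are hypothetical, and this is not MediaWiki's actual implementation):

```python
import time

THROTTLE_COUNT = 6       # max registrations per IP per window
THROTTLE_WINDOW = 86400  # one day, in seconds

_registrations = {}      # ip -> list of registration timestamps

def can_register(ip, now=None):
    """Return True if this IP is still under the daily limit."""
    now = time.time() if now is None else now
    recent = [t for t in _registrations.get(ip, []) if now - t < THROTTLE_WINDOW]
    _registrations[ip] = recent
    return len(recent) < THROTTLE_COUNT

def register(ip, now=None):
    """Record a registration, or refuse it if the IP hit the limit."""
    now = time.time() if now is None else now
    if not can_register(ip, now):
        return False     # e.g. the 7th signup from a venue's shared IP
    _registrations.setdefault(ip, []).append(now)
    return True
```

This is exactly the failure mode described above: a room full of participants behind one shared venue IP exhausts the counter after the sixth account.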
I think that all three proposals are great, and that they would really help the organization. --Marta Arosio (WMIT) (talk) 09:12, 13 August 2018 (UTC)Reply
I agree with Blue Rasberry's comments above. Account creation is one of the biggest timewasters in my (admittedly small) experience with events. The new "event planner" user right helps, so I don't have to request the account-creation user right each time, but I still need an extra person to help with account creation, and participants end up logging in twice (once through my account creator special page and once back on their own computer) with our current setup. Rachel Helps (BYU) (talk) 17:53, 13 August 2018 (UTC)Reply
At the moment, I ask people to sign up in advance, which means a lot will come having already done that. I tell people to sign up on their phones having turned off their Wifi (which means they aren't all coming in on the same IP address), which deals with the more tech-savvy in the room (not everyone knows how to turn off their Wifi, it seems). Some can also sign up on other-language Wikipedias (if they are multilingual) or on Simple English Wikipedia or Commons or other sister projects (if monolingually English-speaking). And I mop up the rest with my account creator right. OK, this stuff works, but I would still like it to be easier. One of the things you can do is pre-register your event with its IP address, and you can make the 6-account restriction go away. But in practice, I am rarely in a venue that I know well; usually it is organised by a partner organisation, and the individual I deal with will not know the IP address, and my request to find out will disappear into their Byzantine labyrinth of tech support, who won't understand why I need to know ("just turn on your Wifi and choose OurPublicWifi" is the kind of unhelpful response I get). So, that method is a loser. But why can't I pre-register the event and be given a password, and then, on the day, within the time frames pre-registered for the event, I invoke something (let's call it GameOn), supply the event name and password, and the IP address I am using at that point in time becomes free of the limit for the duration of the pre-registered event? Why can't that work? The only difference from the existing mechanism is that the IP address is only known at event time, not in advance. Kerry Raymond (talk) 07:52, 14 August 2018 (UTC)Reply
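Kerry's GameOn proposal above could be sketched roughly as follows (all names and details are hypothetical; this illustrates the proposed flow, not an existing feature):

```python
import time

# Pre-registered events: name -> {"password": ..., "start": ..., "end": ...}
events = {}

# IPs currently exempt from the account-creation limit: ip -> expiry time
exempt_ips = {}

def game_on(event_name, password, ip, now=None):
    """Organizer invokes this at the venue; if the event and password
    check out and we are inside the pre-registered window, the current
    IP is freed from the limit until the event ends."""
    now = time.time() if now is None else now
    ev = events.get(event_name)
    if ev is None or ev["password"] != password:
        return False                     # unknown event or wrong password
    if not (ev["start"] <= now <= ev["end"]):
        return False                     # outside the pre-registered window
    exempt_ips[ip] = ev["end"]           # lift the limit until the event ends
    return True

def is_exempt(ip, now=None):
    """Would the account-creation throttle skip this IP right now?"""
    now = time.time() if now is None else now
    return exempt_ips.get(ip, 0) > now
```

The point of the design is exactly what Kerry notes: the IP address never has to be known in advance, only the event window and a shared secret.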
That's an interesting idea @Kerry Raymond:. I will have the team look into it. It's clear this is an area that is not working as it should. Thanks. JMatazzoni (WMF) (talk) 18:47, 14 August 2018 (UTC)Reply
Kerry Raymond and Blue Rasberry, I've created a task on Phabricator to investigate the various ideas that have been proposed for easing the account-creation bottleneck. That way we can assign an engineer to evaluate feasibility and make a recommendation. I've copied your suggestions to that page and subscribed you both—I hope that's OK. Anyone else with ideas about how to solve this problem is welcome to join the discussion. JMatazzoni (WMF) (talk) 19:14, 24 August 2018 (UTC)Reply
The three proposals seem good to me. I would also add, in the creation of the account: if they don't include a username, the registration is not complete. In this way we guarantee that the people registered for the event come with an account already created, saving a lot of time. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply
Hi Rodelar. Requiring wiki registration for signup would definitely decrease the problem, though some organizers have told me they get a lot of drop-ins on the day. Still, that was the inspiration for the proposal to make registration more seamless as part of signup. It doesn't solve the problem, but may reduce it. In your experience, how many attendees do you typically have to register on the day of the event? JMatazzoni (WMF) (talk) 19:16, 23 August 2018 (UTC)Reply
Hi JMatazzoni (WMF). Normally, very few of them (or even none) since we work hard previously to get all the attendees to have the account created (through the form). This allows us to start the activity without the delays caused by the creation of accounts. --Rodelar (talk) 19:23, 23 August 2018 (UTC)Reply
Users are routinely asked to create their accounts before coming to an event, and most of them do. But once they have an account they still have to be autoconfirmed with the 4 days and 10 edits, or you have to make sure there is someone who can give them the autoconfirmed status who will be at the event. The "event planner" proposal looks impossible to navigate; if that's the best they can do I would prefer to just invite established users and forget about anyone who needs training. Either that or the person who sets up the event needs to be able to flip a switch in the dashboard to give attendees autoconfirmed status. Another possibility is to have new users complete a training course that would give them the option of getting autoconfirmed as the result of completing the course successfully. This would free up the trainers considerably, for something that should be a routine task. There were a couple of self-paced courses in the "Wikipedia + Libraries" online course that were very quick and taught the basics of notability and WP:RS. It would be great if all newbies would go through that as a matter of principle, before they write their first article. Avery Jensen (talk) 08:32, 21 August 2018 (UTC)Reply
Thanks for your comments Avery Jensen. I hadn't even thought about the problem of getting everyone Autoconfirmed! I've added it to the problem statement on the main page. The right to grant Autoconfirmed is more widespread than the Event Creator right. But I'd like to know: how big a problem is this—for you and anyone else? How many users do you typically have to autoconfirm? What is the process like? Is it cumbersome? JMatazzoni (WMF) (talk) 19:41, 23 August 2018 (UTC)Reply
Although as a course coordinator I have the ability to confirm users, I don't usually use it. Firstly, for training-type events, I try to set up activities that make edits to existing articles (sometimes by creating a stub in advance) so new users don't create new articles. New users writing new articles is a world of pain in so many ways. However, sometimes I am involved in supporting editathons (e.g. Art + Feminism) where the explicit goal is to create new articles (and I am not normally organising the event, merely being the only experienced Wikipedian in the room). In these situations, I let them go into Articles for Creation and then I work like fury to try to ensure a couple of decent citations are present in each one (whether done by the participant or by me) and then I move them into article space. My aim is to get everything into article space by the end of the event (or soon after). Where possible, I try to ask the organisers to draw up a list of topics from some source that makes notability highly likely (e.g. articles in offline encyclopedias, winners of recent prizes/awards, etc.). This maximises the chances of the topics being notable. However, I do find myself supporting events whose organisers are unable, or too lazy, to come up with a list of topics, and who say "oh, we're going to let people choose their own topic, it's better that way", and then I have to desperately try to find citations for topics of dubious notability, often with obvious COI involved and/or copyvio ("but my university won't mind if I copy something from their website"), and often I am left with a real quandary about whether to move the more problematic articles out of draft at the end. Clearly the newbie and the organisers want this to happen very much, but equally I'm a Wikipedian and I really resent being expected to support an event that runs so contrary to my requests. 
All of this is by way of saying you should not confirm people unless you have confidence that they are likely to create new articles on notable topics that don't breach the most important of our rules (e.g. copyvio), and my experience with the untrained newbie in a typical editathon situation is that you cannot have that confidence. Even when the list of topics is drawn up in advance, you just cannot be sure that the newbie will bother to add even a single citation. Too often the newbie thinks "oh, I'll write the article and then I will add the citations" (note the mental model that the article and citation are not an integral whole but two separate things) and then doesn't get finished with the writing part, leaving the article without citations (and probably with some copyvio as well). Better not to confirm them, so their articles stay in draft and you have some control over when (if ever) they move to mainspace (I rely on the fact that newbies never figure out what "move" does even when they have reached autoconfirmed status). Only Unix users will ever intuitively figure out what move does (I am often surprised by the number of well-beyond-newbie editors who don't know what move does, which is a consequence of it not being called something more obvious like "rename"). Kerry Raymond (talk) 22:21, 23 August 2018 (UTC)Reply


It wasn't a problem for us because we had already found out about WP:ACTRIAL and didn't invite anyone who wasn't already autoconfirmed. Our only announcement was on the WMDC website. Only an admin can give autoconfirm status without the wait, but they may not tell people they can do it.
Just a note on Kerry's comment, the last time I checked, Articles for Creation has a backlog of about 3,000 articles, plus the drafts are deleted after 6 months or possibly earlier depending on circumstances, so most of us recommend a user title rather than a draft title, i.e. User:Username/Foo rather than Draft:Foo. After the 4 days, the newbies can move it out to Foo, but the few times I have checked, they never made another edit after the day of the event, and as Kerry points out, there is no one to show them how to move an article and they are unlikely to discover it on their own. Before ACTRIAL there would always be one or two newbies who wanted to create articles, but the experienced editors would be keeping an eye on things from various social media and help get the articles to the point where they were stable enough to prevent them from getting deleted. Quite a few articles have been created by newbies who were topic experts but were creating articles by the seat of the pants. Avery Jensen (talk) 03:15, 26 August 2018 (UTC)Reply
Another detail, I have just heard some feedback from someone with account creator permission that it does not work if a location is blocked, has anyone checked the autoconfirm permission to see if it does the same thing? This might be a consideration for someone who is holding an event at a library or other location where vandalism might have been a problem in the past. Not sure who is still following this, pinging JMatazzoni (WMF), Sage (Wiki Ed). Avery Jensen (talk) 22:40, 14 October 2018 (UTC)Reply
Avery Jensen: For event organizers who have account creator rights on English Wikipedia, the P&E Dashboard account creation feature will work even if the location of the event is blocked; the account creation actions happen from the IP of the dashboard itself. Of course, if it's a hard block or requires autoconfirm to get past the block, then the new accounts will still be affected by the block.--Sage (Wiki Ed) (talk) 19:28, 15 October 2018 (UTC)Reply

Other comments or suggestions for Step 3? edit

Step 4: Participants check-in edit

What do you think about the Proposed solutions/feature ideas? edit

  • People at the event should be able to add themselves to this roster (or, obviously, before they come), if they wish to. That way organizers just have to give out the URL at the start of the event, and those who want to can fill in the corresponding form, which is far easier and more efficient. Organizers might have a different form to enter lists and, more importantly, to give a separate count of how many people joined the event, whether exact or approximate. --Psychoslave (talk) 13:10, 11 August 2018 (UTC)Reply
  • The problem with in-person check-in is the account limit discussed in step #3: there can be a 6-person limit for account registration. For events with fewer than 30 people, it is easy for organizers to count the number of registered accounts and talk in person to participants to confirm that they have registered. If the problem in #3 can be solved so that all participants can get registered, then day-of registration is easy. Blue Rasberry (talk) 13:39, 12 August 2018 (UTC)Reply
  • Class Roster, with both the chance to check if previously registered people did actually show up and to enroll new people, sounds fine. It would be nice if there were a chance to monitor all the contributions of the attendees over a certain period (the time of the event, but also the days and weeks after, especially for newbies, in order to check whether they actually stayed committed). --Marta Arosio (WMIT) (talk) 09:18, 13 August 2018 (UTC)Reply
  • I'm not sure if it's required in the US, but I like to get consent before tracking participants through their username. It takes a long time for participants to type in a specific URL, so I usually use a paper sign-up (it also lets me put off the data entry until after the event, when I have more time to sit and type something). It would be ideal if participants could sign in through a link on their own talk page--maybe that's something I could change about my editing event workflow? I really don't know a good way to solve this problem. Rachel Helps (BYU) (talk) 18:01, 13 August 2018 (UTC)Reply
  • I generally explain that all contributions to Wikipedia are visible to everyone and are linked back to their account. I think a consent process might mislead people into thinking they can contribute in a private way that they cannot. In practice, if I find that I don't have a list of the user names, you can often figure out the user names if you know the articles being worked on (e.g. if you have supplied a list) or the category they are in. If they are employees of an organisation, you have to do the paid contribution declaration on the user page, so you can search for that. There are lots of ways for the cunning-as-a-rat event coordinator to collect the user names even if they aren't freely given. In practice most people happily give them to me so I am mainly concerned with tracking down ones written in undecipherable handwriting (e.g. my own) or because user names are case-sensitive. I don't tend to bother with arrival processes beyond login/sign-up to Wikipedia because usually the time for the event is quite short and I can't waste time with people filling in forms. Better to do any form filling as part of signing up in advance to the event, as my events don't have a need to maintain an attendance register (there's neither reward nor punishment for attending or not). Kerry Raymond (talk) 08:10, 14 August 2018 (UTC)Reply
Class Roster, with the chance to check who showed up and enroll new people if needed. --Rodelar (talk) 20:14, 17 August 2018 (UTC)Reply

Other comments or suggestions for Step 4? edit

API for getting official Grant Metrics stats for Dashboard events / campaigns edit

Hey User:TBolliger (WMF)! Following up from WMCON, and in the spirit of this wishlist request and the 'suite of tools' vision you have for event organizer support, I wanted to put adding an API for getting Grant Metrics stats back on your radar. That would allow me to build a feature into the Dashboard so that users can easily get the official Grant Metrics stats according to whatever the latest definitions of global metrics and such are, without having to do extra data management. Beyond improvements to the Dashboard itself, that would be the most helpful thing I can think of to better support the people who voted for this.--Sage (Wiki Ed) (talk) 21:51, 14 May 2018 (UTC)Reply

User:DannyH (WMF), User:JMatazzoni (WMF): I'm restoring this from the archive, since it may have fallen through the cracks with the handoff from Trevor. If a lot of the focus will be on extensions to Grant Metrics, then adding a way for the Dashboard to talk to Grant Metrics would be really helpful for the programs that have already gone pretty far down the road with developing their program processes around the Dashboard. (I'm thinking especially of Art+Feminism and Wikimedia NYC, which both now run a lot of edit-a-thons through the Dashboard, but could benefit from easy access to the evolving metrics capabilities of Grant Metrics.)
Joe, I'd also love to talk with you — perhaps after this round of consultation with editathon organizers — about what you learn around participant sign-up and account creation. As you know, the Dashboard has features for these things now, but we've mainly built them and iterated on them around the needs of the Art+Feminism workflow, and I'd like to find out where that breaks down for other organizers whom I've had less contact with.--Sage (Wiki Ed) (talk) 18:54, 10 August 2018 (UTC)Reply
Hi Sage (Wiki Ed). Thanks for reminding us about some way to get Dashboard data into Grant Metrics. That makes sense to me. And yes, let's get together. I'll email you. JMatazzoni (WMF) (talk) 22:43, 23 August 2018 (UTC)Reply
There seems to be a presumption that participants aren't interested in metrics. Some of my groups are very keen to see the Dashboard. They want to see their stats at the end of the session (but of course the Dashboard takes a while to update, which is always a disappointment to them). Obviously my ideal would be real-time (or very near real-time) stats, but more realistically, could we send out an email to the participants with a link to the Dashboard once it has been updated to capture the event/session? And, because that article-view figure just grows and grows and grows, follow up say once a month with "wow, 1.04M page views of the articles you updated during Event Name". It might be nice to put the dashboard links for the participants onto their User page. Kerry Raymond (talk) 08:29, 14 August 2018 (UTC)Reply
Hi @Kerry Raymond:. I love the idea of motivating participants with impact metrics and am capturing your remark here as a request for 1) metrics during the event, to build enthusiasm, and 2) a metrics report as a follow-up, to build engagement ("The article you created has been viewed 10,000 times!"). I did hear pretty clearly from organizers who use the Dashboard that they would prefer not to send users to a separate site. So that is a design principle of this project. But what you suggest could be achieved in a number of ways, I believe. E.g., if Grant Metrics produces real-time reports (or reports the organizer can refresh at will), then organizers might project a report page on a screen during the event. I haven't looked into generating metrics reports for emails to users, but it seems feasible (though probably not as a phase-1 feature). Does that make sense to you? —JMatazzoni (WMF) (talk) 19:09, 15 August 2018 (UTC)Reply
I have not had any pushback about going to a different site to use the Dashboard. But the groups I am using it with are generally face-to-face with me and at my programs/events, I am quite well known to at least some people in the room, so I don't tend to have "trust" issues. Also I mostly use it with librarians, and I find librarians love metrics. If you ask them about their library, you will be hearing in no time about how they hold "8.4 million volumes on 153 miles of shelving", so they tend to love the Dashboard (and would love it even more if it was real-time).
I'd really like to see events within programs have league tables. 1lib1ref would be a good example, where there could be friendly competition among libraries or librarian teams. Or between countries/etc. But to make it work, you probably need to allow organisers to come up with a variety of comparable metrics. Obviously, all other things being equal, a larger team of people can out-perform a smaller team, so I think at a minimum you need to be able to have "average per person" metrics. I realise we won't get league tables in V1.0. But I think we really overlook the value of friendly competition in Wikimedia. Lending teams work very well for Kiva (the micro-lending organisation); the rivalry is quite intense between teams, which are based solely on self-identification, which might be by nationality, by hobbies, groups of friends, or even ideologies, with the two top teams being the Christians and the Atheists, who are definite rivals. Meanwhile my team Team Australia is 2nd in the "country" (aka "local area") league. (Click on these links and scroll down to see the metrics presented.) There is nothing like reading a message sent to your team that you've been overtaken in your specific league to make the team re-double its efforts to regain or improve its position. Kiva teams have been studied in the academic literature and, although the article itself is paywalled (boo! hiss!), nonetheless this abstract confirms what I have observed first hand about Kiva team competition. Like Wikipedia, there is nothing to "win" in Kiva by contributing more, but nonetheless teams have been a great success for Kiva and I'd love to see them for WikiProjects, 1Lib1Ref, Art+Feminism, etc. Kerry Raymond (talk) 02:52, 24 August 2018 (UTC)Reply
It would be nice to have some nicely formatted metrics you could copy/paste to a talk page. I just pasted the results of our first meetup and you can see how hard they are to understand at a glance: Numbers for the Arabic Wikipedia Day Edit-a-thon. When you paste them to the page they are in a column, but when you save them, they are on one line. We are just doing this for fun, we don't ask for money so we don't have to present any data, but people would find this very interesting if they could see the results. I'm still waiting for the results of the other event, for some reason it isn't instantaneous. Avery Jensen (talk) 08:59, 21 August 2018 (UTC)Reply
Hi Avery Jensen. Yes, some kind of wiki output of stats seems a very reasonable request. From which program are you copying those stats? JMatazzoni (WMF) (talk) 22:32, 23 August 2018 (UTC)Reply
As per the above link, Programs & Events Dashboard https://outreachdashboard.wmflabs.org/ Avery Jensen (talk) 02:30, 26 August 2018 (UTC)Reply

Comments from bluerasberry edit

There is a call for answers to questions in the section "Event tool feature ideas, August 2018" but I want to summarize a desired workflow here.

First, everything in the August update is what I want to see. All of it is correct and none of it is incorrect. I explain things in my own way here, but I think the progress is right.

I want to emphasize my agreement with @Sage (Wiki Ed): The correct development process is the one that anticipates current and future third-party development. The data needs for event reporting are diverse and different stakeholders will want different reports. There are endless variations on potential reports. Reports will differ for the WMF grants team, Wikimedia chapters, universities, GLAM institutes, STEM institutes, government organizations, and web traffic analysts.

What Sage and I both want is an early data-export function in the tool, through an API now or eventually, so that various dashboards can access key data. The most relevant dataset which Wikimedia projects are not exporting is "Wikimedia users in a cohort". Wikimedia does not export this because we do not even compile this list in Wikimedia projects. Meetup.com, Facebook, etc. have processes by means of which individuals can click a button to join a group. I want an on-wiki process for anyone to click to join a group. From that point, the dataset / list of who is in the group is available for export into subsequent custom processing, and from there to any platform's visualization.

Although I am asking for a "click to join event" button in Wikimedia projects which can export to outside projects, what I really want in the longer term is this kind of process:

1. build list

  • participant can "click to join" from a Wikimedia project page
  • organizer can add
  • alternative ways, like add any list, or who is in a wikiproject, or everyone who edited an article
  • human choice compiles list

2. import / export stage

  • make list from #1 available to 3rd party
  • also allow any 3rd party to input lists here to go into WMF computation

3. computation on list

  • typically only organizers will want to see this
  • wide variation in need
  • WMF wants user data, does not care about media topic
  • partners want topic data, do not care about users

4. report of computation

  • WMF grants report is the present urgency
  • orgs which fund wiki want a different report




Blue Rasberry (talk) 13:25, 12 August 2018 (UTC)Reply
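The four-stage pipeline above could be sketched minimally as follows. This is a hypothetical illustration of what a "cohort" dataset might look like, not an existing Wikimedia API; the class name, fields, and methods are all invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Cohort:
    """Hypothetical shape for the 'Wikimedia users in a cohort'
    dataset described above -- not an existing Wikimedia API."""
    name: str
    usernames: list = field(default_factory=list)

    # 1. build list: participants click to join, or an organizer adds them
    def add(self, username):
        if username not in self.usernames:
            self.usernames.append(username)

    # 2. import/export stage: hand the raw list to any 3rd party
    def export(self):
        return {"cohort": self.name, "users": list(self.usernames)}

# Stages 3 and 4 (computation and reporting) would consume the exported
# list; different stakeholders (WMF, chapters, GLAMs) run different reports.
event = Cohort("Edit-a-thon 2018")
event.add("Alice")
event.add("Alice")  # duplicate joins are ignored
event.add("Bob")
```

The point of separating stage 2 is that any platform, on-wiki or third-party, can consume the same exported list without caring how it was compiled.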

Thanks for this, Blue Rasberry. In terms of your column 2, import / export stage: Both Grant Metrics and Dashboard let you output .csv files with lists of usernames. Grant Metrics lets you copy a list of usernames out of a spreadsheet or whatever and drop them into your event-setup page. It validates the usernames against the wikis and does some other nice things, like getting rid of duplicate names and bullets. So in terms of both export and import, Grant Metrics seems to cover the use case you describe. Does that bulk copy and paste satisfy your needs in this regard? (I'm not sure if Dashboard has this type of bulk input—Sage (Wiki Ed)?) —JMatazzoni (WMF) (talk) 23:43, 23 August 2018 (UTC)Reply
JMatazzoni (WMF): Yes, the Dashboard lets you bulk paste in a list of usernames (and validates that the users exist).--Sage (Wiki Ed) (talk) 23:47, 23 August 2018 (UTC)Reply
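For readers curious what that kind of bulk-paste clean-up involves, a rough sketch follows. This is a hypothetical illustration of bullet-stripping and deduplication, not Grant Metrics' or the Dashboard's actual code; the real validation also checks each name against the wikis, which this sketch omits.

```python
def clean_username_list(raw_text):
    """Roughly normalize a pasted list of usernames: strip list
    bullets and whitespace, drop blank lines, and deduplicate
    while preserving order. (Hypothetical sketch only.)"""
    seen = set()
    cleaned = []
    for line in raw_text.splitlines():
        # remove surrounding whitespace and any leading bullet characters
        name = line.strip().lstrip("*•-# ").strip()
        if not name:
            continue
        # MediaWiki usernames are case-sensitive except the first letter,
        # which is always stored uppercased
        name = name[0].upper() + name[1:]
        if name not in seen:
            seen.add(name)
            cleaned.append(name)
    return cleaned
```

For example, pasting `"* Alice\n• alice\n\n- Bob"` would yield `["Alice", "Bob"]`, since the two spellings of Alice collapse to one entry.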

Comment from Avery edit

The most labor-intensive part of an event is not getting people signed in; it is developing the work lists. We spend an entire event working on lists to have for the next event. It took me a solid week to get this far teaching myself Lua modules: WikiProject_Arabic/Medieval_Arabic_women_poets. It still has too many languages in it, but I haven't figured out how to take out individual languages yet. And it keeps collapsing because it is based on Wikidata entries and we have poor coordination with that group. There is also the Listeria bot, which looks very promising, but again you have to take it apart to see how it works, and again this is time-consuming. This type of international collaboration is becoming more common. It would be nice to have some kind of pre-built module blocks you could just put together for each event, that would show the red links in the various language wikis, along with some basic information in your base language. Avery Jensen (talk) 09:21, 21 August 2018 (UTC)Reply

Hi @Avery Jensen:. This is interesting input but may not fit within the scope of the Event Tool. However, I'm pinging my colleague @AStinson (WMF):, who is working on a project that might help with the problem you describe. JMatazzoni (WMF) (talk) 17:51, 21 August 2018 (UTC)Reply
The work list idea from @Avery Jensen: rings a bell. There could be a work list generator based on different criteria: keywords, languages, article status (stub, GA, etc.). I might be wrong, but the Citation Hunt tool works a bit similarly, suggesting articles based on some criteria. A similar mechanism for generating a list of articles for an edit-a-thon could be worth exploring. African Hope (talk) 12:03, 25 August 2018 (UTC)Reply
In our first attempt at a work list, we had people interested in editing articles about chemistry and Arabic, so we looked for Arab chemists by trying to figure out what categories were being used for chemists in Arabic and English and looking for gaps. Unfortunately most of these articles have already been written in both languages, even when alchemists are factored in: List of Arab scientists. Maybe SPARQL would help here, but I don't want to try to teach myself SPARQL just to find out if it can do anything. Ariel is actually a genius in this area; her edit lists have multiple citations and a high chance of notability, but we got together on short notice and were not able to sync with her WMDC schedule. Our second list, for Arabic women poets, was more successful; as might be expected, there was a content gap, and I think it is a good habit for events to include something from an under-represented group.
If you are interested in Citation Hunt, Merrilee Proffitt has a nice description on her blog that was meant for #1lib1ref. Avery Jensen (talk) 04:34, 26 August 2018 (UTC)Reply
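For what it's worth, the content-gap search described above (topics covered on English Wikipedia but missing from Arabic) maps fairly directly onto a short Wikidata SPARQL query, using the standard sitelink pattern. The sketch below just builds the query string in Python; the occupation QID is a placeholder that would need to be looked up on Wikidata before submitting the query to https://query.wikidata.org.

```python
# Sketch of a Wikidata content-gap query for edit-a-thon work lists:
# items with an English Wikipedia article but no Arabic one.
# Q_PLACEHOLDER is NOT a real QID -- look up the occupation you want
# (e.g. "chemist") on Wikidata and substitute its QID here.
OCCUPATION_QID = "Q_PLACEHOLDER"

GAP_QUERY = f"""
SELECT ?item ?enTitle WHERE {{
  ?item wdt:P106 wd:{OCCUPATION_QID} .          # occupation = <your QID>
  ?enArticle schema:about ?item ;
             schema:isPartOf <https://en.wikipedia.org/> ;
             schema:name ?enTitle .
  FILTER NOT EXISTS {{                          # no Arabic article yet
    ?arArticle schema:about ?item ;
               schema:isPartOf <https://ar.wikipedia.org/> .
  }}
}}
LIMIT 200
"""
```

Swapping the two wiki URLs (or adding more `FILTER NOT EXISTS` blocks) would let the same query find gaps in either direction or across several languages at once.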
Hi, I believe the work list project you're referring to is a Google Summer of Code project that I mentored this year. The basic idea is to create work lists based on PetScan queries or manually added articles, and share a link to that work list that allows users to collaborate on it in real time by claiming individual articles and scoring their progress. The Phabricator ticket describing the project has a lot more details, and, though Google Summer of Code is over, I believe our student Megha is interested in continuing to develop the tool as a volunteer. I'd be very interested in your feedback on the direction we're going.
Also, about Citation Hunt: you can use it to browse snippets of articles completely at random, or only snippets of articles within a certain category on Wikipedia, by searching for and selecting a category in the input space that reads "Search for a topic". There is, however, no way to create custom lists of articles beyond that, which is why we're going for a separate tool. I'm the maintainer of Citation Hunt by the way, so if you have any questions or feedback on it, also let me know :) -- Surlycyborg (talk) 20:31, 28 August 2018 (UTC)Reply

On developing new outreach tools edit

I noticed that there is a notion in the movement that each problem requires a new tool to be written, then piloted for one year, only to be replaced a year later by something else. The Dashboard was just adopted by the WMF to replace the educational extension, and people went to great lengths to switch to it (without receiving ANY support from the WMF at all). And we did that happily, although it was not known who would maintain the Dashboard tool, due to a shortage of funds and/or developers at the WMF (one big exception is Sage, the original author of the Dashboard, who is always very kind to help even though it's not really his responsibility now). And in this situation, the community voted for improvements in the Dashboard. We were very happy that finally the Community Tech team might turn their attention to the tool and work on it a bit. Instead, the suggestion is to add one more tool to the list. While I agree that different tools can suit different types of users, I'd rather see improving existing tools (especially when this was voted for in the original community wishlist) than creating a new one. --Vojtěch Dostál (talk) 14:24, 22 August 2018 (UTC)Reply

I share the point that some of the existing tools are great and definitely need to be improved. As for the current Outreach Dashboard, I wish there were an easy way to track metrics from several wikis (fr.WP, fr.Wiktionary, en.Wikiquote, etc.). I may simply not know how to do that, but I am not able to do it for now. I am generally forced to use Quarry for cross-wiki stats. Any improvement regarding that would be superb. African Hope (talk) 12:08, 25 August 2018 (UTC)Reply
+1, yes I never thought of this but I know I have done edits on other language wikis that were not tracked, we also did Wikisource and Commons. This example was across several language wikis: wikidata:Wikidata:Europeana_Art_History_Challenge/Italy Avery Jensen (talk) 04:46, 26 August 2018 (UTC)Reply
African Hope and Avery Jensen ask for "an easy way to track metrics from several wikis (fr.WP, fr.Wiktionary, en.Wikiquote, etc)." It's a good suggestion. Right now, Grant Metrics can track across multiple Wikipedias as well as Commons and Wikidata. So, I think it could provide metrics for Avery's Europeana Art History Challenge, if I understand the event properly. The issue with tracking other types of projects is that concepts like "articles" and even "edits" aren't necessarily common. E.g., Wikisource has "works" and "authors", and their drives proofread works to get rid of errors. We'd like to offer metrics that are meaningful for the projects tracked. Does that make sense? Or would some metrics (and some irrelevant or confusing metrics) be better than none? And if we could add one type of project not currently tracked, what would people want? JMatazzoni (WMF) (talk) 18:59, 28 August 2018 (UTC)Reply
African Hope and Avery Jensen: it's not very intuitive at this point, but on P & E Dashboard you can track multiple wikis for one event by adding an assigned article for each additional wiki you want to track. For example, this one has edits tracked across 6 language Wikipedias plus Wikidata. Making a better interface for that, so that you can simply choose the wikis you want to track at the time you set it up, is something I'd like to do sometime in the coming year or so. Our current plan is to hire an engineer in January, which will give us a lot more capacity to fix the rough edges of P & E Dashboard.--Sage (Wiki Ed) (talk) 20:02, 29 August 2018 (UTC)Reply
Thanks for bringing this up, Vojtěch Dostál. Sage and the WikiEd Foundation, who built the Dashboard as you know, have been wonderful partners. And we understand that this transition may be inconvenient for some. But we believe that not continuing to develop the Dashboard as a Wikimedia tool is the right decision for a number of reasons.
The WikiEd dashboard was created for college professors who are incorporating Wikipedia editing into their courses. The students work on assigned articles over the course of a semester, and they're expected to raise the quality of their articles, for example from C-class to B-class. By contrast, the important use case for us is editathons, which are usually single-day events, or a series of scheduled single-day events. There isn't usually a stable group of people who continue over multiple meetings. And there is no expectation that people are going to raise the quality rating on the page they're working on.
So the Dashboard interface has lots of features that don't make sense for editathons—assigned articles, reviewing other people's articles, quality rankings, assessment tools.... These embellishments make the tool hard for some people to learn and use. In addition, WikiEd is continuing to develop the dashboard for their own use cases; the two versions of the Dashboard, theirs and ours, are connected, so coordinating development is a challenge, with changes that we make sometimes getting overwritten by changes WikiEd makes. And the program is also written in a language (Ruby) that is unfamiliar to many WMF developers.
You mention the idea that people who voted for the wishlist proposal were voting for Dashboard improvements. That some people may have felt this way is understandable, since the wish does mention the Dashboard a number of times. But in the discussions about the wish while it was being vetted and before voting began, our director of product management, Danny Horn, said unambiguously that “The Programs and Events Dashboard is dependent on the WikiEd Foundation, and the Community Tech team can't do any more work on developing that tool.” He went on to request that the proposers of the wish “recast your proposal in a way that expresses the goals that you want to reach, without involving WikiEd's dashboard.” Following that request, Bluerasberry, who proposed the wish, added language so that, as he says, “This proposal seeks support for the idea of generating automatic reports, regardless of the method used to generate them.” In April of this year, on the project page, our intention to build “granular, tactical tools that address different use cases than the dashboard currently serves” was further discussed.
In creating Grant Metrics and now the Event Tool our goal has been to keep the technology as simple as possible while serving the needs of grantees and event organizers. With your help, we can achieve that goal. Thanks for your understanding and please let us know what we can do to help Dashboard users such as yourself make the transition. JMatazzoni (WMF) (talk) 17:10, 28 August 2018 (UTC)Reply
JMatazzoni: I'd like to push back on the idea that editathons are significantly different use cases from what the Dashboard serves. It's true that there are some features that either don't make sense for editathons, or are relevant in different ways for editathons versus education program courses, but those are basically user interface design tweaks that are absolutely possible to accomplish. (They are also changes that would be almost exclusively done with JavaScript and CSS, rather than Ruby, as the entire event interface is done in the React JavaScript framework. The Dashboard already has a concept of different event types including Editathon, and the interface shows or hides some features based on the event type. Much of the work I have planned for improving the Dashboard for Editathons will focus on customizing the interface further along these lines based on event type.)
It's also not the case that Wiki Education is continuing to develop the Dashboard only for our own use cases; we're committed to making it as useful as possible to the global community, and developing it in ways that can serve the needs of both the Wiki Education Dashboard and Programs & Events Dashboard as much as possible (and to making improvements that are relevant only for Programs & Events Dashboard, in many cases). It's the same codebase, and we just have switches that enable some features on one server, and other features on the other server. We've worked closely with Art + Feminism organizers to improve the Dashboard for their editathons, and I'm very interested to work with other editathon organizers as well to fine-tune it, shave off the rough edges, hide the unhelpful features, and generally make it a joyful tool for people who organize a wide variety of events. I think the core of it is a great foundation for editathons, education program classes, and many things in between, and the task now is to put as much refinement into other use cases as we have for Wiki Education's classroom use case.
A big part of why I promoted this proposal to P&E Dashboard users, even though it was made clear that Community Tech would not be able to work on the Dashboard directly, is that I was hoping the approach of "granular, tactical tools" meant that we could coordinate this project with the Dashboard's current features and future plans, and build granular tools that (in some cases) the Dashboard could tap into. That includes, for example, making it possible to get Grant Metrics data via an API, so that Dashboard users could easily get those same metrics and benefit from the continued evolution of that tool without being forced to choose between one set of tools and another, or duplicating the work of setting up an event and inputting data. (In that vein, the Dashboard already taps into several other tools that CommTech is involved with, including the pageviews tool, the plagiarism checking API, and the pageassessments API.) We'll have a chance to talk about this when we have our planned chat soon, but in line with the spirit of this proposal I think there are a number of areas where granular tools for event organizers could be built without reinventing the wheel. I'd be very disappointed if this proposal went in a direction that is more like competition (and making organizers choose between two incompatible tools) than cooperation and building things in a way that as many people as possible can benefit from.--Sage (Wiki Ed) (talk) 21:03, 29 August 2018 (UTC)Reply
Hello all! I have been waiting to jump into the discussion, as I was a bit puzzled by the development of this proposal. I strongly believe we need to improve how we set up, manage and aggregate data from edit-a-thons, and I was very happy to talk with JMatazzoni (WMF), who I believe did a very good job in leading the discussion he and I had about my personal experience in organizing programs and events. My surprise with the development of the proposal is that --as far as I could tell-- my line of argument in the discussion with JMatazzoni (WMF) was that we should improve the Dashboard, not get rid of it. I am a heavy user of the Dashboard, and I strongly believe it does a good job --though improvements are necessary!
I understood from what JMatazzoni (WMF) said in our video meeting that there is a sense that the Dashboard is less used than it should be. This is definitely not my experience, as the Brazilian and Portuguese communities have relied on the Dashboard systematically for education and outreach activities. I was surprised to learn that we are an exception, and this is something we should definitely change.
Having said that:
It is not clear to me why the proposal for a tool for program and event organizers has evolved into a proposal for the development of a new tool rather than simply the improvement of the Dashboard. I hope someone can make the case for this new tool, as I strongly believe --from what I have read here and as an active program and event organizer-- that we would be better off improving the Dashboard.
It is not clear to me why --if it is the case that fewer organizers than expected are using the Dashboard-- we don't just launch a communications strategy to promote the use of the Dashboard. The challenge of getting people to adopt a new tool that nobody knows or uses for events and programs is very likely bigger than the challenge of promoting the Dashboard, which at least leading program and event organizers already use and are happy with (even if we believe it should be actively improved).
I hope this makes sense, and thanks a lot to JMatazzoni (WMF) for his work on this proposal. I am sorry for jumping in a bit late, but I am not fully sure how I can contribute to the specific discussion if I don't understand why the proposal has evolved the way it has. Hope you are all well, and greetings from Brazil!
--Joalpe (talk) 22:19, 29 August 2018 (UTC)
Hi Joalpe. I’m sorry if I gave that impression. The reason the Wikimedia Foundation is discontinuing work on the Dashboard is not because of a lack of users. (I probably mentioned that the other organizers I’d spoken with before you were not dashboard users, but I know that many others do use the dashboard.)
As I discuss more fully in my response above, the reasons for the WMF's decision have to do with the fact that the dashboard is more complex than many event organizers need. That complexity in itself makes the dashboard harder to work on, in addition to the fact that its technical-development environment is not an easy one for our engineers. And, overall, working on the same codebase across different organizations is a challenge. I hope that helps. JMatazzoni (WMF) (talk) 22:40, 4 September 2018 (UTC)

Thanks, everyone, for the comments. User:JMatazzoni (WMF), thank you for helping me understand your line of thought, but your answer is still a bit unclear. As far as I know, there is no transition from the Dashboard planned anywhere at all - I am hearing about it for the first time from you. In fact, you (WMF) have just told us to switch to the Dashboard (as we just did). Why would you not develop a tool that you have just told us to use? This is something I find hard to grasp, and I'd be happy if you could elaborate on it. --Vojtěch Dostál (talk) 19:03, 2 September 2018 (UTC)

Hi Vojtěch Dostál. Thanks for catching that: The word “transition” was a poor choice. The dashboard will continue to be an option for educators and event organizers. It’s likely some organizers may use both tools and some will use one or the other. We think that people should use the tool that best suits their needs. Meanwhile, as you can see on this page, we're eager to talk about the new tool and to find out what features—particularly for event metrics—organizers need. JMatazzoni (WMF) (talk) 22:44, 4 September 2018 (UTC)

Mobile device friendly

I've not seen this mentioned (I may have overlooked it), but it seems important for the Event Tool to function well on mobile devices. At a minimum this would include:

  • Set up for event organizers
  • Registration/sign up for participants (or organizers)
  • Live event metrics
  • Creating and viewing reports

All of these are critical for people who primarily use mobile devices. There is a growing group of people in our movement who primarily use mobile. And for events that are primarily on the go, like photo walks, museum crawls, etc., there are always participants taking part on mobile.

Maybe this is already planned. If so, that is good to know. :-) Sydney Poore/FloNight (talk) 00:49, 25 August 2018 (UTC)

Hi Sydney Poore/FloNight. I have to say I hadn't considered mobile, in part I think because this is a tool for organizers (aside from the signup page). Can you say more about this need? Do you know of event organizers who operate only on mobile? Are there editathons or other events that are conducted on mobile? Please provide links if you know of any. Thanks. JMatazzoni (WMF) (talk) 01:10, 28 August 2018 (UTC)

Dashboard

Hi, we had an issue with a person creating a dashboard during an event when there was already another one. We are concerned that some events might be duplicated without organizers being aware. Do you know if there is a possibility of activating notifications when a username is added to a dashboard? We've had problems in the past, too, with people setting up dashboards for one of our events and including it in a report without letting us know. Nattes à chat (talk) 07:48, 28 January 2020 (UTC)

@Nattes à chat: I think this tool is inactive and not in development. You probably want the Programs & Events Dashboard. Please clarify and perhaps move this question. Blue Rasberry (talk) 13:20, 29 January 2020 (UTC)

Tracking Wikiquote Contributions

Hi, we usually use the Hashtag Tracker tool and the Programs and Events Dashboard for our edit-a-thons. We wanted to try Event Metrics for our SheSaid Campaign 2023. I am not really familiar with this tool yet, but looking at the tracked projects, it only covers Wikipedia and Wiktionary. Is there a reason why the Wikiquote project is not yet included? Kunokuno (talk) 13:21, 12 October 2023 (UTC)

Return to "Community Tech/Event Metrics" page.