Talk:Wikimedia Foundation Annual Plan/2024-2025/Product & Technology OKRs

Questions


1. Volunteering on the Wikimedia projects should feel rewarding. We also think that the experience of online collaboration should be a major part of what keeps volunteers coming back. What does it take for volunteers to find editing rewarding, and to work better together to build trustworthy content?


2. The trustworthiness of our content is part of Wikimedia’s unique contribution to the world, and what keeps people coming to our platform and using our content. What can we build that will help grow trustworthy content more quickly, but still within the quality guardrails set by communities on each project?


3. To stay relevant and compete with other large online platforms, Wikimedia needs a new generation of consumers to feel connected to our content. How can we make our content easier to discover and interact with for readers and donors?

Just make it work. Interactive content, discoverability of the expensive tools we build and forget, sister project integration (instead of burying them) and unbreaking the things that are broken would be a good start. We defined this 8 years ago, and it is still pending. -Theklan (talk) 18:06, 29 February 2024 (UTC)
The silence is so loud. Theklan (talk) 19:17, 6 June 2024 (UTC)Reply


4. In an age where online abuse thrives, we need to make sure our communities, platform, and serving system are protected. We also face evolving compliance obligations, where global policymakers look to shape privacy, identity, and information sharing online. What improvements to our abuse fighting capabilities will help us address these challenges?


5. MediaWiki, the software platform and interfaces that allow Wikipedia to function, needs ongoing support for the next decade in order to provide creation, moderation, storage, discovery, and consumption of open, multilingual content at scale. What decisions and platform improvements can we make this year to ensure that MediaWiki is sustainable?

WE1 Contributor experience


Objective


Both experienced and new contributors rally together online to build a trustworthy encyclopedia, with more ease and less frustration.

The paragraph is so vague that it could mean one thing or its exact opposite. When you say "we will make improvements to critical workflows for experienced contributors, we will lower barriers to constructive contributions for newcomers, and we will invest in ways that volunteers can find and communicate with each other around common interests"... what do you mean? Are we going to build something modern, or just make some patches covering whatever the sentence means? -Theklan (talk) 17:59, 29 February 2024 (UTC)

What is most significant at this moment is to provide a technical platform that will make mobile editing easier. The infrastructure should also be improved for people with visual impairments, including an emphasis on an open-source speech synthesizer allowing blind people to read Wikipedia more easily. NikosLikomitros (talk) 02:00, 3 March 2024 (UTC)
Oh, this exists: https://www.mediawiki.org/wiki/Extension:Wikispeech. The possible improvements are also blocked for security reasons. Theklan (talk) 08:14, 3 March 2024 (UTC)
Hello @Theklan and @NikosLikomitros -- thanks for reading these draft objectives and for your comments and questions. I'm sorry for the delay in responding, but I think I can give some clearer answers now that the plans have been coming more into focus. In thinking about this contributor experience objective, one thing that has been important to me is making sure we do work that benefits multiple different types of contributors -- not just experienced contributors, or newcomers, but both of them. I know there are many other kinds and sub-types of contributors (movement organizers, technical contributors, etc), but these are two broad groups that I think are helpful to think about.
For the experienced contributors, my thinking is that there are many improvements that need to be made to existing tools. Through the Community Wishlist and through many discussions on wiki and elsewhere, we hear a lot about the bugs or shortcomings of the tools that experienced contributors are already using -- things like AbuseFilter, watchlist, notifications, etc. So in other words, rather than building whole new sets of tools for that group, we want to make sure that the things they use and do today (with great success) continue to work well and can be extended as needed. (I know that one of the obvious tools/workflows that is not currently working is Graphs, and I know we're all having a separate and longer conversation about the path forward for that.) But thinking about the overall concept -- that the need for experienced contributors is more about making sure existing things work than building new things -- does that sound right to you? Or do you see the need for us to build wholly new things for experienced contributors?
For the newcomers, we do think that new, modern experiences need to be built. New generations using the internet are largely doing it from mobile devices, and they expect the experiences they use to be simple and intuitive. But editing Wikipedia is overwhelming -- especially opening up the editor from a mobile device. We think that a powerful way to help new people to edit Wikipedia is by providing guardrails and guidance through the editing process. For newcomers who have a specific edit in mind, we have been building Edit Checks, and are talking about continuing to do so in the coming year. And for newcomers who are looking for tasks to do, we are planning to expand the types of structured tasks (also known as "suggested edits") that are available. Structured tasks break edits down into step-by-step workflows so that a newcomer can complete them one step at a time from their mobile device. So far, with the structured tasks we've built for the mobile apps and for the web, we've seen good outcomes for newcomers and healthy improvements to content. Those results are making us want to invest further. @NikosLikomitros, I think this is an important way we can make mobile editing easier -- by developing new kinds of editing workflows that work especially well on mobile. How are you thinking about it? What do you think would make mobile editing easier? MMiller (WMF) (talk) 19:04, 20 March 2024 (UTC)Reply
Thanks for the long answer, @MMiller (WMF); this is a bit better than the overall vagueness of the text we are discussing, as there are some ideas. In general terms, it makes sense to divide the work between different sectors and audiences, as we can't have a general one-size-fits-all solution for all our problems. But the dichotomy presented here seems a fallacy to me. The question is presented in a way that makes every answer support the proposal. Improving anything is always better than not improving it. I think there are few people who would answer that the WMF shouldn't improve something, as dismissing improvements is not a very logical standpoint.
That said, the dichotomy itself is also a problem. "Should we improve this or, instead, do that?" is the main problem the WMF faces when justifying what has been done: "hey, we can't improve everything, so we centered on this very particular issue". No, we shouldn't be discussing "this" OR "that"; we should be thinking about how to do "this" AND "that". Because we need both things. And this is also the problem the Wishlist had: turning the things that should be solved into a popularity contest, instead of just solving them, creates scarcity and obsolescence, and, at the end of the day, our projects die.
The problem presented here is also biased. The language used for discussing what we need is English. English Wikipedia users are also the ones with the most time and possibilities to discuss things on Meta or elsewhere. So we end up thinking that the problems the experienced users of English Wikipedia face are the problems themselves, and thus the problems to be solved. But there are experienced users elsewhere: there are experienced users on Commons, on Wikisource, on Dagbani Wikipedia, or working with libraries in non-dominant languages. There are experienced users whose main problem is not the wishlist or AbuseFilter. There are more Wikimedia projects without a module to show coordinates than projects that have one. Am I saying that we shouldn't improve the wishlist or AbuseFilter (or other moderation tools)? No, we should. But the WMF has been doing that for years, with small tweaks and patches every other month that will, indeed, make tasks easier for some experienced users, but that are not addressing the big picture; they improve a very small niche of tools that already work fairly well compared to most of our ecosystem and architecture. Think of a newly created Wiktionary or Wikipedia: they lack everything there.
I said this in the discussion section below (Talk:Wikimedia_Foundation_Annual_Plan/2024-2025/Goals/Infrastructure#WE2_Encyclopedic_content), but there is silence there too. We have plenty of tools that could make our (I include myself) work easier, but nearly all of them are impossible to find. The WMF creates tools for newbies and experienced users that can't be found. And every year, new tools are built with no integration, and they will eventually be lost at sea, like virtually every other tool created in the last decade. There's no vision for integrating them into the workflow and making them part of MediaWiki (some are on Toolserver, others on wmflabs, others are completely external). There's no way to find them. The Design Team made a menu called "Tools". Well, there are no tools there. There are other things, but no real tools. The WMF should focus on improving that, because that is what makes the experienced users' experience better. Think of a tools menu, organized by type, with the possibility of making some tools more prominent depending on the kind of thing we are doing. That's infrastructure for experienced users. And that's also extremely useful for newbies.
So, should we create something new? Not really. The new thing should just be ordering what we have, integrating it into our software, no longer making external tools that will be forgotten, and starting to think about a new way of showing them. The "new thing" is just a new way of thinking; we don't need to reinvent the wheel.
And, meanwhile, we should also solve issues that are extremely evident. Graphs is one of them. By the way, saying that the WMF doesn't have any plan to solve that in the future is not having a conversation. There are no plans to fix this, and there are no plans to fix the PDF book creator (broken by the WMF five years ago). There are no plans to solve any of the issues Wikisource is having with visual editing. There are no plans to improve Commons apart from the tweaks (again) to the Upload Wizard. There are no plans to make Wikiversity a place where teachers can create lessons with everything needed for that (fun fact: H5P is compatible with every teaching platform except... I'll let you guess which one). There are no plans to make the translation tool solve issues without going into wikicode. There are no plans for interactive content. Well... in general, there are no plans. Only tweaks here and there.
The second block here is about newbies. We are not going to get new editors only because we suggest that they add a link. Just think about this question: what happens the moment you create an account? The answer is terrible: nothing. There's not even a guided tour of what you can do. The Growth experiments done with suggested edits disappear once you create your user page. This is contrary to every UX guideline in the world. I don't even have them, because I created my user page when that was the regular thing to do. Gamification is, in general terms, good, but this is not how we are going to recruit new participants. The learning curve has been going down as our platforms evolve, but our strategy for new Wikimedians is still based on recruiting new encyclopedists. That's the main issue here. There are other ways of participating: video creation, multimedia elements, podcasts, curation, OER, media tagging, 3D objects, books, learning itineraries... none of those is possible for us. That's what attracting newbies should look like. Making it easier on mobile is, of course, also a good move. But that's not even a strategy; that's something we need for all the previously cited issues. Improving the mobile experience is something we should have, not the plan itself. Adding a link is relatively easy, even on mobile. It could be made easier, no doubt. But new contributors are doing other things that are also knowledge, and they can't add them to our platform, not because the learning curve is steep or because mobile editing is not the best: they can't because it is impossible. THAT'S INFRASTRUCTURE.
I know the answer: but we can't do everything. Well, that is also false. It's actually possible. We just need to write it down, and then start working. That's how things are done: step by step, with a roadmap, and thinking about what we need. There's plenty of money, expertise and strategic thinking. We just need to use them. How to do it comes after what we should do. The WMF usually works the opposite way: instead of thinking about what we need and then acting accordingly, we only plan things that are possible within the current status quo. That's a losing strategy. Sincerely, Theklan (talk) 22:22, 20 March 2024 (UTC)
@MMiller (WMF) (I know that one of the obvious tools/workflows that is not currently working is Graphs, and I know we're all having a separate and longer conversation about the path forward for that.) Who is "we're all" referring to? There have been no substantive WMF updates to mw:Extension:Graph/Plans since November 2023, no WMF response on task T334940 since January (other than a mailing list message that I had to repost there because no one from the WMF did), and I don't see anything in this plan about developing the "sustainable architecture" for interactive content called out in said mailing list message. -- Ahecht (TALK PAGE) 18:03, 2 April 2024 (UTC)
Hello @Ahecht -- thanks for checking out the plan and commenting. I know that it has been difficult for graphs to be unavailable for so long, and I appreciate that you've been part of the conversation (and you're probably up-to-date on why this has been a hard problem to address).  Over the past few weeks, I have been working with a product manager and principal engineer to define a path forward for graphs, and I plan to post about it next week on Meta.  We will ping you (and others) to get your thoughts so that we can figure out if our proposed plan will work well for communities and for readers, and to figure out how volunteers like you can help. MMiller (WMF) (talk) 22:40, 5 April 2024 (UTC)Reply
Now that we have the plan, and the plan proposes a patch instead of a real change to our infrastructure, I wonder if we can talk again about the infrastructure itself. The proposal goes against all logic: reducing interactivity and possibilities instead of giving more. How can next fiscal year's OKR discussion do something about this? Or should we allow any possibility of interactivity and improvement (which is part of our strategy, not something I, me, myself, personally would like to see done) to die without any date? Theklan (talk) 14:28, 11 April 2024 (UTC)
Still no answer. Theklan (talk) 08:40, 27 April 2024 (UTC)Reply

There are several things that should be worked on under this point. I was going to include Thumbor, but the Content Translation team has shown some recent interest in it, and I was only expecting brief attention to it anyway. These things are:

  • With Extension:Graph being in the "discussion" phase, there needs to be some attention towards EasyTimeline, the other graph extension on WMF wikis. Stuff like being able to make a line graph with EasyTimeline, which Ploticus, the software behind it, is capable of doing (see the markup sketch after this list). I also consider phab:T137291 dead, because per Phabricator etiquette, that bug should not stay open for whatever replacement there will be for Extension:Graph, if there ever is one. Rather, there should be a new bug.
  • Wikibooks should be able to activate Growth and Edit check features. Wikibooks has some similarities with Wikipedia, and I am sure that some of the Growth and Edit check features will work there, some of them even in their current form.
  • VisualEditor on mobile should change its toolbar in line with where the cursor is. That can bypass the issue of how small the screen is on mobile and allow different tasks to be carried out on mobile. Look at how articles on Wikipedia are structured: they clearly follow the same basic rules, which VisualEditor can use to adapt the interface.
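For readers who have not used it, this is roughly what EasyTimeline markup supports today: bars and periods drawn against a time axis. The snippet below is an illustrative sketch (not taken from any article or from the extension's documentation); the point is that there is no equivalent syntax for the line or scatter plots that Ploticus itself can render.

<timeline>
ImageSize  = width:300 height:120
PlotArea   = left:40 right:10 top:10 bottom:20
DateFormat = yyyy
Period     = from:2000 till:2020
TimeAxis   = orientation:horizontal
ScaleMajor = unit:year increment:5 start:2000

PlotData=
  # a single bar spanning part of the period; a "line graph" plot type does not exist
  bar:Example color:red width:15
  from:2004 till:2016
</timeline>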

In the longer term, as preparation for the next annual plan, there should be some research on the login system. The login system is currently tightly coupled with MediaWiki itself, and with browsers moving more and more towards privacy-oriented thinking, it should change. In https://bugzilla.mozilla.org/show_bug.cgi?id=1696095 the Firefox team said that the WMF login system needs to be overhauled, both to work for Firefox and for other browsers. In https://phabricator.wikimedia.org/tag/authentication-experiments-2022/ there was research on moving towards Apereo CAS or Keycloak, with Keycloak seemingly winning. I feel there will be a need to use that work later, and when that time comes it will be a large undertaking, probably at least two years, if not more. The WMF cannot really afford not to look at it, because after the deployment of temporary accounts, all edits will have to have logged-in users. Login will become a single point of failure.--Snævar (talk) 13:12, 21 March 2024 (UTC)

Hello @Snævar -- thanks for the detailed notes. I didn't know about the connection of EasyTimeline to Graphs, so we can incorporate that in our thinking. I can ask the teams about Growth and Edit Check features for other projects -- I'm glad to hear you think that those features are improvements, and I'd be interested to know what you like (or don't like) about them.
For Visual Editor on mobile, could you describe in more detail what you mean? Or maybe even share a screenshot?
I will see if there is more information I can find out regarding the login system. MMiller (WMF) (talk) 23:54, 25 March 2024 (UTC)Reply
Hello @Snævar -- I'm still interested to hear your thoughts on VisualEditor on mobile, but I wanted to return with some more info about deploying features on sister projects. For features like Edit Check and the Growth features, some aspects depend on the presence of Visual Editor. For instance, Edit Checks and the Growth newcomer tasks rely on Visual Editor to work. Another consideration is that the checks and tasks are generally built for Wikipedia workflows -- adding images, or checking for references may not be applicable in the context of Wikibooks. That said, other parts likely will work fine, such as mentorship. French Wiktionary actually has the Growth features enabled, but some aspects of them are not available.
If you're interested in seeing how the Growth features could work on Wikibooks, please get in touch with the Growth team at https://www.mediawiki.org/wiki/Talk:Growth. MMiller (WMF) (talk) 22:07, 5 April 2024 (UTC)Reply
As citations are mandatory in Wikipedia content and make all of us kinds of "librarians", what about the gaming aspect?
As long as sharing free knowledge is only based on XP, there will be no effort to make the rules simpler and the learning process easier and faster.
As experience is cumulative, the gap between experienced contributors and new ones widens over time, and the so-called "novitiate" period gets longer.
It's important that, as with driving a car or mastering a computer, access to contributing to Wikipedia gets easier.
For this we can:
- analyse the content of our Wikipedia articles with AI
- evaluate the content by cross-referencing data and searching available external sources
- segment the parts that can be improved or corrected
- encourage small contributors to engage in editing segmented parts of articles
- create games to motivate people to edit, and simplify editing through in-game editing templates
- collaborate with female gamers to create new games that will attract new audiences, such as women
These are some propositions, but the main idea is to change the XP/new-contributor model and concretely address the gap problem from the angle of creative games. Waltercolor (talk) 08:58, 10 April 2024 (UTC)
Just note that analysis of article content using AI is something available only in English and "somehow" in a handful of other languages. Theklan (talk) 19:34, 10 April 2024 (UTC)Reply
Also, citations are not mandatory in Wikipedia. Each Wikipedia sets its own standards. Look at the article at the top of the Main Page at the German-language Wikipedia ("Artikel des Tages"). Look at the previous ones. Most of the content has no citations. I doubt that the French Wikipedia would make the same choice as the German one. At the English Wikipedia, only certain specified types of content require inline citations. Every community makes its own choices.
If you are interested in a "game" approach, you may be interested in Citation Hunt. WhatamIdoing (talk) 01:51, 12 April 2024 (UTC)Reply
Wikipedia contributors also need a better way to talk to each other. On the Wikimedia platforms, conversations are often poor because, I believe, experienced contributors have a centripetal approach. They tend to bring the conversation down to details or focus on the person. It's rare that people elevate the conversation to see the big picture. Mostly their answers are practical, meant to bring an immediate resolution of the problem, and sound like: do you know that, it already exists, it's impossible, etc... and they never ask the person: what do you mean by that? Can you explain? Tell us more.
What can the Foundation do to improve the quality of the conversations inside the community (and this is important because this community is "autonomous", makes its own decisions and its own rules)?
What I have noticed in my conversations in various environments with various Wikimedian and non-Wikimedian people about parity in digital culture: there are three watch-points one must track in a conversation or a collaborative work:
- Interactions (are they fluid? Network-like and correctly distributed? Or is clotting occurring somewhere, with some people grouping into an informal sub-group, building an instantaneous hierarchy and engaging in an include/exclude process to shape the discussion?)
- Social credit (this has been studied by Florian Grisel - 2023)
- Intimacy (Online images amplify gender bias - Nature 2024). I recently ran a test about images on Commons for a presentation about parity in the digital world. The result of the images displayed after a search on Commons with simple keywords was sometimes so awful, with a lot of obscene and outrageous content, that it's just something you cannot present in a meeting. I then compared this result on Commons with the results from Google Images and also with the images contained in the corresponding Wikipedia article for the same keyword. The difference is just enormous, and Commons is highly problematic (the keywords were average words, nothing special; any kid could search these terms for homework, and that's the problem).
So I believe checking how interactions, social credit and intimacy are going in a conversation or a collaboration is something anyone can do very simply. And if something goes wrong, one can simply tell the people: here you did something which hinders the good functioning of the team.
(It's more difficult to do something when faced with problematic images.)
Perhaps there could also be more research and investigation from the Foundation about how these three points work in current or past conversations in our Wikimedian spaces.
We have all the histories, we have LLMs, we can study how this very special Wikimedian community interacts, and perhaps also find out how we could do better than never having more than 10 to 15% female contributors for nearly a quarter of a century. Waltercolor (talk) 09:13, 13 April 2024 (UTC)
I agree with Waltercolor that Wikipedia editors tend to bring the conversation to details or focus on the person. It's rare that people elevate the conversation to see the big picture. Mainly their answers are practical. This "big picture" approach to annual plans, even though I think it is the right approach, is very likely to be perceived as frustratingly vague and even incomprehensible to many online contributors. WhatamIdoing (talk) 18:48, 18 April 2024 (UTC)
Thank you @Waltercolor for the feedback!  I agree with your assessment that the gap between experienced contributors and new editors is widening. Much of what you are suggesting relates to work the Growth team has been working on, and will continue to work on in the coming year (under WE 1.2):
Structured Tasks like “add a link” and “add an image” offer a structured and guided approach to editing, making it easier for newcomers to understand and contribute effectively. They are powered by machine learning and provide clear onboarding and instructions. Both tasks have been shown to help increase the number of new account holders that try editing, which also funnels into increased new editor retention.[1]
Additionally, the Newcomer homepage serves as a central hub for new editors, providing them with suggested initial tasks along with help and mentorship. Recently, we’ve introduced “Positive Reinforcement” features, which, while not strictly gamification, incorporate automated feedback mechanisms such as "Leveling up" and an Impact module that provides an overview of your contributions' impact.
The Editing team is also working on an Edit check project that should help provide new editors with the “in the moment” policy guidance they need to be more successful.
One of the challenges we’ve seen with Structured tasks is that the resulting edits are not always appreciated by patrollers.  How do you think we can strike a balance between providing new editors with room to learn and make mistakes, while also addressing the frustrations of patrollers who need to correct those mistakes? Thanks, - KStoller-WMF (talk) 22:26, 17 April 2024 (UTC)Reply
Thanks @KStoller-WMF for the feedback. If edits are not appreciated by patrollers, you can trust them. Except for some cases of evident bias (good sources that are unknown to the patrollers), their assessment is normally trustworthy, so there is a need to review the learning process.
If structured tasks do not generate correct edits in the end, it's perhaps because the tasks are not well segmented. Generally speaking, the segmentation of tasks should be aligned with the minimum requirements of a Wikipedia edit to make the edits viable.
Concerning the basic tasks of a Wikipedia edit, I believe there are three fundamental (and iterative) processes one must carry out for a Wikipedia edit:
- CITE: gather the information you want to add to Wikipedia. Label the origin of the content by noting the references of the ideas concerned (author, title, etc., or ISBN, DOI, etc.). This is an "offline" task (it does not need to be done on the WP site).
- INTEGRATE: make the content Wikipedia-compatible. Rewrite the text, put it in the appropriate section, format the text and the citation, check whether the addition is coherent with the rest of the text, whether it duplicates anything, whether the relevant info is needed somewhere else to correct another article on a related topic, etc. (This is an operational task. It requires mastering a minimum of editing technique, knowing the formal rules of editing, and verifying the logic of the addition in the context of the existing content.)
- DISCUSS: Is the addition pertinent? Is the content related to the citation, not copied, not original research, not duplicated, not too long, related to the existing content, and formally correctly edited, etc.? (This is the assessment of the relevance and harmony of the formal and content-related properties of the addition in the environment of the article.)
So if you propose tasks to newcomers, first explain the environment where the addition is landing.
Ideally, newcomers should be trained in groups of three. Each one chooses content they want to add to a Wikipedia article. For each piece of content, the three work as a group and take turns. Each one does only one of the tasks: cite, integrate, or discuss. They then rotate for the following addition and try one of the other tasks, and so on, three times, until each of them has done each of the tasks at least once. This means they will, for example, discuss the addition of another editor, or gather the material for a topic chosen by someone else, or integrate the material provided by one of their fellows.
That's how we could learn to be an editor, given that we would include in the training an element of critical reading of the addition, training people early to be patrollers themselves, and first of all their own patroller.
Workshops with many new editors working together also shape a growing new community of editors by allowing interaction in a group of people who are learning, with a "classroom" effect, creating links, empathy...
This should be tested to see if it works better than the traditional "mentor" learning process that is currently proposed, which is a one-to-one, asymmetric learning process.
What people need to become true WP editors is more than just learning one technical task. They need to know how to integrate a new piece of information into an existing framework (remember? Our logo is a jigsaw puzzle). So understanding the environment is fundamental for them to know, and that aspect must be integrated into the learning process to avoid a "last straw" effect.
In order not to elaborate an overly complicated learning program that leaves newcomers indefinitely "novices", it's possible to draw simple segments of three basic tasks: 1. Sourcing (gathering info and labelling it), 2. Integrating (using the wiki code and presentation) and 3. Discussing (evaluating the relevance, verifying the logic of the text).
Working in groups of peers trains people to learn from each other and, with the "classroom effect", forms a first little "kernel" of community that eases their further integration into a larger community. Waltercolor (talk) 09:04, 18 April 2024 (UTC)
In my experience, if edits are not appreciated by patrollers, that proves only that the edits are not appreciated by a small fraction (=the people who complain) of a small group (=patrollers).
A couple of months ago, I put more than 100 medicine-related articles at enwiki into the system for the "Add a link" Structured Task. On average, the edits made by new editors using this task were better quality than edits made by new editors outside this task (for example: no broken wikitext, no vandalism).
The complaints I have heard about this include:
  • Some patrollers have a personal preference for limiting the number of links in an article. Some patrollers think that the average number of links that we see in a Featured Article is "too many". These patrollers do not appreciate edits that add links. At least one of them dislikes links because he dislikes the color of blue used for links (he says the color is hard to read on his computer).
  • Some patrollers think that adding links is unimportant. These patrollers think that their time is being wasted by checking all of these unimportant edits. If everyone would stop editing, then patrollers would have less work. If people are going to create work for patrollers by editing, then these patrollers want the edits to be contributions that they personally believe are valuable to patrollers, even if it's not valued by readers. For example, a few patrollers would like newcomers to stop adding links (an easy task with a low reversion rate and direct value to readers) and instead add sources (a difficult task with a higher reversion rate and of doubtful value to readers – to give you an idea of the scale, I will use this ref added last week by @Waltercolor. According to that academic paper, one reader will probably click on this new ref about once a year. A similar calculation for the average link in the same article indicates a link will be clicked on about twice a month).
  • Some patrollers blame the symptom (e.g., adding a duplicate link) instead of the real problem (i.e., newcomers make mistakes). They think that if nobody added links, then the newcomers would magically be capable of making more complex contributions. These patrollers' own contributions are either minimal (they only revert other people's attempts without making complex contributions themselves) or prove them wrong (because their first edits show mistakes).
On this last point, it's a fact that I'm going to die some day, and so will everyone reading this. If we don't bring newcomers on board with easy tasks, and if we don't show them the grace that editors showed us when we made our mistakes, then there will not be anyone here when I die. To find an editor who will match my volume, I need 30,000 (!) newcomers to make their first edit. That's 30,000 opportunities for someone to make a mistake. The alternative is that Wikipedia dies when my generation does. I'd prefer the mistakes. WhatamIdoing (talk) 18:41, 18 April 2024 (UTC)Reply
If people who make 30,000 edits told us more about the techniques they use to be able to edit so much, and especially explained step by step the precise procedures they use to edit so quickly and so much, we would by now have a big library of techniques and tips that would greatly help new editors understand how Wikipedia editing really works.
Instead, prolific editors hide the way they work. They never speak of it, they never explain how they work, and they do not give any detail about how they concretely make their edits step by step. One reason is: Wikipedia is in fact a big competition platform for volunteer editor "gamers". What is rewarding on Wikipedia is editing as if participating in a race and getting higher and higher in the ranking of the most prolific editors. Unveiling editing procedures that allow one to get a good place in the ranking would be a threat to the competing editors, who could lose their advantage over the others by giving away their tricks. We must be aware that Wikipedia volunteer editors are not only here for the sake of sharing free knowledge; that's the ideal, but in reality they are also rivals, and that's one of the reasons the onboarding is not very efficient or welcoming.
Another aspect is that, in such a highly competitive environment, nobody ever speaks about the human and social cost of being so competitive and under pressure of hyper-production with an extremely high frequency of editing.
Prolific editors should also disclose whether they use tools or scripts to assist them in editing. Newcomers should also benefit from appropriate tools to help them edit and especially (this is extremely important to ensure their autonomy and the accuracy of their edits) good tools to self-evaluate their content before publishing. Waltercolor (talk) 08:32, 19 April 2024 (UTC)
My daughters are learning to write. It's a shame they make so many mistakes: they don't differentiate the words correctly, their orthography is awful and the prose is very weak. They shouldn't write anything until they master it.
Absurd? Yes. That's what it looks like to ask newcomers to master something before they start contributing. Theklan (talk) 19:50, 19 April 2024 (UTC)Reply
I completely agree with @Theklan. It's absurd to expect newcomers to have mastered the art of contributing to Wikipedia. Editors learn how to edit by making edits, including by making mistakes.
@Waltercolor, here is my favorite way to contribute:
  • I read the news and learn something that I didn't know.
  • I wonder if Wikipedia has that information, so I search Wikipedia.
  • If the information is not there, I add it (examples from today). If Wikipedia already has this information, then I smile and go back to reading the news.
I do not have secret methods, and I explain my methods to anyone who asks. If you want to see other editors explaining how to edit, then I suggest that you spend a month answering questions at w:fr:Wikipédia:Forum des nouveaux. WhatamIdoing (talk) 20:39, 19 April 2024 (UTC)Reply
I can see that it's so hard for trained editors to get out of the "ask and we will answer" method. This doesn't work, and we get poor results and low retention. Waltercolor (talk) 08:36, 20 April 2024 (UTC)
The research indicates that low retention is often due to newcomers having their first edits reverted or getting warning messages dumped on their talk page. AFAICT an inability to figure out what kinds of contributions to make does not seem to be a significant factor. People who create accounts already know what they want to contribute. WhatamIdoing (talk) 01:50, 23 April 2024 (UTC)Reply

WE1.1


Sorry, I'm reading the text and I don't understand what the "key result" means. Could it be written in a way that we can know what you are trying to do? -Theklan (talk) 13:54, 26 March 2024 (UTC)Reply

The planning structure is the same as last year's - called "Objectives and Key Results". The "top" level is the Objective - this is the goal. Below each Objective are one or more "Key Results" (KRs). These are proposed achievements which, if attained, will show that we are getting closer to the objective. They are measurable. Then the lowest level is the "Hypothesis". Each KR might have several of these - practical, short, specific tasks to do in order to achieve the applicable Key Result. Here's an English WP article about the format, also available in quite a few other languages. There's a lot more text that could be written about the organisational theory behind this way of structuring work, but I've written this as a quick/simple summary of the structure. LWyatt (WMF) (talk) 16:42, 26 March 2024 (UTC)
I'm not asking what "key result" means. I'm asking what THIS key result means. Theklan (talk) 18:02, 26 March 2024 (UTC)Reply
Hello, @Theklan! Thanks for asking about WE 1.1, which is: “Develop or improve one workflow that helps contributors with common interests to connect with each other and contribute together.”
The purpose of this KR is to make it easier for contributors to connect with each other and find a sense of belonging on the wikis. We’ve seen (and probably you have, too) that contributors do some of their best work and have some of their most rewarding experiences when they are part of a group. However, there’s a lot that can be done to improve how contributors find and connect with each other on the wikis.
  • Newcomers often struggle to build up their skills and find a sense of community on the wikis. It’s not easy to find out how to connect with other editors, join communities based on interest areas, and receive relevant mentorship. This can lead to newcomers making poor edits or abandoning editing entirely, due to lack of support.
  • Experienced editors may understand the wikis, but they may crave community. Some editors may prefer to work alone, but many others may want to be recognized by other contributors, share and learn skills, tackle problems collectively, and even develop friendships on the wikis.  
  • WikiProjects can be community spaces for experienced and junior contributors, but they often fall short of the ideal. They lack proper communication infrastructure, they are not easily discoverable, they are difficult to create, and many of them are largely dormant.
As a first project, we would like to improve the discoverability of events and groups that contributors can join. So, for example, let’s say someone is passionate about topics related to environmentalism and conservation. We want to create a way for them to be able to find events (such as edit-a-thons) and groups (such as WikiProjects and/or affiliates) related to environmentalism and conservation. This way, they can connect with others who share their interests, and ultimately have a more enriching and productive experience on the wikis, so hopefully they come back to participate more in the future.
Beyond this first project, we are hoping to work on more projects that can help with connections between contributors on the wikis. To do this, we would like to hear from volunteers like you, who have a lot of experience bringing people together. So, with all that being said, we would love to learn more from you. What do you think of this work? What do you find to be the biggest challenges or opportunities related to connections between contributors on the wikis? How can we make it easier for contributors to find communities and a sense of belonging?
Thank you again for asking about this KR! IFried (WMF) (talk) 17:24, 28 March 2024 (UTC)Reply
Thanks for answering, IFried. This way it is easier to understand; I don't know why things are not explained in a way that can be discussed, instead of using corporate jargon that could mean anything.
Regarding the idea: yes, this is important. But let's analyze the odds. I don't know where you live, but the odds of having an environment-related edit-a-thon near you are close to zero. Having a system where you can know what's happening around you is relevant, but most of our community works online, asynchronously and, many times, without even knowing each other personally. Events are happening all around the world, but the flow is normally the opposite: you go to the event because it happens near you (or a friend invites you), and then, only after going there, you might start writing about that topic.
The opposite happens more often, and I really think it should be a priority: we engage online. WikiProjects are underperforming because they are not native, are very hard to maintain and depend on complex external tools instead of being something straightforward. There's no standard infrastructure for a WikiProject, and only some basic stuff exists at English Wikipedia. There's a giant void there. A simple fact: one of my Hackathon projects is trying to understand how the WikiProjects infrastructure works in English, because at Basque Wikipedia (and virtually any other) we don't even have a messaging system for members of a WikiProject.
If you are really trying to improve things and the working time is limited, I would prioritize the latter, as an event alert platform could be a feature of a larger participation place.
If you are in Tallinn for the Hackathon, I would be really happy to talk about this. Theklan (talk) 19:24, 28 March 2024 (UTC)Reply
@Theklan, thanks for your reply! I’m glad that my response provided some clarity. I know you and I have talked about this a bit in another setting, but I also wanted to write down the thinking so that others can see and weigh in.
First, like you, we (the Campaigns team) believe that WikiProjects would benefit from more development and support. They have enormous potential to create community, mentor newcomers, and generate impactful content, but they often fall short, due to many of the issues you have described.
As a first step, we want to make it easier to find WikiProjects and events. This way, more contributors can find people and communities that share their passions, rather than feeling like they’re alone on the wikis.
You brought up a concern about the lack of events in a given city, which is understandable. However, many events are online or hybrid, so people can join them remotely. Examples range from larger campaigns (such as Wikipedia Asian Month, CEE Spring, AfroCine month, and Indic Wikisource Proofread-a-thons) to smaller events like community calls, trainings, and edit-a-thons. Also, some WikiProjects (like Women in Red) organize online events within their WikiProjects. These events may be well-known to some experienced users, but they may be invisible to newcomers or junior editors, and we would like to make them more discoverable by everybody.
Overall, we see our first project as a stepping stone in engaging with WikiProjects. We want to talk to the people behind WikiProjects. We want to learn what’s working, what’s not working, and how we can help. This way, we can see what product development we can do to create a better future for WikiProjects. We hope that anyone reading this can recommend who we might talk to to learn more about how WikiProjects work!
One idea we want to explore in the future is how Event Registration can be generalized or expanded to accommodate WikiProjects. Event Registration has features such as: creating an event page, signing up to be involved, automatic confirmation emails after registration, integration with the Programs & Events Dashboard, optional collection of participant demographic data, and the ability for organizers to send mass emails to participants. You can imagine how those features could be useful for a WikiProject, not just for events (e.g., you could imagine someone signing up for a WikiProject and answering some survey questions about their topical interests). We can also explore the possibility of adding new features, such as the “event alert platform” that you mentioned.
Overall, we may find that Event Registration has certain features and tools that work for WikiProjects, or we may find that WikiProjects need something else. We don’t have all of the answers now, but I want to clearly state that we’re interested in learning more about WikiProjects through our work, and we hope that you can join us in continuing to share feedback. Finally, I unfortunately won’t be attending the hackathon, but some other folks from our team may be, and I am sure they would love to talk about WikiProjects with you. Thank you again for your thoughtful feedback! IFried (WMF) (talk) 17:29, 19 April 2024 (UTC)

WE1.2


Dear Lord! Every day we have to clean up the mess made by "suggested edits", and you're going to increase it? IOIOI (talk) 23:44, 29 March 2024 (UTC)

Hello @IOIOI, I'm sorry to hear that Suggested Edits have been causing frustration on your wiki. Can you tell me more about your experience with Suggested Edits? Are there any previous discussions or insights from the Polish Wikipedia community regarding Suggested Edits that I should be aware of? Thank you for taking the time to review and provide input on this! KStoller-WMF (talk) 21:13, 1 April 2024 (UTC)Reply
@KStoller-WMF Yes, there is a discussion in progress, where I've proposed again to remove the newcomer task image suggestion functionality. It creates a lot of problems; other newcomer tasks aren't so harmful. Adding internal links causes no mistakes (but adds no value either). Advanced Suggested Edits are usually completely wrong, but their number is very, very small. But image suggestion edits are a significant number, and ~30% of them are wrong, useless, have a wrong caption, etc. Nobody should be surprised, because names and captions of images on Commons are out of any control/validation. IOIOI (talk) 17:36, 8 April 2024 (UTC)
If someone adds a relevant image, with a weak caption, isn't that an incremental improvement of the page? If I add a picture, and you fix my weak caption, we improve Wikipedia. Is that not desirable? Or would you rather have a worse article than to be "forced" to collaborate with me on improving it? WhatamIdoing (talk) 20:38, 9 April 2024 (UTC)Reply
If I vandalize an article, and because of my vandalism you notice the article and decide to fix all the unsourced BLP statements, isn't that also incremental improvement? AntiCompositeNumber (talk) 03:05, 10 April 2024 (UTC)Reply
No, it's not. It might be a trigger for improvement, but it is not an improvement itself. WhatamIdoing (talk) 19:09, 18 April 2024 (UTC)Reply
@IOIOI Thanks for letting me know about that discussion on Polish Wikipedia. I've reviewed the discussion and Trizek (WMF), from the Growth team, has responded.  I won’t repeat everything that he has already said there, but rather highlight a few key points:
  • The Growth team is known to be receptive to feedback. Please let us know if you have feedback on how we can make these tasks better, and less of a burden to patrol.
  • Each language community has the ability to customize Suggested Edits via Community Configuration.  It looks like there are several ways the Polish community can adjust the pl.wiki configuration that should improve the task.
  • The value of “add a link” and “add an image” is greater than the resulting edit.  These tasks are designed to help new editors get started.  They are quite successful at increasing the percentage of new account holders who edit for the first time in a constructive way. You can read about the “Add an Image” experiment here.
  • As mentioned by other participants in the Polish Wikipedia discussion, new editors make mistakes, and some imperfect edits are expected in the learning process. In experiments we’ve run, we actually see that the “add an image” revert rate is lower than the average newcomer revert rate.[1]
WE1.2 is about increasing the number of new account holders who edit, but you’ll notice that we are aiming for “Constructive” activation, meaning that we will be looking closely at revert rates and ensure that the work we do isn’t leading to low-quality edits.
To ensure Wikipedia and Wikimedia projects remain relevant across generations, it's vital to attract new editors while also respecting the workload of patrollers and experienced editors. I recognize that we need to find a balance between helping new editors get started (by providing smaller, structured, and more task-specific editing workflows) and ensuring we aren’t overwhelming patrollers. Do you have any suggestions for how we can strike the right balance? KStoller-WMF (talk) 20:16, 10 April 2024 (UTC)Reply
(Please ping on reply.) Honestly, it feels like this sort of thing is the consequence of over-reliance on metrics. ‘Suggested Edits’ functionality, while it has its positives too, has mostly been a nuisance for the active members of the communities. And that is not because the newer editors are all bad and cannot be trusted: it is because it tries to guess automatically things that human editors do manually. Without further human involvement, the result ends up like the ‘Tom Price’ article: continuous automatically generated edits by new editors who click on buttons and do not know any better. If you want to increase newer editor involvement and editor satisfaction with the editing tools, this is not the way to do things. This is the way to satisfy pointless metrics, sure. What would actually increase editor involvement is improving the actual editing tools, reader-to-sign-in experiences, the performance of editing tools, etc., in ways that are currently unknown. Tools like BrandonXLF’s QuickEdit improve the editing experience much more than the metric-driven tools that are currently being built by the WMF. And that is the problem, because, frankly, users should not have to step in to create basic editing improvements; the WMF should. (Edit: an example of a good WMF-developed feature in this regard is Edit Recovery.) stjn[ru] 15:28, 17 April 2024 (UTC)
Hello @Stjn, thank you for providing that Tom Price example. I'll chat with the Growth team about how we can avoid situations where the same article is suggested to newcomers again and again like this. Although we are purposely trying to funnel new editors into making small edits to get started, the Suggested Edit prioritization algorithm shouldn’t funnel so many newcomers into one article like that.  
Growth team work is data-driven, but I personally don’t think we are focusing on pointless metrics.  For example, our “Add an image” experiment focused on increasing the number of new account holders who try editing, improving newcomer retention & productivity, and decreasing revert rates. When evaluating Suggested Edits and Structured Tasks, we ensure they have a lower revert rate than the average new editor revert rate. [1]
Per your comment on improving “actual editing tools”, part of the WE1.2 work will be to improve existing editing tools: Edit Check will build upon and improve VisualEditor to provide more guidance for new editors.  And the WE1.3 Key Result is specifically about improving existing editing and moderation tools, though we’re still defining and selecting which ones. Are there some you think are more important to focus on than others? KStoller-WMF (talk) 00:26, 18 April 2024 (UTC)
Well, it would’ve been impossible for the work not to affect editing tools at all. Still, most of the Growth tools work seems like a very pie-in-the-sky thing (and at the same time easily justifiable by metrics, even if the overall impact is not great), while the more substantive work that, arguably, would have more impact on increasing the edit(or) numbers doesn’t get the same attention.
Easy examples of work that doesn’t get any priority but would be much more impactful than adding metric-driven tools: 1) improving the default wikitext editor to modern standards (it was introduced in 2010 and still uses interface scripts from 2010 without much change); 2) adding section editing to visual editor; 3) simplifying editor workflows (see QuickEdit example); 4) improving various visual editor workflows to simplify them; 5) improving the performance of the visual editor; 6) adding link/file/category autocomplete to wikitext editor by default; 7) adding Citoid to wikitext editor by default. That’s just off the top of my head.
I don’t think the current work is pointless or anything, but to me it seems to be driven by the pursuit of things about which the WMF can say ‘see, we got X of Y to Z!’ rather than a genuine appreciation for innovation and improvement of editor workflows, even though the latter could arguably make more people participate in Wikipedia in comparison to a tool that (not trying to be disparaging) dumbs down the Wikipedia experience to clicking two buttons in perpetuity. Has there been a test, for example, of what the median percentage of edits made outside newcomer tasks is for people who have used them? To me as an editor it seemed that many people using these tools did not ‘graduate’ from them, but mostly kept their work to clicking buttons. stjn[ru] 00:49, 18 April 2024 (UTC)
@Stjn, thanks for thinking through this critically.  The annual plan key results are very “future focused”, but all teams should also have the time and capacity to support some of the more “sustaining” work that you are suggesting.  The Infrastructure section of the annual plan covers this in more detail.
Regarding your query, “what is the median percentage of the edits without newcomer tasks for people who have used them?”, the Growth team has previously investigated a similar question. Our findings suggest that newcomers who engage with tasks like "Add a Link" and other Newcomer Tasks, typically progress to other types of article edits in subsequent sessions, rather than limiting themselves to Newcomer Tasks.[1]
That being said, clearly there are some newcomers who never graduate to regular edits. We are about to scale a suite of Leveling up features that encourage newcomers completing Suggested Edits to progress to more valuable tasks. We find that 73.1% of newcomers who see the "increase your skill level" Leveling up dialog will click through to see the new task.[2]  
Once again, thank you for your valuable feedback. Striking a balance between innovation and enhancing core editing tools is indeed crucial, and should continue to be a focus as we prioritize work in the coming fiscal year. - KStoller-WMF (talk) 22:39, 23 April 2024 (UTC)Reply

WE1.3


"We're aiming to touch a number of products over the course of the year, and want to make impactful improvements to each." is a sentence so vague that it could be done whatever the team does. What is the strategy to incorporate the tools we already have in the system? What is the strategy to handle the huge amount of tools we use but are not inside the regular workflow? What products are you going to touch during the year? What des impactful improvement mean? -Theklan (talk) 13:57, 26 March 2024 (UTC)Reply

@Theklan: Thanks for the questions - in short, they’re all ones that we’re in the process of trying to answer right now. We’ve published these KRs as early as possible to gather feedback on the overall direction, and now we’re going to be figuring out the specifics.
Working on tools we already have is exactly the plan - we want to make improvements to existing moderator tooling, not embark on new projects. Admins and patrollers have long been asking for small improvements and bug fixes to the tools and features which already exist, and so that’s the work we’d like to prioritise this year. We know that there is a seemingly endless amount of work to do to keep Wikimedia projects reliable and free of bad content, and so we want to make that process easier and more efficient for the editors doing this work, which is why we’ve opted for “satisfaction” as the core metric here.
Which specific products is still a question we need to answer - there are so many tools used to patrol, review, and act on bad content, and each has a substantial backlog of community requests, so we’ll need to figure out which need the most attention. To give you a sense of this, some that we’ve been considering so far include AbuseFilter, the Spam blacklist, Recent changes, and MediaWiki’s page protection and deletion tools.
Where do you think it would be best for us to spend our time? Are there particular patrolling or administrative features that you think could use attention?
Could you elaborate on what you mean by tools that are “not inside the regular workflow”? I’m not confident I know what you’re referring to.
In terms of how we might think about “impactful improvements” - we included that language to indicate that we want to make user-facing changes to these tools that genuinely make them easier or more efficient for editors. We could spend a lot of time doing purely technical maintenance on these tools, and I’m sure we will do some of that, but we also want to make sure that our work leads to meaningful improvements for users. Hope that helps, happy to answer other questions you might have. Samwalton9 (WMF) (talk) 09:04, 28 March 2024 (UTC)Reply
Thanks, Sam, for the explanation. Things should be explained this way, as the discussion becomes much richer and more productive.
I think I have a long answer for this, sorry. I'll try to make it shorter, so the point remains.
As I said to @MMiller (WMF) above, this approach is centered on what I will call "first world problems". Do we (in general) need better AbuseFilters, watchlists or administration procedures? Yes, maybe. The ones we have now work. They are not perfect, but they are doing their job quite well. I understand that English Wikipedia admins, or those with more experience, will be asking for better admin tools, but what are those people asking for who don't even know that things can be asked for? What do we need in order to engage new users? Let's ask the question: what are Malayalam Wikisource editors asking for? What are the tools we should integrate in Commons, the Quichua Wikipedia, the Swahili Wiktionary or the French Wikiversity?
This is my point here: we can make some users a little happier, or we can improve the resources for a lot of new and forgotten users.
Think big. Imagine that you wanted to add bold text and needed an external tool to do it. Nonsense, isn't it? Well, that is what is happening on our platform at large. We have external tools (e.g. toolserver, toolhub, wmflabs...) for virtually any process that goes beyond adding a link, bolding text or inserting an image. Visual editing on Wikisource is impossible. Wiktionary needs a huge number of templates with intricate relations. We can't edit layouts in Wikisource. We can't upload videos natively. We can't crop an image on Commons natively. We can't improve an image without downloading and re-uploading it. We can't download the texts students have written and edit them as a book. Adding a structured area to an image depends on external tools. QuickStatements, the Flickr uploader and WikiProjects resources are external, and impossible for new users to use. And even then, they are complex. Imagine that tagging an image on Facebook required using an external service... Well, that's what we do. All these processes are external and "not inside the regular workflow".
But this is not only limited to those external tools (which should be internal). It is also about modules and templates. Wikimedia doesn't have a native way to handle coordinates. Every project must install its own modules and templates, with different approaches and compatibility problems. When a new project is born it doesn't even have Module:Wikibase or Module:Infobox deployed. It's not only that global templates and modules ARE A MUST; I'm also talking about most Wikipedias lacking a working Module:Math, while the ones that are installed differ in structure, functions and even scope, making it impossible to import code from one project to another due to conflicting modules and templates.
That said: do we need another change to the watchlist layout? Maybe. We have had some in recent years. Is that the most important problem we should be solving? Absolutely not.
Now, you have to choose between making a small but loud subset of Wikimedians a little bit happier (that happiness won't last long), and making a huge difference for a large set of forgotten Wikimedians.
I would choose the second one. Theklan (talk) 20:31, 28 March 2024 (UTC)Reply
@Theklan Thanks for sharing your thoughts, this was a great read. I agree with you in general, but I think we need to find the right balance between ensuring that what we have now works as expected, maintaining it, and making sure surprise issues like Graphs breaking don't happen again. We could easily imagine that, say, AbuseFilter has some unknown problem which could cause it to break in the future or cause widespread disruption. Spending some focused time on it, doing some maintenance and working on some small features, will help us get eyes on the software and be more confident that these kinds of problems won't happen. At the same time you're totally right that we need to take big bets and work on grander problems to ensure that Wikimedia projects remain relevant for decades to come. I think we need to constantly evaluate whether we're making the right tradeoffs. For my team, this year we've been working on a big new project (Automoderator) which doesn't immediately solve any of those day-to-day or maintenance problems, but is one that we hope will improve admin and patroller experiences across the board. Then next year we plan to work on these maintenance projects. Maybe the year after we'll shift back towards another big project that has a larger and broader impact again. Samwalton9 (WMF) (talk) 11:58, 2 April 2024 (UTC)Reply
I mentioned this sentence above:
I know the answer: but we can't do everything. Well, that is also false. It's actually possible. We just need to write it down, and then start working. That's how things are done: step by step, with a roadmap, thinking about what we need. There's plenty of money, expertise and strategic thinking. We just need to use them. How to do it comes after deciding what we should do. The WMF usually works the opposite way: instead of thinking about what we need and then acting accordingly, we only plan things that are possible within the current status quo. That's a losing strategy.
We can see the problem here, perfectly rephrased in your words:
Maybe the year after we'll shift back towards another big project.
Only one thing can be done per year, and it should fit within that year's plan. That is a real problem, because there's no way to solve strategic issues with that constraint. A perfect way to lose. Really disappointing. Theklan (talk) 18:38, 2 April 2024 (UTC)Reply
For an individual team, I think that one significant project at a time is often the best approach. If you do multiple things at once, you tend not to make much progress on any of them. WhatamIdoing (talk) 19:36, 4 April 2024 (UTC)Reply
That depends on:
  1. The size of the team
  2. The size of the projects
  3. The management of both
Obviously, if the approach is that of picking low-hanging fruit, one fruit per year per team seems correct. If the approach is strategic thinking and improvement, then a one-project-per-year cadence doesn't make sense at all.
Theklan (talk) 06:41, 5 April 2024 (UTC)Reply

WE2 Encyclopedic content

edit

Objective

edit

Increased growth in encyclopedic content is achieved through tools and resources that are easier to access, reuse, improve, and can reliably ensure trustworthiness of the content as per policies and guardrails used on Wikimedia projects.

When you say "Tools and resources (both technical and non-technical) that are available for contributors to use for their needs can be made more discoverable, and reliable." are we saying that the tools are going to be part of our infrastructure, and directly accessible from the editing or reading section, or does it mean a completely different thing? -Theklan (talk) 18:01, 29 February 2024 (UTC)Reply

@Theklan Hello. We already have a number of tools and features that are available and accessible from the platform. However, they can be hard to find at times, or lack connections to a workflow that an editor, or a group interested in editing, uses regularly. This ends up causing a disconnect between what they are using to achieve a goal and what they could use for a richer experience. Besides tools and features, this can also extend to support that is offered programmatically. The intention for this very high-level objective is to set a direction towards a more connected system with the tools and systems we have right now, and to progressively build in that direction. Runa Bhattacharjee (WMF) (talk) 16:09, 5 March 2024 (UTC)Reply
Sorry, I'm a bit lost, because this is as vague as the previous sentence. Can you explain a little what you mean by "progressively", "towards" and "more connected"? Because this can be anything from a page where tools are listed, to a real integration of tools in the editing process. Theklan (talk) 16:44, 5 March 2024 (UTC)Reply

FYI. Following further discussions and iterations, the text of WE2 has now been updated - [Diff of the specific change].
Previously it was:
Increased growth in encyclopedic content is achieved through tools and resources that are easier to access, reuse, improve, and can reliably ensure trustworthiness of the content as per policies and guardrails used on Wikimedia projects.
Now it is:
Communities are supported to effectively close knowledge gaps through tools and support systems that are easier to access, adapt, and improve, ensuring increased growth in trustworthy encyclopedic content.
LWyatt (WMF) (talk) 10:27, 26 March 2024 (UTC)Reply

Well, this is still vague. What exactly is the team trying to do? Theklan (talk) 22:17, 26 March 2024 (UTC)Reply
That's explained in the adjacent column:
"Encyclopedic content primarily on Wikipedia can be increased and improved through continuous engagement and innovation. Tools and resources (both technical and non-technical) that are available for contributors to use for their needs can be made more discoverable, and reliable. These tools should be better supported by WMF, through feature improvements achievable in short cycles. In view of recent trends around AI assisted content generation and changing user behaviour, we will also explore groundwork for substantial changes (e.g. Wikifunctions) that can assist scaled growth in content creation and reuse. Mechanisms to identify content gaps should be easier to discover, and plan with. Resources that support growth of encyclopedic content, including content on sister projects, projects such as Wikipedia Library, and campaigns can be better integrated with contribution workflows. At the same time, methods used for growth should have guardrails against growing threats, that can ensure that there is continued trust in the process while staying true to the basic tenets of encyclopedic content as recognised across Wikimedia projects."
In simpler language:
  • "Tools and resources...can be made more discoverable, and reliable" means supporting projects like the https://toolhub.wikimedia.org/ or finding ways to alert editors to the existence of a tool at the moment it is needed.
  • "These tools should be better supported by WMF, through feature improvements achievable in short cycles" means supporting projects like the Community Wishlist.
  • The sentence about AI-assisted content means they'll talk about how to defend the wikis against ChatGPT (for example, by providing a more reliable alternative in the form of Wikifunctions).
  • "Mechanisms to identify content gaps should be easier to discover, and plan with" means they hope to make it easy to identify missing articles (for example, to calculate statistics on the gender gap and make a list of articles to be created during an event; a rough sketch of such a query follows this list).
  • The sentence that mentions the Wikipedia Library says that they will try to make life easier for people who organize edit-a-thons or otherwise try to organize the creation of content.
  • The last sentence probably indicates an intention to run a lot of A/B tests.
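As a purely illustrative sketch (not an existing tool or a committed project), listing such missing articles could look roughly like the query below. It assumes the public Wikidata Query Service as the data source; the target wiki and the user-agent string are placeholders invented for the example.

# Illustrative sketch only: list Wikidata items about women with no article
# on a chosen Wikipedia, one possible way to turn a knowledge gap into a worklist.
# Assumes the public Wikidata Query Service; the target wiki is a placeholder.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
TARGET_WIKI = "https://eu.wikipedia.org/"  # placeholder: any language edition

# P31 = instance of, Q5 = human, P21 = sex or gender, Q6581072 = female.
# In practice you would add a narrowing filter (occupation, country, topic)
# so the query stays fast enough for the public endpoint.
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q5 ;
        wdt:P21 wd:Q6581072 .
  FILTER NOT EXISTS {
    ?article schema:about ?item ;
             schema:isPartOf <%s> .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 50
""" % TARGET_WIKI

def missing_articles():
    """Return (item URI, label) pairs that lack an article on TARGET_WIKI."""
    response = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "gap-worklist-sketch/0.1 (placeholder contact)"},
        timeout=60,
    )
    response.raise_for_status()
    rows = response.json()["results"]["bindings"]
    return [(row["item"]["value"], row["itemLabel"]["value"]) for row in rows]

if __name__ == "__main__":
    for uri, label in missing_articles():
        print(label, "-", uri)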
Note that all of the individual tools and projects I name are just obvious examples, used purely for illustrative purposes. They might have more, different, or better projects in mind. That's because the point of this exercise is not to delve down into the details. In management jargon, this exercise is a "30,000-foot view" (10,000 m, or the typical altitude of a passenger jet in flight), and it is deliberately intended to focus on the broad outlines. The response is meant to sound like "There's a river over there, and a highway crossing it" (features that are visible from a high altitude). The response is not meant to sound like "That playground should have some picnic tables" or "The school building must have exactly 17 classrooms" or "There are not enough pine trees in that forest" (individual, specific projects).
The reason managers do this type of exercise is to help them focus on the overall work. The goal is to say "Your highway and my river intersect, so either one of us needs to change course, or we need to plan for a bridge". For example, after reading this page, a manager might wonder:
  • Why is so much focused on Wikipedia, and away from Commons?
  • Do we have enough data analytics staff to support all the projects that will be needing that service? Do we have the right tools to measure all of this? (Answer: No, but the situation is better now than it was a decade ago.)
  • Have we found the right balance between work focused on readers, new contributors, and experienced contributors?
  • Is this a good mix of current needs, old problems, and future-oriented research?
  • Are we paying enough attention to potentially disruptive events, like the new Temporary accounts or deploying Vector 2022 to the last wikis?
  • When will we decide to address some of our huge technological problems, like CAPTCHA, global templates, maps, and long-term abusers, or are we going to wait until they become emergencies and then claim that nobody told us (every single year for the last decade and a half) that these areas need serious work?
What you're not supposed to do with this sort of task is think about the exact details of individual projects. This is actually meant to be a bit on the "vague" or "abstract" side. WhatamIdoing (talk) 01:27, 27 March 2024 (UTC)Reply
Thanks for your explanation about the jargon. I wonder when the moment will come to talk about what is actually going to be done, if the first two steps are vague on purpose and the last one will be too late because the project planning will already be over. This is a real doubt: by design, we are speaking vaguely, but there is no point in the timeline where we get to know what is going to be built. Theklan (talk) 07:13, 27 March 2024 (UTC)Reply
We are already talking about what is going to be done. For example, "explore groundwork for substantial changes (e.g. Wikifunctions) that can assist scaled growth in content creation and reuse" is something to be done.
Do you mean, "When will we talk about the details of each individual project?" WhatamIdoing (talk) 22:48, 27 March 2024 (UTC)Reply
(Please ping on reply.) I mostly struggle to comment on ‘objective context’ things meaningfully because the entire column is written in the vaguest possible terms, without really trying to explain which projects, consequences or decisions will be driven by this high-level description. But in this case I think I see a possible way to comment. In the past, the WMF has developed ‘encyclopedic content’-related solutions which are driven by pointless metrics and not by the meaningful impact the work of the WMF developers has on the quality of the wikis and the ability of the communities to cope with the increased backlog. One glaring example of such a feature is Content Translation: it is something no one asked for, it is something no one asked to be advertised to newer editors who cannot contribute to the same standard or fix the issues caused by the tool's bad functioning, and it is something that continuously produces erroneous texts on all of the wikis where it is not restricted by hand. The development of Content Translation has been the worst thing to happen to Wikipedias in the last 10 years. It continuously gets advertised more and more to the people who are the least capable of producing good content with it. For years it had practically no way to disable that advertising, or to disable the availability of the tool to those editors. It has glaring wikitext or translation errors in most translations, errors that no one on the team responsible for the tool seems to have time to fix, see e.g. phab:T314836 / phab:T357621. But no changes in direction have been made, and every single WMF entity (Growth team, Language team, etc.) makes more and more effort to push the tool in more and more places despite its shortcomings and problems, probably because it has some good internal metrics that WMF devs can feel good about.
So, the question from me is: will the WMF commit to actually working with communities while developing and pushing the further adoption of newer features related to encyclopedic content, or will the consultations with communities be mostly formal, with pointless user metrics continuing to rule the day? I can sort of understand why IP Masking is not a question that required community buy-in at the development stage (though I still think that WMF Legal's and the IP Masking team's conduct of the development process was badly done and had significant problems, due to not listening to community feedback on the proposed solutions, which will become apparent when IP Masking is, err, someday adopted). I cannot understand at all why someone at the WMF decided that something as impactful as ‘streamlining badly translated machine translations into every single Wikipedia’ should not have required extensive consultations with communities and active work with them to make the tool's impact the least damaging instead of the most damaging, as actually happened. stjn[ru] 15:03, 17 April 2024 (UTC)Reply
@Stjn, so the funny thing is... yesterday, I was wondering whether there was a way to force an editor to use (only) ContentTranslation when creating new pages. Today, you are saying that nobody wants ContentTranslation, especially for new editors.
I'm cleaning up messes at htwiki. I don't speak that language, but I "speak" wikitext, and they have wikitext problems. There is a new French-speaking editor there whose method of translating appears to be:
  • Open an article at frwiki in a wikitext editor.
  • Paste the entire contents into Google Translate.
  • Copy results of Google Translate.
  • Paste unmodified Google Translate results into an article at htwiki.
  • Hope someone else can figure out why the page is a mess.
At ruwiki, you'd just block the person and be done with it, but at a tiny wiki like this, we want to retain anyone who can actually speak the language. I think if we could avoid the problems caused by this inappropriate method (like spaces being added in the middle of the wikitext code, e.g., "[[Link] ]" or "{ {Infobox}}", and treating File: and Template: names like translatable text), then the editor could become more useful. The ContentTranslation tool isn't perfect, but it's much better than this. Given a choice between what I get without ContentTranslation, and what I would get with ContentTranslation, I would choose ContentTranslation every single time. If you look at that article, I bet you would make the same choice, too.
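As a throwaway illustration of the kind of cleanup involved (not an existing gadget or a planned feature), the bracket-spacing artifacts described above are simple enough to detect and repair automatically; everything below is invented for the example.

# Illustrative sketch only: collapse the stray spaces that copy-pasted machine
# translations leave inside wikitext brackets, e.g. "[[Link] ]" or "{ {Infobox}}".
import re

BROKEN_PATTERNS = [
    (re.compile(r"\[\s+\["), "[["),   # "[ [" -> "[["
    (re.compile(r"\]\s+\]"), "]]"),   # "] ]" -> "]]"
    (re.compile(r"\{\s+\{"), "{{"),   # "{ {" -> "{{"
    (re.compile(r"\}\s+\}"), "}}"),   # "} }" -> "}}"
]

def repair_brackets(wikitext: str) -> str:
    """Rejoin link and template brackets that were split by an inserted space."""
    for pattern, replacement in BROKEN_PATTERNS:
        wikitext = pattern.sub(replacement, wikitext)
    return wikitext

print(repair_brackets("{ {Infobox}} and [[Link] ] are broken."))
# -> "{{Infobox}} and [[Link]] are broken."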
More generally: When did we stop being collaborative communities? We complain that there are newer editors who cannot contribute to the same standard or fix the issues. When I was new, and I made mistakes, experienced editors fixed them for me. Now, when a new editor makes mistakes, we experienced editors complain about their mistakes. We don't think "Good news – we have more content! It needs to be improved, but it's great that we have this information now." Instead, we think "Ugh, how terrible – all these new editors keep adding sentences and paragraphs and whole pages. It's so much extra unnecessary work for me. If they would just go away, or only fix tiny typos, I wouldn't have to do all this tedious work to make the page look pretty." The larger communities have definitely shifted from welcoming content to rejecting as much as they can. We see new information as a serious burden now. What caused that shift? I assure you that it's not because our first edits were better than today's newcomers. For most of us, our first contributions were worse. WhatamIdoing (talk) 05:34, 18 April 2024 (UTC)Reply
@WhatamIdoing: I do not doubt that there are editors who are struggling with the wikitext markup and/or machine translations. That has always been the case and will continue to be the case, sadly. What I think is destructive about ContentTranslation is that it streamlines badly formatted machine translations and basically makes making them as easy as possible. So no, while those Haitian Creole articles are not good, they are not a justification to be pushing a tool that makes the problem much, much worse without community opt-in.
And on the second point: the point of my complaints wasn't to say that Wikipedia should be closely guarding against new editors who cannot contribute to the same standard. The point was to say that we shouldn't have tools that are making the existing problems with the backlog of badly formatted and badly translated articles much, much worse. ‘Improving’ content is impossible if the whole page is riddled with problems that can only be ‘improved’ by retranslating the page from scratch. Moreover, the WMF shouldn't advertise those tools to the editors who are the least qualified to use them without asking the communities first. That is also not really ‘collaborative’, if you ask me. There is a phrase in Russian that translates as ‘a bear's favour’, and basically describes a favour that is actually a disservice. The whole ContentTranslation exercise, and the way the teams responsible push for CT usage without really considering the toll it takes on communities, has been nothing but a bear's favour. stjn[ru] 11:24, 18 April 2024 (UTC)Reply
The existing wikitext system streamlines "completely broken" machine translations. I'd consider "badly formatted", with an opportunity to reject all unmodified or barely modified machine translations (you do know about that setting, right?), to be a significant and valuable improvement over what I'm getting now.
In the larger communities, it can be easy to miss a discussion. The Russian Wikipedia requested ContentTranslation in 2015. WhatamIdoing (talk) 19:05, 18 April 2024 (UTC)Reply
Once again, hopefully not talking to a wall. Yes, it is possible to do bad things with wikitext. No, that does not justify the existence of a tool that makes bad things practically unavoidable (in bogus code or in machine translations). ‘you do know about that setting, right?’ — I think this also shows, err, disregard for humans. Same as this whole endeavour, really. You expect the communities to be the ones that need to care about internal settings of a tool no one asked for and no one was consulted about, instead of the team who gets paid to develop it. It is not anyone's job to fix a broken tool for the people who developed it and who continuously push and advertise it despite the tool being, to say the least, very controversial. No one's but that of the team developing it, that is.
As for the latter point, it was easy to miss that discussion because it was not really a discussion. One user suggested making it available, almost no one commented. Compare this to any discussion about the problems of the tool, e. g. w:ru:Википедия:Форум/Архив/Предложения/2023/08#Убрать инструмент перевода. stjn[ru] 09:37, 20 April 2024 (UTC)Reply
"The tool no one asked for" except for the people at the Russian Wikipedia who asked for it? Are they "no one" in your opinion? Or did you mean "The tool that other editors asked for, but I think they were wrong to ask for it"? WhatamIdoing (talk) 01:47, 23 April 2024 (UTC)Reply
I took issue with your characterisation ‘Russian Wikipedia asked for X’, not with your clarification that some editors did. OK, some editors were asking for it. The thing is, no one was asking communities, and no one is asking them now. I constantly stumble on new advertisements of this tool that no one asked us about or even informed us of. Even though the team members know perfectly well that the Russian Wikipedia has a defined consensus against this tool being advertised to new editors. Because it is god-awful and allows people to create unedited machine translations full of wikitext errors. stjn[ru] 12:16, 27 April 2024 (UTC)Reply
@Stjn I asked for Content Translation, I'm really happy we have it, and our community now has more articles thanks to it, which also tend to be larger and better organized. The main problem with ContentTranslation is that the code can be a mess and you need to do some editing after saving, but it's definitely something very much needed. Theklan (talk) 19:55, 19 April 2024 (UTC)Reply
There is no doubt that Content Translation works well on some wikis, especially in small and emerging communities. However, it is problematic for large and medium wikis, as it allows newcomers to easily create poor-quality machine-translated articles. (You can find several tickets about restricting access to Content Translation, e.g. zhwiki, trwiki, jawiki.)
The ideal outcome of Content Translation should be that users create good-quality translated articles with the assistance of machine translation. But the existing measures seemingly can neither prevent poor-quality translations nor encourage better ones; for instance, the check for unmodified content in CJK and other languages without spaces is broken and has not been fixed yet. Thus, it is recommended to review the existing measures and explore other measures and systems, e.g. providing tutorials to newcomers and reminders about common mistakes. Otherwise, all communities can do is restrict access to Content Translation to avoid bad-quality translations. Thanks. SCP-2000 04:18, 27 May 2024 (UTC)Reply

WE2.1

edit

Could I see an example of "existing knowledge gaps"? IOIOI (talk) 00:06, 30 March 2024 (UTC)Reply

In FY24/25 we're looking to continue our support of the movement's ongoing efforts at closing knowledge gaps in key areas, such as the gender gap. Other knowledge gap priority areas will be identified through consultations with communities and learning from the WMF's Knowledge Gap Index. PWaigi-WMF (talk) 08:50, 8 April 2024 (UTC)Reply
Somewhat vague but mostly seems to be about supporting efforts like Wikipedia Library. Good if so. I think Wikipedia Library definitely deserves more awareness from Wikimedians and maybe more attention paid to its internationalisation. Translating:Wikipedia Library Card Platform on translatewiki.net shows pretty bad results, and the users also need to look pretty hard into the settings to be able to switch the language in the app. Those seem to be good areas to work on. stjn[ru] 15:37, 17 April 2024 (UTC)Reply
Thanks for your shout out for The Wikipedia Library. We certainly intend to support The Wikipedia Library in FY24-25, particularly as part of WE2.3, which aims to make more diverse image and reference material available. In FY23-24, we increased our focus on non-English language partners and have already added De Standaard, Duncker & Humblot, Leuven University Press, Central European University Press, and Mohr Siebeck.
We agree it would be great for the library to be translated into more languages. We can look at ways to increase the completion rates for the UI messages available on translatewiki.net. How do you think we could encourage translation of the tool? Over the years we've tried to cut down on unnecessary translations to reduce the amount of work to be done, but ultimately, as this is a volunteer effort, we'd love to hear your suggestions for incentivising translation work. A more visible language selector is definitely a good idea. We have a Phabricator ticket for this at phab:T226804, and we would like to be able to prioritize work on it this year as part of our efforts to improve the library for non-English users.
What is more in focus for the WE2.1 key result is work supporting editors and campaign organizers to identify and more easily work on knowledge gaps. For example, it is difficult to see all of the relevant articles needing improvement for a particular topic in a particular country. In FY24-25 we want to make it easier for topic- and language-based communities to work on the contributions that will have the biggest impact for them. PWaigi-WMF (talk) 18:29, 19 April 2024 (UTC)Reply

WE2.2

edit

WE2.3

edit

The text says "partner with (...) the Wikisource Loves Manuscripts learning network.". Is there any plan to make visual editing possible on Wikisource and to make the system less dependent on complex template systems? -Theklan (talk) 14:37, 27 March 2024 (UTC)Reply

Thanks for your interest in Wikisource. In recent years, the Wikisource technical community, with the support of the Wikimedia Foundation, has improved the Wikisource workflow in many ways, including improvements to Wikimedia OCR (new OCR engine and additional features), WS-Export (improved reliability), and the new Edit-in-Sequence beta feature. We do understand that VisualEditor is a much needed feature on Wikisource but the existing ProofreadPage extension would need to be greatly overhauled to make it compatible with VisualEditor. Functionality specific to Wikisource would need to be added, and even then it would perhaps not avoid template complexity. Additionally, there are consistency issues that need to be resolved before that can be implemented. For example, different templates are in use across the different language versions of Wikisource. FRomeo (WMF) (talk) 16:45, 28 March 2024 (UTC)Reply
So, if this needs to be solved... why isn't it planned? That's the question. Saying that something is complex doesn't solve issues. Theklan (talk) 18:18, 29 March 2024 (UTC)Reply
  • language and geography gaps - which languages do you have on the list to cover? IOIOI (talk) 08:55, 30 March 2024 (UTC)Reply
    Thanks for your question. Within the Wikisource Loves Manuscripts learning network, we have participants representing the following language communities: Arabic, Assamese, Bengali, Bikol, English, Luganda, Kannada, Malay, Odia, Portuguese and Punjabi. We still have work to do with AfLIA and BHL, and their associated Wikimedia communities, to identify some shared priorities for next year. If your language community needs support to access source materials (such as images, digitized manuscripts, or paywalled reference materials), please feel welcome to book a meeting with the Culture and Heritage team. FRomeo (WMF) (talk) 10:17, 2 April 2024 (UTC)Reply

WE2.4

edit

WE3 Consumer experience

edit

Objective

edit

A new generation of consumers arrives at Wikipedia to discover a preferred destination for discovering, engaging, and building a lasting connection with encyclopedic content.

  1. @OVasileva (WMF): I object to talking about "consumers" as if Wikipedia was another commercial project pushing out "content". I object to talking about encyclopedia articles as "content" as if they were a commercial project. In addition, this all seems very vague, a sort of bland corporate statement. What are you actually proposing to do? If you don't propose to do anything new, that's good too, but say that instead of talking cagily about "work[ing] across platforms to adapt our experiences and existing content, so that encyclopedic content can be explored and curated by and to a new generation of consumers and donors. " Because that's just buzzwords and corporatese.
I don't want to sound brusque, but I'm finding it difficult to understand what you're actually proposing here. 🌺 Cremastra (talk) 00:52, 21 February 2024 (UTC)Reply
Hi @Cremastra - thank you for writing this out! We are definitely still working on the language here and appreciate your feedback. I think the reason that we went with "consumers" is that we wanted to widen the audience from our usual "readers" to include people that might learn from or use Wikipedia in different ways - for example, people who are more visual learners or people who use assistive technologies and might not necessarily be reading, but consuming the content in a different way. "Readers" didn't seem to fit this wider group. I agree that the con of "consumers" or "knowledge consumers" is that it sounds like a much more generic term. We definitely welcome suggestions on what other terms might fit better (for example, for a while we considered "learners" but that also felt a bit too vague). Do you have any ideas around this?
In terms of the proposal itself - what we want to do is make it easier to learn on Wikipedia by making it easier to discover content (here meaning articles, but also other encyclopedic content such as lists, images, etc). This would most likely mean thinking of different ways people can find articles or topics of interest, working with communities to curate content, and presenting existing curated content in engaging and easy-to-find ways to readers and other consumers. Hope that makes more sense - I realize this is a pretty wide umbrella as well, so I can definitely provide more examples too if that would be helpful! OVasileva (WMF) (talk) 13:47, 21 February 2024 (UTC)Reply
@OVasileva (WMF): Thanks for the reply, this is very helpful.
  • More support for accessibility projects like Spoken Wikipedia would be good. Because articles are always getting changed and updated, there's a lot of work to be done there. This is far-fetched, but maybe a tool that allows contributors to that project to easily record their work? Specifically, I was thinking of a tool that can be activated on any article (probably from a drop-down menu) and that allows editors to record and upload their reading of the article there, via the tool's interface.
    • I've created a visual mock-up of what this could in theory look like at User:Cremastra/sandbox (all the buttons and links are dummies).
Thanks, 🌺 Cremastra (talk) 21:10, 21 February 2024 (UTC)Reply
Thanks for the idea and mock @Cremastra! It aligns with the idea of the objective of making it easier for the community to curate content and make it easier to discover for others. I'll share it with some of the other folks that will be working on this objective to get their thoughts as well. OVasileva (WMF) (talk) 09:39, 22 February 2024 (UTC)Reply
The wording is really worrying, not only because of "consumers" but also because of the word "donors". Your team was the one that hid our sister projects on purpose, making them virtually impossible to find. What are the plans for "making our content more easy to discover and interact with" if the sister projects are now impossible to find? Is there any plan to make them more visible and evident? Theklan (talk) 18:03, 29 February 2024 (UTC)Reply
@Theklan What was the specific incident with the sister projects? 🌺 Cremastra (talk) 16:42, 16 June 2024 (UTC)Reply
Sister projects used to be displayed in the left menu. With Vector 2022 they are buried at the bottom of a "Tools" menu which doesn't actually have tools inside. A total mess that makes it impossible for anyone to find those projects, apart from the users who already know the links are there. Theklan (talk) 18:07, 16 June 2024 (UTC)Reply

WE3.1

edit

Which are those experiments? What are they trying to do? -Theklan (talk) 14:01, 26 March 2024 (UTC)Reply

Hey @Theklan, thanks for the question. We’re currently discussing the exact experiments we want to run, but are open to ideas (I know we’ve discussed similar directions in the past with you, so curious what you think). We’ll be publishing more info around this as soon as hypothesis planning begins. In general, we’re thinking the experiments would fit within two rough categories of browsing experiences.
The first category would be around making it easier for people to traverse content/find information they are looking for and that is interesting to them. This could include things like providing recommendations on main pages and alongside articles, presenting existing recommendations such as the main page in more interesting ways, looking into ways that the community could curate content more easily, potentially highlighting work that’s already been done on collecting content per topic or category, etc.
The second category would be around surfacing relevant content within a page and making it quicker and easier to find. This could include things like highlighting specific parts of an article that might be relevant, providing summaries or simplified versions of articles (Similar to Txikipedia), or making it easier to answer specific questions/find specific information within the page.
This work would span across the desktop and mobile web, as well as the apps. Right now, we're expanding the ideas for experiments. We probably won't have the capacity to do all of the ideas above, but it's better to start with many. As we start the fiscal year, we'll narrow down which experiments or prototypes we think have the most potential and begin with those. From there, the plan would be to commit to building the ones that we think have the most benefit to readers. Hope this helps - it's still early days but we will be publishing more details on this soon. In the meantime, we're open to suggestions and ideas in this area if you have any. Specifically, I'd be interested in hearing about any new learnings you've had with Txikipedia and your thoughts on how that might connect to this project. OVasileva (WMF) (talk) 16:31, 28 March 2024 (UTC)Reply
Ok, I'm going to assume good faith, and try to understand that the discussion for the fiscal year 2022/2023 took two years. There's an open ticket about this issue from three years ago, which was attacked, closed and declared out of scope: task T293405. I'm still happy to help there if this is taken as a priority. The same applies to a portal (i.e. curated content) proposal, task T303258. I was threatened with a ban from Phabricator because I proposed that (a public apology would be good). Again, if there's appetite for improving things, I'm happy to help.
Anyway, if the question is why people are not visiting sister projects, it might be because the Design Team decided to hide them and bury them in a "Tools" bucket without any logic behind it. There are some solutions proposed for this: task T334792 or task T287609 (which even has a mockup, but was closed without being solved). Also noting that this decision was taken based on false data, and closed without acknowledging that the data was nonexistent (https://mediawiki.org/wiki/Talk:Reading/Web/Desktop_Improvements/Archive5). Any of the ideas proposed in the Phab tickets would be better than the current situation. And would take around one week of work to implement. Maybe less.
You also talk about content summarization. I wrote a detailed mail with a plan about this to @SDeckelmann-WMF; I can resend you the idea for annotation software that would create user-curated summaries and promote the use of the user space while encouraging readers to log in. Let me know if you are interested.
Lastly, Txikipedia needs a community to adopt it. We have lots of technical issues there. It would be great to promote a similar approach to other communities, but first the pain points in Txikipedia should be reduced, to make it easier to use and technically better. We can also work on that this year, so you have a bundle to offer to other communities. Theklan (talk) 07:57, 30 March 2024 (UTC)Reply
Hey @Theklan, thanks for your continued interest here and for working on some of these ideas with some of us at the Hackathon. And sorry for the late reply - we were waiting for everyone to get back from the hackathon to continue discussing our plans for next year.
Like I mentioned above, we’re planning on addressing this from various sides - some will cover new features for browsing and discovery while others might focus on reusing and improving the design of existing features - such as main pages or something like Txikipedia. With that in mind, I think we might have something more concrete and along the lines of T293405, but we’re starting the year with a few different experiments to see what has the most impact. Most likely, what we end up building will be a combination of various different approaches and ways to make browsing easier. In general, I think right now we’re leaning more towards starting work on more generalised browsing (more similar to main pages) than specific topic browsing (more similar to portals). The idea there is that if we build for a more general solution, it would be easier to specify later on. That said, this might change as we begin to perform the experiments and learn from them.
In terms of summaries - yes, I’d be interested in looking across a few ideas! Could you send it to me? Currently, the way we’ve been thinking about experimenting with this is to use the prototype for a Wikimedia-made model that creates summaries automatically, which are then reviewed or approved by editors in some way. It’s still early days, but we’re hoping to have some rough prototypes in the first few months of the year. OVasileva (WMF) (talk) 07:24, 22 May 2024 (UTC)Reply
Sorry, but this doesn't look like a plan, but rather like some loose ideas.
  • If you are looking for a modern-looking main page, with interactive content, that any project can copy, here it is: eu:Azala/Grid. It took literally the duration of the Hackathon plus one more day to build. So task T293405 is partially solved (as solved as it can be).
  • Something like Txikipedia is not a proposal, as it needs a community to make it happen. It is not something that can be solved through design or navigation, but by proposing the idea to communities and making it attractive.
  • Making browsing easier is not related to having a project for children. Those are completely different things. Making browsing easier and our sister projects more visible requires unbreaking what was broken by the Design Team when it decided to hide those links from the public based on false data.
  • A tool that summarizes automatically is the worst approach: it would only work, eventually, in English, and the summary would be made by a machine instead of by collective intelligence. Summarizing can be done collectively if we add a simple annotation tool that our millions of readers can use to annotate what is relevant to them. @SDeckelmann-WMF has the details about this.
  • I'm still waiting for a public apology for what happened at task T303258. It would be nice.
Theklan (talk) 09:21, 22 May 2024 (UTC)Reply

WE3.2

edit

WE3.3

edit

WE4 Trust & Safety

edit

Objective

edit

Improve our infrastructure, tools, and processes so that we are well-equipped to protect the communities, the platform, and our serving systems from different kinds of scaled and directed abuse while maintaining compliance with an evolving regulatory environment.

Problematic content on Commons

Better algorithms and human monitoring for Commons. As I mentioned above about the user experience: I recently ran a test on Commons images for a presentation about Parity in the digital world. The images displayed after a search on Commons with simple keywords were sometimes so awful, with a lot of obscene and outrageous content, that it is just something you cannot present in a meeting. I then compared this result on Commons with the results of Google Images and with the images contained in the corresponding Wikipedia articles for the same keyword queries.

The difference is just enormous and Commons is highly problematic (the keywords were ordinary words, nothing special; any kid could search these terms while doing homework, and that's the problem). It corroborates the statement that "Online images amplify gender bias" (Nature, 2024). Concerning personal "pornfolios" and offensive, sexist content created by editors, this should be addressed in a new way. The Tech Lab - WWW Foundation (Tim Berners-Lee) is focusing on perpetrators. They are developing new approaches for new policies in their Helping End Online Gender-Based Violence program. Suggestion: collaborate with them. https://techlab.webfoundation.org/ogbv/policies-and-data

Transparency of the tracking and system of punishment

On Wikipedia a lot of personal information is publicly available and retained. This allows a "social profiling" which admins can use to modulate their sanctions of contributors.

Publicly cross-referencing this data also allows shaping the group of eligible voters for some community decisions.

How far can the community rulers go in using and cross-referencing such personal public data?

Are contributors warned that all of their participation, including discussions on the platforms and the way they vote in community decisions, can be used to profile them?

While the Terms of Use are clear and available, there is no clear communication about the internal rules of the projects themselves. And the conditions differ a lot by project and by language.

The projects are autonomous, and the community groups enact and change their own rules, but who knows what these rules are exactly, and what "rule price" a contributor has to "pay" to be able to participate in a given project? Is the contributor ready to pay such a price to share free knowledge?

So I suggest that each project publish a "price list" page. This page should list all the sanctions, and degrees of sanction, that may be applied for breaking the internal rules, as well as the conditions for participating in each kind of community vote.

That means informing about:

  • all the ways content can be suppressed, who is able to do it, whether it is revocable, etc.
  • all the kinds of writing bans on the projects (duration, affected spaces, etc.), who is able to apply them, whether they are revocable, etc.

In case of a ban:

  • do people lose the ability to receive communications on their discussion page?
  • do they have a person who can speak for them (a kind of lawyer)?

A given project can enact any internal rules, but:

  • participants must be informed about the specific rules (not only the TOU).
  • they must be able to know the rules and make an informed choice before they begin to participate in the project. If some clause is problematic for them and could affect their social life outside the platform (as everything on Wikimedia is public), they must be able to know in advance and not engage in a project where their participation will be public and preserved in a history.
  • for example, if someone discovers too late that all of their discussions are publicly available, and this person (for example a teacher who can be recognized by their pupils) didn't know or want that, it is too late if they have already engaged in a problematic discussion and/or been punished with a ban. Everything they wrote will remain "forever" on a platform which is "merciless".

Suggestion:

  • ask all projects to list on an official page which kinds and severities of sanctions can be applied to a contribution and against a contributor
  • link that page from significant pages like the home page, the Village pump, help pages for beginners, and contributors' personal talk pages.
  • if necessary, ask contributors to accept the internal rules of a given project before they participate.

Being clear about the projects' rules is important because people will likely be "punished" more often and more seriously by local project community rules than by the Foundation's application of the TOU.

Wikimedia projects, especially Wikipedia, are a kind of monopoly in the field of sharing free knowledge. A punishment on these projects has a significant impact on editors. Better transparency about the rules before people engage in the process is necessary.

Presenting the rules of a project in a transparent way protects the people who apply the rules, because it makes clear that the rules are public knowledge and apply to everyone in the project. It is not a person-to-person process.

Asking the platforms' rulers to list the punishments, the conditions for participating in decisions, and the ways they use and cross-reference contributors' data makes them aware of the way they rule the project.

Assuming good faith, punished contributors should not lose the ability to be informed and to communicate on their talk page. They should also benefit from the help of a "good will" contributor who could speak and advocate for them.

Waltercolor (talk) 10:41, 14 April 2024 (UTC)Reply

Specifically about Commons, you might be interested in the Image filter referendum/en. That was the last significant attempt by the Foundation to offer tools that could be used by individuals to indicate that they did not want to see, e.g., pictures of sexual acts.
For the rest, I do not think this is feasible at 99% of the wikis, and I am not sure whether the biggest 1% would want to do it. Unwritten and contradictory rulesets allow people with power and privilege to maintain their status. For example, at the English Wikipedia, we have one rule called w:en:WP:ONUS that says I get to remove anything that doesn't have an agreement to keep it, and another called w:en:WP:NOCON that says we normally keep "long-standing" material unless there is an agreement to remove it. So if I want to remove some content, and we can't agree, then I say that ONUS applies and do what I want, but if I want to keep the content, and we can't agree, then I say that NOCON applies and also do what I want. I've been trying to fix that particular problem for a couple of years now, with no success. WhatamIdoing (talk) 02:34, 15 April 2024 (UTC)Reply
(Please ping on reply.) It is sort of baffling to see captchas mentioned in a good light, given that the WMF has continuously ignored the accessibility challenges caused by captchas, see phab:T6845. I understand that the WMF is not a commercial entity, so technically it does not have the same obligations under the Americans with Disabilities Act of 1990 as Google or Amazon, but it is still glaringly bad that the WMF continues to ignore accessibility problems, and the over-reliance on badly working, inaccessible and unlocalisable captchas that are more likely to catch humans than bots is saddening to see. stjn[ru] 15:11, 17 April 2024 (UTC)Reply
Hello, @Stjn. I'm Suman, a Director of Engineering at WMF. Thank you for reading the plan and for your comment. We agree that Wikimedia Captchas can be improved. Balancing effectiveness and accessibility for bot detection is not an easy problem to solve. In evaluating any potential new Captcha solutions, accessibility is one of the acceptance criteria. The Captcha improvement solutions we're considering are localisable. Captchas as a mechanism will likely remain one of the more effective means of mitigating scripted abuse, especially large-scale scripted attacks such as the ones we saw earlier this year. One area we are researching is how to use request signals to help us understand whether a request is part of an abuse attack. This will enable us to use Captchas and other mitigations as sparingly as possible, to inconvenience as few users as possible. Do you have examples of Captcha solutions (or other similar scripted-abuse mitigation solutions) that handle localisability and accessibility well? Please let us know so we can ensure they are on our radar.
SCherukuwada (WMF) (talk) 14:49, 19 April 2024 (UTC)Reply

I strongly agree that "IP-based abuse mitigation is becoming less effective" and that we have to start developing new anti-abuse workflows which rely less on IP-based mitigation. I would like to see not only work on the development of a sock investigation tool, but also ways to reduce the effect of open proxies (see also the relevant discussion), as internet censorship and the usage of VPNs / proxies are becoming more common. Thanks. --SCP-2000 07:49, 27 May 2024 (UTC)Reply

Hello, @SCP-2000. I'm Suman, a Director of Engineering at WMF.
Thank you for your comment. We've been re-reading the unfair blocking discussion as we plan various efforts under WE4 to approach this problem. In order to reduce collateral damage of IP blocks, we intend to:
1. find other ways of mitigating abusive behavior without relying solely on IPs.
2. use more signals to determine how effective an IP block will be
As part of the work in WE4, engineers and researchers will work together on bringing these mechanisms to our anti-abuse workflows. SCherukuwada (WMF) (talk) 09:16, 4 June 2024 (UTC)Reply
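Purely as an illustration of the second point above, and not the team's actual design, combining signals to judge whether an IP block will be effective or mostly collateral damage could look roughly like the sketch below; all signal names and thresholds are invented for the example.

# Illustrative sketch only: not the WMF's actual design. Combines a few
# hypothetical signals about an IP/range to decide whether a plain IP block
# is likely to be effective or to cause too much collateral damage.
from dataclasses import dataclass

@dataclass
class IPSignals:
    recent_good_faith_editors: int   # distinct constructive editors seen on this IP/range
    is_shared_range: bool            # e.g. mobile carrier or institutional NAT
    is_anonymising_proxy: bool       # known VPN / open proxy exit
    recent_abuse_reports: int        # reverted vandalism, filter hits, etc.

def block_recommendation(s: IPSignals) -> str:
    """Very rough heuristic: prefer narrower tools when collateral damage is likely."""
    if s.is_anonymising_proxy and s.recent_good_faith_editors == 0:
        return "hard-block proxy range"
    if s.recent_abuse_reports == 0:
        return "no block needed"
    if s.is_shared_range or s.recent_good_faith_editors >= 5:
        return "avoid IP block; use account blocks, filters, or partial blocks"
    return "short IP block is likely effective"

print(block_recommendation(IPSignals(12, True, False, 3)))
# -> "avoid IP block; use account blocks, filters, or partial blocks"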

WE4.1

edit

WE4.2

edit

WE4.3

edit

WE5 Platform evolution

edit

Objective

edit

Evolve the MediaWiki platform and its interfaces to better meet Wikipedia's core needs.

What does "This includes continuing work to define our knowledge production platform, strengthening the sustainability of the platform, a focus on the extensions/hooks system to clarify and streamline feature development, and continuing to invest in knowledge sharing and enabling people to contribute to MediaWiki." mean? This could mean one thing, another, or the opposite. What are the exact plans for the next few years to make our platform less obsolete? -Theklan (talk) 18:05, 29 February 2024 (UTC)Reply

WE5.1

edit

WE5.2

edit

WE5.3

edit

WE6 Developer Services

edit

Objective

edit

Technical staff and volunteer developers have the tools they need to effectively support the Wikimedia projects.

WE6.1

edit

WE6.2

edit

WE6.3

edit

I'd like to hear more about this sustainability scoring system for the Toolforge platform, e.g. the ideas and the scope. What can you share at this point? --TuukkaH (talk) 09:33, 28 March 2024 (UTC)Reply

Thanks TuukkaH for your question; it’s a good one. The proposed sustainability score for the Toolforge platform aims to measure the ecosystem's overall health by considering a range of technical and social factors. Since no single metric can capture the whole picture of sustainability, we plan to evaluate various factors individually and then combine them into a "global sustainability score." This score will serve as a benchmark, similar to how software quality is gauged on platforms like libraries.io or GitHub's community standards (example), but with a focus on the Toolforge ecosystem as a whole. As an example, rather than asking “is this project’s code available in a public repository?” we would be asking “How many tools have their code available in public repositories?”
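As a minimal sketch of how such per-factor measurements might be combined into a single global score, assuming invented factor names, weights and values rather than the metrics the team will actually define:

# Minimal sketch of a weighted "global sustainability score" for the
# Toolforge ecosystem. Factor names, weights, and values are illustrative
# assumptions, not the metrics the Cloud Services team has defined.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    weight: float   # relative importance in the global score
    value: float    # fraction of tools satisfying the factor, 0.0-1.0

def global_score(factors: list[Factor]) -> float:
    """Weighted average of per-factor scores, normalised to 0-100."""
    total_weight = sum(f.weight for f in factors)
    if total_weight == 0:
        return 0.0
    return 100 * sum(f.weight * f.value for f in factors) / total_weight

# Hypothetical snapshot of the ecosystem
factors = [
    Factor("code in a public repository", weight=3, value=0.62),
    Factor("usage metrics available", weight=2, value=0.40),
    Factor("more than one maintainer", weight=3, value=0.25),
    Factor("deployable without SSH/bastion steps", weight=2, value=0.55),
]

print(f"Global sustainability score: {global_score(factors):.1f}/100")

One advantage of a simple weighted average like this is that it stays interpretable: a change in any one factor maps to a bounded, predictable change in the global number.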
Key themes for the scoring framework include ease of deployment, tool metrics availability, and enabling non-maintainers to help with tool issues. We're set on developing this framework with input from technical volunteers and the wider Toolforge community as part of our key result work, including defining the different factors and their weight in the overall score.
It’s important to bear in mind that this work is scoped to the Wikimedia Cloud Services team, and as such will mainly be on the technical side. An example includes work on reducing the steps needed to deploy tools and introducing CLI access from local machines to cut down on SSH/bastion host reliance. Another example could be creating aggregated dashboards with tool maintenance and activity metrics, to make it easier to identify impactful tools that are at risk due to insufficient maintenance, and in general make decision making more data driven.
We are hoping that this framework, together with the technical improvements planned during FY 24-25, will set the stage for future work on broader issues like identifying and supporting critical tools within the ecosystem. SStefanova (WMF) (talk) 19:36, 28 March 2024 (UTC)Reply
The problems on Toolforge are largely social, not technical. All of the technical problems have been related to WMF-induced increases in entropy. Trying to fix social problems with scorecards and technical changes doesn't work, it just chases away maintainers who are already busy. AntiCompositeNumber (talk) 20:16, 2 April 2024 (UTC)Reply
Hi AntiCompositeNumber, thanks for sharing your thoughts. You are right, the issues in Toolforge aren't just technical. We acknowledge the complexity of these issues and view this initiative as a starting point for a broader conversation and action plan.
The drive behind any purely technical improvements will be reducing the burden on maintainers (and especially newcomers) by, among other things, transitioning Toolforge into a more user-friendly, modern platform where users no longer have to use SSH or perform similar manual steps. Another planned improvement is to make it possible to have a backend API and a separate frontend within the same tool, something that today requires creating two separate tools.
Even though technical improvements don't directly solve social problems, they can significantly influence any solution—either paving the way or posing barriers. As such, they offer a practical starting point to explore the issues before diving into the social aspects head-on. We're stepping into complex territory, aware that simple fixes, if any, are long gone. If you have ideas or suggestions, they are very much welcome. SStefanova (WMF) (talk) 15:26, 5 April 2024 (UTC)Reply

SDS1 Shared insights

edit

Objective

edit

Our decisions about how to support the Wikimedia mission and movement are informed by high-level metrics and insights.

Metrics are not the only thing that matters. Community relations do not get measured by metrics. Accessibility problems (which are rampant) are hard to measure by metrics (but even if we were to measure them, we’d probably fare fairly badly). For some reason no one tries to measure tool satisfaction (e.g. ContentTranslation, UploadWizard, etc.) by metrics. Honestly, it seems safe to say that the WMF mostly has an over-reliance on metrics, because metrics can justify the already existing decisions that no one wants to overturn because of the sunk cost fallacy. If you want to really support volunteers, you need to measure different metrics from what is typically meant by this in the WMF. stjn[ru] 15:16, 17 April 2024 (UTC)Reply
@Stjn, community relations do get measured by metrics, and individual tools could be measured by metrics. See phab:T89970 for one nearly decade-old proposal to do just that. WhatamIdoing (talk) 05:12, 18 April 2024 (UTC)Reply
Hi @Stjn, I'm Kate Zimmerman, the Senior Director for Research & Decision Science, and I'm responsible for this objective. I agree that metrics are not the only things that matter. This is why we talk about using metrics and insights to inform decisions – but we also recognize that there are other factors that go into decision-making.
To your point, not everything we care about is easily measured. In some cases we do survey research (e.g. the Community Insights survey), structured qualitative interviews (such as when we conduct user research for product development purposes), or more informal discussions (e.g. Talking:2024) to better understand how to support volunteers and readers (or other consumers of Wikimedia content). And we're always looking for better ways to understand community needs. I welcome suggestions on areas we should investigate! KZimmerman (WMF) (talk) 19:54, 26 April 2024 (UTC)Reply

SDS1.1

edit

SDS1.2

edit

SDS1.3

edit

SDS2 Experimentation platform

edit

Objective

edit

Product managers can quickly, easily, and confidently evaluate the impacts of product features.

  • User:TTaylor (WMF), I think that evaluating the effect shouldn't stop at "launch", which this wording unfortunately (but I think unintentionally?) implies. I think we need phab:T89970 microsurveys as a low-level, ongoing survey, so that every launch has not only an easy opportunity to provide simple feedback, but also a baseline to compare it against. WhatamIdoing (talk) 23:32, 20 February 2024 (UTC)Reply
You are correct, WhatamIdoing, it is not meant to imply that impact evaluation stops at launch. The generalized capability we would like to have is commonly described as A/B testing, where a feature under development is shown to a small % of users and compared against a control or alternative group. Some product teams have done experiments like this before, with significant effort required to set up, run, and evaluate them. We want to streamline the effort involved so that more product teams can run experiments like this. TTaylor (WMF) (talk) 14:33, 21 February 2024 (UTC)Reply
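For readers unfamiliar with the mechanics, the core of such an experimentation capability is usually deterministic bucketing: each user is consistently assigned to a treatment or control group, and the two groups are then compared on the metric of interest. A minimal sketch follows; the function name and percentages are illustrative assumptions, not WMF's actual implementation:

# Deterministic A/B bucketing: the same user always lands in the same group,
# without storing any per-user state. Names and percentages are illustrative.

import hashlib

def assign_group(user_id: str, experiment: str, treatment_percent: int = 5) -> str:
    """Hash user + experiment into 0-99 and put the first N% into the treatment group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_percent else "control"

# Example: 5% of users see the new feature, everyone else stays in the control group.
print(assign_group("user-12345", "new-edit-toolbar"))

Hashing on both the user and the experiment name keeps assignments stable across visits while avoiding the same users always being the guinea pigs for every experiment.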
Thanks for the quick reply, @TTaylor (WMF). I love a good mw:A/B test, but I'm not sure that it's enough.
For one thing, they don't usually capture sentiment changes. The A/B test can see that I'm still making 20 edits/day, but it can't see that I'm mad.
For another, they can produce spurious results. At least among core community members, I can be in the "control" group and still have my day disrupted by a product launch. When we launched the visual editor in 2013, people not using the visual editor were having problems, because they had to clean up a lot of unwanted whitespace changes and other problems that the visual editor produced at that time. If you start the A/B test at launch, you might see some of the "A" group stop editing because the product is bad, and some of the "B" group down tools in protest over the mess that the remaining "A" group is making (or spending their day arguing outside the mainspace, or writing AbuseFilters, or whatever it is that isn't what they would normally do). In such a scenario, an A/B test could show the two groups as "equal", so supposedly no harm's being done, when what's really equal is the number of people who quit editing because of the problematic change.
What I do like about the idea of improving the A/B infrastructure is that it should make it easier to do partial rollouts. Particularly for appearance-related changes at the larger wikis, deploying a new thing to 10% of registered editors, and then waiting to see what problems are revealed before expanding to a larger group, is gentler than deploying the new thing to 100% of registered editors at the same time. WhatamIdoing (talk) 06:51, 22 February 2024 (UTC)Reply
WhatamIdoing, your last paragraph nailed exactly the outcome we'd eventually like to achieve. I agree that experiments can produce unwanted outcomes or side effects, or be designed in a way that produces invalid results. We are planning for one of the key results for this objective to be focused on experimentation guidelines, to try and avoid the kind of outcome you described. I don't want to pretend this is easy or that we'll always get it exactly right, but we do want to get better and more efficient at running experiments so that we can learn much faster what works and what doesn't. TTaylor (WMF) (talk) 13:07, 22 February 2024 (UTC)Reply
Imagine, two years ago the design team made an A/B test for the Zebra design, and the deployment is still pending. We can't wait for years for every change, it doesn't make sense. Theklan (talk) 18:34, 29 February 2024 (UTC)Reply
Sometimes that means that they're not satisfied with the design. mw:VisualEditor/Single edit tab was started eight years ago and hasn't been deployed everywhere, either, but that's because the Editing team is dissatisfied with it, not because they are being lazy.
(The problem in this example is that a single editing button is easier for newcomers, because they can start editing without first figuring out what 'edit' and 'edit source' mean and which one they want to use. [Is that source like programmer's code or source like references? For languages that use words like wikicódigo, what does that word mean?] But one button is worse for editors like me, because I want to use the editing environment that's best suited for the particular change that I'm planning to make. Also, people have to learn how to switch between them, which is not obvious to them. It's complicated, and they decided not to deploy it further until it was improved, and that other tasks, like improving the mobile visual editor, were more important than improving this feature.) WhatamIdoing (talk) 17:00, 1 March 2024 (UTC)Reply
I don't think so. It was closed as Resolved, but it is not: task T341275. Theklan (talk) 18:13, 1 March 2024 (UTC)Reply
The task for conducting the A/B test is resolved. The task for using the information collected in the A/B test is open. That is consistent with what we see in practice: they have collected information, and they have not finished the product. WhatamIdoing (talk) 20:57, 1 March 2024 (UTC)Reply
Part of the problem is that the Web team has redefined what "Zebra" refers to, removing the most relevant feature of the Zebra #9 design. See mw:Reading/Web/Desktop_Improvements/Updates#November 2023: Visual changes, more deployments, and shifting focus after the A/B test. The A/B test analysis largely considered metrics unrelated to what was actually being tested. For example, there were no changes to edit buttons but a decrease in edits was noticed. This points to a problem with the test methodology. I'll note that the use of A/B tests to justify prior WMF design decisions is not limited to New Vector or the Web team. AntiCompositeNumber (talk) 21:08, 1 March 2024 (UTC)Reply
The task is marked as "High" priority. 8 months have gone. No moves. I think that we have different views on what delivering means. If this goal helps making things faster, I'm in. Theklan (talk) 09:20, 2 March 2024 (UTC)Reply
"High" usually means that someone is working on it, but not necessarily as their first/most urgent task. "Working on" does not imply "will finish soon". WhatamIdoing (talk) 01:04, 3 March 2024 (UTC)Reply
No, sorry. "High" after "priority" means that the task should be done as soon as possible, in comparison with "Normal", which means that it is not the first/most urgent task. That's why we have different priority tags. If "High priority" doesn't mean "high priority", then we should have a new tag for when we do mean "high priority". Theklan (talk) 21:19, 20 March 2024 (UTC)Reply
The dictionary definition is not used in this context. For most teams, at phabricator.wikimedia.org, "priority" is used as defined in mw:Phabricator/Project management#Priority levels. "High priority" means "Someone is working or planning to work on this task soon".
(Priority also doesn't mean "as soon as possible" or "first/most urgent task" in English. See w:en:Etymological fallacy.) WhatamIdoing (talk) 21:54, 22 March 2024 (UTC)Reply
Then we need something to denote "high priority". Theklan (talk) 11:47, 23 March 2024 (UTC)Reply
The code used for "this needs to be fixed as soon as possible" is "Unbreak now!", and it is only used for serious software errors ("we can't deploy any software because you broke the servers").
There is no code for "This is really important to me, and even though I don't know what your team is dealing with, or how many other problems you need to address, or if any of them are actually emergencies, I think you should stop work on everything else to do what I want right now". WhatamIdoing (talk) 00:15, 24 March 2024 (UTC)Reply
I'm not proposing that. I'm proposing that "high priority" should mean "high priority". When a team/staff member adds the high tag to the priority field, we should expect that it actually means this is high priority, and not something else. Unbreak now! is pretty obvious, and not the same thing. So, please, don't put words in my mouth that I haven't said. Theklan (talk) 07:33, 25 March 2024 (UTC)Reply
Why do you think "working or planning to work on this task soon" is not "high priority"? Does that sound like low priority to you? WhatamIdoing (talk) 00:38, 27 March 2024 (UTC)Reply
Because "planning to work" and "soon" are not defined. Theklan (talk) 20:12, 7 April 2024 (UTC)Reply
I once had a boss who, during a one-hour staff meeting, would tell the same person that three different tasks were that person's top priority for the coming week. Perhaps that's why I don't think that "high priority" is a meaningful phrase. Something can be "high priority" and yet not be "high enough" to actually get any action at all.
With "planning to work on this", I know that the assigned person is making a statement of their intention to take action or to prepare for taking action (e.g., to collect necessary information or resources, to talk to relevant people, to coordinate schedules).
With "soon", I know that they intend to take some action "sooner" rather than "later" or "never". If you were hoping for an actual date, then that's a separate field in Phabricator. Something can be "high priority" and have a due date of next year, or medium priority and have a due date for next week. In practice, most WMF teams don't use the due date field, but if that's what you're looking for, don't look in the priority field for it. WhatamIdoing (talk) 20:33, 9 April 2024 (UTC)Reply
No, I'm not looking at dates. I'm looking at getting things done. Which is not happening. Theklan (talk) 07:59, 28 April 2024 (UTC)Reply

SDS2.1

edit

SDS2.2

edit

FA1 Test hypotheses

edit

Objective

edit

Test hypotheses.

Sorry, I'm reading "Bring Wikimedia's knowledge and pathways to contribution to popular third-party platforms?" and I really don't understand what this means. Why aren't we improving OUR OWN PLATFORM, instead of thinking on third-party platforms? Why aren't we investing in disruptive knowledge content creation, and why does everything read so vague? Can the meaning of this sentence be explained, please? -Theklan (talk) 17:56, 29 February 2024 (UTC)Reply

I thought it was pretty clear. We want to understand how newer media affects us, how we can (maybe) be a part of those media and use that knowledge to inform plans for 2025-2026 and beyond. —TheDJ (talkcontribs) 11:53, 1 March 2024 (UTC)Reply
Imagine a world in which a photographer on Flickr could:
  • upload a photo,
  • learn that a relevant Wikipedia[*] article has no images,
  • push a button on Flickr(!) to run Magnus' Flickr2Commons upload tool themselves, and
  • be given a link to edit the Wikipedia article, with simple, specific instructions about how to add the image.
Most Wikipedia articles show one image, or none. Yesterday, I turned an article with two images into an article with 17. That's an extreme example, but as a general rule, I think that if we could make it easier for people to add one or two "factual" pictures, we'd be improving the project.
[*] Possibly also Wikidata, Wikivoyage, and Wiktionary. I don't mean to exclude relevant sister projects, but I think Wikipedia is the obvious place to start. WhatamIdoing (talk) 17:13, 1 March 2024 (UTC)Reply
Thanks @Whatamidoing, that's clearer. I see this happening with OSM, and how we are adding lots of redundant data to Wikidata and OSM at the same time, with difficult paths to link both. However, the problem with discoverability is deeper than a button in Flickr (I wonder, by the way, if Flickr2Commons can be used if you are a new user), and is more related to the difficulty of participating in Commons and the decision to hide the sister projects links from the New Vector. If we want people to know that they can contribute, we must show the potential of doing so. For example, every Wikipedia article at euwiki linked to a Commons category has a link to see more photos. But euwiki is a tiny project compared with the potential to do this in the larger ones.
Anyway, now I understand what the sentence means. Theklan (talk) 18:09, 1 March 2024 (UTC)Reply
OSM is a great example. We would benefit from OSM's data being usable (either by using it directly, or by mirroring it on one of our sites).
I think the problem for this category is less about "the Commons link hidden is in a menu on Wikipedia" and more about "the users are on TikTok and Instagram". WhatamIdoing (talk) 21:06, 1 March 2024 (UTC)Reply
And they will be on Instagram or YouTube. We can't even share content from Commons on social media. Theklan (talk) 09:21, 2 March 2024 (UTC)Reply
Maybe we should change that? Or at least have someone think about changing that? WhatamIdoing (talk) 21:15, 3 March 2024 (UTC)Reply
Sure. I proposed this back in 2022, and it is something that would take around two days of work, maybe three: task T309101. Then I discovered that the issue has been around since 2011: task T33338. But no, it was first proposed in 2010: task T27854. Theklan (talk) 06:56, 4 March 2024 (UTC)Reply
I'm not imagining a world you don't have contracts for. -- Sleyece (talk) 13:35, 7 June 2024 (UTC)Reply
How much of the budget is being spent on a maybe there are no contracts for? -- Sleyece (talk) 13:37, 7 June 2024 (UTC)Reply
If you don't start the project at all, you can't get the contracts, or even know what kind of contracts you really want.
The expected result is a list of recommendations about actions to take in the future. I expect the "action" this year to involve some meetings and a report. Perhaps the report will say something like "In the future, we should get a contract with ____ to ____." WhatamIdoing (talk) 20:48, 7 June 2024 (UTC)Reply
Imagine, using a whole year's budget to make a report about OTHER platforms, instead of improving OUR platform. Theklan (talk) 06:54, 8 June 2024 (UTC)Reply
Let's instead imagine using a tiny fraction of one small team's part of the whole year's budget to make a report about how OTHER platforms could be connected to OUR platform, so we could have more suitably licensed content or more readers or more contributors. Imagine, maybe, that Basque-language websites would like to link to eu.wikipedia.org to provide definitions or other information about words and names they mention. That would "Bring Wikimedia's knowledge and pathways to contribution to" those websites, right? WhatamIdoing (talk) 19:35, 8 June 2024 (UTC)Reply
No, that would bring the malware, external A.I./GPT variants and hackers on other platforms to our platform. Then, you're going to need a really big team, and a whole new budget to fix an entirely avoidable problem. -- Sleyece (talk) 21:39, 8 June 2024 (UTC)Reply
The example I gave is a tool that already exists. It does the equivalent of NAVPOPS or Page Preview, except it works from a different website to read the opening lines of a Wikipedia article. It does not appear to have brought us any bad actors. WhatamIdoing (talk) 01:18, 11 June 2024 (UTC)Reply
If this tool already exists, then we don't need to have it in the annual plan, as it already exists. We are trying here to figure out why the WMF has decided not to innovate, and this is a good example: citing existing tools as innovation. Theklan (talk) 06:03, 11 June 2024 (UTC)Reply
The WMF has decided to innovate. Specifically, they propose that one team spend some time figuring out what sort of innovation would be appropriate.
In response to this, they are being told that they shouldn't start innovating until they have finished enough of the project to have legal contracts in place. Also, we want the community to be involved from the very beginning, and why can't you already tell me exactly what the plan is, in detail?
It would be logical for us to pick one or the other. Either we want them to start innovating, or we don't. Either we want to be involved before there is a detailed plan, or we don't. Which do you pick? WhatamIdoing (talk) 17:32, 11 June 2024 (UTC)Reply
We have been involved in that discussion for years. The WMF has tons of input, from wishes to online research, from conference/summit results to open letters. This is, again, a false dichotomy. The worst part is not that this is a false dichotomy, however. The worst thing here is that none of these discussions matter. There won't be a comma changed from the initial document, and we are wasting your time and mine. The situation is catastrophic, but not serious. Theklan (talk) 18:22, 11 June 2024 (UTC)Reply
That's not what the goal talks about. Sorry. Anyway, Basque-language websites are linking to eu.wikipedia when it is needed, and they would even use the videos we are making and uploading to Commons if they could. Currently they can't, because Commons doesn't allow it. Imagine investing a couple of days to solve that. Instead, we have a budget for making extensions for Chrome. Theklan (talk) 16:11, 9 June 2024 (UTC)Reply
So these websites are looking for free web servers? They want to put the videos on their websites, but they want us to pay for the servers and traffic necessary? WhatamIdoing (talk) 17:34, 11 June 2024 (UTC)Reply
Sorry? WHAT? Are you talking seriously? Theklan (talk) 18:04, 11 June 2024 (UTC)Reply
Yes, I'm quite serious. If they want to use videos you are making and uploading to Commons, then they can download copies to their own systems and use them. Commons allows this. Commons even encourages this. Nobody's stopping them from doing that.
The only thing they can't do is leave it all on Commons. YouTube allows direct embedding (i.e., it looks like it is on your website, but the video is stored at YouTube, and comes with YouTube's advertisements, because that's how YouTube pays YouTube's bills for the storage and delivery of the video on your website). Commons does not do this. Commons says that if you want a free video on your own website, you have to put a copy of that video on your own web server and pay the extra traffic costs yourself. WhatamIdoing (talk) 01:46, 13 June 2024 (UTC)Reply
I honestly think that you should read the Movement strategy. The first line itself is good, but you can read more here.
If it doesn't seem clear, let me mark some of the ideas on why we should make our platform the central infrastructure of free knowledge:
  • "The vision of the Wikimedia movement describes this expanded scope well: “a world in which every single human being can freely share in the sum of all knowledge.”
  • "We are still far from having collected the sum of all knowledge. Most of the content we have created is in the form of long-form encyclopedia articles and still images, which leaves out many other types of knowledge."
  • "Many readers now expect multimedia formats beyond text and images.[20] People want content that is real-time, visual, and that supports social sharing and conversation"
  • "Knowledge as a service: A platform that serves open knowledge to the world across interfaces and communities. (...) We will transform our platform to work across digital formats, devices, and interfaces."
  • "We will continue to build the infrastructure for free knowledge for our communities. We will go further by offering it as a service to others in the network of knowledge."
  • "We will work to ensure that free knowledge is available wherever there are people. (...) We will be a leading advocate and partner for increasing the creation, curation, and dissemination in free and open knowledge." - Theklan (talk) 09:07, 13 June 2024 (UTC)Reply
Theklan, you said: they would even use the videos we are making and uploading to Commons if they could. Currently they can't.
Why can't they? Is there something wrong with their websites that prevents them from hosting a copy of the video themselves?
I have assumed that when you say they "can't" "use the videos", what you mean is "they can't use the videos through the exact and sole method of pasting a bit of HTML code such as this:
<iframe width="560" height="315" src="https://www.youtube.com/embed/MDnyhGLVkKU?si=6dngCoFuiI1XGpFz" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
into their own website". That is, they can use the videos, but they don't want to use them unless they can use this exact method. WhatamIdoing (talk) 15:10, 14 June 2024 (UTC)Reply
Now try to share a video from Commons at your Mastodon account and you'll see where the problem resides. Theklan (talk) 08:15, 15 June 2024 (UTC)Reply
I don't have an account there (or basically anywhere else). Are you telling me that Mastodon does not accept URLs, so you can't post a message that says "I found a cool video. It's at https://commons.wikimedia.org/wiki/File:How_does_Wikipedia_work_%E2%80%93_A_WIKI_MINUTE_4-5.webm" ? WhatamIdoing (talk) 14:15, 15 June 2024 (UTC)Reply
I'm telling you that if I post a YouTube video anywhere in the World Wide Web, in the World Information Highway, while Surfing on the Internet... I can simply watch the video there. You just add a video URL in your Wordpress and badaboom, abracadabra... you can see the video. Not with Commons. Theklan (talk) 18:50, 15 June 2024 (UTC)Reply
It sounds like what you can do is provide a URI, which allows the person to go to the video in their web browser at Commons, and what you want is to provide a URI and have the person watch the video in whatever app they are already using.
What I've been talking about is in websites, like a personal blog or a business website. WhatamIdoing (talk) 14:36, 17 June 2024 (UTC)Reply
Indeed, you are talking about personal blogs. Not me. Theklan (talk) 16:43, 17 June 2024 (UTC)Reply
I believe you need to reread the Foundation's stated goals. No offense. -- Sleyece (talk) 14:48, 12 June 2024 (UTC)Reply
I thought it to be the very goal of the movement - hosting free knowledge for all. BIDROHI Hello.. 14:59, 12 June 2024 (UTC)Reply
Thanks for jumping in, TheDJ and Whatamidoing! @Theklan: as mentioned in their comments above, what we're focusing on with this Future Audiences objective is figuring out how to make it possible for more people to find out about, get information from, and contribute back to our projects even when they're not on our platform. We're doing this because we see that people increasingly prefer to get information from a variety of places, in a variety of formats online, and some (especially younger audiences) are exclusively turning to social apps that might never bring them to Wikipedia or even let them know that it exists. Our approach to address this is to do small, quick experiments off-platform to learn what works.
RE: "Why aren't we improving OUR OWN PLATFORM, instead of thinking on third-party platforms?" This isn't an either/or – we're doing both, and we agree that improving our platform is the top priority for the coming year – hence the three other objectives in this draft plan that address different aspects of this:
  • Platform Evolution for investing in core software needs
  • Contributor Experience for making it easier and better for newcomers and experienced editors to contribute
  • Consumer Experience for giving people more of a reason to stay on the platform longer when they do come to us
I know you are very passionate about improving our own platform, but that you've also experimented with distributing Wikipedia knowledge in other places where people like to spend time and learn online (i.e., through making Ikusgela content available on YouTube and TikTok). For the Future Audiences objective specifically: I’m very curious to hear your thoughts about how else we might do this – e.g., last time we talked, we spoke about experimenting with AI to see if it could be used to assist in remixing Wikipedia and Commons content into short videos that could be used on our projects and on other platforms. Do you still think this could be a fruitful area for experimentation? Any other thoughts about other places or ways people get information online that we haven’t talked about? Very open to ideas and suggestions for new experiments! MPinchuk (WMF) (talk) 19:49, 1 March 2024 (UTC)Reply
Well, this needs a really long answer, because we are reaching to the core of our huge problem.
TL;DR: people go to other platforms for a variety of reasons, one of them being that our platform is not good enough. WMF's core goal is to make our platform better, so fewer people feel the need to go to other platforms.
Let's imagine that I want to create Open Education Resources (OER) in video format and I have to decide where to upload those videos.
YouTube
Pros:
  • Huge platform, extremely popular.
  • Make the video, just upload it, it's done.
  • Constant statistics: views, hot moments, interactions, referrals.
  • Can be shared externally wherever I want: my school blog, Twitter, my Moodle course.
  • Works on all platforms; you don't need an account to see it.
  • I can monetize it in large languages.
  • Subtitling is easy and there are external platforms to make it even easier.
  • Lots of interaction with the audience: comments, discussions, subscribing...
  • Algorithm-based recommendations, based largely on what you have seen.
  • It's free (you are the product).
  • (Virtually) no uploading limits.
Cons:
  • Advertisements.
  • It can't be shared on Wikipedia.
  • I can't monetize it in smaller languages.
  • Interactions can lead to online harassment; moderation is needed.
  • The algorithm can lead to echo chambers.
TikTok
Pros:
  • Very popular with younger audiences.
  • Remixing is extremely easy.
  • Make the video with your mobile, just upload it.
  • Can be shared easily to virtually any other place.
  • Still not completely enshittified.
  • Automatic captions.
  • It's free (you are the product).
Cons:
  • There are some limits to functionality if you are not part of the platform.
  • Videos must be very short.
  • No control over the algorithm.
  • You can't share it on Wikipedia.
  • Swipe culture doesn't allow for a real learning path.
Wikimedia Commons
Pros:
  • It's free.
  • No advertisements.
  • No echo chamber.
  • You can share it on Wikipedia.
Cons:
  • You can't just upload the video: it needs some conversion.
  • Maximum of 100 MB uploads allowed.
  • Extremely zealous copyright review: we prefer videos that have been previously uploaded as free to YouTube, just to be sure.
  • You can't share it anywhere, except on Wikipedia.
  • No commenting, no discussing.
  • No statistics: we can't know if people are actually seeing it.
  • No suggestions based on what you have seen: difficult to find more videos from the same creator or topic.
  • Video2Commons is broken.
  • No one uses it as a video showcase platform: not popular. Discoverability is practically impossible unless you are pointed directly to the video itself.
  • The subtitling platform is difficult to use (not native).
  • Wikipedia users prefer text, and they even delete videos when added to articles.
  • Reusing a video is difficult: it needs double conversion.
The approach with these experiments is that if we just added a way for youtubers to upload to Commons, they would end up uploading to our platform. I highly doubt that this would happen even if you convinced YouTube (Alphabet) to add a button to YouTube so people could do this in an easy, hassle-free, straightforward way. Even if the button exists... what are the benefits? You can't reuse the video, no visibility, no platform, no statistics, you can't add it to other platforms (not even your school MOOC). And that's a lot to assume: the process is complex, you need to convert it, Video2Commons is broken (for years now), and even if you do it you are not making your OER easier to use. Even if you do it, you have to convince User:Random123 that the free music you used in the background is actually free. And you have to do that two years later, when you just don't have the link to the audio track itself. And even if you do that... who is going to watch the video there? Why bother?
Our experience with Ikusgela, and why we are uploading the videos to other platforms, is not because we want to add content to third-party platforms. We are not enthusiastic about it. We upload them to other platforms because there they can be shared in schools, MOOCs, mobiles, social media... and we can learn what is working and what not, what is seen and what not, what is needed and what not. We upload them to other platforms because we want people to learn, and our platform is not the central platform of free knowledge. And it won't be as long as it is broken and unusable. Period. We are convinced OER makers (that's why we are here) and that's why we are uploading the videos (also) to Commons: because we are activists, not because it is easy or because we have more perks. And because we want our videos to be seen on our platforms, and the only way for that is uploading them to Commons. We are ready to suffer the pain, because we are here for that.
The same goes for virtually any other learning object. People and institutions are uploading 3D objects to other platforms because they like them, but also because it is impossible to upload them to our platform. People and educators are building learning experiences using H5P because they can be reused wherever they want, except on Wikipedia. Students are using other annotation software to mark what is relevant in texts because they can't do it on our platforms. People are uploading photos to Flickr because the uploader is good and the experience is (or used to be) warm and designed to encourage photographers to upload more (just pointing them to Magnus' third-party strange uploading system adds an extra layer of strangeness). People are using Brilliant to learn because it is brilliant and we are not. And so on and so on.
There'll always be people outside. Even if you create the best public, free and universal education system, there'll be people who want to learn salsa dancing, take Korean-cooking lessons or practice calligraphy, and they will find it elsewhere. We can try to figure out why salsa is so popular, or even whether our calligraphy courses at school are good enough. But our goal is to make the best public, free and universal education system, not just to try to convince those salsa enthusiasts that they should come to our school and stop learning salsa. If we make our platform better, easier to use, more appealing, and we learn about what people love from those other platforms that could be added to ours, we would have those future audiences with us. The path is the opposite here. If more and more people are going to other platforms, and younger people are definitely there, it is because our platform is (in comparison) worse every year. We are not going to convince them by adding a button to Flickr or YouTube (a button that, by the way, we can't add, because it is not our platform); we are going to convince them if our platform is better.
Sorry about the text-wall, but this needed an explanation. Theklan (talk) 10:54, 2 March 2024 (UTC)Reply
This table is a great summary of some of the issues we face when thinking about other places where people can share knowledge online. To summarize a bit further, I took the liberty of doing some clustering of all the benefits you listed that the popular commercial rich media apps (YouTube, TikTok, Instagram) offer, and elaborating a little further based on research conversations with these creators:
  1. Monetary incentives: direct payment from platform, brand deals/sponsorships, and/or opportunities to get exposure that can lead to better employment opportunities offline
  2. Two-way interaction with audience: statistics on your content and ability to talk to them allows you to immediately see when you've made an impact (through seeing stats on views/likes/comments/follows), learn from and adjust your content to your audience
  3. Algorithmic distribution: means you may not have to build a following over a long time, wait for people to search for & find you - if you go viral, you can get a huge audience right away
  4. Easy to use and be creative in: can choose which format you want to use (long video, short video, static images, audio), very simple to make and upload your content, lots of creative tools in-app or externally that are free to use
Your feedback is focused on the last point: that if we made it easier to upload more kinds of content in more modern formats to Commons, contributors of knowledge content would welcome the opportunity to switch away from those profit-motivated, algorithmically-biased apps and contribute to Commons instead. What we don't know is to what extent the things in categories 1-3—features that are very different from how Wikimedia projects operate today, and/or may fundamentally clash with our values and principles—may actually be more important incentives for most of these contributors.
You didn't touch on the pros and cons for consumers of the content, but there is another long list of reasons why younger audiences will gravitate to learning on YouTube/TikTok/etc. and not Commons/Wikipedia, even if Commons hosted exactly the same kinds of content (some of which are similar to the contributor incentives, e.g., having an algorithm push you things you want without having to search, and others, like the ability to get a variety of content that's not just educational—e.g. entertainment, content my friends or celebrities are making, etc.—all in one place).
Experimenting off-platform, where all those other things are already built out and where the younger audiences already are, allows us to see if there is an opportunity to fulfill some parts of our mission (sharing free knowledge with anyone who wants it, encouraging anyone who wants to join us) in those environments. It also allows us to learn more and take back applicable learnings to our platform (e.g., maybe there is a values-aligned way to enable more targeted content recommendations that creates a better experience for consumers and contributors – this is the kind of thing that might go under the Consumer Experiences objective). MPinchuk (WMF) (talk) 21:03, 4 March 2024 (UTC)Reply
No, my feedback is not centered in the last point. Theklan (talk) 21:10, 4 March 2024 (UTC)Reply
Hi @MPinchuk (WMF): nice to see this discussion. I didn't read that specificity into TK's comments either. There are natural ways we could do each of the things you mention, which don't clash at all with our values. Recognition: exposure, portfolios, direct thanks; Two-way interaction: stats on views, adding likes + follows, more prominent aggregate comments; Algorithmic distribution: native browsing interfaces that learn from your past likes and experience; Ease of use: everything mentioned above, plus best-effort transcoding, tagging, description, &c. We are spending most of our energy looking inward rather than implementing first passes at these things. –SJ talk  22:30, 28 March 2024 (UTC)Reply
Well, I don't think that tracking what people read and using that to "learn from your past likes and experience" to change my experience (e.g., giving different search results to different readers) is compatible with our values. WhatamIdoing (talk) 03:15, 31 March 2024 (UTC)Reply
Suggesting media from the same author, topic or tags after you see a video could be a good solution without tracking. Also, you could like a video if you like it and have it saved in your personal user space. I don't know if that is invasive, but I think it is within our values. Theklan (talk) 07:28, 31 March 2024 (UTC)Reply
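For what it's worth, a "related media" suggestion of that kind can be computed purely from the file's own categories or tags, with no per-user tracking at all. A minimal sketch under that assumption; the in-memory catalogue and file names are illustrative, not an existing Commons feature:

# Content-based "more like this": rank other files by how many categories/tags
# they share with the file being viewed. No user history involved.
# The catalogue below is an illustrative stand-in for real file metadata.

CATALOGUE = {
    "Video_A.webm": {"Basque culture", "Education", "2023 videos"},
    "Video_B.webm": {"Education", "Chemistry"},
    "Video_C.webm": {"Basque culture", "Education"},
}

def related_media(current: str, limit: int = 5) -> list[str]:
    """Return other files sorted by the number of categories shared with `current`."""
    current_tags = CATALOGUE[current]
    scored = [
        (len(current_tags & tags), name)
        for name, tags in CATALOGUE.items()
        if name != current
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]

print(related_media("Video_A.webm"))  # ['Video_C.webm', 'Video_B.webm']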
The Wikimedia Foundation has tried multiple times to create a "make a list of your favorite articles/images in your userspace" system, and none of the attempts have been a notable success so far. See mw:Gather for one of them. It was killed because it was rejected by the English Wikipedia. WhatamIdoing (talk) 05:39, 1 April 2024 (UTC)Reply
There are three possible answers to that:
  1. 9 years have gone, the community insights and taste could be different.
  2. Media are not articles, building a powerful media viewer is a very different task.
  3. Don't improve English Wikipedia; there are 800+ other Wikimedia projects.
Theklan (talk) 07:13, 1 April 2024 (UTC)Reply
I think the answer they're going with is:
  • Media-heavy shareable content (think: Instagram) is cool, so we should have some of that.
The problem they need to solve is:
  • Who is willing to moderate this content?
The dream is that teachers will use these tools for educational materials. The reality is that they will sometimes be used for harassment and libel. The English Wikipedia has said that it does not want to deal with content like "My teacher's favorite sex positions". Does yours? WhatamIdoing (talk) 16:16, 1 April 2024 (UTC)Reply
I would need the domino meme to link having a "like" button, statistics, and some kind of recommendation based on structured data/categories/authors/topic to a feature that ends at "My teacher's favorite sex positions". Theklan (talk) 18:04, 1 April 2024 (UTC)Reply
  • If the 'likes' are private, and exist solely to help you remember the names of the files, then Special:Watchlist already exists.
  • If the 'likes' are private, and are used to change search results ("some kind of recommendation"), then we run into real problems with our values (I don't want someone to 'like' an article on w:Cancer and then get a feed full of fake cures for cancer, which is what my friends tell me happens on algorithm-based social media), transparency (Why did it recommend this article to me? Why is it hiding that article from me? Who decides which articles I get to see?), and potentially the sort of political–legal trouble that social media companies get from politicians over the effects that their algorithms have on promoting misinformation and extremism.[1][2] See also w:en:Wikipedia:Perennial proposals#Share pages on Facebook, Twitter/X etc. and several of the discussions linked therein.
  • If the 'likes' are public, then they have to be moderated. At least for the English Wikipedia, with its millions of articles, lists have to be moderated even if the user can't type a title or a single word, because if you can make a list at all, then you can make a list that says w:My w:Teacher's w:Favorite w:Sex positions – or worse.
WhatamIdoing (talk) 06:22, 2 April 2024 (UTC)Reply
I'm not talking about articles nor about English Wikipedia. Theklan (talk) 07:33, 2 April 2024 (UTC)Reply
Do you want Commons to be its own Social Media platform? Are you suggesting building it out as a competitor to YouTube or TikTok? -- Sleyece (talk) 13:00, 3 July 2024 (UTC)Reply
We are not competing. We want to be the CENTRAL INFRASTRUCTURE of free knowledge. Theklan (talk) 07:59, 4 July 2024 (UTC)Reply
If we can further accomplish that by transforming Commons into a Social Media app, I'm game. Some of the local Admins over there would pitch a fit, though. -- Sleyece (talk) 12:44, 4 July 2024 (UTC)Reply
Having stats and a like button for media would be enough. Anyway, there's no planning for that. Theklan (talk) 13:07, 4 July 2024 (UTC)Reply
The current plan is to put a like and share button on external platforms the Foundation doesn't have a contract with. So, there's no practical plan to make that a reality either. At least if we run social through Commons we're dealing in the realm of practical reality. -- Sleyece (talk) 13:20, 4 July 2024 (UTC)Reply
Theklan, which stats?
What is the like button supposed to do? If you just want people to feel like they can click a button, then we could put Like into the UI and we're done. But I assume that you want it to do more than just give people a place to click. So what's it supposed to do? Show that people like funny pictures better than informative ones? Pictures of white people better than pictures of Black people? Pictures of pretty women better than ugly women?
Social media platforms are removing or downplaying their old like buttons because the effects that they have on readers/viewers (I don't need to think; I just need to decide whether I like it) and on posters (Nobody's 'liked' my post during the last hour! My life is worthless!) are bad. Why would we want to have such a thing? WhatamIdoing (talk) 17:22, 4 July 2024 (UTC)Reply
Why would we want to put such a thing on someone else's platform would be my first question. My second question would be when does the Fiduciary Responsibility of the Board of Trustees and the CEO come into play? Can we plan for things that are impractical at a self evident level right up until it's time to start cutting checks? -- Sleyece (talk) 17:31, 4 July 2024 (UTC)Reply
I think Theklan wants a Like button on Commons. I've not personally seen any plans for the WMF to pay for putting a Like button on anyone else's website.
(Sometimes, impractical experiments are valuable. Consider the value of a w:concept car: you don't plan to sell one, but you do plan to learn from it.) WhatamIdoing (talk) 00:23, 5 July 2024 (UTC)Reply
Why are you putting in my mouth things that I haven't said? Theklan (talk) 06:55, 5 July 2024 (UTC)Reply
You said Having stats and a like button for media would be enough. I have said that "I think" you want that Like button to be on Commons. Of course, I might be wrong. Maybe you want the Like button to be on a third-party platform. Why don't you tell us? WhatamIdoing (talk) 15:54, 5 July 2024 (UTC)Reply
Who's "us"? I thought this was your personal account. -- Sleyece (talk) 18:14, 5 July 2024 (UTC)Reply
Us = the eight people who have already commented in this discussion, and the unknown number of lurkers, present and future, who will read it. WhatamIdoing (talk) 01:56, 7 July 2024 (UTC)Reply
If the Foundation spends the entire budget on the equivalent of a concept car, that's a problem. -- Sleyece (talk) 18:18, 5 July 2024 (UTC)Reply
Why are you canvassing for the worst option every time? Stop, please. Theklan (talk) 06:54, 5 July 2024 (UTC)Reply
You could answer my questions. You said that you want stats; which ones? You said that you want a Like button; what's the goal? WhatamIdoing (talk) 15:52, 5 July 2024 (UTC)Reply
Are you going to implement it? You have a full explanation in this discussion. Even a table with the rationale. Theklan (talk) 16:08, 5 July 2024 (UTC)Reply
"To ensure that Wikimedia becomes a multi-generational project, we will test hypotheses to better understand and recommend promising strategies – for the Wikimedia Foundation and the Wikimedia movement – to pursue to attract and retain future audiences." It's a stated Foundation goal. -- Sleyece (talk) 19:32, 5 July 2024 (UTC)Reply
Feels like a solution in search of a problem. The resources would be better spent improving the areas where we have issues or where we could be greater, not trying to chase trends. IMO. stjn[ru] 15:20, 17 April 2024 (UTC)Reply
@Stjn: Thank you for reading and thinking about these objectives!
I lead the Future Audiences initiative, and I'm hearing two aspects to your comment (please correct me if I'm wrong) that I'd like to address:
  1. You don't see that there is a problem or risk to our projects or communities from new technology and/or changing trends in user behavior online.
  2. You don't think that the Wikimedia Foundation should invest significant resources in reacting to or understanding new technologies or trends (because they might be passing fads), and should instead focus on addressing known issues for our current contributors, readers, etc.
Let me start with the second point, because here I think we more or less agree. You can see plenty of examples today of technology platforms that are approaching new technologies and trends in the way that I think you are worried about: for example, making impulsive million-dollar bets on new AI products that may or may not live up to the hype of their press releases. That is explicitly not the objective of Future Audiences. The way that the WMF Product & Tech department is thinking about it is that we need to understand the potential impact (positive and negative) that things like generative AI may have on our projects and communities, quickly and with minimal resource investment. The objective of Future Audiences is to quickly/cheaply scout to see if there are promising new things this technology allows us to do that can help us achieve our mission of sharing free knowledge and growing/sustaining our communities in the future, and then recommend that we discuss and decide, in partnership with the Wiki(p/m)edia community, if these are important areas to invest in further. In short, the operating assumption is that we need to and will be investing the bulk of WMF Product & Tech resources in improving and sustaining things for current audiences unless/until there is evidence from small-scale FA experimentation that a more significant new investment in future-facing products/technologies is needed.
And, to your first point: here I (and many others in the Wikimedia movement) disagree and think there is significant risk to the continued existence of our projects and communities if we devote no resources or attention to understanding what's happening in the broader technology space we inhabit, or to exploring how we should (or shouldn't) react to new technologies and trends. "Innovating in free knowledge" was one of the recommendations that emerged from the Movement Strategy process, and this and other calls from different communities who are concerned about decreasing readership and participation from younger audiences guides the work of Future Audiences today.
Does this (very long-winded, sorry!) explanation help clarify anything for you? Am I correctly hearing/summarizing your concerns? Please let me know! MPinchuk (WMF) (talk) 21:22, 17 April 2024 (UTC)Reply
I'm not against innovating in free knowledge. In fact, I support trying to innovate. What feels like an issue to me is the intent as described: instead of spending time on improving things that need to be improved and innovating in that way (and I can't say there aren't a lot of areas to improve in Wikimedia-land!), the time and effort are going to be spent on 'experimenting' with unknown returns on stuff that might not even have a real use either to communities or to readers. In the strategy recommendations there is a really good thing described in the first list item (related to one aspect): Consult with communities and experts. See what feels like the area to innovate, where we might be lacking, etc. The current wording of this point feels more like the WMF feels they aren't 'innovative' enough, so they will try something like partnering with OpenAI to translate articles with ChatGPT so we feel on trend. Instead of meeting people where they are and trying to innovate and excel at that. stjn[ru] 21:47, 17 April 2024 (UTC)Reply

FA1.1

edit

Ok... so what is the plan? Why did we discuss, if the key result is even more vague than the OKR? Could you explicitly say what you are planning to do, so we can discuss real issues? -Theklan (talk) 14:00, 26 March 2024 (UTC)Reply

@Theklan The plan is to build on what we have learned from experimental research on AI and social apps this year and continue to conduct quick, low-resource experiments. I just published a Diff post that covers what we have learned this year in much more detail, but in brief:
What we’ve learned so far:
  • Most Internet users aren’t yet turning to ChatGPT for knowledge, but AI does a reasonably good job of finding and summarizing relevant information from Wikipedia, and people report trusting information they get from AI more if they know it was sourced from Wikipedia. (Full research report: ChatGPT plugin experiment)
  • Younger audiences increasingly prefer to learn and get aggregated information from people, rather than from impersonal websites. There is a community of successful knowledge-creators on apps like TikTok who serve this audience and use facts and content from Wikimedia projects. These creators are not likely to become contributors to our projects, but they might help us bring more awareness of Wikimedia knowledge and community to a new generation. (Full research report: Social Creator research)
Where we’re going next (remainder of this fiscal year and next fiscal year):
  • Planned next experiment for remainder of this fiscal year: Can we harness Wikipedia’s knowledge to help readers understand the reliability of the information they consume online? We are planning to test this with an experimental extension for the Chrome web browser that we are calling "Citation Needed." (More here.)
  • Other areas we might explore next year (pending results of the Citation Needed experiment):
    • Other tests based on the “Citation Needed” concept of off-platform knowledge evaluation, i.e.: Can we get people to contribute facts that are present on reliable websites but missing from Wikipedia? Can we evaluate the claims made in rich media – i.e., videos, memes – and see if they are present in Wikipedia?
    • Experiments with creating remixed Wikimedia content (e.g. something like this experimental “Did You Know?” dataset created by a Wikipedian) that can help surface engaging content for new audiences at scale
    • Experiments with reaching audiences more directly in the places they are sharing and discussing knowledge, i.e., via bots on messaging apps
How to stay updated and provide input on new ideas for experiments for next fiscal year:
Ok, now it is a little bit clearer. Still disappointing and contrary to what we really need, but clearer. I explained my points above, and even if your answer didn't address the main point, I think that I can't explain it again better than there.
Anyway, there are some obvious issues here. It is not clear what we are trying to solve. A communication problem? A brand issue? A technological problem? The focus goes from one to the other.
Most of the ideas are brand/communications in nature. If new audiences don't know about Wikimedia, and making some limited experiments on social media seems a good way to raise awareness... then it is pretty obvious that we need a social media strategy. Right now we don't have one, and some of us have tried to help with that, but they have refused to take that help. The Basque and Catalan Wikipedia Twitter accounts have more engagement than the official Wikipedia one. That is the issue we should be solving, and an Instagram post won't solve it. If the team is interested in getting help so that it stays relevant in the future, some of us are here to help. Anyway, this is not related to infrastructure, and that's why it is weird to try to solve a social reputation issue with technology.
The other idea, one that I explained with the video platforms example, is that people are not going to other platforms because they don't know us, but because our experience is worse. The subset of people who don't know Wikipedia and will still install a Chrome extension (why not Firefox, by the way?) so that the extension can tell them that Wikipedia rocks is extremely small. I won't say that it is zero, but it could be counted by hand. Is the extension an interesting idea? Well, yes, if we didn't have anything else to solve. And we have tons of things to solve if we want to make our platform attractive for new audiences. Native video is one of those, but not the only one. Theklan (talk) 18:10, 29 March 2024 (UTC)Reply
Still no answer here. Theklan (talk) 08:00, 28 April 2024 (UTC)Reply

PES1 Efficiency of operations

edit

Objective

edit

Make the foundation's work faster, cheaper, and more impactful.

PES1.1

edit

PES1.2

edit

PES1.3

edit

PES1.4

edit

This is an awesome sentence. Can I fix the OWID gadget or not? Is it faster and cheaper to let gadgets die on the vine? Breaking interactivity will be impactful, that's true. -- Sleyece (talk) 13:47, 7 June 2024 (UTC)Reply

General comments

edit

@Theklan -- thanks for reading the draft objectives closely and for your comments. We’ll continue responding to your comments over the next few days, but I saw that you mentioned in a few places the wording being vague. I just wanted to note that the vague wording at this point in the plan is intentional. Our plan will be made of three layers: objectives, key results, and hypotheses, and we’ll be specifying and sharing our thinking as it continues to get more specific in the coming weeks. Objectives are the highest level, that convey directionally what we’re thinking about for a given area. Key results will be the metrics we'll use to evaluate whether we are accomplishing the objectives. And hypotheses will be the specific projects we're planning to try to meet the key results. Your comments about the objectives help us understand whether it seems like we’re going in the right direction in general. -- MMiller (WMF) (talk) 01:59, 6 March 2024 (UTC)Reply

If the vagueness is done on purpose, then the document is very well designed, because everything is misty. We can't know whether we are accomplishing the objectives if the objectives can be fulfilled by doing one thing or the opposite. Anyway, if this is not the moment to discuss the lack of vision, goals and general enthusiasm, then it would be good to point out when we can talk about real objectives. Because I suspect that key results won't be the moment to propose things, because they won't be specific, and hypotheses will be too late. Can we know when solving things will be proposed, so we can start discussing what to do? Theklan (talk) 07:00, 6 March 2024 (UTC)Reply
I'm reading the Key Results and the question remains. The OKRs didn't have a vision; now the key results don't have a mission. Theklan (talk) 13:52, 26 March 2024 (UTC)Reply
@Theklan -- thank you for closely reading the key results and for asking questions.  The reason the key results don’t indicate specific projects is that we try to first lay out our goals before we choose projects to achieve them.  That helps us make sure we have the right goals in mind before we get excited about projects that may not actually be important.  In looking at the key results, the main question we want to think about is, “Are these the right goals?”  That said, we do have some rough thinking about the projects we’ll pursue to achieve those goals, and you’ll be hearing from the various staff who own the specific key results, giving more details on their thinking at this point.  The next step of our plan is the “hypotheses”, which are the detailed work items we will start on to try to achieve the key results.
But I know that you're most curious about our plans for graphs and interactive content and that the key results don't mention that. First, I'll say that many of the key results are focused on the future and do include innovation needed by newer users of the internet. But the key results are not the full list of all the work we will do – they don't include all the maintenance work: keeping things running, fixing bugs, refactoring code, etc. Right now, we're thinking about the graphs situation as part of that "essential work" -- it's something that's broken that would need to be fixed (or rebuilt). We are working on a possible plan for graphs, but I'm not sure yet what its scope will be or when we would resource it if we proceed with that plan. It may end up being a project complex enough that it would need a new key result, and then you would see it added here. I am working now with a product manager and principal engineer on this. We've talked with several volunteers (like yourself) in the past few weeks to help us figure out a sensible scope. We'll be posting more about it in the coming days and weeks. MMiller (WMF) (talk) 04:37, 28 March 2024 (UTC)Reply
Thanks for the explanation. The lead section of the OKR page says "The underlying "Hypotheses" for each KR will be updated on the relevant project/team's wiki pages throughout the year to be updated throughout the year as lessons are learned." Reading that, I assumed that those (wrongly named) hypotheses would only be explained after the annual plan is closed, not before. If you say there will be a moment to talk about the specific projects, then I'll wait here.
That said, graphs being broken is not my only concern here. It is something that should have been solved months ago and is still pending, but the graphs issue is not the only problem we face when we talk about the obsolescence of our platform. Let's see what the teams propose to tackle this in the upcoming weeks. Thanks. Theklan (talk) 06:57, 28 March 2024 (UTC)Reply
Ok, now that we know that the graphs won't be solved and that any proposal for interactive content will be stopped by the Foundation... what are the steps for adding interactive content? All we get when we ask this is circular referencing. Can we have any clarity? Will the WMF work on interactive content in the next year? (yes or no) Theklan (talk) 08:02, 28 April 2024 (UTC)Reply
I forgot to comment on this in more depth earlier, but it is pretty damning and disappointing to see the complete lack of priorities related to the accessibility of the website, even though many people are begging the WMF to do things like improving captchas (touched on above) and making the new default Vector design more accessible (the so-called ‘Zebra discussions’). This shows a complete disregard for making the knowledge available to everyone, for no apparent reason. It seems like there is no one at the WMF actually advocating for accessibility to be a priority, and because the WMF is not a commercial entity, it can just ‘ignore’ accessibility, even though that is completely unacceptable both to our current editors and to the readers we are supposed to serve. Instead we get poorly named features like Accessibility for reading, which mostly has nothing to do with the term ‘accessibility’, while the work that needs to be done for us to reach everybody remains unprioritised and neglected. Frankly, I do not see many of the listed priorities as in any way more important than, for example, fixing the captcha issue or conducting user testing on the current new Vector design, given the feedback that it is inaccessible to neurodivergent people. In actuality, an entity as big as the WMF needs a dedicated web accessibility team, not just some people in the organisation who happen not to ignore such concerns. stjn[ru] 20:40, 5 May 2024 (UTC)Reply