Grants talk:Programs/Wikimedia Community Fund/Rapid Fund/Reader views on articles (ID: 22208975)
Feedback from the ESEAP regional funds review team
Hi Vaticidalprophet, thank you for submitting this Rapid Fund request.
We appreciate the clear goals set for this proposal and we are excited about the outcomes this research may bring. Additionally, we believe that similar research can support the Movement in understanding how to improve the user experience on Wikimedia projects. We reviewed your proposal and we have a few questions which you can find below.
Questions:
- What is the research methodology that will be used for the research? Will you be implementing activities online or offline?
- Why have you chosen to conduct the research single-handedly? Do you have any other team members or volunteers who would be willing to support you?
- Could you elaborate more on why you chose images, article length, article ratings, and the quality assessment system as the main variables of the research?
- One of the main objectives in this project is improving user experience. Could you please elaborate on how the outcomes obtained from the survey will help you to achieve this goal?
- Do you have relevant academic or non-academic experience in running similar research?
- The target audience mentioned is very broad. Are there any specific target groups you will work with in this project?
- We usually do not fund projects that include direct payments to participants. Instead of giving prizes in the form of cash, would you please consider giving, for example, merchandise from the Wikipedia store, vouchers for bookstores, etc.?
- How would you share the outcome and impacts of your research with the Wikimedia movement? You mentioned that you discussed this proposal with community members onwiki. Would you please share the link to that discussion?
Please add your responses on this discussion page by 1 August 2023. Looking forward to hearing from you.
On behalf of the ESEAP regional funds review team.--DSaroyan (WMF) (talk) 11:01, 25 July 2023 (UTC)
- Hi DSaroyan (WMF)! Sorry for the nearly-late reply -- I've been thinking about these.
- What is the research methodology that will be used for the research? Will you be implementing activities online or offline?
- This is a qualitative online survey approach based around showing a subset of relevant articles to a self-selected online survey audience (who, as screening criteria, identify as readers of the English Wikipedia). Participants will be asked to read through the full articles and answer questions on such topics as:
- how informative they found the article
- how they thought it compared to other articles in the study
- how well-written they considered the article to be
- whether they perceive any obvious missing, mishandled, or overlooked content (open-ended; this could include elements of somebody's life they felt were missing, POV issues they subjectively considered present, article structure such as headings and subheadings, etc.)
- They will also be given a short introduction to the project's quality-assessment system and asked where, based on the criteria, they would place the article.
- Given the number of articles involved, I would ideally like to set this up across multiple surveys (e.g. inviting participants to answer surveys spaced days or weeks apart, or sending different versions to different participants).
- Do you have relevant academic or non-academic experience in running similar research? + Why have you chosen to conduct the research single-handedly? Do you have any other team members or volunteers who would be willing to support you?
- I'm answering these together, because they overlap. I have prior experience assisting with mid-to-large-scale online surveys (a friend regularly used Reddit's /r/samplesize as a participant pool for independent scholarship over a couple of years, c. 2017-2019, and I assisted him with survey design and analysis). I accordingly have a good working sense of what goes into running a survey. I am currently working independently, but have informal connections I can draw on for assistance if needed, such as for analysis of the more quantitative elements of the results (e.g. statistics on what ratings participants assign to articles).
- Could you elaborate more on why you chose images, article length, article ratings, and the quality assessment system as the main variables of the research?
- The former two are subjects that frequently come up in discussions of the English Wikipedia's articles and their subjective or 'objective' (assessed) quality. The choice to research images is particularly informed by the results of the defunct Article Feedback Tool. While the AFT was obviously controversial amongst editors, one theme that stands out to me when reviewing its results is how frequently the "lack of images" complaint came up. This is something difficult for the English Wikipedia to address, given how the non-free images policy works, so it's important to research how this constraint affects readers and what could be done to mitigate it if the effect is severe. (For instance, if readers are found to consistently rate unillustrated articles lower than illustrated ones, that has implications for projects such as Le sans images and for outreach to BLP subjects to donate photos.)
- Article length, meanwhile, is an active subject of discussion. Currently, the way size guidelines are interpreted makes it extremely rare for articles past ~10k words readable prose to pass featured article candidacy, or retain such status when nominated for review. As the linked discussion demonstrates, there is extremely little research on reader length preferences, and it would be exceptionally useful to study this.
- Quality-assessment processes are considered highly important within internal Wikipedia culture, but are poorly understood by readers. The assessment icons don't even appear on mobile, and even on desktop many readers never notice them. One focus of this research is to see how editor perceptions of quality match reader perceptions. The featured and good article guidelines still look much like they did at their genesis in the early-to-mid 2000s, and haven't received meaningful research scrutiny to see whether they actually match what readers are looking for. Researching this is useful both for informing readers that such processes exist (something people in e.g. GLAM outreach report is appreciated knowledge when GLAM employees learn it) and for seeing whether these criteria actually match reader assessments.
- One of the main objectives in this project is improving user experience. Could you please elaborate on how the outcomes obtained from the survey will help you to achieve this goal?
- Much of this is addressed in the prior section. For instance, we can use data from this research to make decisions about article size, hopefully improving reader experience as a consequence -- if we find (for instance) that readers want either substantially more or less length-based splitting than is current practice, we can work that into the guideline. Similarly, I've mentioned two examples of ways we could work to improve image coverage if we find readers have strong image preferences. This survey is also useful as a basis for further research. For the length example, if there's a clear indication about length preferences, I plan to pursue further where exactly those preferences lie (e.g. if readers prefer more detailed/less 'summary-style' articles than editors do, at what point does an article become "too long"?).
- The target audience mentioned is very broad. Are there any specific target groups you will work with in this project?
- I plan to advertise the survey in online venues (e.g. social media, websites focused on survey-taking; if there's WMF interest/approval, possibly a sitenotice to logged-out readers?). This selects for a younger and more English-fluent audience. I plan to focus on such an audience for now, given that younger people tend to be more internet-native and, for example, more inclined than older audiences to treat Wikipedia as "more reliable". Older readers and readers with limited English fluency would be interesting focus areas for further research.
- We usually do not fund projects that include direct payments to participants. Instead of giving prizes in the form of cash, would you please consider giving, for example, merchandise from the Wikipedia store, vouchers for bookstores, etc.?
- I'll consider anything that makes the plan more workable. I'm much more used to surveys with cash incentives than other kinds, but can hopefully find a suitable alternative. Are there specific non-cash incentives that survey-related grantees have found successful for a broad audience?
- How would you share the outcome and impacts of your research with the Wikimedia movement? You mentioned that you discussed this proposal with community members onwiki. Would you please share the link to that discussion?
- I plan to create a Research:Projects summary of the results, which I will discuss on enwiki in relevant fora (e.g. areas related to the featured and good article projects, Manual of Style guideline talk pages). My hope is to start a conversation about the applicability of these results among editors highly involved in such processes, ideally allowing them to reflect reader preferences better. Onwiki discussion has been relatively sparse, as the period when I was formalizing the grant was one of extreme disruption in my life, such that my priorities needed to shift for a while, but discussions around the plans did occur here and here. Vaticidalprophet (talk) 17:34, 31 July 2023 (UTC)
Funded
Hi Vaticidalprophet, thanks for your response. We have approved your Rapid Fund request in the amount of 1,000 USD. In the future, we recommend applying for the Wikimedia Research Fund if you want to extend your research. Regarding incentives, vouchers for various online or offline stores usually work well. You can even allow participants to select the voucher type or store. --DSaroyan (WMF) (talk) 07:51, 1 August 2023 (UTC)