Research:Characterizing Wikipedia Reader Behaviour/Taxonomy of Wikipedia use-cases
We built a robust taxonomy (categorization scheme) for Wikipedia readers' motivations and needs using a series of surveys. Below we present the taxonomy of Wikipedia readers, some recommendations for using it, and the methods used to design and validate it. If you prefer to read the peer-reviewed publication (in English) behind this research, see Why We Read Wikipedia.
See Prevalence of Wikipedia use cases for more information about survey results.
Taxonomy of Wikipedia readers
The following three-question survey, together with its answer options, constitutes the taxonomy of Wikipedia readers. The questions correspond to the three dimensions of motivation, information depth, and prior knowledge.
The taxonomy

Survey title:
Why are you reading this article today?

Survey questions:
I am reading this article because ... (select all that apply)
[] I have a work or school assignment.
[] I need to make a personal decision based on this topic (e.g., to buy a book, to book a trip).
[] I want to know more about a current event or development (e.g., a football match, a recent earthquake, somebody's death).
[] the topic was referenced in a piece of media (e.g., TV, radio, article, film, book).
[] the topic came up in a conversation.
[] I am bored or randomly exploring Wikipedia for fun.
[] this topic is important to me and I want to learn more about it (e.g., to learn about another culture).
[] other ...

I am reading this article to ...
( ) look up a specific fact or get a quick answer.
( ) get an overview of the topic.
( ) get an in-depth understanding of the topic.

Prior to visiting this article ...
( ) I was already familiar with the topic.
( ) I was not familiar with the topic and I am learning about it for the first time.
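If you want to work with the taxonomy programmatically (for instance, to validate or tabulate survey responses), the following minimal sketch encodes the three dimensions and their answer options as a plain Python data structure. The structure and the helper function are our own illustration, not part of the research instrument; the short option labels match the dimension summaries listed under Stage 1 below.

```python
# A minimal sketch encoding the three reader dimensions as data. The dimension
# and option labels follow the survey above; the structure itself (field names,
# the "multi"/"single" flags, the helper below) is illustrative only.
TAXONOMY = {
    "motivation": {
        "select": "multi",   # respondents may pick all options that apply
        "options": [
            "work/school project", "personal decision", "current event",
            "media", "conversation", "bored/random", "intrinsic learning",
            "other",
        ],
    },
    "information need": {
        "select": "single",
        "options": ["quick fact lookup", "overview", "in-depth"],
    },
    "prior knowledge": {
        "select": "single",
        "options": ["familiar", "unfamiliar"],
    },
}

def is_valid_response(response: dict) -> bool:
    """Check that a survey response respects the answer constraints above."""
    for dimension, spec in TAXONOMY.items():
        answers = response.get(dimension, [])
        if any(answer not in spec["options"] for answer in answers):
            return False
        if spec["select"] == "single" and len(answers) != 1:
            return False
        if spec["select"] == "multi" and not answers:
            return False
    return True

# Example: a response with two motivations and one value per single-choice dimension.
assert is_valid_response({
    "motivation": ["current event", "conversation"],
    "information need": ["overview"],
    "prior knowledge": ["familiar"],
})
```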
The taxonomy in more languages
We initially created the above taxonomy for English Wikipedia readers and then translated it into 13 languages. If you plan to translate the taxonomy into your language, we recommend that you carefully read the process we followed, because the results of any survey you run based on the taxonomy will, among other things, depend heavily on the exact text you use.
- A volunteer editor from each of the 13 languages worked with the researchers on translation. Some volunteer editors chose to have more than one person working on the translation, to be able to brainstorm each phrase or sentence with another native speaker.
- The volunteer editors translated the survey from English to their language and contacted a researcher on the team once the translation was ready for discussion.
- The researcher reviewed the translations sentence by sentence. The volunteer editors were asked to read from their translated text and translate it back into English on the spot. The researcher would then compare this impromptu verbal back-translation with the actual English text. Many good conversations came out of this approach, as the volunteer translators were very eager to get the translations right.
- While the majority of the translations render the exact English text in the local language, in some cases a literal translation would have been meaningless. In those cases, the researcher worked with the volunteer editors to find the best solution, which usually involved rewording one or more phrases.
- The examples in the motivation question were sometimes changed to match the cultural context. For example, while soccer may be a good example of a current event for an English speaker, other examples work much better for Hindi readers.
If you're going to repeat the same study
If you are going to repeat the same study, you may find the following information useful. It describes what we told the survey participants before they clicked the "Submit" button and the confirmation message they received after they submitted their survey.
Description visible to the user before clicking the submit button in the survey:
Click the “Submit” button at the bottom of this page to record your answers. By clicking the “Submit” button, you give us permission to link your answers to this survey with other information about your browsing behaviour on Wikipedia (standard information that is collected about all readers in accordance with our privacy policy [1]). This information is used by the Wikimedia Foundation for research purposes and to improve Wikipedia. For more information, read the privacy statement for this survey [2]. [1] https://meta.wikimedia.org/wiki/Privacy_policy/nl [2] https://wikimediafoundation.org/wiki/Survey_Privacy_Statement_for_Schema_Revision_15266417/nl
Confirmation page, visible to the user after submitting their response:
Thank you for completing this survey. Read more about this research and its results, when they become available (in English), at: https://meta.wikimedia.org/wiki/Research:Characterizing_Wikipedia_Reader_Behaviour
Wikipedia reader taxonomy in practice
If you are going to use the above Wikipedia reader taxonomy in practice, please keep the following points in mind:
- The taxonomy is designed based on the information provided by readers when they are interacting with Wikipedia, more specifically reading a Wikipedia article. As a result, we recommend that a survey based on this taxonomy be triggered at the article level.
- Make sure you randomize the order in which the questions appear, as well as the order in which the response options appear within each question; otherwise, position bias may be introduced into your responses (see the sketch after this list).
- Make the response to all questions required.
- At the time of this research, we used Google Forms to conduct the survey: a QuickSurvey widget was triggered for a percentage of Wikipedia readers, who were invited to take part in the survey.
- We recommend that you use the questions and response options in this taxonomy as they are presented here (except for the examples in the motivation question, which may be adapted to the local context in which the survey runs). This makes it easier to compare your survey results with the results reported as part of this research, and ensures that you use questions that have been heavily vetted by the research behind this project.
- Leave a text box for the Other option in the motivation question to allow users to explain what they mean by choosing this option. This option is our way of tracking, over time, whether readers introduce new motivations that are not currently captured by the list of motivations.
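As a minimal illustration of the randomization advice above, the sketch below shuffles both the question order and the per-question option order before a survey is rendered. It is not tied to any particular survey tool (Google Forms and QuickSurveys are configured through their own interfaces); the data layout and function names are hypothetical, and keeping the Other option pinned last is our own choice rather than a requirement of this research.

```python
import random

# Hypothetical sketch: shuffle question order and option order per respondent
# to reduce position bias. Question texts follow the taxonomy above.
QUESTIONS = [
    {
        "text": "I am reading this article because ... (select all that apply)",
        "options": ["work or school assignment", "personal decision",
                    "current event", "media", "conversation",
                    "bored/random exploration", "intrinsic learning"],
        "pinned_last": "other ...",   # free-text option kept last by choice
        "required": True,
    },
    {
        "text": "I am reading this article to ...",
        "options": ["look up a specific fact or get a quick answer",
                    "get an overview of the topic",
                    "get an in-depth understanding of the topic"],
        "required": True,
    },
    {
        "text": "Prior to visiting this article ...",
        "options": ["I was already familiar with the topic",
                    "I was not familiar with the topic and I am learning "
                    "about it for the first time"],
        "required": True,
    },
]

def randomized_survey(questions, rng=random):
    """Return a copy of the survey with question and option order shuffled."""
    shuffled = []
    for question in rng.sample(questions, k=len(questions)):
        options = rng.sample(question["options"], k=len(question["options"]))
        if "pinned_last" in question:
            options.append(question["pinned_last"])
        shuffled.append({**question, "options": options})
    return shuffled
```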
How did we develop the Wikipedia reader taxonomy?
This page in a nutshell: The initial taxonomy was built in English, with additional robustness verifications in Spanish and Persian. The taxonomies in the 13 other languages are translations of the English taxonomy, produced through the process described in Taxonomy of Wikipedia readers.
Our research relies on a taxonomy of Wikipedia readers. We designed and analyzed a series of surveys based on techniques from grounded theory to build a categorization scheme for Wikipedia readers’ motivations and needs.
Stage 1: Building the initial taxonomy
We started with an initial questionnaire, where a randomly selected subgroup of English Wikipedia readers on desktop and mobile saw a survey widget while browsing Wikipedia articles. If readers chose to participate, they were taken to an external site (Google Forms) and asked to answer the question "Why are you reading this article today?" in free-form text (100-character limit). More details about the specifics of this experiment are available at S1-English for the desktop experiment and S2-English for the mobile experiment. Roughly 5000 responses were collected.
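For illustration only, the sketch below shows one simple way such a sampling gate and the 100-character limit could be implemented. The 0.5% sampling rate is a made-up placeholder, and the actual study relied on the QuickSurveys infrastructure rather than on code like this.

```python
import random

FREE_TEXT_LIMIT = 100  # character limit on the free-form answer, as in the study

def should_show_survey(sampling_rate: float, rng=random) -> bool:
    """Simplified sampling gate: invite roughly `sampling_rate` of pageviews."""
    return rng.random() < sampling_rate

def accept_free_form_answer(text: str) -> str:
    """Trim the free-form response to the study's character limit."""
    return text.strip()[:FREE_TEXT_LIMIT]

# Example with a hypothetical 0.5% sampling rate.
if should_show_survey(0.005):
    answer = accept_free_form_answer("I wanted a quick overview of the topic.")
```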
To arrive at categories for describing the use cases of Wikipedia reading, we embarked on a three-step process following the principles of grounded theory.
In the first stage, all researchers worked together on 20 entries to build a common understanding of the types of response.
In the second stage, based on the discussions of the first stage, each researcher individually and liberally assigned tags to 100 randomly selected responses, for a total of 500 tagged responses. All 500 tagged responses were reviewed, and four main trends (motivation, information need, context, and source) were identified, along with the tags associated with each response.
In the third stage, each researcher was randomly assigned another 100 responses and assessed whether they contained information about the four main trends identified in the previous stage and whether the trends and tags should be reconsidered.
The outcome of these stages revealed the following three broad ways in which users interpreted the question, which we use as orthogonal dimensions to shape our taxonomy:
- Motivation: work/school project, personal decision, current event, media, conversation, bored/random, intrinsic learning.
- Information need: quick fact lookup, overview, in-depth.
- Prior knowledge: familiar, unfamiliar.
Stage 2: Assessing the robustness of the taxonomy
We conducted two surveys similar to the survey used in the previous section on the Spanish (details at S1-Spanish) and Persian (details at S1-Persian) Wikipedias, which resulted in similar observations and dimensions as above. Additionally, we assessed the robustness of the above taxonomy in two follow-up surveys. First, we ran a survey identical to Survey 1 to validate our categories on unseen data (details at S2-English); no new categories were revealed through hand-coding. Second, we crafted a multiple-choice version of the survey based on the dimensions identified above, with an additional Other field for each question (details at S3-English). Only 2.3% of respondents used the Other option, and hand-coding of the corresponding free-form responses did not result in new categories.
Hence, we concluded that the taxonomy presented in the section Taxonomy of Wikipedia readers above is a robust taxonomy of Wikipedia readers.
See also
- For additional information, please refer to the Why We Read Wikipedia paper in its entirety.