Research: Characterizing Wikipedia Reader Behaviour / Taxonomy of Wikipedia use cases


We built a robust taxonomy (categorization scheme) for Wikipedia readers' motivations and needs using a series of surveys. We present the taxonomy of Wikipedia readers, some recommendations for using it, and methods used to design and validate it below. If you prefer to read the peer-reviewed publication (in English) behind this research, we refer you to Why We Read Wikipedia.

See Prevalence of Wikipedia use cases for more information about survey results.

Taxonomy of Wikipedia readers

The following three-question survey, together with its answer options, constitutes the taxonomy of Wikipedia readers. The questions correspond to the three dimensions of motivation, information depth, and prior knowledge.

Survey title

Title of the survey:

Why are you reading this article today?

The taxonomy

Survey questions:

I am reading this article because ...

Please check all answers that apply:

[] I need it for work or a school assignment.
[] I need to make a personal decision based on this topic (e.g., choosing which book to buy or where to travel).
[] I want to know more about a current event (e.g., a basketball game, a recent earthquake, the death of a celebrity).
[] the topic was mentioned in a piece of media (e.g., TV, radio, an article, a video, or a book).
[] the topic came up in a conversation.
[] I am bored, or I find it fun to browse Wikipedia at random.
[] this topic is important to me and I want to learn more about it (e.g., to learn about a culture).
[] other ...

I am reading this article to ...
¤ look up a specific fact or get a quick answer.
¤ get an overview of the topic.
¤ gain a complete, in-depth understanding of the topic.

Before viewing this article ...
¤ I was already quite familiar with the topic.
¤ I was not familiar with the topic and am learning about it for the first time.
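
If you plan to run this survey with your own tooling, the following minimal sketch (in Python) shows one way to encode the three questions and their answer options as structured data. The field names (dimension, question, multi_select, options) are our own illustrative choices and are not part of any Wikimedia tool or schema.

# Illustrative encoding of the three-question reader survey as plain data.
READER_SURVEY = [
    {
        "dimension": "motivation",
        "question": "I am reading this article because ...",
        "multi_select": True,  # "check all answers that apply"
        "options": [
            "I need it for work or a school assignment.",
            "I need to make a personal decision based on this topic "
            "(e.g., choosing which book to buy or where to travel).",
            "I want to know more about a current event "
            "(e.g., a basketball game, a recent earthquake, the death of a celebrity).",
            "the topic was mentioned in a piece of media "
            "(e.g., TV, radio, an article, a video, or a book).",
            "the topic came up in a conversation.",
            "I am bored, or I find it fun to browse Wikipedia at random.",
            "this topic is important to me and I want to learn more about it "
            "(e.g., to learn about a culture).",
            "other ...",
        ],
    },
    {
        "dimension": "information depth",
        "question": "I am reading this article to ...",
        "multi_select": False,  # exactly one answer
        "options": [
            "look up a specific fact or get a quick answer.",
            "get an overview of the topic.",
            "gain a complete, in-depth understanding of the topic.",
        ],
    },
    {
        "dimension": "prior knowledge",
        "question": "Before viewing this article ...",
        "multi_select": False,  # exactly one answer
        "options": [
            "I was already quite familiar with the topic.",
            "I was not familiar with the topic and am learning about it for the first time.",
        ],
    },
]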

The Taxonomy in more languages

We initially created the above taxonomy for English Wikipedia readers and then translated it into 13 languages. If you plan to translate the taxonomy into your language, we recommend that you carefully read the process we followed: the results of any survey you run based on the taxonomy will, among other things, depend heavily on the exact text you use.

  • A volunteer editor from each of the 13 languages worked with the researchers on the translation. Some volunteer editors chose to have more than one person work on the translation, so that they could brainstorm each phrase or sentence with another native speaker.
  • The volunteer editors translated the survey from English to their language and contacted a researcher on the team once the translation was ready for discussion.
  • The researcher reviewed the translations sentence by sentence: the volunteer editors were asked to read from their translated text and translate it back into English, and the researcher then compared this impromptu verbal back-translation with the actual English text. Many good conversations came out of this approach, as the volunteer translators were very eager to get the translations right.
  • While the majority of the translations capture the exact English text in the local language, in some cases a literal translation became meaningless. In those cases, the researcher worked with the volunteer editors to find the best solution, which usually involved rewording one or more phrases.
  • The examples in the motivation question are sometimes changed to match the cultural context. For example, while soccer may be a good example of a current event for an English speaker, there are much better examples that can be used in the case of Hindi.

If you're going to repeat the same study

If you are going to repeat the same study, you may find the following information useful: what we told survey participants before they clicked the "Submit" button, and the confirmation message they received after submitting their responses.

Before clicking the "Submit" button in the survey, users see the following instructions:

Please click the "Submit" button at the bottom of this page so that we can record your responses.

Please note that by pressing the "Submit" button, you allow us to associate the information in your survey responses with your Wikipedia browsing session. (We collect browsing-session information from all readers, as part of the general data covered by the privacy policy [1].) This data will be used by the Wikimedia Foundation for research and to improve Wikipedia. For more information, please see our privacy statement [2].

[1] https://meta.wikimedia.org/w/index.php?title=Privacy_policy/zh&variant=zh-hant
[2] https://wikimediafoundation.org/wiki/Survey_Privacy_Statement_for_Schema_Revision_15266417/zh-hant

The confirmation page, shown after users submit their survey responses:

Thank you for responding to this survey. You can read about the motivation for this survey research and (once the analysis results are ready) its findings at the following address:

https://meta.wikimedia.org/wiki/Research:Characterizing_Wikipedia_Reader_Behaviour

Wikipedia reader taxonomy in practice

If you are going to use the above Wikipedia reader taxonomy in practice, please keep the following points in mind:

  • The taxonomy is designed based on the information provided by readers when they are interacting with Wikipedia, more specifically reading a Wikipedia article. As a result, we recommend that a survey based on this taxonomy be triggered at the article level.
  • Make sure you randomize the order in which the questions occur, as well as the order in which the response options appear within each question; otherwise, position bias may be introduced into your responses. (One way to do this is sketched after this list.)
  • Make responses to all questions required.
  • At the time of this research, we used Google Forms to conduct the survey: a QuickSurveys widget was triggered for a percentage of Wikipedia readers, inviting them to take part in the survey on the external site.
  • We recommend that you use the questions in this taxonomy and their response options as they are represented here (except for the examples in the motivation question, which may be changed to adapt to the local context in which the survey is running). This makes it easier to compare your survey results with those reported as part of this research, and lets you reuse questions that have been heavily vetted by the research behind this project.
  • Leave a text box for the Other option in the motivation question so that users can explain what they mean when choosing this option. This option is our way of tracking, over time, whether readers introduce new motivations that are not currently captured by the list of motivations.
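
As a minimal illustration of the randomization recommendation above, the following sketch shuffles both the question order and the option order while keeping a trailing "other ..." option in last place so its free-text box stays at the end. It is only an example of the general idea and does not reflect how QuickSurveys or Google Forms implement randomization; the randomized_survey function and the demo data are our own.

import copy
import random


def randomized_survey(survey):
    """Return a deep copy of `survey` with question and option order shuffled.

    `survey` is a list of dicts with "question" and "options" keys, such as
    the READER_SURVEY list sketched in the taxonomy section above. A trailing
    "other ..." option, if present, is kept in last place so its free-text
    box stays at the end.
    """
    shuffled = copy.deepcopy(survey)
    random.shuffle(shuffled)  # randomize the order of the questions
    for question in shuffled:
        options = question["options"]
        tail = [options.pop()] if options and options[-1].startswith("other") else []
        random.shuffle(options)  # randomize the order of the response options
        question["options"] = options + tail
    return shuffled


if __name__ == "__main__":
    # Small self-contained demo with one of the taxonomy questions.
    demo = [
        {
            "question": "I am reading this article to ...",
            "options": [
                "look up a specific fact or get a quick answer.",
                "get an overview of the topic.",
                "gain a complete, in-depth understanding of the topic.",
            ],
        },
    ]
    for q in randomized_survey(demo):
        print(q["question"])
        for option in q["options"]:
            print("  ( )", option)

Printing a few randomized instances in this way is an easy sanity check that no option is pinned to a fixed position before you launch the survey.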

How did we develop the Wikipedia reader taxonomy?

Our research relies on a taxonomy of Wikipedia readers. We designed and analyzed a series of surveys based on techniques from grounded theory to build a categorization scheme for Wikipedia readers’ motivations and needs.

Stage 1: Building the initial taxonomy

We started with an initial questionnaire: a randomly selected subgroup of English Wikipedia readers on desktop and mobile saw a survey widget while browsing Wikipedia articles. If the reader chose to participate, she was taken to an external site (Google Forms) and asked to answer the question “Why are you reading this article today?” in free-form text (100-character limit). More details about the specifics of this experiment are available at S1-English for the desktop experiment and S2-English for the mobile experiment. Roughly 5,000 responses were collected.

To arrive at categories for describing use cases of Wikipedia reading, we embarked on a three-step process following the principles of grounded theory.

In the first stage, all researchers worked together on 20 entries to build a common understanding of the types of response.

In the second stage, based on the discussions of the first stage, each researcher individually and generously assigned tags to 100 randomly selected responses, for a total of 500 tagged responses. All 500 tagged responses were reviewed, and four main trends (motivation, information need, context, and source) were identified, alongside the tags associated with each response.

In the third stage, each researcher was randomly assigned another 100 responses and assessed whether they contained information about the four main trends identified in the previous stage and whether the trends and tags should be reconsidered.

The outcome of these stages revealed the following three broad ways in which users interpreted the question, which we use as orthogonal dimensions to shape our taxonomy (a small illustrative sketch follows the list):

  • Motivation: work/school project, personal decision, current event, media, conversation, bored/random, intrinsic learning.
  • Information need: quick fact lookup, overview, in-depth.
  • Prior knowledge: familiar, unfamiliar.
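
To make the orthogonality of these dimensions concrete, the sketch below codes a single response as one or more motivations plus exactly one value from each of the other two dimensions. The enum values mirror the lists above; the class and field names are our own illustration, not part of any published schema.

from dataclasses import dataclass
from enum import Enum


class Motivation(Enum):
    WORK_SCHOOL = "work/school project"
    PERSONAL_DECISION = "personal decision"
    CURRENT_EVENT = "current event"
    MEDIA = "media"
    CONVERSATION = "conversation"
    BORED_RANDOM = "bored/random"
    INTRINSIC_LEARNING = "intrinsic learning"


class InformationNeed(Enum):
    FACT_LOOKUP = "quick fact lookup"
    OVERVIEW = "overview"
    IN_DEPTH = "in-depth"


class PriorKnowledge(Enum):
    FAMILIAR = "familiar"
    UNFAMILIAR = "unfamiliar"


@dataclass
class CodedResponse:
    """One reader response coded along the three orthogonal dimensions.

    Motivation is multi-select ("check all that apply"); the other two
    dimensions take exactly one value each.
    """
    motivations: set[Motivation]
    information_need: InformationNeed
    prior_knowledge: PriorKnowledge


# Example: a reader who looked up a fact about a recent event they were
# unfamiliar with.
example = CodedResponse(
    motivations={Motivation.CURRENT_EVENT},
    information_need=InformationNeed.FACT_LOOKUP,
    prior_knowledge=PriorKnowledge.UNFAMILIAR,
)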

Stage 2: Assessing the robustness of the taxonomy

We conducted two surveys similar to the one described in the previous section on the Spanish (details at S1-Spanish) and Persian (details at S1-Persian) Wikipedias, which resulted in similar observations and dimensions as above. Additionally, we assessed the robustness of the above taxonomy in two follow-up surveys. First, we ran a survey identical to Survey 1 to validate our categories on unseen data (details at S2-English); no new categories were revealed through hand-coding. Second, we crafted a multiple-choice version of the survey based on the dimensions identified above, with an additional Other field corresponding to each question (details at S3-English); only 2.3% of respondents used the Other option, and hand-coding of the corresponding free-form responses did not result in new categories.

Hence, we concluded that the taxonomy presented in the section Taxonomy of Wikipedia readers above robustly captures Wikipedia readers' motivations and needs.

See also