Wikimedia Research Newsletter
Vol: 13 • Issue: 09 • September 2023
Readers prefer ChatGPT over Wikipedia; concerns about limiting "anyone can edit" principle "may be overstated"
By: Tilman Bayer
In blind test, readers prefer ChatGPT output over Wikipedia articles in terms of clarity, and see both as equally credible
A preprint titled "Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content"[1] presents what the authors (four researchers from Mainz, Germany) call surprising and troubling findings:
"We conduct an extensive online survey with overall 606 English speaking participants and ask for their perceived credibility of text excerpts in different UI [user interface] settings (ChatGPT UI, Raw Text UI, Wikipedia UI) while also manipulating the origin of the text: either human-generated or generated by [a large language model] ("LLM-generated"). Surprisingly, our results demonstrate that regardless of the UI presentation, participants tend to attribute similar levels of credibility to the content. Furthermore, our study reveals an unsettling finding: participants perceive LLM-generated content as clearer and more engaging while on the other hand they are not identifying any differences with regards to message’s competence and trustworthiness."
The human-generated texts were taken from the lead section of four English Wikipedia articles (Academy Awards, Canada, malware and US Senate). The LLM-generated versions were obtained from ChatGPT using the prompt Write a dictionary article on the topic "[TITLE]". The article should have about [WORDS] words.
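For illustration, here is a minimal sketch of how the study's prompt template could be filled in and sent programmatically. This is not the authors' setup (they used the free ChatGPT web interface rather than the API), and the word count passed in is a hypothetical placeholder; only the prompt wording is taken from the paper.

```python
# Minimal sketch (not the authors' setup; they used the free ChatGPT web UI)
# of filling in the paper's prompt template and sending it via the OpenAI API.
# The word count below is a hypothetical placeholder, not a figure from the paper.
from openai import OpenAI

def dictionary_article_prompt(title: str, words: int) -> str:
    # Prompt wording as reported in the paper
    return (f'Write a dictionary article on the topic "{title}". '
            f'The article should have about {words} words')

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the study's ChatGPT version was based on GPT-3.5
    messages=[{"role": "user", "content": dictionary_article_prompt("Canada", 150)}],
)
print(response.choices[0].message.content)
```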
The researchers report that
"[...] even if the participants know that the texts are from ChatGPT, they consider them to be as credible as human-generated and curated texts [from Wikipedia]. Furthermore, we found that the texts generated by ChatGPT are perceived as more clear and captivating by the participants than the human-generated texts. This perception was further supported by the finding that participants spent less time reading LLM-generated content while achieving comparable comprehension levels."
One caveat about these results (which is only indirectly acknowledged in the paper's "Limitations" section) is that the study focused on four quite popular (i.e. non-obscure) topics – Academy Awards, Canada, malware and US Senate. Also, it sought to present only the most important information about each of these, in the form of a dictionary entry (as per the ChatGPT prompt) or the lead section of a Wikipedia article. It is well known that the output of LLMs tends to have fewer errors when it draws from information that is amply present in their training data (see e.g. our previous coverage of a paper that, for this reason, called for assessing the factual accuracy of LLM output on a benchmark that specifically includes lesser-known "tail topics"). Indeed, the authors of the present paper "manually checked the LLM-generated texts for factual errors and did not find any major mistakes," something that is widely reported not to be the case for ChatGPT output in general. That said, it has similarly been claimed that Wikipedia, too, is less reliable on obscure topics. Also, the paper used the freely available version of ChatGPT (in its 23 March 2023 revision), which is based on the GPT-3.5 model, rather than the premium "ChatGPT Plus" version, which since March 2023 has been using the more powerful GPT-4 model (as does Microsoft's free Bing chatbot). GPT-4 has been found to have a significantly lower hallucination rate than GPT-3.5.
FlaggedRevs study finds that concerns about limiting Wikipedia's "anyone can edit" principle "may be overstated"
A paper titled "The Risks, Benefits, and Consequences of Prepublication Moderation: Evidence from 17 Wikipedia Language Editions",[2] from last year's CSCW conference, addresses a longstanding open question in Wikipedia research, with important implications for some current issues.
Wikipedia famously allows anyone to edit, which generally means that even unregistered editors can make changes to content that go live immediately – only subject to "postpublication moderation" by other editors afterwards. Less well known is that on many Wikipedia language versions, this principle has long been limited by a software feature called Flagged Revisions (FlaggedRevs), which was developed and designed at the request of the German Wikipedia community and deployed there first in 2008, and has since been adopted by various other Wikimedia projects. (These do not include the English Wikipedia, which after much discussion implemented a system called "Pending Changes" that is very similar, but is only applied on a case-by-case basis to a small percentage of pages.) As summarized by the authors:
FlaggedRevs is a prepublication content moderation system in that it will display the most recent “flagged” revision of any page for which FlaggedRevs is enabled instead of the most recent revision in general. FlaggedRevs is designed to “give additional information regarding quality,” by ensuring that revisions from less-trusted users are vetted for vandalism or substandard content (e.g., obvious mistakes because of sloppy editing) before being flagged and made public. The FlaggedRevs system also displays the moderation status of the contribution to readers. [...] Although there are many details that can vary based on the way that the system is configured, FlaggedRevs has typically been deployed in the following way on Wikipedia language editions. First, users are divided into groups of trusted and untrusted users. Untrusted users typically include all users without accounts as well as users who have created accounts recently and/or contributed very little. Although editors without accounts remain untrusted indefinitely, editors with accounts are automatically promoted to trusted status when they clear certain thresholds determined by each language community. For example, German Wikipedia automatically promotes editors with accounts who have contributed at least 300 revisions accompanied by at least 30 comments.
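To make the gating logic described above concrete, here is a conceptual sketch in Python. It is not MediaWiki's actual implementation, and the class and function names are this reviewer's own; it simply encodes the rules the paper describes: readers immediately see a revision only if it comes from a trusted editor or has already been flagged by one, and account holders become trusted once they pass community-set thresholds (the 300-edit / 30-comment figures are German Wikipedia's).

```python
# Conceptual sketch of FlaggedRevs-style gating (not MediaWiki code).
from dataclasses import dataclass

@dataclass
class Editor:
    has_account: bool
    edit_count: int = 0
    comment_count: int = 0

def is_trusted(editor: Editor, min_edits: int = 300, min_comments: int = 30) -> bool:
    # Editors without accounts stay untrusted; account holders are auto-promoted
    # once they clear the community-configured thresholds.
    return (editor.has_account
            and editor.edit_count >= min_edits
            and editor.comment_count >= min_comments)

def revision_shown_to_readers(editor: Editor, reviewed: bool) -> bool:
    # A new revision is publicly visible right away only if it comes from a
    # trusted editor or has already been flagged (reviewed) by one.
    return is_trusted(editor) or reviewed

# Example: an edit from a brand-new account is held back until it is reviewed.
print(revision_shown_to_readers(Editor(has_account=True, edit_count=5), reviewed=False))  # False
```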
The paper studies the impact of the introduction of FlaggedRevs "on 17 Wikipedia language communities: Albanian, Arabic, Belarusian, Bengali, Bosnian, Esperanto, Persian, Finnish, Georgian, German, Hungarian, Indonesian, Interlingua, Macedonian, Polish, Russian, and Turkish" (leaving out a few non-Wikipedia sister projects that also use the system). The overall findings are that
"the system is very effective at blocking low-quality contributions from ever being visible. In analyzing its side effects, we found, contrary to expectations and most of our hypotheses, little evidence that the system [...] raises transaction costs sufficiently to inhibit participation by the community as a whole, nor [that it] measurably improves the quality of contributions."
In the "Discussion" section, the authors write
Our results suggest that prepublication moderation systems like FlaggedRevs may have a substantial upside with relatively little downside. If this is true, why are a tiny proportion of Wikipedia language editions using it? Were they just waiting for an analysis like ours? In designing this study, we carefully read the Talk page of FlaggedRevs.[supp 1] Community members commenting in the discussion agreed that prepublication review significantly reduces the chance of letting harmful content slip through and being displayed to the public. Certainly, many agreed that the implementation of prepublication review was a success story in general—especially on German Wikipedia. [...]
However, the same discussion also reveals that the success of German Wikipedia is not enough to convince more wikis to follow in their footsteps. From a technical perspective, FlaggedRevs’ source code appears poorly maintained.[supp 2] [...] FlaggedRevs itself suffers from a range of specific limitations. For example, the FlaggedRevs system does not notify editors that their contribution has been rejected or approved. [...] Since April 2017, requests for deployment of the system by other wikis have been paused by the Wikimedia Foundation indefinitely.[supp 3] Despite these problems, our findings suggest that the system kept low-quality contributions out of the public eye and did not deter contributions from the majority of new and existing users. Our work suggests that systems like FlaggedRevs deserve more attention.
(This reviewer agrees in particular regarding the lack of notifications telling new and unregistered editors that their edit has been approved – having submitted, in vain, a proposal to implement this uncontroversially beneficial and already designed software feature to the annual "Community Wishlist" in 2023, 2022, and 2019.)
Interestingly, while the FlaggedRevs feature was (as summarized by the authors) developed by the Wikimedia Foundation and the German Wikimedia chapter (Wikimedia Deutschland), community complaints about a lack of support from the Foundation for the system were present even then, e.g. in a talk at Wikimania 2008 (notes, video recording) by User:P. Birken, a main driving force behind the project. Perhaps relatedly, the authors of the present study highlight a lack of researcher attention:
"Despite its importance and deployment in a number of large Wikipedia communities, very little is known regarding the effectiveness of the system and its impact. A report made by the members of the Wikimedia Foundation in 2008 gave a brief overview of the extension, its capabilities and deployment status at the time, but acknowledged that “it is not yet fully understood what the impact of the implementation of FlaggedRevs has been on the number of contributions by new users.”[supp 4] Our work seeks to address this empirical gap."
Still, it may be worth mentioning that there have been at least two preceding attempts to study this question (neither of these has been published in peer-reviewed form, thus their omission from the present study is understandable). They likewise don't seem to have identified major concerns that FlaggedRevs might contribute to community decline:
- A talk at Wikimania 2010 presented preliminary results from a study commissioned by Wikimedia Germany, e.g. that on German Wikipedia, "In general, flagged revisions did not [affect] anonymous editing" and that "most revisions got approved very rapidly" (the latter result surely doesn't hold everywhere; e.g. on Russian Wikipedia, the median time for an unregistered editor's edit to get reviewed is over 13 days at the time of writing). It also found, unsurprisingly, a "reduced impact of vandalism", consistent with the present study.
- An informal investigation of an experiment conducted by the Hungarian Wikipedia in 2018/19 similarly found that FlaggedRevs had "little impact on the growth of the editor community" overall. The experiment consisted of deactivating the feature of FlaggedRevs that hides unreviewed revisions from readers. As a second question, the Hungarian Wikipedians asked "How much extra load does [deactivating FlaggedRevs] put on patrollers?" They found that "[t]he ratio of bad faith or damaging edits grew minimally (2-3 percentage points); presumably it is a positive feedback for vandals that they see their edits show up publicly. The absolute number of such edits grew significantly more than that, since the number of anonymous edits grew [...]."
In any case, the CSCW paper reviewed here presents a much more comprehensive and methodical approach, not just because it examined the impact of FlaggedRevs across multiple wikis, but also in formalizing various research hypotheses and in using more rigorous statistical techniques.
The findings in detail
In more detail, the researchers formalized four groups of research hypotheses about the impact of FlaggedRevs [our bolding]:
- First, the study assessed whether the "system is indeed functioning as intended", by hypothesizing that it reduces the "number of visible rejected contributions" (i.e. edits that were reverted after being approved and thus becoming visible to the general reader), both from users affected by the restriction (H1a) and from all editors (H1c), but not from those editors not affected (H1b). All three of these sub-hypotheses were confirmed in an interrupted time series (ITS) analysis of the monthly counts of such reverts (aggregated for all users in each group, over the entire wiki), covering the timespan from 12 months before to 12 months after the date (or month) on which FlaggedRevs was activated on a particular Wikipedia version. The researchers conclude that "In general, the results we see are in line with our expectations and provide strong evidence that FlaggedRevs achieves its primary goal of hiding low-quality contributions from the general public." (A minimal sketch of this kind of ITS analysis appears after this list.)
- Secondly, "Our H2 hypotheses suggest that prepublication review will affect the quality of contributions overall. We operationalize quality in two ways. First, we use the number of rejected contributions that we operationalize as the number of reverts. [...] We also test our second hypotheses using average quality that we operationalize as revert rate [...] measured as the proportion of contributions that are eventually." Like the first hypothesis, this is separately assessed for affected users, non-affected users and all users. The authors anticipated a rise in the quality of contributions overall (H2c) and of contributions from affected users such as IP editors (H2a), reasoning that "proactive measures of content moderation and production control can play an important role in encouraging prosocial behavior." For unaffected users, they again hypothesized a null effect (H2b). This set of hypotheses was again examined in an ITS analysis of the time series of monthly aggregate counts of such edits. Here, the authors "find little evidence of the prepublication moderation system having a major impact on the quality of contributions. Thus, we cannot conclude that FlaggedRevs alters the quantity or quality of newcomers’ contributions."
- The third group of hypotheses was motivated by existing "research that has shown that additional quality control policies may negatively affect the growth of a peer production community" (citing several papers which have been covered here before, see e.g. "'The rise and decline' of the English Wikipedia"). Again, this is split into three sub-hypotheses for affected users (H3a), unaffected users (H3b) and the community overall (H3c). The authors chose the aggregate number of mainspace (article) edits as their measure of productivity, and hypothesized that FlaggedRevs would decrease it in all three cases – for affected (non-trusted) editors because of a "reduced sense of self-efficacy" (i.e. the lack of satisfaction that comes with seeing one's change immediately being shown to the public), but also for unaffected (trusted) editors, because "prepublication [review] systems require effort from experienced contributors and may result in a net increase in the demands on these users' time". These hypotheses are again tested using an ITS analysis of aggregate (per-wiki) monthly numbers. Regarding H3a, this confirms a significant decrease for IP editors as one group of affected users, but not for newly registered editors as the other group of affected users. (Unfortunately, the analysis appears to treat these as static groups, without examining the possibility that FlaggedRevs may have motivated at least some people who had habitually contributed without logging in to do so under an account instead, with the anticipation of becoming a trusted/unaffected user after passing the applicable threshold.) The study finds that "the deployment of the prepublication [review] discouraged the participation of the group of editors with the lowest commitment and most targeted by the additional safeguard [i.e. IP editors], but not the other groups." In particular, FlaggedRevs did not cause a significant decline in article edits overall, contradicting the expectations formed based on the aforementioned previous research.
- The fourth hypothesis was similarly motivated by previous research that had found that the "barrier to entry posed by prepublication review, combined with the delayed intrinsic reward, might be disheartening enough to drive newcomers away" (in the case of the creation of new articles on the English Wikipedia; see our previous coverage: "Newcomer productivity and pre-publication review"). Here, the authors "hypothesize that the deployment of [FlaggedRevs] will negatively affect the return rate of newcomers (H4)." Unlike the previous three hypotheses, this effect on retention rate is tested using a per-user (instead of aggregate) dataset. The study finds "that although FlaggedRevs did negatively affect the return rate of newcomers in a way that was statistically significant, the size of this effect is extremely small." Again though, the analysis is limited by treating this group as static, without being able to consider the possibility that FlaggedRevs may motivate more people to create an account instead of contributing under an IP. What's more, the authors caution that their analysis was limited by the fact that "we do not have access to wiki-level configuration data on FlaggedRev" (referring to settings such as the edit number threshold above which an editor will be automatically promoted to trusted status). However, the Wikimedia Foundation does in fact publish this information, so there might be opportunities for future research to examine this research question more thoroughly. Relatedly, while the paper promises that "[a] replication dataset including data, code, and other supplementary material has been placed in the Harvard Dataverse archive and is available at: https://doi.org/10.7910/DVN/G1YFLE", that URL does not (yet) contain such material for most of the paper's results. (In March 2023, the authors acknowledged this issue and planned to remedy it, but at the time of writing the data repository appears unchanged.)
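As promised above, here is a minimal sketch of what an interrupted time series analysis of this kind looks like in practice. It is not the authors' code or their exact model specification; it simply fits a segmented regression with a level change and a trend change at the deployment month, using made-up monthly counts and the widely used statsmodels library.

```python
# Minimal ITS sketch (not the authors' code): segmented regression on monthly
# counts of later-reverted edits, 12 months before and after FlaggedRevs deployment.
import pandas as pd
import statsmodels.formula.api as smf

# Made-up monthly counts, months -12..11 relative to the deployment month
counts = [80, 85, 78, 90, 88, 92, 87, 95, 91, 89, 94, 96,
          40, 38, 42, 35, 37, 33, 36, 34, 32, 31, 30, 29]
df = pd.DataFrame({
    "y": counts,
    "t": range(24),                       # running time index
    "post": [0] * 12 + [1] * 12,          # 1 = after deployment
})
df["t_post"] = df["post"] * (df["t"] - 12)  # months elapsed since deployment

# Baseline trend (t), immediate level change (post), and post-deployment trend change (t_post)
model = smf.ols("y ~ t + post + t_post", data=df).fit()
print(model.summary())  # the 'post' coefficient estimates the level shift at deployment
```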
(Disclosure: This reviewer provided some input to the authors at the beginning of their research project, as acknowledged in the paper, but was not involved in it otherwise.)
See also related earlier coverage: "Sociological analysis of debates about flagged revisions in the English, German and French Wikipedias" (2012)
Briefly
- Wikimania, the annual global conference of the Wikimedia movement, took place in Singapore in August (as an in-person event again for the first time since 2019). Its research track included the by now traditional "State of Wikimedia Research" presentation highlighting research trends from the past year (with involvement by members of this research newsletter); see our blog post with videos and slides. Videos and slides from other presentations are being uploaded, too.
- See the page of the monthly Wikimedia Research Showcase for videos and slides of past presentations.
Other recent publications
Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.
"Wikidata as Semantic Infrastructure: Knowledge Representation, Data Labor, and Truth in a More-Than-Technical Project"
From the abstract:[3]
"Various Wikipedia researchers have commended Wikidata for its collaborative nature and liberatory potential, yet less attention has been paid to the social and political implications of Wikidata. This article aims to advance work in this context by introducing the concept of semantic infrastructure and outlining how Wikidata’s role as semantic infrastructure is the primary vehicle by which Wikipedia has become infrastructural for digital platforms. We develop two key themes that build on questions of power that arise in infrastructure studies and apply to Wikidata: knowledge representation and data labor."
"Naked data: curating Wikidata as an artistic medium to interpret prehistoric figurines"
From the abstract:[4]
"In 2019, Digital Curation Lab Director Toni Sant and the artist Enrique Tabone started collaborating on a research project exploring the visualization of specific data sets through Wikidata for artistic practice. An art installation called Naked Data was developed from this collaboration and exhibited at the Stanley Picker Gallery in Kingson, London, during the DRHA 2022 conference. [...] This article outlines the key elements involved in this practice-based research work and shares the artistic process involving the visualizing of the scientific data with special attention to the aesthetic qualities afforded by this technological engagement."
"The Wikipedia Republic of Literary Characters"
From the abstract:[5]
"We [...] explore a user-oriented notion of World Literature according to the collaborative encyclopedia Wikipedia. Based on its language-independent taxonomy Wikidata, we collect data from 321 Wikipedia editions on more than 7000 characters presented on more than 19000 independent character pages across the various language editions. We use this data to build a network that represents affiliations of characters to Wikipedia languages, which leads us to question some of the established presumptions towards key-concepts in World Literature studies such as the notion of major and minor, the center-periphery opposition or the canon."
"What makes Individual I's a Collective We; Coordination mechanisms & costs"
From the abstract:[6]
"Diving into the Wikipedia ecosystem [...] we identified and quantified three fundamental coordination mechanisms and found they scale with an influx of contributors in a remarkably systemic way over three order of magnitudes. Firstly, we have found a super-linear growth in mutual adjustments (scaling exponent: 1.3), manifested through extensive discussions and activity reversals. Secondly, the increase in direct supervision (scaling exponent: 0.9), as represented by the administrators’ activities, is disproportionately limited. Finally, the rate of rule enforcement exhibits the slowest escalation (scaling exponent 0.7), reflected by automated bots. The observed scaling exponents are notably robust across topical categories with minor variations attributed to the topic complication. Our findings suggest that as more people contribute to a project, a self-regulating ecosystem incurs faster mutual adjustments than direct supervision and rule enforcement."
"Wikidata Research Articles Dataset"
From the abstract:[7]
"The "Wikidata Research Articles Dataset" comprises peer-reviewed full research papers about Wikidata from its first decade of existence (2012-2022). This dataset was curated to provide insights into the research focus of Wikidata, identify any gaps, and highlight the institutions actively involved in researching Wikidata."
"Speech Wikimedia: A 77 Language Multilingual Speech Dataset"
From the abstract:[8]
"The Speech Wikimedia Dataset is a publicly available compilation of audio with transcriptions extracted from Wikimedia Commons. It includes 1780 hours (195 GB) of CC-BY-SA licensed transcribed speech from a diverse set of scenarios and speakers, in 77 different languages. Each audio file has one or more transcriptions in different languages, making this dataset suitable for training speech recognition, speech translation, and machine translation models."
15 years later, repetition of philosophy vandalism experiment yields "surprisingly similar results"
From the paper:[9]
"Fifteen years ago, I conducted a small study testing the error-correction tendency of Wikipedia. [...] I repeated the earlier study and found surprisingly similar results. [...] Between July and November 2022, I made 33 changes to Wikipedia: one at a time, anonymously, and from various IP addresses. [...] Each change consisted of a one or two sentence fib inserted into the Wikipedia entry on a notable, deceased philosopher. The fibs were about biographical or factual matters, rather than philosophical content or interpretive questions. Although some of the fibs mention “sources”, no citations were provided. If the fibs were not corrected within 48 hours, they were removed by the experimenter. The fibs were all, verbatim, ones that I used in Magnus (2008). [...] Thirty-six percent (12/33) of changes were corrected within 48 hours. Rounded to the nearest percentage point, this is the same as the adjusted result in Magnus (2008)."
References
edit- ↑ Huschens, Martin; Briesch, Martin; Sobania, Dominik; Rothlauf, Franz (2023-09-05), Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content, arXiv, doi:10.48550/arXiv.2309.02524
- ↑ Tran, Chau; Champion, Kaylea; Hill, Benjamin Mako; Greenstadt, Rachel (2022-11-11). "The Risks, Benefits, and Consequences of Prepublication Moderation: Evidence from 17 Wikipedia Language Editions". Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2): 333–1–333:25. doi:10.1145/3555225. / Tran, Chau; Champion, Kaylea; Hill, Benjamin Mako; Greenstadt, Rachel (2022-11-07). "The Risks, Benefits, and Consequences of Prepublication Moderation: Evidence from 17 Wikipedia Language Editions". Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2): 1–25. ISSN 2573-0142. doi:10.1145/3555225.
- ↑ Ford, Heather; Iliadis, Andrew (2023-07-01). "Wikidata as Semantic Infrastructure: Knowledge Representation, Data Labor, and Truth in a More-Than-Technical Project". Social Media + Society 9 (3): 20563051231195552. ISSN 2056-3051. doi:10.1177/20563051231195552.
- ↑ Sant, Toni; Tabone, Enrique (2023). "Naked data: curating Wikidata as an artistic medium to interpret prehistoric figurines". International Journal of Performance Arts and Digital Media 0 (0): 1–18. ISSN 1479-4713. doi:10.1080/14794713.2023.2253335.
- ↑ Wojcik, Paula; Bunzeck, Bastian; Zarrieß, Sina (2023-05-11). "The Wikipedia Republic of Literary Characters". Journal of Cultural Analytics 8 (2). doi:10.22148/001c.70251.
- ↑ Yoon, Jisung; Kempes, Chris; Yang, Vicky Chuqiao; West, Geoffrey; Youn, Hyejin (2023-06-03), What makes Individual I's a Collective We; Coordination mechanisms & costs, arXiv, doi:10.48550/arXiv.2306.02113
- ↑ Farda-Sarbas, Mariam (2023), Wikidata Research Articles Dataset, Freie Universität Berlin, doi:10.17169/refubium-40231
- ↑ Gómez, Rafael Mosquera; Eusse, Julian; Ciro, Juan; Galvez, Daniel; Hileman, Ryan; Bollacker, Kurt; Kanter, David (2023). "Speech Wikimedia: A 77 Language Multilingual Speech Dataset".
- ↑ Magnus, P. D. (2023-09-12). "Early response to false claims in Wikipedia, 15 years later". First Monday. ISSN 1396-0466. doi:10.5210/fm.v28i9.12912.
- Supplementary references and notes: