Article validation proposals/Page 2

The Article validation proposals have been split into multiple pages to keep them at a manageable size.

Sanger's proposal

When I say the approval mechanism must be really easy for people to use, I mean it. I mean it should be extremely easy to use. So what's the easiest-to-use mechanism that we can devise that nevertheless meets the criteria?
The following: on every page on the wiki, create a simple popup approval form that anyone may use. ("If you are a genuine expert on this subject, you can approve this article.") On this form, the would-be article approver [whom I'll call a "reviewer"] indicates their name, affiliation, relevant degrees, and web page (which we can use to check bona fides), along with a brief statement of their qualifications to approve the article. The person fills this out (with the information saved into their preferences) and hits the "approve" button.
When two different reviewers have approved an article, and at least one of them is not already a vetted reviewer, the approval goes into a moderation queue for the "approved articles" part of Wikipedia. From there, moderators can check over recently approved articles: they can verify that the reviewers actually are qualified (according to some pre-set criteria of qualification) and that they are who they say they are. (Perhaps moderator-viewable e-mail addresses will be used to check that a reviewer isn't impersonating someone.) A moderator can then "approve the approver".
The role of the moderators is not to approve the article, but to make sure that the system isn't being abused by underqualified reviewers. A certain reviewer might be marked as not in need of moderation; if two such reviewers were to approve of an article, the approval would not need to be moderated.
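To make this concrete, here is a minimal sketch (in Python, with all names invented for the illustration) of the two-reviewer-plus-moderation flow described above; it is a sketch of the workflow, not a description of any existing Wikipedia code.

    from dataclasses import dataclass, field

    @dataclass
    class Reviewer:
        name: str
        affiliation: str
        credentials_url: str
        pre_cleared: bool = False  # marked as "not in need of moderation"

    @dataclass
    class Article:
        title: str
        approvals: list = field(default_factory=list)

    moderation_queue = []   # approvals awaiting a moderator's check
    approved_articles = []  # the "approved articles" part of Wikipedia

    def approve(article, reviewer):
        """A reviewer fills out the popup form and hits the "approve" button."""
        article.approvals.append(reviewer)
        if len(article.approvals) < 2:
            return  # wait for a second, independent reviewer
        if all(r.pre_cleared for r in article.approvals):
            approved_articles.append(article)  # no moderation needed
        else:
            moderation_queue.append(article)   # a moderator checks bona fides

    def moderate(article, reviewers_qualified):
        """A moderator "approves the approver" or rejects the approval."""
        moderation_queue.remove(article)
        if reviewers_qualified:
            approved_articles.append(article)
        else:
            article.approvals.clear()  # abuse suspected; the approval is voided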
New addition: I think it might be a very good idea to list, on an approved article, the reviewers who have approved it.
--Larry Sanger

I think it would be a very good idea to list, on any article, the names of ANY reviewers who have reviewed the article. Seeing who disapproved it is, perhaps, more important than seeing who approved it, since Wikipedia generally has high-quality articles anyway. Brianjd 10:22, 2005 Jan 29 (UTC)

This is quite a good idea - domain experts may not be Wikipedians, but if we can leech a little of their time when they encounter a page, then they could still contribute greatly. The big issues here, of course, are verification of identity and what to do when an approved article receives an edit. Identity verification I think we can just muddle through; for later edits, any page that has ever been approved by an expert could show a link to its most recently approved revision. It would also be nice to list the experts at the bottom - not just for completeness in Wikipedia, but for a (very) small amount of prestige for the experts themselves. This proposal should also be pretty easy to implement, which might mean it could serve as a simplest-thing-that-might-work first stab at solving the problem.

Of course, looking at the age of submissions on this page, it's not clear that anything will get implemented any time soon...

Peter Boothe (not a wikipedian, not enough free time) Tuesday, 18 October 2005 - 06:12 PM (PDT)

Bryce's proposal

From my experience with the Wikipedia_NEWS, it seems that there's a lot that can be done with the wiki software as it exists. The revision control system and its tracking of IP addresses are OK as a simple screen against vandalism. The editing system seems fairly natural and is worth using for managing this; certainly anyone wishing to be a reviewer can be expected to have a fair degree of competence with it already.
Second, take note of how people have been making use of the user pages. People write information about themselves, the articles they've created, and even whole essays about opinions or ideas.
What I'd propose is that we encourage people who wish to be reviewers to set up a subpage under their user page called '/Approved'. Any article that they add to this subpage is considered to be acceptable by them. (It is recommended they also list the particular revision # they're approving, but it's up to them whether to include the number or not.) The reviewer is encouraged to provide as much background and contact information about themselves on their main page (or on a subpage such as /Credentials) as they wish. It is *completely* an opt-in system, and does not impact Wikipedia as a whole, nor any of its articles.
Okay, so far it probably sounds pretty useless because it *seems* like it gives zero _control_ over the editors. But if we've learned nothing else from our use of Wiki here, it's that sometimes there is significant power in anarchy. Consider that whoever is going to be putting together the set of approved articles (let's call her the Publisher) is going to be selecting the reviewers based on some criteria (only those with PhDs, or whatever). The publisher has (and should have) control over which reviewers they accept, and can grab their /Approved lists at the time they wish to publish. Using the contact info provided by the reviewer, they can do as much verification as they wish; those who provide insufficient contact info can be ignored (or asked politely on their user page). But the publisher does *not* have the power to control whether or not you or I are *able* to approve articles. Maybe for the "PhD Reviewers Only" encyclopedia I'd get ruled out, but perhaps someone else decides to do a "master's degree or better" one, and I would fit fine there. Or maybe someone asks only that reviewers provide a telephone number they can call to verify the approved list.
Consider a further twist on this scheme: in addition to /Approved, people could set up other specific kinds of approval. For instance, some could create /Factchecked pages where they've verified only the factual statements in the article against some other source; or a /Proofed page that just lists pages that have been through the spellchecker and grammar proofer; or a /Nonplagiarised page that lists articles that the reviewer can vouch for as being original content and not merely copied from another encyclopedia. The reason I mention this approach is that I imagine there will be reviewers who specialize in checking certain aspects of articles, but not everything (a Russian professor of mathematics might vouch for everything except spelling and grammar, if he felt uncomfortable with his grasp of the English language). Other reviewers can fill in the gaps (the aforementioned professor could ask another to review those articles for spelling and grammar, and they could list them in their own area).
I think this system is very much in keeping with wiki philosophy. It is anti-elitist, in the sense that no one can be told, "No, you're not good enough to review articles," yet it still allows the publisher to discriminate based on the reviewers' credentials. It leverages existing wiki functionality and Wikipedia traditions rather than requiring new code and new skills. And it lends itself to programmatic extraction of content. It also creates a check and balance between publisher and reviewer: if the publisher is selecting reviewers unfairly, someone else can always set up a fairer approach. There is also a check against reviewer bias, because once discovered, ALL of a biased reviewer's reviewed articles would be dropped by perhaps all publishers, which gives the reviewer a strong incentive to demonstrate the quality of their reviewing process and policies.
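As a rough illustration of the "programmatic extraction" point, here is a sketch of how a publisher might harvest /Approved subpages. The fetch_page function and the one-entry-per-line subpage format are assumptions made for the example (as a reply below notes, some agreed standard would be needed in practice).

    import re

    def fetch_page(title):
        """Hypothetical stand-in for however the publisher fetches page text."""
        raise NotImplementedError

    def approved_list(reviewer):
        """Parse User:<reviewer>/Approved into {article title: revision # or None}."""
        approvals = {}
        for line in fetch_page("User:%s/Approved" % reviewer).splitlines():
            m = re.match(r"\s*(.+?)(?:\s*\(revision\s*#?(\d+)\))?\s*$", line)
            if m:
                title, rev = m.group(1), m.group(2)
                approvals[title] = int(rev) if rev else None
        return approvals

    def publish(accepted_reviewers):
        """Collect every article approved by at least one accepted reviewer."""
        articles = set()
        for r in accepted_reviewers:
            articles.update(approved_list(r))
        return articles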
-- BryceHarrington

We would need some standards for this to work. If the subpages are named differently and/or have different structures, the system will be too difficult to use.

Each publisher would have to check the credentials themselves. Leaving the results of checks on users' pages/subpages is not acceptable: anyone can edit them, and developers can edit the history. Brianjd 10:22, 2005 Jan 29 (UTC)

Magnus Manske's proposal

I'll try to approach the whole approval mechanism from a more practical perspective, based on some things that I use in the Wikipedia PHP script. So, to set up an approval mechanism, we need:
    • Namespaces to separate different stages of articles,
    • User rights management to prevent trolls from editing approved articles.
From the Sanger proposal, the user hierarchy would have to be:
    1. Sysops, just a handful to ensure things are running smoothly. They can do everything: grant and reject user rights, move and delete articles, etc.,
    2. Moderators who can move approved articles to the "stable" namespace,
    3. Reviewers who can approve articles in the standard namespace (the one we're using right now),
    4. Users who do the actual work. ;)
Stages 1-3 should have all rights of the "lower levels", and should be able to "raise" other users to their level. For the namespaces, I was thinking of the following:
    • The blank namespace, of course, which is the one all current Wikipedia articles are in; the normal Wikipedia.
    • An approval namespace. When an article from "blank" gets approved by the first reviewer, a copy goes to the "approval" namespace.
    • A moderated namespace. Within the "approval" namespace, no one can edit articles, but reviewers can either hit a "reject" or "approve" button. "Reject" deletes the article from the "approval" namespace, "approve" moves it to the "moderated" namespace.
    • A stable namespace. Same as for "approval", but only moderators can "reject" or "approve" an article in "moderated" namespace. If approved, it is moved to the "stable" namespace. End of story.
This system has several advantages:
    • By having reviewers and moderators chosen not for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references, etc.
    • Reviewers and moderators can have special pages that show just the articles currently in "their" namespace, making it easy to look for topics they are qualified to approve/reject
    • Easy handling. No pop-up forms, just two buttons, "approve" and "reject", throughout all levels.
    • No version confusion. The initial approval automatically locks that article in the "approval" namespace, and all decisions later on are on this version alone.
    • No bother for the normal Wikipedia. "Approval" and "moderated" can be blanked out in everyday work; "stable" can be blanked out as an option.
    • Easy to code. Basically, I have all the parts needed ready; a demo version could be up next week. (A sketch of the flow follows below.)
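Here is a sketch of how the ladder above might behave, using the role and namespace names from this proposal; the data layout and function names are invented, and for brevity the article is moved rather than copied out of the blank namespace.

    RANK = {"user": 0, "reviewer": 1, "moderator": 2, "sysop": 3}

    # Who may press the buttons in each namespace, and where "approve" leads.
    REQUIRED_ROLE = {"blank": "reviewer", "approval": "reviewer", "moderated": "moderator"}
    NEXT_NAMESPACE = {"blank": "approval", "approval": "moderated", "moderated": "stable"}

    def press_button(article, role, verdict):
        """Apply an "approve" or "reject" press to an article, e.g.
        {"title": "Foo", "namespace": "blank", "locked": False}."""
        ns = article["namespace"]
        if ns == "stable":
            raise PermissionError("stable articles are final")
        if RANK[role] < RANK[REQUIRED_ROLE[ns]]:
            raise PermissionError("%s may not decide in the %s namespace" % (role, ns))
        if verdict == "approve":
            article["namespace"] = NEXT_NAMESPACE[ns]
            article["locked"] = True  # later decisions apply to this version alone
        elif verdict == "reject" and ns != "blank":
            article["namespace"] = None  # dropped from the approval pipeline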

Ehrenberg addition

This would be added on to any of the above approval processes. After an article is approved, it would go into the database of approved articles. People would be able to access this from the web. After reading an article, the reader would be able to click on a link to disapprove of the article. After 5 (more, less?) people have disapproved of an article, the article goes through a reapproval process, in which only one expert must approve it, followed by the applicable administrators. -- Suggested addition: there should be a separate domain, perhaps frozenwikipedia.org, which includes only approved articles. This could be used as a "reliable" reference when factual accuracy was very important.
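A minimal sketch of the disapprove-and-reapprove loop, using the threshold of 5 floated above; the field names are invented for the illustration.

    DISAPPROVAL_THRESHOLD = 5  # the "5 (more, less?)" figure above

    def disapprove(article):
        """A reader clicks the "disapprove" link on an approved article."""
        article["disapprovals"] = article.get("disapprovals", 0) + 1
        if article["disapprovals"] >= DISAPPROVAL_THRESHOLD:
            # One expert plus the applicable administrators must re-approve.
            article["status"] = "awaiting reapproval"
            article["disapprovals"] = 0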

This could be used as a "reliable" reference when factual accuracy was very important.

No, it couldn't. If factual accuracy is very important, you must check several sources, which means that it wouldn't make much difference whether you used the "approved" version or not. Brianjd 10:22, 2005 Jan 29 (UTC)

DWheeler's Proposal: Automated Heuristics

It might also be possible to use some automated heuristics to identify "good" articles. This could be especially useful if the Wikipedia is being extracted to some static storage (e.g., a CD-ROM or PDA memory stick). Some users might want this view as well. The heuristics may throw away some of the latest "good" changes, as long as they also throw away most of the likely "bad" changes.

Here are a few possible automated heuristics; a sketch combining them follows the list:

  • Ignore all anonymous changes; if someone isn't willing to have their name included, then it may not be a good change. This can be "fixed" simply by some non-anonymous person editing the article (even trivially).
  • Ignore changes from users who have submitted only a few changes (e.g., fewer than 50). If a user has submitted a number of changes and is still accepted (not banned), then the odds are higher that the user's changes are worthwhile.
  • Ignore pages unless at least some number of other non-anonymous readers have read the article and/or viewed its diffs (e.g., at least 2 other readers). The notion here is that, if someone else read it, then at least some minimal level of peer review has occurred. The reader may not be able to identify subtle falsehoods, but at least "Tom Brokaw is cool" might get noticed. This approach can be foiled (e.g., by creating "bogus readers"), but many trolls won't bother to do that.
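Taken together, the three heuristics amount to a filter over an article's revision history. The sketch below assumes each revision record carries an author, the author's prior edit count, and a reader count; those fields are assumptions about what the database could supply, not an existing schema.

    MIN_EDITS = 50    # cutoff for "only submitted a few changes"
    MIN_READERS = 2   # "at least 2 other readers"

    def last_good_revision(revisions):
        """Return the newest revision passing all three heuristics,
        or None to exclude the article from the static extract."""
        for rev in reversed(revisions):  # newest first
            if rev["author"] is None:                 # anonymous change
                continue
            if rev["author_edit_count"] < MIN_EDITS:  # too few prior edits
                continue
            if rev["reader_count"] < MIN_READERS:     # no minimal peer review
                continue
            return rev
        return None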

These heuristics can be combined with the expert rating systems discussed elsewhere here. An advantage of these automated approaches is that they can be applied immediately.

Other automated heuristics can be built by computing "trust metrics" for people. Instead of trying to rank every article (or as a supplement to doing so), rank the people. After all, someone who does good work on one article is more likely to do good work on another article. You could use a scheme like Advogato's, where people identify how much they respect (trust) someone else. You then flow down the graph to find out how much each person should be trusted. For more information, see Advogato's trust metric information. Even if the Advogato metric isn't perfect, it does show how a few individuals could list other people they trust, and over time that can be used to derive global information. The Advogato code is available - it's GPLed.
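Advogato's actual metric is a network-flow computation over the trust graph; the sketch below is a deliberately cruder propagation, just to show the shape of deriving per-person trust from a few trusted seeds.

    def propagate_trust(trusts, seeds, rounds=10, damping=0.5):
        """trusts: {person: [people they trust]}; seeds start fully trusted.
        Each person passes a damped share of their own trust down the graph."""
        trust = {p: (1.0 if p in seeds else 0.0) for p in trusts}
        for _ in range(rounds):
            updated = dict(trust)
            for person, trusted_people in trusts.items():
                for other in trusted_people:
                    updated[other] = max(updated.get(other, 0.0),
                                         damping * trust[person])
            trust = updated
        return trust

    # e.g. propagate_trust({"alice": ["bob"], "bob": ["carol"], "carol": []},
    #                      {"alice"}) trusts bob at 0.5 and carol at 0.25.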

Another related issue might be automated heuristics that try to identify likely trouble spots (new articles or likely troublesome diffs). A trivial approach might be to have a not-publicly-known list of words that, if they're present in the new article or diffs, suggest that the change is probably a bad one. Examples include swear words and words that indicate POV (e.g., "Jew" may suggest antisemitism). The change might be fine, but such a flag would at least alert someone else to take an especially close look there.
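A sketch of the flag check itself; the word list is loaded from a private file precisely because the list is meant to stay non-public, and the file name is of course invented.

    def load_flag_words(path="flagwords.txt"):
        """Hypothetical private list, one lowercase word per line."""
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def needs_human_look(diff_text, flag_words):
        """True if the added text contains any word from the private list."""
        tokens = {t.strip(".,;:!?\"'()").lower() for t in diff_text.split()}
        return bool(tokens & flag_words)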

A more sophisticated approach to automatically identify trouble spots might be to use learning techniques to identify what's probably garbage, using typical text filtering and anti-spam techniques such as naive Bayesian filtering (see Paul Graham's "A Plan for Spam"). To do this, the Wikipedia would need to store deleted articles and have a way to mark changes that were removed for cause (e.g., were egregiously POV) - presumably this would be a sysop privilege. Then the Wikipedia could train on "known bad" and "known good" (perhaps assuming that all Wikipedia articles before some date, or meeting some criteria listed above, are "good"). Then it could look for bad changes (either in the future, or simply examining the entire Wikipedia offline).
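A sketch of that training-and-scoring step, in the spirit of "A Plan for Spam"; the known-good and known-bad corpora are assumed to come from the marked deletions just described.

    import math
    from collections import Counter

    def train(texts):
        """Count word occurrences over a corpus of edits."""
        counts = Counter()
        for text in texts:
            counts.update(text.lower().split())
        return counts

    def bad_probability(text, good, bad):
        """Naive-Bayes-style log-odds that an edit is bad, squashed to 0..1."""
        g_total = sum(good.values()) + 1
        b_total = sum(bad.values()) + 1
        log_odds = 0.0
        for word in text.lower().split():
            p_bad = (bad[word] + 1) / b_total    # Laplace smoothing
            p_good = (good[word] + 1) / g_total
            log_odds += math.log(p_bad / p_good)
        log_odds = max(-30.0, min(30.0, log_odds))  # keep exp() in range
        return 1 / (1 + math.exp(-log_odds))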

A trivial approach might be to have a not-publicly-known list of words that, if they're present in the new article or diffs, suggest that the change is probably a bad one.
Why does it have to be not-publicly-known? Brianjd 10:22, 2005 Jan 29 (UTC)
I assume the idea is that if the list were known, it would be quicker for vandals to think of alternative nasty things to say. But really we would want to be watching all the changes anyway, so that by watching bad edits that got past the filter one could update the list. But really I think this naughtiness flagging is overcomplicated and not very useful. More helpful might be a way to detect subtle errors: changes in dates, heights, etc. that casual proofreaders would be less likely to pick up. Rather than simply flagging the edit, maybe highlight the specific change in the normal version of the article to warn all readers: "Columbus discovered Antarctica in 1492." Reviewers would then be able to right-click on the change and flag it as OK or bad, for example. Luke 05:34, 15 Feb 2005 (UTC)

PeterK's Proposal: Scoring

This idea has some of the same principles as the Automated Heuristics suggested above. I agree that an automated method for determining "good" articles for offline readers is absolutely crucial, but I have a different idea of how to go about it. I think the principle of easy editing, and how Wikipedia works now, is what makes it great. I think we need to take those principles, along with some search engine ideas, and produce a confidence level for documents. People extracting the data for offline purposes can then decide the confidence level they want and only extract articles that meet it.

I think the exact equation for the final scoring needs to be discussed. I don't think I could come up with a final version by myself, but I'll give an example of what would earn good points and bad points.

Final score:
    a. First, we need a quality/scoring value for editors. Anonymous editors are given a value of 1, and a logged-in user may get 1 point added to their value for each article he/she edits, up to a value of 100.
    b. 0.25 points for each time a user reads the article.
    c. 0.25 points for each day the article has existed in Wikipedia.
    d. Each time the article is edited, it gets 1 + (a/10) * 2 points; an anonymous user would give it 1.2 points and a fully qualified user would give it 21 points.
    e. If an anonymous user makes a large change, the article gets a -20 point deduction. Even though this is harsh, if the change goes untouched for 80 days the article will gain all those points back (at 0.25 points per day), and it will gain them back faster if a lot of people read it.

This is the best I can think of right now; if I come up with a better scoring system I'll make some changes. Anyone feel free to test score a couple of articles to see how this algorithm holds up. (A sketch of the rules as code follows below.) We could even turn the score into a percentage, so that people can extract only articles that are, say, 90% qualified.
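To make the rules testable, here is a sketch of rules a-e as code. Where the prose is ambiguous (what counts as a "large" anonymous change, exactly how lost points regenerate), the choices below are guesses, not part of the proposal.

    def editor_value(edit_count, anonymous):
        """Rule a: anonymous editors are worth 1; logged-in users gain
        1 point per article edited, capped at 100."""
        return 1.0 if anonymous else min(100.0, float(edit_count))

    def article_score(reads, age_days, editor_values, large_anon_changes):
        """editor_values holds one editor_value() result per edit made."""
        score = 0.25 * reads       # rule b: per read
        score += 0.25 * age_days   # rule c: per day of existence
        for a in editor_values:    # rule d
            score += 1 + (a / 10) * 2
        score -= 20 * large_anon_changes  # rule e: regenerates via rule c
        return score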

If an anonymous user makes a large change, the article gets a -20 point deduction.

This one is very dangerous. What is the threshold for a "large" change? If we set it too high, this won't work very well (it will presumably react to blanking a page, but I think that is most often used as a substitute for the "nominate for deletion" link that should be there). If we set it too low, and several edits are made by anonymous users to a not-very-popular article, then it could give a low (negative?) score, even though the article is of high quality. Brianjd 10:22, 2005 Jan 29 (UTC)

Anyone feel free to test score a couple of articles to see how this algorithm holds up.

How? We don't know the scores for the editors. The only way to test this algorithm properly is to download some articles and the entire contributions lists of all the contributors! Brianjd 10:22, 2005 Jan 29 (UTC)