Article validation proposals

This page shows the contents of all the other pages. Do not edit this page. The Article validation proposals have been split into multiple pages to keep them at a manageable size.

All | Page 1 | Page 2 | Page 3 | Page 4 | New

A common theme among many suggested methods is to distinguish a public or stable version from the latest version of each article. (Contrast this with the implemented validation method for TomK's WikiReaders, which only enhanced the community visibility of selected articles.)

Wiki version and stable version

Main article: en:Wikipedia:Stable versions proposal

There should be two versions.

  • The wiki version, which is editable by everyone anytime, and
  • The stable version, which cannot be edited by anyone.

Note: This means that if a particular version becomes flagged validated, new edits to it will not be marked validated, and there will be a separation between the latest and stable versions. (Perhaps minor edits can inherit the validated flag?)

The stable version is defined by validation of one wiki version; it is marked with a flag. The successive validated versions are visible in the history of the article.

We could add at the bottom of the article "This article has been validated by XX editors," or "This article has not yet been validated by any editors. See last validated version."

See also the detailed example below.
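The two-version idea above can be illustrated with a small sketch: the stable version is simply the newest revision in the history that carries the validated flag. This is a hypothetical model; the function and field names are invented for illustration, not taken from MediaWiki.

```python
# Model a revision history as (revision_id, validated) pairs, oldest first.
# The "wiki version" is the last entry; the "stable version" is the newest
# entry whose validated flag is set.

def latest_stable(history):
    """Return the id of the newest revision flagged validated,
    or None if the article has never been validated."""
    for rev_id, validated in reversed(history):
        if validated:
            return rev_id
    return None

history = [(101, False), (102, True), (103, False), (104, False)]
# Wiki version: revision 104. Stable version: revision 102.
```

Under this model, "This article has not yet been validated by any editors" is just the `None` case, and the "last validated version" link points at the returned revision.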

How to name/view (the last) stable version(s)

The last stable version is visible to a reader:

  1. in a new namespace on Wikipedia itself, containing non-editable articles,
  2. or in another website (with cross-domain links)
  3. or in place of the most recent Wikipedia revision (with a diff and the most recent revision shown on edit)
    • Seeing the last stable version rather than the wiki-version might be chosen in the user preferences.

Validation via extra metadata

In preface: the suggestion below is a scalable technical change to make a variety of validation and reviewing efforts faster and more reliable. It is a medium-term change, and does not conflict with the excellent and immediate #Natural validation process suggested below.

About the final step: the final step in validation should be grunt-work: a user flagging a clearly valid article as "validated". This should be human and not automatic. In the proposal below, this final step can be taken by any admin once certain conditions are met, but it could probably be carried out by any user (and undone only by admins).

The main problem is not in doing the actual validation -- FA works without restricting who can update that page -- but in distributing the work of steadily improving invalid articles, so that individual editors can specialize in what they do best.

Step 1

  • Allow our thousands of casual readers to inform the validation process, in a better-defined and more cleanly parallel fashion than the current freeform method (talk pages). Anyone who can edit the wiki should be able to help validate (or unvalidate) an article. For example: give readers a "review this article" option which lets them review many distinct aspects of articles. Allow them to either set review flags relating to the article content, or flag the article as in serious trouble.
  • Review flags, + | neutral | - (anything but neutral requires an explanatory comment):
    1. factual accuracy, (the most time-consuming part of reviewing; requires checking with external sources and/or expertise)
    2. quality of content,
    3. neutrality of tone,
    4. completeness of coverage (missing key info? too little info for a major topic?)
  • Unary flags (setting any of these also causes the article and any special pages to show appropriate templates and content):
    1. Stub
    2. Copyvio (requires comment)
    3. Inappropriate content (requires comment)

(These might simply be determined by the inclusion (or not) of a category or a template)

This allows a reviewer to either validate or invalidate an article in four facets of quality, and to quickly flag pages as stubs/copyvios, or for VfD. Presenting a set of facets encourages editors to think along the lines of validation criteria, and using flags makes tracking/responding to reviews automatic.
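The faceted review form above might normalize into a record like the following sketch. The four facets and the "non-neutral requires an explanatory comment" rule come from the proposal; the function and variable names are invented for illustration.

```python
# One review record per reviewer per revision. Ratings are '+' (positive),
# '=' (neutral), or '-' (negative); non-neutral ratings must be explained.

FACETS = ("accuracy", "quality", "neutrality", "completeness")

def make_review(ratings, comments=None):
    """Build one review record. ratings maps facet -> '+', '=', or '-';
    any non-neutral rating must carry an explanatory comment."""
    comments = comments or {}
    review = {}
    for facet in FACETS:
        rating = ratings.get(facet, "=")
        if rating not in ("+", "=", "-"):
            raise ValueError(f"bad rating for {facet}: {rating!r}")
        if rating != "=" and facet not in comments:
            raise ValueError(f"{facet} rated {rating!r} needs a comment")
        review[facet] = (rating, comments.get(facet, ""))
    return review
```

Fuzheado's sample review further below (A:+ ... N:-) would be one such record, with the parenthesized remarks as the required comments.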

Step 2

  • Allow active editors to quickly and cleanly respond to negative article reviews: let them disassociate old comments as they are addressed/become outdated, with a responding comment.
  • Allow an editor to see a detailed history of reviews and comments:
June 8, 2004
20:41:06
Fuzheado A:+ (more detail than my Grove's has)

Q:= (OK) N:- (I tried to neutralise the bit about Iraqi civilians, but it still reads like propaganda. Some quotes from the other side would help balance out the "In the Media" section.) C:= ()

Technical thoughts:

  • Each review, with its accompanying review notes (like edit summaries), is associated with a reviewing user, a time, and an article revision. Reviews are also associated with each successive article revision unless they are deleted (by an admin, for vandalism) or responded to by another editor (in which case they are disassociated from the current revision).
  • Once an article has been given a thumbs-up in each facet by at least one reviewer, and all negative reviews have been responded to, an admin (after verifying that the responses were in good faith) can follow a "validate this article" link and brand that revision validated, creating a new revision (without altering the text). Someone else might come along the next minute and see that same text and give it a negative review... which would have to be addressed before the article was next validated.
  • Readers can look for the last {reviewed, validated} version, or see a 'list of all reviews', perhaps even a 'list of all validated versions'.
  • The current revision of an article is associated with the latest reviews of all users who have reviewed the article in the past - excepting reviews that have been responded to.
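Putting the conditions above together (a thumbs-up in each facet from at least one reviewer, and no unanswered negative reviews), the admin-facing "validate this article" check could be sketched as follows. All names are hypothetical.

```python
# Check whether the current revision meets the validation conditions.
# `reviews` contains only reviews still associated with the current
# revision; reviews that have been responded to are assumed removed.

FACETS = ("accuracy", "quality", "neutrality", "completeness")

def ready_to_validate(reviews):
    """reviews: list of dicts mapping facet -> '+', '=', or '-'."""
    for facet in FACETS:
        ratings = [review.get(facet, "=") for review in reviews]
        if "-" in ratings:      # an unanswered objection blocks validation
            return False
        if "+" not in ratings:  # each facet needs at least one thumbs-up
            return False
    return True
```

A later negative review simply makes this check fail again until it is responded to, matching the "someone else might come along the next minute" scenario above.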

Possible metadata interface design

Interface design: associate a color and letter with each facet.

  • [A]ccuracy  : Green
  • [Q]uality  : Blue
  • [N]eutrality  : LtRed
  • [C]ompleteness: Purple

Let as much information as you can trickle up to the readers, in a subtle fashion.

  1. At the upper-right corner of each article there could be a tiny globe; grey by default, but colored in if that version is actively validated (blue?) or unvalidated (red?).
  2. On mouseover, a grey globe might display "This article has not yet been reviewed.", or "This article has not yet been validated", or "This revision has not yet been validated", right next to a "see last validated revision" link.
    Note: most current revisions will be unreviewed; perhaps the globe can be suppressed in those cases, to minimize the impact of this system on typical browsing.
  3. At the bottom of a validated revision would be a link to "validation details", which would be a page displaying a list of recent review comments and responses, along with a list of validating users (that is, users whose last review had been positive), with the dates (colored according to which facet they were reviewing?) and comments submitted with their validations.
  4. Validated articles would have a "last validated revision" link near the globe at the top, which would take readers to the appropriate revision.
  5. At the bottom of each page would be a "show review history" link that would display a list of each reviewed version, with a review summary, something like this:

June 8, 2004 20:41:06   A Qx2 Fuzheado, 64.102.12.3
June 2, 2004 14:12:11 A Q Kpjas
May 14, 2004 07:36:40 Ax4|1  Qx3 LittleBuddha, Fuzheado, Fuzheado, 143..., Anthere, Belizian, J-VH, Tomos
May 11, 2004 17:50:01 Ax1|1 Q Tomos, 143...

Technical interface design

  • Make reviews (and responses to reviews) show up on watchlists and a special V-RC page. This will help draw attention to validated articles, to make sure that people don't mistakenly validate a poor article.
    1. Publication teams looking for stable versions to copyedit and fact-check into a published work can start with this general validation system, refine it through contact with interested editors in the subject-area wikiprojects, and keep an eye on the reviewers/validators -- only team members should be responding to / unflagging reviews or validating articles.
    2. Admins should be able to revert a mischievous review by vandrolls; admins could archive a mischievous review flag, so that it shows up when viewing the article revision (along with a note: "reverted on <date> by <admin>") but doesn't appear in the "list all validated versions" summary view, or in other review summaries.

I hope this was not so long that it now seems overcomplicated! The basic idea is simple. +sj+ 07:20, 23 Jun 2004 (UTC)

Weighting and averaging reviews

For most articles, this should be unnecessary. Every objection or negative review of an article should be responded to before it is validated (Most articles will perhaps suffer from too few reviewers, rather than an excess of active pessimists who are never satisfied). For those few cases where some objections cannot be met or cannot be agreed upon, there should be some automated rule of thumb to help an admin decide whether an article has support for validation or not.

Appropriate levels of consensus: We may want to isolate those aspects of articles (accuracy) which demand near-unanimous support from those more subjective aspects (completeness) for which a majority is acceptable.

Playing the game: Since we only need to worry about this for controversial articles, we should expect that some energetic people will try to subvert the reviewing system. Verifying factual correctness, in particular, is time-consuming, since a random well-meaning Wikipedian cannot tell simply by reading an article whether its facts are correct (whereas the other facets may be difficult to work out, but success is easier to recognize/validate) -- and, moreover, it is a delicate facet, one for which we would like validated articles to have near-unanimous support.

Combining openness and trust: It is important for all contributors to be able to help review articles. It is also important to have a notion of trust -- via which the opinions of trusted contributors may be heard above any focused effort among untrusted contributors to subvert the review process.

Suggested method

  • Make it hard for bots to submit reviews, on general principle, while allowing one-visit wonders to leave productive input.
    1. Use a broad trust metric to simplify the problem.
      Have a 'reviewer' flag for anyone who bothers to read what reviewing is all about, and apply it liberally. Let anyone review an article, but post reviews by non-reviewers at the end of the article talk page, instead of as normalized article metadata.
      Put a "Become a Wikipedia reviewer!" link at the top of the Review page for accounts that are not yet reviewers; require them to visit a page explaining the review process, and to submit a request to be a reviewer. Process requests from IPs manually; scan others after the fact. Allow admins to manually set/unset a user's "reviewer" flag, with minimal red tape.
    2. An en:CAPTCHA test (aka "copy the letters from this slightly hard-to-read image into the textbox") can offer some basic protection.
  • For the purposes of averaging reviews, calculate separately the consensus among all users commenting on the talk page (untrusted reviews) and that among the reviewers.
    • Reviews from accounts which have since lost their "reviewer" flag (for trolling, say) should be counted as untrusted reviews.
    • Allow untrusted reviews to be ignored if they give no reason for their review

This allows requiring supermajorities among active reviewers without giving mischief-makers 9:1 leverage in the review system. For example: (note: neutrality is the facet most likely to be targeted by large special-interest groups, and the most difficult to demonstrate conclusively)

Level of consensus required (all users | reviewers):  
         [A]      [Q]      [N]      [C] 
       45%|90%  40%|80%  25%|70%  25%|50%
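The dual thresholds above might be applied as in the following sketch, which counts '+'/'-' votes separately for the two pools (all users, including untrusted reviews, versus trusted reviewers). The function and variable names are invented for illustration.

```python
# Per-facet consensus thresholds from the table above:
# (fraction required among all users, fraction required among reviewers).

THRESHOLDS = {
    "accuracy":     (0.45, 0.90),
    "quality":      (0.40, 0.80),
    "neutrality":   (0.25, 0.70),
    "completeness": (0.25, 0.50),
}

def has_consensus(facet, all_votes, reviewer_votes):
    """Votes are '+' or '-'. Both pools must clear their threshold."""
    def support(votes):
        return votes.count("+") / len(votes) if votes else 0.0
    need_all, need_reviewers = THRESHOLDS[facet]
    return (support(all_votes) >= need_all
            and support(reviewer_votes) >= need_reviewers)
```

Because the reviewer pool is counted separately, a flood of hostile untrusted votes lowers only the all-users fraction, which has the laxer threshold; this is the "no 9:1 leverage" property mentioned above.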

+sj+ 03:01, 18 Jul 2004 (UTC)

Validation by committee

  1. Create a "Committee of Experts" on particular topics from among Wikipedia contributors: For example, Camemebert has shown expertise in classical music, and Viajero has shown expertise in opera. Put them in a music committee, chaired by TUF-KAT (all names are only examples). Similar committees, larger and smaller, would exist for various topics, such as biology, history, Lord of the Rings, etc. Committee membership can be flexible, with new people joining as they evidence expertise.
  2. Ideally, these "experts" would have to prove a) commitment to the Wikipedia project and ideals, and b) some expertise in the field in their non-Wikipedia life. A professor of biology would probably be better suited to chair the Biology Committee than a high school kid preparing for his A levels in biology.
  3. The Committees would have several tasks. The easy ones are:
    • determining categorization principles for that subject,
    • determining what is missing from the list of existing articles,
    • posting this information and encouraging gap-filling,
    • validating articles.
  4. The validated, or approved, article is flagged and [moved to a special page]. Once that happens, only committee members may alter it. It can now be accessed through a new, "Validated articles" section.
  5. The original article continues to be edited by the community. The Committee observes these changes and at regular intervals decides which are worthy of incorporation into the validated article. This will require that people on these committees be open to change and committed to the Wikipedia ideals.
  6. Validated articles can be produced instantly in a print version. Regular articles are in a state of constant flux. (note that for this stability, it suffices to mark one of the article revisions as 'validated' +sj+)
  7. An impartial Overseer institution would be created to ensure that the Committee of Experts does not overstep its bounds. (... how about The Community At Large? or is it turtles all the way down? +sj+)

Problematic points

  • Who decides the committee's composition, and the rules according to which it makes decisions?
  • The parallel existence of validated (but editable) and regular wiki articles could bring a lot of complexity to the system. A simpler solution is that a validated article is not editable at all. (... like an old revision. +sj+) (...or an actual old revision.)
  • We must not endanger this collaboration either by dividing our contents into good/bad or by dividing our volunteers into editors/validators.
  • The validation committee are not laymen (and if they are, they soon won't be after validating many articles), but some articles need to be readable to laymen too. R3m0t 17:56, 21 Jan 2005 (UTC)

Validation by rating/voting

(changed 'vote' to 'rate' where appropriate to distinguish yes/no voting from rating on a scale from 1 to 6 +sj+)

Another option would be individual voting a la "This version has been validated by X editors".

  1. Any registered user can vote that an article is useful (yes/no); any page with more than x users voting 'yes' can be considered validated.
  2. Option 2: Any registered user can rate an article on its usefulness, giving a value from 6 (extremely useful) to 1 (rubbish). Any page rated by more than x users with a score of 3 or more can be considered validated.
NOOOOooooooo!!!! <imagines the pain and the suffering... the painful suffering... the insufferable pain...> +sj+
ROFL!!!!!!! Anthere 14:14, 23 Jun 2004 (UTC)
Could somebody please clarify what is so funny? The idea as a whole, of bringing 'objective' criticism to it? From the discussion, I would reconsider this proposition, as using a range of values and allowing rating "by any user" is not good. However, consider the Slashdot moderation system. It works very well for how /. "validates" comments, and with some minor adaptations it could possibly work as well for a wiki. Axel Kittenberger 17:28, 23 Jun 2004 (UTC)

Another positive effect could be an implicit competition arising between editors to create the most useful (highest-scored) articles; that could be a useful addition to encourage quality, no? Axel Kittenberger 17:41, 16 Jun 2004 (UTC)

  • I also dislike the voting idea, as that might be easily manipulated by sockpuppets, and doesn't really solve the problem, as those voting, even with good intentions, might not have all the facts or even the knowledge to vote correctly. Burgundavia 08:30, 18 Jun 2004 (UTC)

Natural validation process

Example of what was done with the WikiReader

  • Prerelease often and announce it everywhere (mailing lists...).
  • Find writers who are already active in the topic and make sure they know about the reader.
  • Promise the very active editors a special mention in the print edition (they will love to see their names in a book).

This provides many "validators".

This seems to me the right way to start, even if one of the above technical solutions is later implemented. It can work very well with focused subject-areas that are non-controversial, as the WikiReader progress has shown. The first steps for implementing this are:

  1. Decide on a manageable set of topics (this grows with each iteration)
  2. Decide on the rough set of articles within those topics (some will not exist)
  3. Get a few active, knowledgeable people to divert some of their energies to this effort;
    • Then get other people excited about reviewing and improving those articles, by broadcasting the effort and emphasizing the excellent final goal (for this it is important that the goal be attainable)

Optimizing before a problem crops up is a waste of time; only after many people are excited must one start to worry about mischief. (However, while embarking on the above steps, we can simultaneously agree on which technical solution to employ eventually, and coders & designers can start working on that.) +sj+

Trust metrics

One broad trust metric was mentioned above: trusting users with 20 edits not to be bots. This is a poor way to reach that conclusion; feel free to suggest a better one. Similar trust-related traits about a user can be determined rather automatically:

  • an account is not always a bot
  • an account sometimes makes valid edits
  • an account has been vouched for by some other account
  • a user has put +10 (+N) hours into an account

With some combination of the above, it is much easier to filter out vandalism and other counterproductive contributions.

Trust isn't really a binary, yes/no, human/bot quantity. A degree of trust, determined by # of validated contributions and meta-validation and endorsements by those already well-trusted would be more resistant to bots as well as provide a metric for the quality/weight of a review. This is particularly valid for qualifying 'expertness' for reviews of factual accuracy/completeness. However, it would be important to consider different levels of trust per category. Intangir 10:45, 6 Dec 2004 (UTC)
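A graded trust weight along these lines might combine the automatically determinable traits listed above into a single score, rather than a yes/no flag. The particular traits and weights below are illustrative assumptions, not proposed policy.

```python
# Combine simple trust traits into a weight. A review's influence could
# then scale with the reviewer's weight instead of being all-or-nothing.

def trust_weight(account):
    """account: dict of trust-related traits for one user account."""
    score = 0.0
    if account.get("makes_valid_edits"):
        score += 1.0
    score += 0.5 * len(account.get("vouched_by", []))   # endorsements
    score += min(account.get("hours", 0) / 10.0, 2.0)   # cap time credit
    return score
```

Per-category trust, as Intangir suggests, would just mean keeping one such score per subject area instead of one global score.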

These are some possible solutions:

  • Add a publish button to every page.

    Anyone can edit the wiki, but only approved people can approve an edit, so it isn't live until checked by an authorised person.

    When an authorised person goes to edit an article they get a warning that an unapproved edit is queued. They check that edit. If they approve of it, they edit that version of the article and put it live. If they don't approve, it is removed from the queue and they edit the previous version of the article instead.

    Unapproved edits can not be seen until you log in. They will not show up in search engines.

    A query that reports all top versions which are not by approved people could be run to provide an approval queue. Approved users could go to Special:Unapproved to view these and publish any they agreed with.

  • Only serve cached pages, and only let approved people purge the cache. Unapproved versions would only be viewable in the page history, so would not be found by search engines.
  • Display a notice (something like the fundraising notices?) indicating that anyone can edit it and providing a link to the other version (the two versions would be the last approved version and the last version). Brianjd 03:01, 2005 Feb 24 (UTC)
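The "query that reports all top versions which are not by approved people" mentioned above reduces to a simple filter. This sketch models page histories in memory rather than querying the real database schema; all names are invented for illustration.

```python
# Build the Special:Unapproved queue: titles whose most recent revision
# was made by someone not on the approved list.

def approval_queue(pages, approved_users):
    """pages maps title -> list of (editor, text) revisions, oldest first."""
    return [title for title, revisions in pages.items()
            if revisions and revisions[-1][0] not in approved_users]

pages = {
    "Dog": [("Carol", "v1"), ("Anon-IP", "v2")],   # top edit unapproved
    "Cat": [("Carol", "v1")],                      # top edit approved
}
# approval_queue(pages, {"Carol"}) lists only "Dog".
```

Approved users would work through this list, publishing any revisions they agree with.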

Viewing unapproved pages

Readers should never see an unofficial version. You should have to log in to see an unofficial version. Therefore, there would be 5 types of user:

Reader
Not logged in. Can read live pages, but not drafts. Can not edit.
Logged in user
Someone with an unapproved account. Can edit draft versions. Can read unapproved versions.
Approved
Has been approved by a sysop. Can edit any unprotected page. Can choose to publish any edit.
Sysop
Can edit protected pages. Can add logged in users to the "approved users" list.
Bureaucrat
Can create new sysops.
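The five user types form a strict hierarchy, each level inheriting the permissions of every level below it. A sketch of that idea follows; the permission names are invented shorthand for the abilities listed above.

```python
# Encode the five user types and their cumulative permissions.

LEVELS = ["reader", "user", "approved", "sysop", "bureaucrat"]
PERMISSIONS = {
    "reader":     {"read_live"},
    "user":       {"read_drafts", "edit_drafts"},
    "approved":   {"edit_unprotected", "publish"},
    "sysop":      {"edit_protected", "grant_approved"},
    "bureaucrat": {"create_sysop"},
}

def can(level, action):
    """True if the given level, or any level below it, grants the action."""
    rank = LEVELS.index(level)
    return any(action in PERMISSIONS[lvl] for lvl in LEVELS[:rank + 1])
```

For example, a reader can read live pages but not drafts, while a sysop inherits everything up to and including publishing.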

Possible problems

One reason the Foundation wiki is not open to editing is that it allows the full use of HTML. To keep it safe, unapproved users should not be allowed to use HTML. If they use HTML, this should be rendered as plain text until someone approves it.

Comments

  • I am for the separation of unapproved draft / public version.
  • As for draft editing, I prefer to edit on meta first. The recent translation of Quarto vol. 1 engaged many (over 5) people ... I'm not sure that it is good that so many people have accounts on the WMF wiki and that the same persons will take part in the coming issue ... How about we prepare first drafts on meta, move them to the WMF site as final drafts, then work there and use feedback on meta (the same as the recent French version correction)? To avoid confusion, after the move the initial draft on meta should be deleted (or should we wait a week, or would that be too long?). --Aphaia 15:58, 27 Sep 2004 (UTC)
  • I agree very much with the solution, in part. Wiki's strength comes from both simplicity and flexibility. Requiring users to have an account to edit/publish may be well suited to some applications of a wiki, while in others it may be a hindrance. With this in mind, I believe either having editing levels (from anonymous usage to locked, with some resolution) or outright page permissions governing who (Approved Users, Users, or Anonymous) can read, edit, or publish may be in order.
  • In terms of drafts and publishing, I will comment based on my situation. I am part of a company that is leaning toward a wiki to maintain documentation of its products. We want our customers to be able to freely make changes and additions to the documentation while we remain able to approve the validity of its contents. We either need to 1) filter all changes made by non-approved users, as is proposed in the solution, or 2) have an 'approved' icon or message appear on a page which has been given approval by a sysop or approved user, in which case the page would lose its approval when edited by a non-approved user. --Brian 00:03, Oct 2, 2004 (UTC)
  • This sounds good but I think anyone should be able to edit discussion pages and have their edits appear immediately - how is the HTML problem dealt with on other wikis? 210.50.104.156 08:01, 17 Dec 2004 (UTC)
  • Let's keep it simpler and more similar to other wikis - have "readers" (users who are not logged in) and logged-in users (users who are logged in but who are not "approved") treated the same! Also, I suppose sysops will be able to remove users from the "approved" users list?
  • Maybe the easiest way would be to only serve cached pages, and only let approved people purge the cache. Unapproved versions would only be viewable in the page history, so would not be found by search engines.
    We have a static link to the history and then static links to the various versions, so why wouldn't these be found by search engines? Brianjd 10:18, 2005 Jan 24 (UTC)
  • The Foundation wiki is just a wiki where every page is protected, right? Shouldn't things be treated the same way as protected pages on other w:wikis (other w:Wikimedia Foundation wikis, anyway)? Brianjd 09:06, 2005 Jan 27 (UTC)
  • People can work on drafts before putting them live. (This was listed under advantages before I edited that section.)

    They already can, by putting the drafts on Meta, but this is bad as described above. Brianjd 10:01, 2005 Jan 29 (UTC)

  • I copied the following from the Advantages section Brianjd 03:02, 2005 Feb 16 (UTC):
    All this is true. The current situation was, among other reasons, selected for being easy. Developing an "approval" system requires time and a motivated developer. If such a system were developed by someone, I think I would support it. Anthere
    Huh? None of what you said makes any sense! Brianjd 10:01, 2005 Jan 29 (UTC)


Sanger's proposal

When I say the approval mechanism must be really easy for people to use, I mean it. I mean it should be extremely easy to use. So what's the easiest-to-use mechanism that we can devise that nevertheless meets the criteria?
The following: on every page on the wiki, create a simple popup approval form that anyone may use. ("If you are a genuine expert on this subject, you can approve this article.") On this form, the would-be article approver [whom I'll call a "reviewer"] indicates name, affiliation, relevant degrees, web page (that we can use to check bona fides), and a text statement to the effect of what qualifications the person has to approve of an article. The person fills this out (with the information saved into their preferences) and hits the "approve" button.
When two different reviewers have approved an article, if they are not already official reviewers, the approval goes into moderation.
The approval goes into a moderation queue for the "approved articles" part of Wikipedia. From there, moderators can check over recently-approved articles. They can check that the reviewers actually are qualified (according to some pre-set criteria of qualification) and that they are who they say they are. (Perhaps moderator-viewable e-mail addresses will be used to check that a reviewer isn't impersonating someone.) A moderator can then "approve the approver".
The role of the moderators is not to approve the article, but to make sure that the system isn't being abused by underqualified reviewers. A certain reviewer might be marked as not in need of moderation; if two such reviewers were to approve of an article, the approval would not need to be moderated.
New addition: I think it might be a very good idea to list, on an approved article, who the reviewers are who have approved the article.
--Larry Sanger
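Sanger's rule above (two distinct reviewers must approve; the approval is moderated unless both approvers are already exempt from moderation) can be sketched as follows. The function and state names are invented for illustration.

```python
# Classify an article's approval state under Sanger's scheme.

def approval_status(approvers, moderation_exempt):
    """approvers: names of reviewers who hit 'approve' on the article.
    moderation_exempt: reviewers already vetted as not needing moderation."""
    if len(set(approvers)) < 2:
        return "pending"          # still needs a second, distinct reviewer
    if all(name in moderation_exempt for name in approvers):
        return "approved"         # no moderation needed
    return "in moderation"        # moderators vet the approvers, not the text
```

The "in moderation" branch is where moderators "approve the approver": once a reviewer's bona fides check out, they join the exempt set and their future approvals skip the queue.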

I think it would be a very good idea to list, on any article, the names of ANY reviewers to have reviewed the article. Seeing who disapproved it is, perhaps, more important than seeing who approved it, since Wikipedia generally has high quality articles anyway. Brianjd 10:22, 2005 Jan 29 (UTC)

This is quite a good idea - domain experts may not be Wikipedians, but if we can leech a little of their time when they encounter a page, then they could still contribute greatly. The big issues here, of course, are verification of identity and what to do when an approved article receives an edit. Identity verification I think we can just muddle through with; for edits, show a link to the most recently approved revision on any page that has ever been approved by an expert. It would also be nice to list the experts at the bottom - not just for completeness in Wikipedia, but for a (very) small amount of prestige for the experts themselves. This proposal should also be pretty easy to implement, which might mean it could serve as a simplest-thing-that-might-work first stab at solving the problem.

Of course, looking at the age of submissions on this page, it's not clear that anything will get implemented any time soon...

Peter Boothe (not a wikipedian, not enough free time) Tuesday, 18 October 2005 - 06:12 PM (PDT)

Bryce's proposal

From my experience with the Wikipedia_NEWS, it seems that there's a lot that can be done with the wiki software as it exists. The revision control system and its tracking of IP addresses is ok as a simple screen against vandalism. The editing system seems fairly natural and is worth using for managing this; certainly we can expect anyone wishing to be a reviewer ought to have a fair degree of competence with it already.
Second, take note at how people have been making use of the user pages. People write information about themselves, the articles they've created, and even whole essays about opinions or ideas.
What I'd propose is that we encourage people who wish to be reviewers to set up a subpage under their user page called '/Approved'. Any page they add to this subpage is considered acceptable by them. (It is recommended they list the particular revision # they're approving too, but it's up to them whether to include the number or not.) The reviewer is encouraged to provide as much background and contact information about themselves on their main page (or on a subpage such as /Credentials) as they wish. It is *completely* an opt-in system, and does not impact Wikipedia as a whole, nor any of its articles.
Okay, so far it probably sounds pretty useless because it *seems* like it gives zero _control_ over the editors. But if we've learned nothing else from our use of Wiki here, it's that sometimes there is significant power in anarchy. Consider that whoever is going to be putting together the set of approved articles (let's call her the Publisher) is going to be selecting the reviewers based on some criteria (only those with PhDs, or whatever). The publisher has (and should have) control over which reviewers they accept, and can grab their /Approved lists at the time they wish to publish. Using the contact info provided by the reviewer, they can do as much verification as they wish; those who provide insufficient contact info can be ignored (or asked politely on their user page). But the publisher does *not* have the power to control whether or not you or I are *able* to approve articles. Maybe for the "PhD Reviewers Only" encyclopedia I'd get ruled out, but perhaps someone else decides to do a "master's degree or better" one, and I would fit fine there. Or maybe someone asks only that reviewers provide a telephone number they can call to verify the approved list.
Consider a further twist on this scheme: in addition to /Approved, people could set up other specific kinds of approval. For instance, some could create /Factchecked pages listing articles where they've only verified the factual statements against some other source; or a /Proofed page that just lists pages that have been through the spellchecker and grammar proofer; or a /Nonplagiarised page that lists articles that the reviewer can vouch for as being original content and not merely copied from another encyclopedia. The reason I mention this approach is that I imagine there will be reviewers who specialize in checking certain aspects of articles, but not everything (a Russian professor of mathematics might vouch for everything except spelling and grammar, if he felt uncomfortable with his grasp of the English language). Other reviewers can fill in the gaps (the aforementioned professor could ask another to review those articles for spelling and grammar, and they could list them in their own area).
I think this system is very in keeping with wiki philosophy. It is anti-elitist, in the sense that no one can be told, "No, you're not good enough to review articles," yet still allows the publisher to discriminate what to accept based on the reviewer's credentials. It leverages existing wiki functionality and Wikipedia traditions rather than requiring new code and new skills. And it lends itself to programmatic extraction of content. It also puts a check/balance situation between publisher and reviewer: If the publisher is selecting reviewers to include unfairly, someone else can always set up a fairer approach. There is also a check against reviewer bias, because once discovered, ALL of their reviewed articles would be dropped by perhaps all publishers, which gives a strong incentive to the reviewer to demonstrate the quality of their reviewing process and policies.
-- BryceHarrington
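
The "programmatic extraction of content" this scheme lends itself to could look roughly like the following sketch. It assumes each reviewer keeps a User:X/Approved subpage that can be parsed into (article, revision) entries; the data and function names here are illustrative, not an existing MediaWiki interface.

```python
# Publisher-side extraction sketch: gather approved revisions from the
# /Approved? subpages of reviewers who passed the publisher's own vetting.
# The subpage contents below are fabricated sample data.

approved_subpages = {
    # reviewer -> (article, revision_id) entries from User:X/Approved
    "Alice": [("Photosynthesis", 1041), ("Mitosis", 977)],
    "Bob":   [("Photosynthesis", 1041), ("Phlogiston", 12)],
    "Carol": [("Mitosis", 980)],
}

def collect_approved(subpages, accepted_reviewers):
    """Gather revisions vouched for by at least one accepted reviewer."""
    picks = {}   # article -> set of approved revision ids
    for reviewer, entries in subpages.items():
        if reviewer not in accepted_reviewers:
            continue                     # failed the publisher's vetting
        for article, rev in entries:
            picks.setdefault(article, set()).add(rev)
    return picks

# A publisher who only vets Alice and Carol simply ignores Bob's list:
print(collect_approved(approved_subpages, {"Alice", "Carol"}))
```

Note that the same function supports the "clubs" of reviewers other proposals mention: the publisher's vetting is just a set of names, chosen however they like.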

We would need some standards for this to work. If the subpages are named differently and/or have different structures, it will be too difficult to use.

Each publisher would have to check the credentials themselves. Leaving results of checks on users' pages/subpages is not acceptable; anyone can edit them, and developers can edit the history. Brianjd 10:22, 2005 Jan 29 (UTC)

Magnus Manske's proposal

I'll try to approach the whole approval mechanism from a more practical perspective, based on some things that I use in the Wikipedia PHP script. So, to set up an approval mechanism, we need
    • Namespaces to separate different stages of articles,
    • User rights management to prevent trolls from editing approved articles.
From the Sanger proposal, the user hierarchy would have to be
    1. Sysops, just a handful to ensure things are running smoothly. They can do everything, grant and reject user rights, move and delete articles etc.,
    2. Moderators who can move approved articles to the "stable" namespace,
    3. Reviewers who can approve articles in the standard namespace (the one we're using right now),
    4. Users who do the actual work. ;)
Stages 1-3 should have all rights of the lower levels, and should be able to raise other users to their level. For the namespaces, I was thinking of the following:
    • The blank namespace, of course, which is the one all current wikipedia articles are in; the normal wikipedia
    • An approval namespace. When an article from "blank" gets approved by the first reviewer, a copy goes to the "approval" namespace.
    • A moderated namespace. Within the "approval" namespace, no one can edit articles, but reviewers can either hit a "reject" or "approve" button. "Reject" deletes the article from the "approval" namespace, "approve" moves it to the "moderated" namespace.
    • A stable namespace. Same as for "approval", but only moderators can "reject" or "approve" an article in "moderated" namespace. If approved, it is moved to the "stable" namespace. End of story.
This system has several advantages:
    • Because reviewers and moderators are not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references, etc.
    • Reviewers and moderators can have special pages that show just the articles currently in "their" namespace, making it easy to look for topics they are qualified to approve/reject
    • Easy handling. No pop-up forms, just two buttons, "approve" and "reject", throughout all levels.
    • No version confusion. The initial approval automatically locks that article in the "approval" namespace, and all decisions later on are on this version alone.
    • No disruption of the normal Wikipedia. "Approval" and "moderated" can be blanked out in everyday work, "stable" can be blanked out as an option.
    • Easy to code. Basically, I have all parts needed ready, a demo version could be up next week.
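
As a rough illustration of this workflow (not Magnus's actual PHP code, which is not shown here), the namespace transitions and user levels can be modelled as a small state machine. The role hierarchy and transition table follow the proposal; the function and data shapes are invented:

```python
# Sketch of the namespace workflow: an article copy moves
# blank -> approval -> moderated -> stable, and only the right user
# level may press "approve"/"reject" at each stage.

LEVELS = {"user": 0, "reviewer": 1, "moderator": 2, "sysop": 3}

# namespace -> (minimum level allowed to decide, namespace on "approve")
TRANSITIONS = {
    "blank":     ("reviewer",  "approval"),
    "approval":  ("reviewer",  "moderated"),
    "moderated": ("moderator", "stable"),
}

def decide(namespace, actor_level, approve=True):
    """Return the article copy's next namespace, or None if rejected."""
    needed, next_ns = TRANSITIONS[namespace]
    if LEVELS[actor_level] < LEVELS[needed]:
        raise PermissionError(f"{actor_level} may not decide in {namespace}")
    return next_ns if approve else None   # "reject" deletes the copy

print(decide("blank", "reviewer"))        # first approval copies to "approval"
print(decide("moderated", "moderator"))   # final approval moves to "stable"
```

The two-button interface the proposal emphasizes maps directly onto the single `approve` flag here.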

Ehrenberg addition

This would be added on to any of the above approval processes. After an article is approved, it would go into the database of approved articles. People would be able to access this from the web. After reading an article, the reader would be able to click on a link to disapprove of the article. After 5 (more, less?) people have disapproved of an article, the article goes through a reapproval process, in which only one expert must approve it, and then the applicable administrators. -- Suggested addition: there should be a separate domain, perhaps frozenwikipedia.org, which includes only approved articles. This could be used as a "reliable" reference when factual accuracy was very important.

This could be used as a "reliable" reference when factual accuracy was very important.

No, it couldn't. If factual accuracy is very important, you must check several sources, which means that it wouldn't make much difference whether you used the "approved" version or not. Brianjd 10:22, 2005 Jan 29 (UTC)

DWheeler's Proposal: Automated Heuristics

It might also be possible to use some automated heuristics to identify "good" articles. This could be especially useful if the Wikipedia is being extracted to some static storage (e.g., a CD-ROM or PDA memory stick). Some users might want this view as well. The heuristics may throw away some of the latest "good" changes, as long as they also throw away most of the likely "bad" changes.

Here are a few possible automated heuristics:

  • Ignore all anonymous changes; if someone isn't willing to have their name included, then it may not be a good change. This can be "fixed" simply by some non-anonymous person editing the article (even trivially).
  • Ignore changes from users who have only submitted a few changes (e.g., fewer than 50). If a user has submitted a number of changes, and is still accepted (not banned), then the odds are higher that the user's changes are worthwhile.
  • Ignore pages unless at least some number of other non-anonymous readers have read the article and/or viewed its diffs (e.g., at least 2 other readers). The notion here is that, if someone else read it, then at least some minimal level of peer review has occurred. The reader may not be able to identify subtle falsehoods, but at least "Tom Brokaw is cool" might get noticed. This approach can be foiled (e.g., by creating "bogus readers"), but many trolls won't bother to do that.
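
The three heuristics above can be sketched as a filter over a revision history. The revision records and the thresholds (50 edits, 2 readers, taken from the examples above) are illustrative; a real extractor would read them from the database.

```python
# Pick the newest revision that survives all three automated heuristics.

def passes_heuristics(rev, min_author_edits=50, min_readers=2):
    if rev["author"] is None:                  # anonymous change: skip
        return False
    if rev["author_edit_count"] < min_author_edits:
        return False                           # too few prior edits
    if rev["non_anon_readers"] < min_readers:  # no minimal peer review yet
        return False
    return True

history = [   # newest first; fabricated sample data
    {"id": 3, "author": None,  "author_edit_count": 0,   "non_anon_readers": 5},
    {"id": 2, "author": "Eve", "author_edit_count": 7,   "non_anon_readers": 9},
    {"id": 1, "author": "Dan", "author_edit_count": 312, "non_anon_readers": 4},
]

good = next(r for r in history if passes_heuristics(r))
print(good["id"])   # revision 1 is the newest "good" revision
```

As the text notes, this throws away some recent good changes (revision 2 here) in order to drop most likely-bad ones.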

These heuristics can be combined with the expert rating systems discussed elsewhere here. An advantage of these automated approaches is that they can be applied immediately.

Other automated heuristics can be developed by developing "trust metrics" for people. Instead of trying to rank every article (or as a supplement to doing so), rank the people. After all, someone who does good work on one article is more likely to do good work on another article. You could use a scheme like Advogato's, where people identify how much they respect (trust) someone else. You then flow down the graph to find out how much each person should be trusted. For more information, see Advogato's trust metric information. Even if the Advogato metric isn't perfect, it does show how a few individuals could list other people they trust, and over time use that to derive global information. The Advogato code is available - it's GPLed.
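
The following is not the actual Advogato network-flow metric, but a much simpler stand-in showing the shape of the idea: start from a few trusted seeds and let trust decay as it flows along "X trusts Y" edges. The graph data and the decay factor are invented for illustration.

```python
# Toy trust propagation: breadth-first flow where each hop keeps only
# a fraction of the upstream trust, so distant strangers score low.

trusts = {            # who lists whom as trusted (illustrative data)
    "seed": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "mallory"],
    "carol": [],
    "mallory": ["mallory2"],
}

def propagate(graph, root, decay=0.5):
    """Return a trust score per person, seeded at `root` with 1.0."""
    trust = {root: 1.0}
    frontier = [root]
    while frontier:
        nxt = []
        for person in frontier:
            for peer in graph.get(person, []):
                share = trust[person] * decay
                if share > trust.get(peer, 0.0):  # keep the best path only
                    trust[peer] = share
                    nxt.append(peer)
        frontier = nxt
    return trust

print(propagate(trusts, "seed"))
```

Because trust strictly shrinks at each hop, a troll's sockpuppets ("mallory2") inherit little trust even if one established user vouches for them.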

Another related issue might be automated heuristics that try to identify likely trouble spots (new articles or likely troublesome diffs). A trivial approach might be to have a not-publicly-known list of words that, if they're present in the new article or diffs, suggest that the change is probably a bad one. Examples include swear words, and words that indicate POV (e.g., "Jew" may suggest anti-semitism). The change might be fine, but such a flag would at least alert someone else to especially take a look there.

A more sophisticated approach to automatically identify trouble spots might be to use learning techniques to identify what's probably garbage, using typical text filtering and anti-spam techniques such as naive Bayesian filtering (see Paul Graham's "A Plan for Spam"). To do this, the Wikipedia would need to store deleted articles and have a way to mark changes that were removed for cause (e.g., were egregiously POV) - presumably this would be a sysop privilege. Then the Wikipedia could train on "known bad" and "known good" (perhaps assuming that all Wikipedia articles before some date, or meeting some criteria listed above, are "good"). Then it could look for bad changes (either in the future, or simply examining the entire Wikipedia offline).
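
A toy version of such a filter, in the spirit of "A Plan for Spam", is sketched below. The two training corpora stand in for the "known good" and "removed for cause" revisions; everything about the data is fabricated, and a real deployment would train on the actual deletion log.

```python
import math

# Minimal naive Bayes over word counts with Laplace smoothing and
# uniform class priors: flag an edit when the "bad" class is likelier.

good = ["the treaty was signed in 1648", "cell division occurs in stages"]
bad  = ["this band is totally cool", "my teacher is cool and stupid"]

def counts(docs):
    c = {}
    for d in docs:
        for w in d.split():
            c[w] = c.get(w, 0) + 1
    return c

def log_prob(text, c, total, vocab):
    # Laplace-smoothed log P(words | class)
    return sum(math.log((c.get(w, 0) + 1) / (total + vocab))
               for w in text.split())

gc, bc = counts(good), counts(bad)
vocab = len(set(gc) | set(bc))

def looks_bad(text):
    g = log_prob(text, gc, sum(gc.values()), vocab)
    b = log_prob(text, bc, sum(bc.values()), vocab)
    return b > g   # flag for human review, don't auto-revert

print(looks_bad("tom brokaw is cool"))
print(looks_bad("the treaty was ratified"))
```

As with the word-list idea, the output is only a flag that alerts a human to take a look, not an automatic revert.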

A trivial approach might be to have a not-publicly-known list of words that, if they're present in the new article or diffs, suggest that the change is probably a bad one.
Why does it have to be not-publicly-known? Brianjd 10:22, 2005 Jan 29 (UTC)
I assume the idea is that if the list were known, it would be quicker for vandals to think of alternative nasty things to say. But really we would want to be watching all the changes anyway, so that by watching bad edits that got past the filter one could update the list. In any case, I think this naughtiness flagging is overcomplicated and not very useful. More helpful might be a way to detect subtle errors: changes in dates, heights, etc. that casual proofreaders would be less likely to pick up. Rather than simply flagging the edit, maybe highlighting the specific change in the normal version of the article to warn all readers: "Columbus discovered Antarctica? in 1492.". Reviewers would then be able to right click on the change and flag it as OK or bad, for example. Luke 05:34, 15 Feb 2005 (UTC)

PeterK's Proposal: Scoring

This idea shares some principles with the Automated Heuristics suggested above. I agree that an automated method for determining "good" articles for offline readers is absolutely crucial, but I have a different idea on how to go about it. I think the principles of easy editing and the way Wikipedia works now are what make it great. I think we need to take those principles, along with some search engine ideas, to give a confidence level for documents, so that people extracting the data for offline purposes can decide the confidence level they want and only extract articles that meet it.

I think the exact equation for the final scoring needs to be discussed. I don't think I could come up with a final version by myself, but I'll give an example of what would give good points and bad points.

Final score:
  a. First we need a quality/scoring value for editors. Anonymous editors would be given a value of 1, and a logged-in user may get 1 point added to their value for each article he/she edits, up to a value of 100.
  b. 0.25 points for each time a user reads the article.
  c. 0.25 points for each day the article has existed in Wikipedia.
  d. Each time the article is edited it gets 1+(a/10)*2 points; an anonymous user would give it 1.2 points and a fully qualified user would give it 21 points.
  e. Next, if an anonymous user makes a large change then you get a -20 point deduction. Even though this is harsh, if it goes untouched for 80 days it will gain all those points back. It will gain the points back faster if a lot of people have read the article.

This is the best I can think of right now; if I come up with a better scoring system I'll make some changes. Anyone feel free to test score a couple of articles to see how this algorithm holds up. We can even find a way of turning the score into a percentage, so that people can extract 90% qualified articles.
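
To make the "test scoring" the author invites a little easier, the rules can be transcribed directly into code. The revision data in the example is fabricated; how an editor's base value combines with their edit count (here, 1 plus one point per edit, capped at 100) is my reading of rule (a), so treat this as a sketch.

```python
# PeterK's scoring rules (a)-(e), transcribed for experimentation.

def editor_value(edit_count, anonymous):
    # (a) anonymous editors are worth 1; logged-in editors gain 1 point
    # per article edited, capped at a value of 100 (assumed base of 1)
    return 1 if anonymous else min(100, 1 + edit_count)

def article_score(reads, age_days, edits):
    score = 0.25 * reads          # (b) per read
    score += 0.25 * age_days      # (c) per day of existence
    for anonymous, edit_count, large in edits:
        a = editor_value(edit_count, anonymous)
        score += 1 + (a / 10) * 2           # (d) per edit
        if anonymous and large:
            score -= 20                     # (e) large anonymous change
    return score

# One anonymous touch-up plus one edit by a user with 60 prior edits:
print(article_score(reads=40, age_days=80,
                    edits=[(True, 0, False), (False, 60, False)]))
```

Running variations of this against real articles would show quickly where the constants need tuning, as Brianjd's comments below suggest.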

next if an anonymous user makes a large change then you get a -20 point deduction.

This one is very dangerous. What is the threshold for a "large" change? If we set it too high, this won't work very well (it will presumably react to blanking a page, but I think this is most often used as a substitute for the "nominate for deletion" link that should be there). If we set it too low, and several edits are made by anonymous users to a not-very-popular article, then it could give a low (negative?) score, even though the article is of a high quality. Brianjd 10:22, 2005 Jan 29 (UTC)

Anyone feel free to test score a couple of articles to see how this algorithm holds up.

How? We don't know the scores for the editors. The only way to test this algorithm properly is to download some articles and the entire contributions lists of all the contributors! Brianjd 10:22, 2005 Jan 29 (UTC)


Natural validation process

Trolls are not here to approve, and usually reject views of experts who must be certified by someone trolls grumble about. So one would expect them to be disgruntled by definition about such a mechanism. However, paradoxically, almost all trolls think they apply clear and reasonably stringent standards. The problem is that each troll has his own standards, unlike those of others!

That said, there is much to agree on: the mechanism itself must be genuinely easy to use, nothing slow and rigorous is of any value, the progress of Wikipedia and its proven process should not be impeded, and the results of the approval can be ignored. Where trolls would disagree is that verifying the expert's credentials are of any value. Any such mechanism can be exploited, as trolls know full well, often being experts at forging new identities and the deliberate disruption of any credentialing mechanism.

One might ignore this, and the trolls, but, it remains that what goes on at Wikipedia is largely a process not of approval but of impulse and then disapproval. As with morality and diplomacy, we move from systems of informal to formal disapproval. Today, even our reality game shows demonstrate the broad utility of this approach, with disapproval voting of uninteresting or unwanted or undesired candidates a well-understood paradigm.

So, imagine an entirely different way to achieve the "desirements", one that is a natural extension of Wikipedia's present process of attempt (stubs, slanted first passes, public domain documents, broad rewrites of external texts) and disapproval (reverts, neutralizing, link adds, rewrites, NPOV dispute and deletions), rather than something new (trolls hate what is new) and unproven that will simply repeat all the mistakes of academia. Imagine a mechanism that

  • Begins with all approved, and makes it possible to broaden or narrow the selection of approvers (e.g., one person might want only authors who have PhDs, another would allow anyone who has made an effort to approve any articles) for each reader, or supported class of reader, simply by disapproving editors.
  • Allows for extracting topic-oriented sets (e.g., in order to produce an "Encyclopedia of Music") relying on metadata that is specific to each such supported class of reader, not part of the Wikipedia as a whole
  • Exploits ongoing feedback ("I don't care about this." or "I don't understand this.") to adjust the list of articles of interest. Each user can begin from some class (like Simple-English-only readers), and adjust if they like.
  • Potentially, exploits more feedback on authors ("I can't believe this." or "I find this irrelevant.") to adjust also the list of disapproved authors/editors.
  • Credits each troll who has driven off a disapproved author or editor. OK, that's a joke, but what do you expect, I'm a troll neh neh neh...

By embracing and extending and formalizing the disapproval, boredom and disdain that all naturally feel as a part of misanthropy, we can arrive at a pure and effective knowledge resource. One that rarely tells us what we don't care about. And, potentially, one that can let us avoid those who we find untruthful.

Propose and veto

Include articles that have been proposed by at least one person, and vetoed by none.

Where two versions of an article are so approved, pick the later one. Where no versions of an article are so approved, have no article.

That's it.

This is too easily abused. It adds too much load to the server for the dismal results it will produce. Brianjd 10:22, 2005 Jan 29 (UTC)

Andrew A's proposal

See Referees. This proposal is consistent with much of the above.

Main features:

  • No impact on the existing way of doing things other than appearance of extra links on some article pages, saying that this article has been reviewed and linking to details.
  • All articles remain in the main namespace. The only new namespace is a special one used to keep track of who has reviewed what.
  • All reviews relate to a specific version. As any update to the article creates a new version, this initially has no reviewed status at all (with some possible exceptions where there is no need to maintain independence between contributor and reviewer and the software might therefore flag the approval by default). The concepts of freezing articles or promoting them in a configuration management system are not needed.
  • All reviews are simply check boxes or radio buttons. Discussion remains on existing talk pages.
  • QA on the main Wikipedia is supplied by three levels of reviewer:
    • A basic level which we initially give to anyone who asks for it,
    • A specialist level at which the reviewer claims expertise on the subject of the article,
    • A referee level at which the reviewer claims to be a quotable authority on the subject of the article.
  • At the referee level only, the reviewer is expected to be independent of the article author(s). At other levels, reviewers are encouraged to edit and refactor as necessary, but not to review articles which are primarily their own work.
  • At all levels, collegiate responsibility is the system, identical to the existing Wikipedia. There is no attempt to formally identify areas of expertise. Other members at this level are expected to comment if a particular reviewer is performing badly, and particularly if they are venturing outside their areas of expertise at the higher levels.
  • At all levels, all reviewers are expected to approve only articles that they consider of high quality.
  • All three levels allow for dissent from the approval of others.
  • The same software (with extensions) can support
    • Wikipedia 1.0,
    • A G-rated Wikipedia,
    • Specialist encyclopedias.
  • Readers of Wikipedia must be offered user-friendly ways of restricting what they see to their desired level of review and (hopefully) reliability. These might include
    • User options,
    • URL aliases (e.g. www.family.en.wikipedia.org, www.refereed.en.wikipedia.org).

David Levinson's proposal (revised Jan 2005)

Establishing Release Edition

The "Wikipedia Release Edition" would be a (hopefully large) subset of "Wikipedia Working Edition".

Users could browse the release edition or the working edition. When browsing, a header would note whether or not the latest version in the working edition was the same as the release edition.

At the bottom of every page of Wikipedia there would be two buttons: "Approve" and "Disapprove".

Any logged in user can vote once. Article versions would be scored based on number of votes for approval minus votes for disapproval. The article version with the highest score (so long as it is greater than 1) would be the released article.

A user who clicks "edit this page" would be presented with the newest version, and a note (showing differences) if this is not the Release Edition version.

Any save is automatically a vote in favor of that version and a removal of approval votes for all previous versions by that user.

Recent changes would note if a new version becomes the release version.
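
The release rule above can be sketched as follows: score each version by approvals minus disapprovals, and release the highest-scoring version so long as its score exceeds 1. The vote data is illustrative.

```python
# Pick the release version under Levinson's rule.

votes = {
    # version id -> (approve votes, disapprove votes); fabricated data
    101: (5, 1),
    102: (3, 0),
    103: (1, 0),   # newest version, not yet past the threshold
}

def release_version(votes):
    """Return the highest-scoring version with score > 1, else None."""
    best, best_score = None, 1          # score must be greater than 1
    for version, (up, down) in sorted(votes.items()):
        score = up - down
        if score > best_score:
            best, best_score = version, score
    return best

print(release_version(votes))   # version 101 (score 4) is released
```

If no version clears the threshold, there is simply no release version yet, which matches the "only one voter beside author" observation above: one approval alone (score 1) is not enough.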

Advantages

  • Doesn't rely on "expert model",
  • Uses consensus approach, the article with the most favorable votes, and more favorable than unfavorable votes, is released;
  • If votes go awry and edit wars begin, sensible users can vote to "disapprove", bringing the score negative;
  • Release edition can happen quickly, no approval queue waiting for votes, only one voter beside author is required (assuming the article garners no disapprovals).

Disadvantages

  • An older version with lots of votes in favor may be difficult to dislodge without active observation.
  • Editing may be more complicated with different versions for release and working edition.
  • Votes can be manipulated by a user with multiple log-ins.


Establishing Quality Assurance

The release edition depends on votes. It also may lead to the perception that there is a fork, or create confusion about multiple wikipedias. A more "wiki" way of doing things would be to have only one release version, but allow anyone to rate any version. This system exploits version control to give users the ability to know who has "certified" a version.

So if I know that revision 01.22.2005.10:33 of an article has been certified by User:Authority, User:Hobbyist and User:Professional, I may trust it more than an article certified by User:Vandal or User:Random.

Users who want to influence what other people read will need to establish credibility. Real world information helps here, as well as within-Wikipedia credibility.

Given the potential number of users who might certify an article, and the likelihood I have never heard of most of them, users should be allowed to join certification "clubs" (or rather "clubs" should be allowed to include members, through a process like Wikipedia:Requests_for_adminship). There might be a group who certifies articles on biology, and another group that certifies articles on astronomy, and so on. There could be multiple clubs who certify articles (biology and astronomy might both certify an article on exobiology). There might be more than one astronomy club, if there are differences, and good articles would be certified by both, and controversial articles by only one. Clubs could team up, so if Club:Biology and Club:Astronomy trusted each other, they would be part of a team (say Team:Science) (and of course there might be multiple teams certifying the same article, or different versions, which would ultimately need to be hashed out in a wiki way or would stand).

If club members went off the reservation, so to speak, and certified rubbish, they could be kicked out of the club, and they would no longer be able to speak for the club (their certifications would no longer hold the club imprimatur). (Similarly, clubs could be kicked out of teams.)

The important thing is that no one is requiring that only credentialed individuals be permitted to certify an article, but that if they do, users can of their own free will give that more credence than an uncredentialed person certifying the article.

Does this make wikipedia more complicated? Yes of course it does.

Does this make wikipedia more reliable? Yes, as you would now know who thought what was accurate.

Additions to MediaWiki required by this proposal

  • A way of tracking who certified each version,
  • A way of tracking who was in what club (and what club was in what team).
  • A certification history of articles.
  • At the top of the page it would notify me if there was a more recent uncertified version of the article, or the most recent version of the article certified by Team:X or Club:Y.
  • A certification or approval tab.
  • The ability to view only articles approved by Team:X, Club:Y, or User:Z. (This could be done like categories, but there is probably a better mechanism)
  • The ability to see Recent Approvals.


dml 17:09, 22 Jan 2005 (UTC)
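
The certification lookup the list above calls for could be sketched like this: given who certified each revision and which club each certifier belongs to, show a reader only the newest revision certified by someone from a team they trust. All the names and data structures here are invented for illustration.

```python
# Find the newest revision certified by a member of a trusted team.

clubs = {"Authority": "Biology", "Professional": "Biology",
         "Random": None, "Vandal": None}          # user -> club (or none)
teams = {"Biology": "Science", "Astronomy": "Science"}  # club -> team

certifications = {   # revision id -> users who certified it
    7: ["Random"],
    6: ["Authority", "Professional"],
    5: ["Vandal"],
}

def newest_trusted(certs, trusted_team):
    for rev in sorted(certs, reverse=True):          # newest first
        for user in certs[rev]:
            club = clubs.get(user)
            if club and teams.get(club) == trusted_team:
                return rev
    return None   # no revision carries a trusted imprimatur yet

print(newest_trusted(certifications, "Science"))   # revision 6
```

Revision 7 is newer but only certified by User:Random, so a reader filtering on Team:Science is shown revision 6 instead.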



Giles Robertson's proposal

There doesn't need to be a central approval mechanism; approval is all about trust, and centrally dictating who can be trusted raises too many issues. That being said, an approval mechanism that relies on everybody voting is open to abuse, and doesn't carry the weight of approval by authority.

Instead, a decentralised proposal [which is similar to Andrew A's proposal], would be for authorities to create pages that list why they are an authority, and then list the pages and versions that they have approved. This could be done without any change to the existing codebase, but a system that marked some pages as "authority conferrers", and that marked on each page which authorities had approved it, would improve the usability.

Authority-conferring pages could confer authority on to other authority-conferring pages; the aim is to build a 'web of trust' somewhat similar to the one that provides assurance in PGP that people are who they say they are. That said, it is important to prevent fabrication of authority-conferring pages; it may be necessary to lock these pages to certain users.

How do we establish that an authority-conferring page has authority? If we link it to another page, and that page displays text or a graphic (with the Wikipedia logo) confirming that the external page trusts the authority-conferring page, then we know, if the external page is free from interference, that the authority-conferrer is trusted. How strong that is as an approval depends on what the external page(s) for the authority-conferrer are.

This also allows a selective extraction of approved pages: Pick an authority-conferrer, and extract every page that is approved by that authority, and, at your option, other pages at lower levels (e.g. page approved by authority trusted by authority you've just chosen).
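
The selective extraction just described amounts to a graph walk: start from a chosen authority-conferrer, follow the pages it confers authority on (down to a chosen depth), and collect every article those authorities approved. The page names and data below are invented for illustration.

```python
# Extract approved pages reachable from one authority-conferrer.

confers = {   # authority page -> authority pages it vouches for
    "Authority:Root": ["Authority:Physics", "Authority:History"],
    "Authority:Physics": ["Authority:Optics"],
    "Authority:History": [],
    "Authority:Optics": [],
}
approves = {  # authority page -> articles it has approved
    "Authority:Root": ["Light"],
    "Authority:Physics": ["Refraction"],
    "Authority:Optics": ["Lens"],
    "Authority:History": ["Hanseatic League"],
}

def extract(root, max_depth):
    """Approved articles from `root` and authorities up to `max_depth` hops away."""
    pages, frontier = set(approves.get(root, [])), [root]
    for _ in range(max_depth):
        frontier = [child for a in frontier for child in confers.get(a, [])]
        for authority in frontier:
            pages.update(approves.get(authority, []))
    return pages

print(sorted(extract("Authority:Root", max_depth=1)))
```

The `max_depth` knob is the "at your option, other pages at lower levels" choice: depth 1 takes directly trusted authorities, depth 2 also takes those they trust, and so on.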

The major disadvantage is that this introduces yet more metadata, in the form of the authority-conferrers. It also does not stop the creation of spurious authority-conferrers; though they won't be trusted or have much external approval, they may mislead.


Giles Robertson

m.e.'s proposal

I have a proposal at User talk:M.e/Production Wikipedia for creating 'production quality' pages. Highlight is the idea of collecting 'issues' and 'endorsements' against a 'frozen' version of a page. Hopefully we can then converge on a revised version that just fixes the issues. m.e. 12:16, 12 Sep 2004 (UTC)

Szopen's proposal

Pending edits. Maybe not directly about approval mechanisms, but similar. Edits would be delayed for a fixed amount of time, and if none would object to them, they would be automatically approved.
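
The pending-edit rule is simple enough to state as a predicate: an edit becomes visible only after a fixed delay, and only if nobody objected in the meantime. The delay length and the data shape below are illustrative; timestamps are plain seconds for simplicity.

```python
# Sketch of time-delayed auto-approval for pending edits.

DELAY = 24 * 3600   # e.g. one day, in seconds (an assumed value)

def is_approved(edit, now):
    """Auto-approve once the delay has passed without objection."""
    return not edit["objected"] and now - edit["submitted"] >= DELAY

edit = {"submitted": 0, "objected": False}
print(is_approved(edit, now=3600))         # still pending
print(is_approved(edit, now=2 * DELAY))    # silently approved
```

An objection at any point during the window simply keeps the edit out of the approved view until it is resolved by hand.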

ChrisG proposal

Main article: User:ChrisG/Approval_mechanism

This is a suggested approval mechanism to meet the need to make Wikipedia a more reliable source and to support Wikipedia 1.0. It is a twist on the namespace workflow mechanism suggested by Magnus Manske here. It is a process firmly based on wiki principles. It is flexible because decisions about quality are based on human judgment that each article meets the minimum standards for approval for Wikipedia 1.0. If it is agreed that standards should be raised, then humans can amend their judgment appropriately during approval debates for more modern versions.

Aims

The design aims to:

  • Have no effect on the wiki process for creating and developing articles.
  • Create no fork within Wikipedia.
  • Identify specific versions of articles as being stable and hence give the reader the comfort that the flagged version has been reviewed as meeting certain minimum standards.
  • Identify specific versions of articles as suitable for Wikipedia 1.0.
  • Be scalable: the process needs to be able to approve a large percentage of Wikipedia within 3-6 months.
  • Mark versions of each article appropriately depending on the outcome of any approval process for that version. This gives a guide to quality of particular versions in the history.
  • Be simple to understand.
  • Be easy to manage.
  • Be democratic.
  • Avoid requiring any exclusive, 'expert' clubs.
  • Be relatively simple to code.
  • Be difficult to game.
  • Take an evolutionary approach rather than a fundamental change in the Wikipedia process.
  • Be acceptable to the majority of Wikipedians.

Overview of the process

The process makes use of namespaces to create a workflow process. A copy of the recommended version of an article is moved through the system, with an attached discussion page and votes. The process is best illustrated visually:

[workflow diagram]

Maurreen's proposal 1

The following was proposed by User:Maurreen on Wikipedia talk:Forum for Encyclopedic Standards 06:49, 19 Nov 2004 (UTC), but seems appropriate to mention here. -- Jmabel | Talk 22:45, Nov 23, 2004 (UTC)

<begin Maurreen's post>
Here's an outline of a possible plan, submitted for your suggestions. One advantage is that the computer part of it is simple. Maurreen 06:49, 19 Nov 2004 (UTC)

  1. An article could be "approved" with at least 10 votes and no more than 10 percent of the votes objecting. Only registered users could vote. Users with multiple accounts should only vote once. Voting for each article would be open at least a week.
  2. To allow appropriate time for review and possibly improvement, no more than three articles would be considered at a time. Others nominated would be compiled on a "pending" list.
  3. Nominations would be accepted only from registered users who agree to maintain the article. So, in a sense, we would be voting on the nominators also. The commitment to maintain the article would allow the article to still be edited, but give us some assurance that the article wouldn't deteriorate.
  4. "Approved" articles would be compiled on a list, along with the names of the maintainers.
  5. Possibly the nominations or list could refer or be limited to a specific version of the article. That is, the article at such-and-so date and time.
  6. "Approved" articles could have some indicator of that status on the article itself.
  7. Guidelines or standards for what is worthy could be determined before voting ever starts on articles.
  8. Nominations would be encouraged from featured articles and peer-reviewed articles. Initially, general topics (such as Electronics) would be preferred to more-specific ones (such as Ohm's Law). That could work toward the "approved" material having a broad and even general base.
  9. Nominations of contentious articles would be discouraged, at least initially.

<end Maurreen's post>
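
Point 1 of the outline above is mechanical enough to state as a predicate: at least 10 votes, no more than 10 percent of them objecting, and voting open for at least a week. How the tallies are combined here is my reading of the text, so treat it as a sketch.

```python
# Check Maurreen's approval rule for one article's vote tally.

def approved(support, oppose, days_open):
    total = support + oppose
    if days_open < 7 or total < 10:     # open >= a week, >= 10 votes
        return False
    return oppose <= 0.10 * total       # at most 10 percent objecting

print(approved(support=11, oppose=1, days_open=8))   # 1 of 12 objecting
print(approved(support=9, oppose=2, days_open=8))    # 2 of 11 objecting
```

The "no more than three articles at a time" and maintainer-commitment rules are process constraints that sit outside this check.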


Maurreen's proposal 2: "Reviewed articles"

Copied from Wikipedia talk:Forum for Encyclopedic Standards. Maurreen 14:12, 27 Nov 2004 (UTC)

Advantages:

  • Simple and quick. This could be implemented and useful within an hour of its adoption.
  • Takes best-of-both-worlds approach to wiki nature and any standing of experts.

Outline:

  1. Any registered user could review any article.
  2. There would be a category and list of reviewed articles.
  3. The list would indicate which version of an article was reviewed by each reviewer.
  4. Reviews on the list would be no more than a paragraph long.

Options:

  1. Detailed reviews could be written and linked to from the list.
  2. Reviewers who chose to could list themselves and a paragraph about any relevant qualifications or limitations on a list of reviewers.
  3. Reviewers should at least indicate if they have worked on the article.
  4. We could have a list of articles for which a review is desired, or use the current peer review page.
  5. We could choose a set of suggested levels or other indicators (such as “acceptable,” “weak,” “comprehensive,” etc.).
Maurreen 21:04, 21 Nov 2004 (UTC)

Interim measure

Maybe it would be helpful to think of my proposal for article reviews as an interim measure. It isn't intended to be perfect by any means.

It is intended to give readers some measure of the quality of any given article or article version.

It is something that could very easily be produced and used while something better is discussed, decided and developed. It does not preclude any other system. It can include, or not include, a minimum standard for Wikipedia articles, which would need to be developed.

It could be one of any number of tools that work toward an eventual paper or "release" version. Maurreen 09:34, 25 Nov 2004 (UTC)

Editorial board(s)

Copied from Wikipedia talk:Forum for Encyclopedic Standards. Maurreen 14:12, 27 Nov 2004 (UTC)

My thought on this is that there is an immediate problem with 'Any registered user could review any article'. There's no problem with having anyone review an article - however to improve the credibility and reliability of Wikipedia, I think that we (unfortunately) need a link to the 'outside world' where people have real names and qualifications, rather than 'karma' built up under a nom-de-plume. For articles' credibility to be increased, someone's, or some people's, reputation needs to be on the line. My thoughts are that a properly constituted editorial board needs to approve (and possibly modify) articles. As I've mentioned elsewhere, there could (and in my view should) be multiple competing boards aiming to set their seal upon particular article versions. For example, one such board could be a set of academics in a particular subject whose names are known, who have a publishing record in peer-reviewed journals, and have an academic reputation. This does not preclude a self selected group of people setting up their own board under noms-de-plume and producing a Wikireputation based set of approvals. Users would have the choice of using either or both or neither board's seals of approval (article tags) as a filter into Wikipedia. WLD 21:48, 22 Nov 2004 (UTC)

ChrisG's template and process edit

Copied from Wikipedia talk:Forum for Encyclopedic Standards. Maurreen 14:12, 27 Nov 2004 (UTC)

Thinking some more about Maurreen's proposal it occurred to me that with the combination of templates and categories we could set up a voting system to approve articles. Consider this template (I used subst to create the text, e.g. {{subst:ChrisGtest}} ):

If you look at the bottom of the screen, this template categorises the article as a candidate for Wikipedia 0.1 and also by the current day, month and year (it uses variables, so no manual updating is needed). This means anybody wishing to vote on articles need only check the appropriate category for articles. They could click through to the talk pages of the articles they are interested in. The talk page of the article would give the link to the specific version and the votes so far; after checking the article, the person could vote as they see fit.

As time passes, the articles listed as candidates will dwindle as they are approved or rejected. The fact the candidates are categorised by date would mean we know when to close the vote of any articles that have been sitting in candidate status for too long.

Rejected articles would have the candidate category removed. Successful articles would be given a Wikipedia 0.1 category instead, again identified by the date of the version approved. In addition, the specific version of the article should be listed somewhere as approved on that date, e.g. Wikipedia:Approved 0.1/1 Dec 2004.

I realize we don't have a consensus on how to approve articles, largely I think because some people are talking about approving top quality articles and others are talking about minimum standards for the CD/DVD editions; but this is a method which we could apply now without changes to the software, which would scale and would thus be suitable for either purpose.

ChrisG 01:10, 25 Nov 2004 (UTC)

Hawstom's proposal edit

If we want a mechanism that works right now to improve our credibility a little bit without sacrificing the principles we are all comfortable with, it must be simple and open. I propose that we simply start with the existing technical hierarchy of anon/user/admin/bureaucrat/developer to create a system that can assign a level of confidence to every version of every article, then display by default the most recent version with a disclaimer and links if the confidence level is low.

Principles edit

An article is only as trustworthy as the last hand that touched it. Wiki works because of immediate gratification.

Viewing edit

We continue to show the public the latest version. If the confidence level for that article version is lower than our established standard, we show a disclaimer along with a link to any more trusted article versions available. We put in user preferences a selection for the level of credibility to show by default (Developer/Bureaucrat/Admin/User/Anonymous), and we enable anon users, via a cookie or session id, to say, "Show only article versions with credibility level 2 or higher."

Editing edit

For users with high permissions, we put a check box or radio button on the editing page so that they may save interim work that still has unresolved credibility problems at an artificially low trust level.

How it's simple edit

Nobody has to take the time to review articles. The approval occurs naturally as a by-product of the way we already work.

Since an article is only as trustworthy as the last hand that touched it and wiki works because of immediate gratification, only natural and open methods such as this can work at the Wikipedia.

How it improves our credibility edit

Places editorial responsibility on our trusted users. While it is true we have expressly disclaimed any editorial authority for our trusted users, the implicit attitude of the community has been to give them that authority. This proposal simply recognizes the de facto arrangement. In the future, new levels of editorial authority may be created.

Endorsement idea edit

Copied from Wikipedia talk:Forum for Encyclopedic Standards. Maurreen 14:12, 27 Nov 2004 (UTC)

We might be able to develop a system by which anyone could endorse a particular version of an article; presumably groups could form whose endorsement would carry some weight. The mechanism would weight approvals by role: a "reader" approval would count as point one (0.1), an "editor" would rank one point (1), an "editing librarian" would rank one to ten (1-10), and an admin would rank ten to eleven (10-11). A group could be assigned a similarly weighted scoring rank; for example, an opinion offered editorially by "The Royal College of Physicians & Surgeons" might rank one way or the other compared to a select group of its alumni. Deriving from this, an article's approval rating would be a function of its veracity in the opinion of the majority of its readers, and an entry in Wikipedia would have, available for review, the article's position relative to all other articles. (Idea from anon, Maurreen moved from project page.)
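The weighted scoring above might be sketched as follows. This is a hedged illustration only: the role table collapses the 1-10 and 10-11 ranges in the text to single representative values, and group endorsements would simply be assigned their own weight.

```python
# Hypothetical sketch of the weighted-endorsement idea; all names and
# the collapsed range values are illustrative assumptions.

ENDORSEMENT_WEIGHT = {
    "reader": 0.1,
    "editor": 1.0,
    "editing librarian": 10.0,   # the text gives a 1-10 range
    "admin": 11.0,               # the text gives a 10-11 range
}

def approval_score(endorsements):
    """endorsements: list of role names that endorsed this version."""
    return sum(ENDORSEMENT_WEIGHT[role] for role in endorsements)
```

With this shape, a version endorsed by two readers and one editor would score 1.2, while a single admin endorsement alone would score 11.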

mijobe's proposal edit

One big benefit of Wikipedia is its ease of use, which must not be affected by the approval mechanism. Because approval can easily be abused for vandalism, it should only be available to known people. Here is my idea for reaching that goal:

  • Only people who have contributed to, e.g., 100 validated articles are allowed to approve new versions. This should be measured by the software, to ensure that all users regularly contributing to Wikipedia gain approval rights.
  • If an editor is allowed to approve an article, she gets an additional button when she opens an unapproved version, which lets her vote for or against approval. If she votes against approval, she has to give reasons.
  • Approval can only be decided unanimously.
  • Once, e.g., 10 votes are cast for approval, the version is flagged as stable.
  • If someone is vandalizing the approvals by always voting no, she can be blocked from the approval process.
  • If a new version differs only in typos from an approved one, admins are allowed to approve the new version without a new vote.
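The approval rules above could be sketched roughly as below. The quorum of 10 comes from the proposal; the function and state names are illustrative assumptions.

```python
# Illustrative sketch of mijobe's rules: approval must be unanimous,
# and a version is flagged stable once a quorum of approvals is in.

QUORUM = 10

def approval_state(votes):
    """votes: list of (user, approved, reason) tuples. Returns
    'rejected' on any no-vote (approval must be unanimous), 'stable'
    once QUORUM approvals accumulate, and 'pending' otherwise."""
    if any(not approved for _user, approved, _reason in votes):
        return "rejected"
    if len(votes) >= QUORUM:
        return "stable"
    return "pending"
```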

This way it should be impossible for vandals to misuse the approval mechanism, while the number of people able to approve grows without manual intervention. Users should be able to configure whether they want to see the approved version or the unapproved one by default. Anonymous users should be able to set this for the current session. There should be an additional button to toggle between these two versions, greyed out if they are equal. --Mijobe 14:07, 21 Mar 2005 (UTC)

Grika's proposal-Delayed update edit

This is really an elaboration of Szopen's proposal. It is an attempt to improve article validity while maintaining the dynamic quality and community spirit of Wikipedia.

First, create a new admin level called Superuser (or Reviewer, or anything else already suggested here), available to anyone with at least 3 months' tenure and 500 edits. Superusers are not sought; the status is granted (denied or revoked) by any Admin upon request by a qualified user.

Articles have a lag, i.e. they don’t update until a Superuser or higher OKs the edit or the lag expires. The lag setting of an article is assigned (and changed) by an administrator. The lag amounts might be:

  • None
  • 1 hour
  • 6 hours
  • 1 day
  • 3 days
  • 7 days
  • Admin Only

None means that the article updates immediately; this is used for lists and other rarely vandalized articles. Admin Only means that only an admin or higher can qualify the edit and the lag never expires. The edits of Superusers (and perhaps anyone qualified to be a Superuser) and higher are visible immediately. New pages would default to the longest lag short of Admin Only (in the example above, 7 days).

The associated Talk page would still be immediately updated for everyone. Any user can request that the lag be temporarily set to None if they have a lot of edits to do (lag should automatically reset after four days).
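The visibility rule above might look roughly like this in code. The lag table is abridged, and the flag names are assumptions for illustration, not part of the proposal.

```python
import datetime

# Hypothetical sketch of the delayed-update rule: an edit becomes
# visible when a Superuser (or higher) certifies it, when the
# article's lag expires, or immediately if the editor is trusted.

LAGS = {
    "none": datetime.timedelta(0),
    "1 hour": datetime.timedelta(hours=1),
    "1 day": datetime.timedelta(days=1),
    "7 days": datetime.timedelta(days=7),
    "admin only": None,   # the lag never expires
}

def edit_visible(edit_time, now, lag, certified, editor_is_trusted):
    """Return True if the edit should be shown to casual readers."""
    if editor_is_trusted or certified:
        return True
    if LAGS[lag] is None:           # Admin Only: certification required
        return False
    return now - edit_time >= LAGS[lag]
```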

The following pages might be affected thus:

  • Watchlist - pages might now show a number set by each entry such as (2/5/4) meaning 2 edits are in lag, 5 lags expired and 4 edits were certified. It could also simply duplicate the History page change below.
  • Recent Changes - articles will show up one or two times; one for the original edit and again when the lag expires or the edit is certified (it will show up only once if lag was set to None).
  • History - each entry might show a code such as (L) meaning in lag, (E) meaning lag expired, (S) meaning Superuser certified or (A) meaning Admin certified.

In the end: popular, often-vandalized pages would be protected; Admins would not be burdened any more than they are now (and maybe less so); activities by Admins and trusted non-Admins would not be affected at all; and edits and new articles would not be "lost" due to Admin or Superuser lack of interest. The result should be a better resource for casual users with almost no effect on current active editors. Grika 20:40, 20 September 2005 (UTC)[reply]

Wolfkeeper's stabilisation edit

Basically, only the most recent stable version of an article page is shown. A stable version is one produced by any well-established editor, or a version by anonymous or new user(s) that hasn't been subsequently altered after a period of time, say 24 hours. The idea is that this stabilisation period gives people a chance to check their watchlists, remove vandalism and check edits against policy.

See: Wikipedia:Timed article change stabilisation mechanism

The advantage of this kind of scheme is that it is extremely lightweight to use, and no voting or other big changes to the Wikipedia UI are needed.

Risk's Proposal edit

I thought I'd add to the haystack with my own ideas. They're mostly an elaboration on things that have already been proposed. I'll start out with a simple trust metric and discuss two elaborations.

the basics edit

Each article gets one button marked 'validate', which lets the user validate the current version of an article (the only version that users can 'vote' on). That's all. Every user can validate, even anons. There is no change to the current user hierarchy. Every user starts with a basic 'weight' (how much influence the user has on the validation process), say 0.1. Anons stay at this level; registered users can increase their weight. Once enough people have voted for validation of an article to reach a certain preset validation threshold (say 100), the article becomes validated. A user's weight increases when:

  1. An article that she has contributed to becomes validated. When a new version of an article becomes validated all the editors that have contributed between the last validated version and the new version get a small increase to their weight. In short, trusted editors get more validating power.
  2. The user has voted to validate an article that has become validated. For every article that becomes validated, the users that voted for it get a slight increase in weight (this is very small, and based on how overwhelming the vote was). In short, trusted validators get more validating power.
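The basic mechanics could be sketched as below. All class names, constants and the bonus size are illustrative assumptions, not an actual MediaWiki design.

```python
# Minimal sketch of the basic trust metric: weighted votes accumulate
# per article until a preset threshold is reached.

BASE_WEIGHT = 0.1          # every user, including anons, starts here
DEFAULT_THRESHOLD = 100.0  # preset validation threshold per article

class User:
    def __init__(self, name, anon=False):
        self.name = name
        self.anon = anon
        self.weight = BASE_WEIGHT

class Article:
    def __init__(self, title, threshold=DEFAULT_THRESHOLD):
        self.title = title
        self.threshold = threshold
        self.votes = {}        # user name -> weight at time of vote
        self.validated = False

    def validate_vote(self, user):
        """Record a weighted vote; return True if this vote tips the
        article over its validation threshold."""
        self.votes[user.name] = user.weight
        if not self.validated and sum(self.votes.values()) >= self.threshold:
            self.validated = True
            return True
        return False

def reward_validators(article, users, bonus=0.01):
    """Rule 2 above: registered users who voted for a newly validated
    article gain a small amount of weight."""
    for u in users:
        if u.name in article.votes and not u.anon:
            u.weight += bonus
```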

Additional comments:

  • The first 'weight' rule poses a problem. What if a user commits vandalism that gets reverted, and the article then becomes validated? The vandalism counts as an edit and contributes to the vandal's weight. The solution is to base how much weight the user gains from a successful edit on how much of his contribution remains in the validated article. If only one letter of the user's edit remains, the user gets pretty much nothing at all.
  • The validation threshold differs per article based on a number of aspects such as the popularity of the article (by number of edits).
  • There is no negative vote. If a user thinks an article shouldn't be validated, she can change it or mark it unsuitable for validation with a template to call attention to it. Because any article will become validated if you leave it alone long enough (because of anons that don't know what they're doing), the amount that the article has already been validated decreases by a small amount per month (or the validation threshold increases).
  • When an unvalidated article that already has some votes going for it is edited, the validation starts anew. A part of the votes that it already had is transferred to the new version of the article. How much the total is reduced is based on the size of the edit. For instance, if an article is at 80% of the validation threshold and gets a 1-byte edit, the new version will be 70% validated. Anything more than two sentences will greatly reduce the percentage. If a version of an article is precisely the same as a previous unvalidated version (for instance, because an edit got reverted), it will take on the exact number of votes the previous version had, or keep its own, whichever is higher.
  • How users should validate (what it means to click the button) is left up to the community. The button could be marked 'click this if this article is free of spelling errors' or 'click this if you've fact-checked the article and found it correct'.
  • The validation thresholds should start off very high (so that it takes a lot of users to validate an article). A page could be created in the community portal for articles that require validation, to speed things along.
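The vote-transfer rule could be sketched as below. The square-root decay is an assumption chosen to reproduce the 80% to 70% example for a 1-byte edit; the proposal doesn't specify an exact formula.

```python
import math

# Illustrative sketch of the vote-transfer and revert rules above.

def carried_fraction(old_fraction, edit_bytes):
    """Validated fraction retained by the new version after an edit:
    a 1-byte edit costs 10 percentage points, larger edits much more."""
    penalty = min(1.0, 0.10 * math.sqrt(edit_bytes))
    return max(0.0, old_fraction - penalty)

def votes_after_revert(previous_version_votes, current_votes):
    """The revert rule: an identical earlier unvalidated version's
    vote count is reinstated if it is higher than the current one."""
    return max(previous_version_votes, current_votes)
```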

Advantages:

  • No change to the user structure. Wikipedia remains as free and as open as it has always been.
  • Simplicity. Just one button to click. Wikipedia retains as much user-friendliness as possible. All the complexity is in the technology and not on the user side.

Expert Knowledge edit

The above system would be good enough for fighting vandalism and checking grammar. For fact checking purposes the system needs to be more complex. The following addition is a way to take expert knowledge into account.

The first thing needed is a way to measure distance between articles, based on topic. For instance, the distance between 'black hole' and 'supernova' is very small and the distance between 'black hole' and 'donkey' is very large. Ideally this would be measured by the shortest chain of links between the articles, but that would be too expensive computationally. There are possible shortcuts; see the additional comments for more.

The system takes previous edits and previous validations into account as described, but it weighs them based on article distance. For instance, if I vote to validate the 'black hole' article, the algorithm calculates my weight for this vote dynamically by checking my edits and validations and adding them to my weight (by the rules already described), but weighted by distance. A successful edit to the supernova article will increase my weight a lot, whereas a successful edit to the donkey article will only slightly increase my voting power on the black hole article. This way, users who are successful in a certain topic get more validating power in that topic.

additional comments:

  • The ideal measure of distance would be the shortest chain of links between the two articles. This is far too expensive to calculate for every article dynamically. A good approximation would be based on categories (provided that the wiki is well categorized, as Wikipedia is). That way the distance could be based on the highest-level category the two articles have in common. 'Black hole' and 'donkey' only have the bottom-level category 'thing' in common (great distance), whereas 'black hole' and 'supernova' have the category 'Stellar Phenomena' in common. ('Black hole' isn't a member of 'Stellar Phenomena', but rather of the category 'Black Holes', which is a subcategory of 'Stellar Phenomena'.) While it may still be a little expensive to calculate this, it's far more manageable, and with a little caching and other optimizations I think it's very doable.
  • Checking every single one of a user's edits and validations and weighting them by distance every time a user clicks validate is too expensive as well. For any reasonably active Wikipedian the system would have to check over 1000 edits (and god knows how many validations) and calculate the distance for every one of them. The solution, again, lies in the category tree. Where in the first system the user had a single value called his weight, the user now has a small category tree with weights attached. Let's say I have a successful edit on the black hole article, and I earn 0.1 in weight for it. This weight is added to my personal 'weight tree' at the black hole node, at the astrophysics node, then at the physics node, all the way down to the things node. After many successful edits and validations, I have a tree with a weight at the things node that represents my total weight (as in the previous proposal), and several nodes that represent my weight in various categories, like astronomy. If I've never done anything in the category 'Politics', my weight for that category is zero (and the node doesn't have to be stored, minimizing the needed space). Now when I'm trying to validate the 'black hole' article, the system can check my weight for the 'black hole' category, the 'astrophysics' category, etc. (all the categories that my tree has in common with the article I'm validating) and use them to increase my weight based on my expert knowledge.
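The weight tree might be sketched as follows. The category names and the child-to-parent mapping are illustrative assumptions standing in for Wikipedia's real category graph.

```python
# Hypothetical sketch of the per-user "weight tree". Categories are
# modelled as a simple child -> parent mapping up to a root.

PARENT = {
    "Black holes": "Stellar phenomena",
    "Stellar phenomena": "Astrophysics",
    "Astrophysics": "Physics",
    "Physics": "Things",
    "Donkeys": "Animals",
    "Animals": "Things",
}

def ancestors(category):
    """Yield the category and every ancestor up to the root."""
    while category is not None:
        yield category
        category = PARENT.get(category)

def credit_edit(weight_tree, category, amount):
    """Add `amount` at the category node and at every node along the
    path to the root (the user's total weight lives at the root)."""
    for node in ancestors(category):
        weight_tree[node] = weight_tree.get(node, 0.0) + amount

def effective_weight(weight_tree, category):
    """Weight used when validating an article in `category`: the sum
    of the user's weights along that article's category path."""
    return sum(weight_tree.get(node, 0.0) for node in ancestors(category))
```

With one credited black-hole edit, a user's effective weight in 'Black holes' is much higher than in the unrelated 'Donkeys', which only shares the root.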

advantages:

  • Expert knowledge is taken into account.
  • Still one button, still no change in the user hierarchy.

levels of validation edit

A third (and smaller) addition would be to have different levels of validation. Instead of one button there would be three (or more) buttons. A user could validate an article for spelling and grammar, for layout and structure, and finally for fact checking. Each would require a different level of attention and different settings for the algorithm. The community could require fact checkers to explain their findings on the discussion page.

Risk 13:53, 27 September 2005 (UTC)[reply]


DiamondGeezer's Proposal: Karma Quality Editing edit

  1. Each registered user has a "karma" attribute assigned to them.
  2. Each SysOp has a karma of 500. Every designated "editor" has a karma of 400 (this would be a new attribute for each contributor)
  3. When a new article is written, the karma of the author is increased by 5 points.
  4. Every time an article is edited, the modifications are made to the wiki instantly, but are stored for a finite period (Bureaucrat settable parameter) and the author is informed of an edit to his/her original article. The modification is shown in a different color until the edit is accepted.
  5. If the edit is accepted, the karma of the person who made the edit is increased by +1
  6. If the edit is not accepted, the karma of the person who made the edit is either zero or reduced by 1.
  7. If the edit is rejected, then the editors are informed of the rejected edit. If an editor decides that the edit is justified, then the edit is put in and the karma of the person who made the edit increased by +2. If the editor decides the edit is not justified, or if the editors do not respond to the edit at all within some (bureaucrat-set) period, then the edit is junked and the originator informed that his/her karma has been decreased.
  8. If a contributor's karma reaches the level of the editors, then the contributor becomes an editor (if he/she desires).
  9. If a contributor's karma reaches some low point (like -25) then the contributor's edits get bumped straight to the attention of the editors.
  10. If a contributor's karma reaches some minimum (like -50) then the contributor's edits get immediately thrown in the bin.
  11. If a contributor does not have an account, the IP address gets a karma of zero. If the edit is accepted, then the karma of the contributor is not increased, but if rejected, then the karma is decreased.
  12. The bureaucrats can decide that if they have an authoritative expert, then that person's karma can be raised to whatever value up to an editor.
  13. The editors have the discretion to punish vandalism with more negative karma.
  14. Each article has a star-value (like the barnstars). The more stars the article gets, each star adds +1 to the author's karma (up to a maximum of 10).
  15. There can be more than one author assigned to an article, up to a maximum of five.
  16. If an article is deleted then 0 to -10 points are taken from the author's karma at the editor's discretion.
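A hedged sketch of the karma bookkeeping follows. The numeric constants come from the rules above; the function names and the exact routing semantics for low karma are assumptions.

```python
# Illustrative sketch of rules 2-10 of the karma proposal.

SYSOP_KARMA = 500    # rule 2
EDITOR_KARMA = 400   # rule 2

def on_new_article(karma):
    return karma + 5                                   # rule 3

def on_edit_accepted(karma, by_editor_override=False):
    return karma + (2 if by_editor_override else 1)    # rules 5 and 7

def on_edit_rejected(karma):
    return karma - 1                                   # rule 6

def route_edit(karma):
    """Rules 9 and 10: where a contributor's edits go."""
    if karma <= -50:
        return "discard"
    if karma <= -25:
        return "editor review"
    return "normal"
```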

Discussion edit

This means that vandalism is discouraged, sensible contributions are rewarded, idiots get less and less attention until they effectively ban themselves. Bad articles or troll articles are a quicker way to get the contributor to give up their privilege to contribute to the wiki.

Editors and stylists (such as what Project Galatea is trying to do) would exercise their weight to protect the contents of the wiki from attack and increase the intellectual weight given to authors of particular articles.

Everybody can contribute to the wiki regardless of background and instantly see the results of their efforts, and quality is rewarded.

--DiamondGeezer 20:19, 15 November 2005 (UTC)[reply]

The paragraph as unit of validation edit

Perhaps we could use paragraphs or subsections as semantic units for validation? A paragraph has sufficient internal consistency to stand alone from the rest of the article, and it is difficult to subvert one paragraph from within another, since paragraphs usually contain some context information that anchors them to the rest of the text. By doing this, small edits to one or more paragraphs would not necessarily invalidate the validation of an entire article. -- The Anome 00:25, 16 December 2005 (UTC)[reply]


Summary of Validation Proposals edit

Automated Heuristics edit

Individual Voting on Individual Articles edit

Centralized, Hierarchically-appointed Committees edit

Anyone can approve an article edit

"Anyone" can approve an article(subject to approval by centralized committee) edit

Disapproval Models edit

Social Networking edit

Approval by voting edit

Automated Trust Networks edit


Branchless Stable Versions edit

Here's a simplified proposal with a lot of flexibility (copied from my post to WP):

  • Each article has a stable marker, as well as metadata for each revision that shows if a revision was ever marked stable. Appropriately permissioned users can move the stable marker forward to a newer version.
  • Versions are referred to as "stable" and "proposed" - this reflects that the "proposed" versions are more likely to contain errors, POV and omissions, and that they don't reflect final products.
  • Users without accounts default to seeing the stable version, and the software is set up so that search engines will only see the stable version.
  • A stable tab and a proposed tab are added to the interface to switch between the proposed and stable versions.
  • Because the stable version is just a marker, it's not possible to edit the stable version directly - edits have to take place in the "proposed" version. Therefore all edits stay on the trunk, and any rework of the stable version has to incorporate the changes of subsequent proposed versions.

Hopefully this will be workable - Stephanie Daugherty (Triona) - Talk - Comment - 15:49, 24 January 2006 (UTC)[reply]

Added - This could also be adapted to the current Wikipedia:Featured Article system, placing "Featured" tags at revisions that truly stand out as the best possible work our community can offer. - Stephanie Daugherty (Triona) - Talk - Comment - 15:52, 24 January 2006 (UTC)[reply]

Added more, still brainstorming - For that matter, if we do tagging, multiple tags are possible, with different levels of access to update them: featured, reviewed, stable, draft, and proposed. The "revisionlevel" of any revision could be raised with the right access, but never lowered. The newest revision with a certain tag or a higher tag is considered the version that represents that status - if you request a "stable" and there's a "featured" that's newer, the "featured" version is returned. It could work like this:
  • Proposed Versions are the current work in progress.
  • Draft versions are tagged by any logged in user.
  • Stable versions are tagged by any logged in user that meets the appropriate criteria (probably total edits, time editing, community consensus or some combination of the three).
  • Reviewed versions have withstood a serious peer review and all significant deficiencies have been fixed.
  • Featured versions have survived each of the above processes and have met the criteria and consensus for Featured Article status.

These versions could easily be numeric "approval levels" internally, making it easy to code - each class of users has a maximum level they can tag at, as well as a level of tag that they see by default when viewing articles.
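These numeric approval levels might be sketched as below; the data shapes and function names are illustrative assumptions.

```python
# Hypothetical sketch of numeric approval levels: a revision's level
# can only be raised, and a view request returns the newest revision
# at or above the requested level (so a newer "featured" satisfies a
# request for "stable").

LEVELS = {"proposed": 0, "draft": 1, "stable": 2, "reviewed": 3, "featured": 4}

def raise_level(revision, new_level):
    """Tags can be raised but never lowered."""
    revision["level"] = max(revision["level"], LEVELS[new_level])

def view(revisions, requested="stable"):
    """Return the newest revision (list is oldest-first) whose level
    is at least the requested level, or None if there is none."""
    wanted = LEVELS[requested]
    for rev in reversed(revisions):
        if rev["level"] >= wanted:
            return rev
    return None
```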

Comments? - Stephanie Daugherty (Triona) - Talk - Comment - 16:03, 24 January 2006 (UTC)[reply]

Branchless stable version - Kevin's mods edit

 
[Image: mock-up of an article page at its stable version (URL: http://en.wikipedia.org/wiki/Mathematics/stable). Clicking where a thumb would be flips to thumbs up (approve); clicking again flips to thumbs down (disapprove). The overall rating is shown in a horizontal bar, and the current stable version (the highest rated) is highlighted.]

Same as above, before the "Added..." additions, with the following modifications:

  • Each article has a stable marker; said marker cannot be moved explicitly, but is calculated by the wiki software as the version with the most approve-minus-disapprove votes:
    • Votes are approve/disapprove (or no vote)
    • Votes are made & shown in edit history and diffs.
    • All logged-in users can vote on any article.
    • Only the most recent few (say, 5) (approve or disapprove) votes per user per article are active, older votes are expired to "no vote".
    • Votes older than the xth (say, 3rd) most recent stable version, where the voter has not voted since said stable version, are expired (set to no vote), to prevent people who are no longer participating from making the stable marker stick.
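Computing the stable marker from votes might look like this. The data shape and the tie-breaking choice (a tie goes to the newer revision, assuming ids increase over time) are assumptions for illustration.

```python
# Sketch of the computed stable marker: the stable version is the
# revision with the highest approve-minus-disapprove score.

def stable_version(votes_by_revision):
    """votes_by_revision maps revision id -> list of +1/-1 votes.
    Returns the id with the highest net score; ties go to the newer
    revision."""
    best_id, best_score = None, float("-inf")
    for rev_id in sorted(votes_by_revision):
        score = sum(votes_by_revision[rev_id])
        if score >= best_score:
            best_id, best_score = rev_id, score
    return best_id
```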

Semi-automation. Also, there will be a software aid: "Stable" versions are generally just that: stable; they haven't been edited in a while. An automated mechanism could go through the history and mark revisions with a tentative measure:

k * e^(-c*age) * e^(d*[time before next edit])

where c is the decay rate (more recent versions should be preferred) and d is the stability unit. Both c and d could be inferred statistically per article, since some articles are edited a lot (high d) and others not so much (low d), and some articles are about current events (high c) and some are about dead events (low c). Revisions with a high such measure are candidates for the next "stable" version, to be selected by community approval.
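The measure above can be transcribed directly; k, c and d would be per-article parameters, and the default values here are arbitrary assumptions.

```python
import math

# Direct transcription of k * e^(-c*age) * e^(d*[time before next edit]):
# recent revisions score higher (decay rate c), and revisions that then
# survived a long time unedited score higher (stability unit d).

def stability_score(age, time_before_next_edit, k=1.0, c=0.01, d=0.1):
    return k * math.exp(-c * age) * math.exp(d * time_before_next_edit)
```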

Furthermore, on the history page, one could filter versions to see only the top handful (say, 8) of stability scorers (and the current stable version), with current vote counts, and then look at the diffs and vote. Kevin Baastalk 22:27, 24 January 2006 (UTC)[reply]

And admins could set pages to automatic instead of semi-automatic. In automatic mode, the software would determine the stable version based on the stability score rather than community vote. This would be useful on pages such as "current events". Kevin Baastalk 23:06, 24 January 2006 (UTC)[reply]

Oh, and on the history page, little graphic bars showing the stability measure and approve-disapprove (different colors). This is trivial to do: it's just a 1-pixel image given a certain height, and whatever width you want. Kevin Baastalk 18:25, 26 January 2006 (UTC)[reply]

Validation by community assent edit

en:Wikipedia:Community assent is only for policy and guideline pages. The goal is to allow those who want to be bold to work together with those who want consensus first, while still maintaining stability in the policy and guideline pages. This stability is achievable with this guideline for validation, which establishes a deliberative consensus over a specific version.

The basic idea is to mark a specific version of the policy or guideline page for nomination, and someone else must second the nomination. Once there is a second, a copy of the specified version is placed on a new subpage, and discussion continues on the new talk page for consensus about that specified version. The process is based on some parts of en:parliamentary procedure combined with ideas to implement stable versions of articles.

If consensus fails on the new talk page, the subpage will eventually be deleted. Before its deletion, a summary of the reason why it failed will be added back at the original talk page under the nomination or, if the nomination has been archived, simply appended with reference to the nomination.

If consensus supports the version, an appropriate measure is taken to effectuate the assent. This may simply mean that the subpage remains protected from editing and deletion, and a link is added on the original page referring to the current version assented to by the community.

This allows for development to continue on the original page, which is always considered instruction creep until there is community assent. Since "votes are evil," the subpage allows an appropriate place for the consensus to evolve. Even with community assent established, it neither means there is a final decision nor that everyone has individually consented.

Please see en:Wikipedia:Community assent for further reasons and details of the coherency poll and deliberative consensus. Dzonatas 01:01, 4 March 2006 (UTC)


Article Branches + Display Precedence by Voting

  • I suggest not having one article with its versions recorded, but instead an arbitrary number of articles (branches), each with its versions recorded. (See e.g. Concurrent Versions System for such branched version control.)
  • Each article branch has a display precedence value; the branch with the highest value is shown as the default article (the front branch) when an article is looked up. The other article branches are accessible by extra clicks (the higher the value, the easier it should be to access/display that branch).
  • The display precedence value relates to user popularity, measured by voting (possibly modified by some time decay).
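The precedence rule above can be sketched as follows. The proposal does not fix a decay form, so this assumes exponentially decayed votes with a configurable half-life; the function names and the `(timestamp, weight)` vote representation are illustrative.

```python
def display_precedence(votes, now, half_life):
    """Sum votes, each discounted by its age with the given half-life.
    'votes' is a list of (timestamp, weight) pairs; the exponential
    decay is an assumption, not part of the proposal."""
    return sum(w * 0.5 ** ((now - t) / half_life) for t, w in votes)

def front_branch(branches, now, half_life):
    """Pick the branch with the highest display precedence value; it
    becomes the default article shown on lookup."""
    return max(branches,
               key=lambda b: display_precedence(branches[b], now, half_life))
```

With a short half-life, a recently popular branch overtakes a branch whose votes are old, which is the "possibly modified by some time decay" behavior the bullet describes.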

Thus the concept of the single, canonical article would be gone.

Instead we would have the possibility of alternative articles (article branches), a major improvement, as even in a field like mathematics there are often alternative ways to write something up (otherwise we wouldn't have hundreds of different calculus textbooks). As each article branch would have its own history, different teams of authors could work on their view over a long time.

While article branches are technically equal, they all would compete for the easiest display/user access - to be the front branch.

The user is the sovereign: he decides which article branch is the best by voting. (An addition could be an external reference-counting ranking, à la the Science Citation Index or Google's PageRank algorithm.)

My original proposal can be found under Wikipedia:de:Benutzer:Marc van Woerkom/Alternativen. --Marc van Woerkom 02:03, 8 July 2006 (UTC)


Organic verification

I love wikipedia.

I want wikipedia to be: 1. open 2. comprehensive 3. accurate

There is an inherent tension between quality (as seen by an expert) and openness. A critical aspect of our wiki is its popularity. Now, if the structure of wiki is changed to make one particular version of an article more authoritative, then by definition some guy who changes something will see his change automatically reverted, made invisible, or hidden under some link. This is very undesirable, because the ONE thing that has made wiki so powerful is a new user typing "TESTING TESTING"; any change to wiki that prevents this first dip in the water is fatal. Take it away and the curious user will lose interest, and we lose him... and as big as wiki is, don't kid yourself: wiki will always need new blood. So... what to do about accuracy?

I think everyone would agree that the essential problem is this: we don't have enough experts with enough time to go through every change, find the bad ones, and revert them, AND have enough time to find and fix up bad articles AND put meat on stubs. What we need is a way to improve the situation.

I am a believer in techno solutions, but all improvements must be focused on the psychology of the users. I believe that JW is a visionary, both in the features and policies he has adopted and, much more importantly, in the ones he hasn't: he has resisted (so far) the urge to "protect" wiki, thus giving it its power.

Finally I'd like to present my own proposal, which, although small, has been mulled over for a long time. (Please note that the wording, format, and options in this proposal are just general suggestions and should be changed to whatever seems better.) Essentially:

You (any user) open an article and see bad quality. You click a link asking "does this article need improvement?" and select options indicating the article's problem(s). The article goes into a category of less-than-perfect articles and automatically gains a header stating "this article has been reported as not being of high quality", and it will stay that way until someone makes ANY non-minor edit to the article. At the same time, a link on the main page points users (or just experts) to problem pages.
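The mechanics of this flow are small enough to model directly. A minimal sketch, with hypothetical class and function names (the proposal names none of these): any report flags the article, any non-minor edit clears the flag, and a "problem pages" listing drives the main-page link.

```python
class Article:
    """Minimal model of the reporting flow: any reader's report flags
    the article; any non-minor edit clears the flag."""

    def __init__(self, title):
        self.title = title
        self.reports = 0  # how many readers flagged this article

    @property
    def flagged(self):
        return self.reports > 0

    def report(self):
        """Reader clicked 'does this article need improvement?'."""
        self.reports += 1

    def edit(self, minor=False):
        """Any non-minor edit removes the article from the category."""
        if not minor:
            self.reports = 0

def problem_pages(articles):
    """The category of less-than-perfect articles, most-reported first,
    as a source for the main-page link."""
    return sorted((a for a in articles if a.flagged),
                  key=lambda a: a.reports, reverse=True)
```

Note that the report count ordering implements the "bad article ladder" variant mentioned in the notes below; the single-report threshold is the base proposal.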

Notes:

  • This is not the same as recent changes; watching reported articles is much more efficient at finding real problems.
  • Users will become a lot more involved.
  • Bad editors will get quick feedback on their work, without the need for a tough-skinned expert with plenty of time to come along and battle it out with them, just for the stubborn guy to revert everything tomorrow.
  • Good expert editors can rest easy, as their work will be protected by the common sense of the silent majority, who will not get involved in edit wars but can make a great difference if the process is convenient.
  • There is no positive vote. Wikipedia is incompatible with positive selection: anyone trying to build a method of validation based on positive selection will always hit mountains of rules, loopholes, and disaffected users. It is impossible; it's an inherent aspect of wiki.
  • Any single vote can put an article in the less-than-perfect category, though alternatively the number of people reporting the article could push it higher and higher up the bad-article ladder.
  • This feature is very easy to set up and would not change the structure of wikipedia in any way.


thank you for your time --58.106.21.27 12:01, 23 August 2006 (UTC)