Image filter referendum/Next steps/en

The referendum ended 30 August 2011. No more votes will be accepted.
The results were announced on 3 September 2011.

This is a summary of specific analysis, problems raised, and proposed next steps from the recent discussions. A work in progress; please edit.

Analysis

Many people consider this an important topic – not just a few conservative zealots or a few vocal editors who worry about the children of others and see pornography everywhere. This includes some people who initially did not see a use for such a feature. They tend to focus on reader choice and immediate effects (what this lets readers do for themselves).

Many people consider the proposal as described inappropriate for Wikimedia – not just a few liberal zealots or a few vocal editors who worry about 1984 scenarios and see censorship everywhere. This includes some people who initially thought it sounded like a reasonable idea. They tend to focus on philosophical and moral problems, implementation difficulty, and second-order effects (slippery slopes, impact on category integrity, impact on editorial practice).

German opinion poll

An opinion poll was started on the German Wikipedia, running for three weeks from August 25 to September 15. It asks active editors whether the feature described in the global survey (an image filter based on global/local categories) should be expressly not implemented on the German Wikipedia.[1]

The German poll differs from the committee's referendum in a number of ways. It asks a yes/no question about support or opposition. It is run according to the 'opinion poll' standards on the German Wikipedia: to vote, accounts must be at least two months old, with 200 article-namespace edits, 50 of them in the past year. The poll is public. The global survey, in contrast, used a secret ballot and was open to anyone with at least 10 edits on any Wikimedia project. Once language-specific statistics are available from the global survey and the de:wp poll has finished, a comparison will be possible.

The initiators of the German poll did not have database access to the email addresses of all users. If the WMF were to fund a referendum for the German Wikipedia on a secure server with a secret ballot, the German-speaking community would probably gladly accept that. (comment by Eingangskontrolle)

Implementation concerns

There has been limited discussion of possible implementations that could address the request by the Board in its resolution on the topic. Only one implementation was proposed in detail, in the spring, as an illustrative example. A discussion of implementation issues was expected to happen after this vote; however, details of the example implementation were mentioned in many of the negative reactions.

The current concept of the image filter seems unclear to many interested participants. Categories visible in the example design include: children's games, gladiatorial combat, monsters and horror, graphic violence, medical, sexually explicit, and other controversial content. Some have taken this very literally, responding that "other controversial content is a very wide category, which could be abused to filter anything anyone opposes. It is unclear why children's games should be filtered."[2]

Issues to address

Specific problems identified, or proposals made.

Clarify the conversation

(from the talk page)

Stop now, take a break, and organize a better referendum with a clearer proposal, questions, and options.

Coren's four woes

These are what I see as the four principal problems with the image filter as currently conceived, with some detailed discussion of why I think they are showstoppers. One point of note is that they are not particularly interdependent.

1. Technical misuse

Pretty much by definition, any mechanism which allows the reader to see or not see "classes" of images according to user preference can be trivially misused to censor those images without their consent.

There are a number of reasons why it is not possible or tractable to serve a different page to different users (caching, the resources needed for a fresh render, etc.). Because of this, the selection criteria for images must be transmitted along with the page contents.
The intent, obviously, is to have a mechanism on the user side (JavaScript being the obvious choice) decide whether to load and display each image according to whether it matches the filtering criteria. Those criteria (the category matches of the images) must either be made available in-band or be easily accessible through a request (with, say, AJAX).
The problem with this is that anyone who controls network access can install a simple proxy that filters using those very mechanisms we must make available, user settings and preferences be damned. This is not currently practical because deciding whether a given image should be filtered is a hard problem (the filename is of very limited help); we would be providing both the technical means to discriminate and the free manpower to classify censorable images.
A proxy that makes filtering decisions against the user's wishes is not a hypothetical scenario; we already have a number of entire countries that are filtered through a very small number of proxies administered against the readers' interests. The only reason the censorship does not sting more than it does is the intractability of deciding what to filter – solving that difficulty seems a very bad move indeed.
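To make the dual-use concern concrete, here is a minimal sketch (TypeScript; all names are hypothetical and not part of any proposed implementation) of how the same in-band category data serves both a reader-controlled filter and a proxy acting against the reader:

  // Filter metadata that would have to travel with the page, or be
  // fetchable per image, for any client-side filter to work.
  interface ImageMeta {
    src: string;                  // image URL
    filterCategories: string[];   // e.g. ["sexually explicit"]
  }

  // Reader-controlled use: hide images matching the reader's own list.
  function applyReaderFilter(images: ImageMeta[], hidden: Set<string>): ImageMeta[] {
    return images.filter(img => !img.filterCategories.some(cat => hidden.has(cat)));
  }

  // Misuse: exactly the same test, run on a proxy the reader does not control,
  // silently strips the images before the page ever arrives.
  function proxyStrip(images: ImageMeta[], mandated: Set<string>): ImageMeta[] {
    return applyReaderFilter(images, mandated);   // identical logic
  }
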
2. Social misuse

A user setting to show or not show images of various types presupposes that the readers are always – or even generally – in a position where they can make a free and informed choice in how that feature is configured. This is unlikely to be realistic for many readers.

The first aspect of this is pure social pressure. The very existence of a filtering mechanism places pressure on readers in morally restrictive environments to turn on filtering that they would otherwise prefer not to, both by fear of opprobrium and by exploiting the common guilt mechanism. Arguably, this is self-inflicted and "not our problem", but the end result remains that a significant number of readers who would otherwise not self-censor will be pressured into using the feature.
The second, and more worrisome, aspect is that a very significant number of readers (the majority?) in non-euro-occidental nations (the so-called "global south" we are so desperate to reach) can only use our projects in public and semi-public places – cafés, public libraries, and shared computers. It is likely, if not certain, that those places will impose at least some restrictions on how that filter is configured as a condition of access.
Finally, the existence of a filtering feature is almost certain to increase outcries about improper content on our projects. By giving access to a filter, we set up an unrealistic expectation: a reader who asks not to see nudity, for instance, will be justifiably angrier at the inevitable classification error than one who understands from the outset that Wikipedia has diverse content and that some subjects may be illustrated in a manner they find unpleasant.
3. It cannot be made neutral, or objective

The very concept of simple categories of possibly objectionable images (which is a requirement of the proposed design) is unworkable. Even ostensibly objective criteria could not possibly be agreed upon across cultures, given the diverse backgrounds of readers (and editors).

For instance, even a superficially simple category like "nudity" is fraught with insurmountable subjectivity:
  • Is it images of people who are wearing no clothing, no matter how much or little you see? (But then, what if you only see a shoulder blade but it's clear from context that the model is nude? What percentage of visible body is needed?) Images of people where you can see genitals? (What about fig leaves? Or a well-placed bit of decor?) Breasts or other secondary sexual characteristics? (What about nipples on men?) What about somebody who wears clothing but has exposed buttocks? Or a skin-tight bodysuit that leaves nothing to the imagination? Is a man who is very visibly erect through clothing "nudity"? (No, but people who would filter on that would expect not to see it.)
  • You could use a large number of increasingly specific categories, but there goes simplicity and the "10-15 categories".
  • What about art? Is a photograph of a sculpture of a nude person "nudity"? What about a fairly abstract painting that nonetheless represents a nude person? (Look at some Picasso nudes for inspiration).
  • Is an image of a dildo or vibrator sexually explicit? Is one of a condom? An intrauterine device?
  • Want or need to make an exception for medical imagery? Do you then place images of genital warts in or out?
And that is just the supposedly simple topic of nudity(!) We haven't touched the more complicated subjects of "violence" and "religious imagery".
This all has the effect that you either go overboard, and people do not see images they would have wanted to see (but cannot know they exist or check without disabling the filter – possibly exposing them to images they didn't want to see); or you go conservative and only mark the most egregious examples everyone can agree on, in which case the only people who would willingly use a filter in the first place will be dissatisfied and complain even more loudly because of the increased expectation that they will be "protected".
4. It is not compatible with our mission

This is arguably the most subjective of my objections, but the one I feel most strongly about. Ultimately, this feature provides for self-selected POV forks of articles and thus defeats our goals of neutrality. I cannot think of a single argument that makes this any different from marking paragraphs of text for possibly objectionable contents to show different article text (blah blah gay partner blah blah blah); yet nobody would consider the latter without yelling in horror.

Make no mistake – this is not a slippery slope argument: I'm not arguing that filtering images leads to filtering text; I'm arguing that they already are the same thing. Ultimately, we are presenting different content to different readers depending on their preconceptions (or – worse – someone else's), the exact opposite of what NPOV means.

Finally, our educative mission is set aside for the convenience of superficial "respectability". Being educated sometimes means being disturbed, or shocked. It means learning new things, not being placed in a comfortable room where the only things you can hear and see are those with which you already agree. Our job, our mission is "all the knowledge", not "all the knowledge you are personally comfortable with". The latter is network television's job, not ours.

Alec's Addendum

draft of extended remarks
1. Categories ≠ Labels

Categories are objective; labels are subjective. Two different purposes, two different standards for inclusion.

2. Objective Categorization + Cultural Neutrality is not possible.

Offense is an emotional reaction; it is not rational or objective. Two people who have different emotional experiences cannot reach a consensus on what a person "should" feel when viewing an image.

3. Unlimited labeling + Cultural Neutrality is possible, but controversial.

With unlimited labels and a collaborative space, users of the feature could develop their own filters. Letting them do so is, in fact, controversial with a substantial portion of the editors.

4. We need to develop collaborative labeling for subjective measures anyway.

In five years, Commons will have whole video series – lots and lots of them. We want to find the "most fascinating" and the "most mindblowing" and the "most humorous" videos on the site. Except, we won't all agree on what's "fascinating". So let's start building a system to let people collaborate on describing content in subjective ways. I could even use the system to seek out the "most offensive" images, those most likely to shock my unsuspecting friends. Tagging for "Inspiring" and tagging for "Offensive" are the same technology – only the emotion is different.

A summary of questions and concerns

The following is a summary of the questions and concerns posed by users in the talk pages and forums relating to the referendum and the filter.

Referendum

  1. Does the board/foundation now believe that there should be a community-wide discussion about the image filter, and one about the future of referendums here?
  2. Was the exercise a referendum or a poll?
    Referendums usually provide one or two clear questions
    They usually allow a yes/no answer.
    The results are either binding or are meant to seriously affect the final implementation of the problem/question posed.
  3. Would information about what active editors think about the filter (as distinct from non-contributing readers) play a role in the decision on implementing the filter?
  4. Do they entertain the possibility of another referendum?
    Would a new referendum pose a yes/no question?
    Would a new referendum collect data on users' edit counts and activity to allow cross-tab analysis of the results?
  5. What do they think about the poll on the German wiki, and the potential threat of forking the wiki based on that poll (a poll with problems, but not an impossible threat)?
  6. Would having had a community discussion beforehand have led to better questions being posed in the referendum and provided clearer answers?
  7. Do they see that, with the data available, meaningful cross-analysis is very difficult (e.g. determining which voters were contributors and which were non-contributors)?
  8. The information provided to users on the referendum forum and talk page painted the filter in only a positive light. The FAQ was extremely one-sided and only became more balanced halfway through the referendum; the content of the main page, however, could not be edited to make it more neutral, as the page was locked. How might this have affected the results of the referendum?
  9. There was no effort made to encourage users to think about any ramifications of the use of the filter or to join the discussion (readers are not all familiar with talk pages, and users may have incorrectly assumed that the talk page was simply a discussion about how to carry out the referendum). How might this have affected the results of the referendum?

About the image filter

  1. How does the board/foundation feel about the difficulty in finding any consensus on which categories to make and how to place each image in what category?
  2. What does culturally neutral mean? A concrete example would help users greatly in understanding the problem and possible solutions.
  3. What does the board think of the potential use of the filter by 3rd parties to limit images that users would see using computers connected to that network (schools, universities, local provided access, etc.)?
  4. Is it worth considering the possibility that the filter is the first step on a slippery slope of providing more and more tools that cross the line between personal control and censorship?
  5. How would the board deal with the difficult technical aspects of the filter (better stated by other users on this page)?
  6. Is there a meaningful difference between filtering images and filtering text?
    Would the board/foundation ever consider a text filter?
    Are images as important as text to the overall presentation of an article and the understanding of a topic?
  7. Would the filter conflict with the concept of "content by consensus", and more simply with the "open content" concept or "the free encyclopedia"?

--Shabidoo 23:55, 9 September 2011 (UTC)

Potential models for hiding images

A few different options for letting readers hide images have been discussed. They are compiled here for discussion and as possible options for proceeding. Please feel free to add to or modify the following. If you wish to modify an option radically, please consider adding a new option instead of changing the existing one.

Options requiring no labelling information on the wiki

No such feature

This option would reject any sort of feature to hide images. This may involve an amendment of the board resolution requesting such a feature.

Highlights and concerns:

  • Everyone who does or does not want filtering can at least in principle satisfy their own preferences.
  • People who want to hide certain images will have to find a way to do this on their own or as an organization, or to limit their use of the projects.
  • Avoids any concerns of the critics about censorship and effective use of financial and community resources, and avoids protests by community members and projects that oppose the feature.
  • Does not in any way address the concerns of users who would like such a feature. They may continue to apply pressure, for example, to tailor categories to enhance outside filtering.
  • We can be more positive than simply saying "no action" – we can say that we "continue to allow others to do filtering", meeting the apparent desire of the Board to do something. Wikimedia generates intellectual property which, if proprietary, would be worth billions, and makes it available to anyone who wants to develop a version they find suitable – all they need to do is sort through it to decide what to offer.
  • According to our CC and GFDL licensing, anyone in the world may create a derivative of any part of Wikipedia for any purpose. Anyone who desires to hide images or other content is free to do so using whatever criteria they choose to develop. Under this licensing, however much we may dislike some uses of such filtering, we cannot prevent it.
  • It requires no additional work on the part of Wikimedians. Providing a filter does not become our job.
  • It continues to maintain our own existing principles of freely available content. In view of pressures in the world towards censorship, we at least should maintain our basic vision.

Off-wiki support

This option keeps image hiding features outside of the normal editing and reading of Wikipedia. For example, a mirror site could serve all of Wikipedia after revising it however it sees fit. Or a nannyware company might publicly offer Wikipedia "filtered" by its standards. The distinction between this and doing nothing is that Wikimedia would arrange to be extra friendly for such sites' purposes – for example, by setting up a URL where an approved history version of an article can be served up without any extra markup, suitable to be put in a frame by another site and presented as the current version. Or by allowing an edited version of an article to be saved outside of the normal editing history, for the use of a particular group with an image or content hiding agenda. (Both of these approaches allow a third party to run a mirror site, but rely on Wikipedia for much of the disk space and bandwidth.) Or we could provide a nannyware company with a continuously updated feed of all images uploaded or changed, so that it could keep its filtering up to date.
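
As a rough illustration of the "approved version" idea: a mirror can already pull the rendered HTML of one specific revision through the standard MediaWiki action API. The sketch below (TypeScript) assumes the English Wikipedia endpoint; the choice of which revision counts as "approved" would be the mirror's own bookkeeping.

  // Fetch the rendered HTML of one specific revision via the MediaWiki API.
  async function fetchApprovedRevision(revId: number): Promise<string> {
    const url = "https://en.wikipedia.org/w/api.php" +
      `?action=parse&oldid=${revId}&prop=text&formatversion=2&format=json&origin=*`;
    const response = await fetch(url);
    const data = await response.json();
    return data.parse.text;   // rendered HTML for that revision only
  }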

Highlights and concerns:

  • Requires some source of funding for these sites to be publicly available. While it may indeed be possible for such operators to solicit donations, those donations might come at the cost of Wikipedia donations.
  • May lead to fairly close contact and collaboration with outside commercial or religious entities, which could be perceived to undermine WMF's honesty or independence.
  • Keeps filtering disputes away from Wikimedia projects.
    • WMF would still be involved in suggesting filter options, raising disputes about whether it is right to recommend these options to its users.
  • Third party filters without direct supervision by Wikimedia Foundation are not guaranteed to be free of spyware or viruses.
  • Money from donations would have to be spent on an external company that censors Wikipedia content. Self-censorship is bad, but censorship without giving the community any chance of control is worse.

Options with on-wiki impact, but not needing agreement on image categorization

Simple features, hiding all images

This option might include user preferences such as "show no images" or "turn off images for this page".

Highlights and concerns:

  • This would be the easiest proposed feature to implement.
  • Useful for readers using slow connections as well, depending on implementation.
  • All web browsers include a "show no images" feature, and many offer site-specific blocking options through ad-blocker plug-ins. But this has not stopped users' persistent demands for a more usable mechanism. Many readers are unaware of available methods or find them awkward.
    • The basic functionality to hide images on individual pages has also existed for years (by editing one's personal CSS file; see instructions here). But this is not currently reader-friendly, and has limited flexibility.
  • A more generally useful setting might be "show no images by default", paired with a trivially usable "show the images on this page" and "show this image" toggle always within the reader's reach. Making all three toggles available would neatly solve the usability problem and double as the "hide any image" feature below (that is, if the reader does not hide images by default, they can turn individual images off by the same mechanism they would use to show them if they did); see the sketch after this list.
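
A minimal sketch (TypeScript, hypothetical names) of how those three toggles could interact, without any category or label data:

  interface ImageDisplayPrefs {
    hideAllByDefault: boolean;                 // "show no images by default"
    showThisPage: boolean;                     // "show the images on this page"
    perImageOverrides: Map<string, boolean>;   // image URL -> show it?
  }

  function shouldShowImage(prefs: ImageDisplayPrefs, imageUrl: string): boolean {
    const override = prefs.perImageOverrides.get(imageUrl);
    if (override !== undefined) return override;   // per-image toggle wins
    if (prefs.showThisPage) return true;           // page-level toggle
    return !prefs.hideAllByDefault;                // global default
  }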

Personal blacklist/whitelist

This option would let a user click to shutter or display individual images, but not groups of images based on categories or labels. It would be very similar to the mock-up wmf:Personal image filter for shuttering individual images, but would not include the ability to select groups of images to filter.

Each user could shutter images after viewing them on a case-by-case basis (possibly article-by-article). The hidden images would be remembered and would form the user's personal blacklist. A user could also choose to shutter all images and allow individual ones on a case-by-case basis, creating a personal whitelist.
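
A minimal sketch (TypeScript, illustrative names) of such a personal list, assuming browser-side storage; storage tied to the account on the server would follow the same shape.

  type ListMode = "blacklist" | "whitelist";

  interface PersonalList {
    mode: ListMode;
    entries: string[];   // identifiers of images the reader has toggled
  }

  function loadList(): PersonalList {
    const raw = localStorage.getItem("personalImageList");
    return raw ? JSON.parse(raw) : { mode: "blacklist", entries: [] };
  }

  function saveList(list: PersonalList): void {
    localStorage.setItem("personalImageList", JSON.stringify(list));
  }

  // Shutter or un-shutter one image after seeing it.
  function toggleImage(list: PersonalList, imageId: string): void {
    const i = list.entries.indexOf(imageId);
    if (i >= 0) list.entries.splice(i, 1); else list.entries.push(imageId);
    saveList(list);
  }

  function isHidden(list: PersonalList, imageId: string): boolean {
    const listed = list.entries.includes(imageId);
    return list.mode === "blacklist" ? listed : !listed;
  }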

Highlights and concerns:

  • Simple to implement, flexible.
  • The strength and weakness of this proposal is its lack of prejudgment. This requires each user to see each image before they can decide whether to place it on their individual blacklist. However, this frees the community from developing criteria to identify potentially controversial images, and from applying those criteria to images.
  • The basic functionality to hide individual images has existed for years (by editing one's personal CSS file; see instructions here). This approach would improve the visibility, usability and user-friendliness of this existing functionality.
  • A major disadvantage of this approach is that (by itself) it does not allow any sort of prediction of a reader's future objections, regardless of that reader's past choices. Every reader will have to turn off all images, or turn off each image he or she finds offensive individually after seeing the image at least once. It's unclear how much reduction in reader offense this approach will actually produce. The results of the image referendum, particularly the responses to the question "It is important that individuals be able to report or flag images that they see as controversial, that have not yet been categorized as such", indicate that at least a third of the participants rate the ability to filter by image category, not just by individual images, as extremely desirable.

Shared personal blacklist/whitelist

This option permits users to incorporate other users' personal blacklists/whitelists into their own. For example, if blacklists are written as wiki pages in userspace, they could permit other wiki pages to be included by transclusion, in the way templates are.
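
A minimal sketch (TypeScript) of how incorporated lists might be resolved. It assumes each list page yields its own entries plus references to further list pages; fetchList() and all names are hypothetical.

  interface SharedList {
    entries: string[];    // image identifiers listed directly on this page
    includes: string[];   // titles of other list pages to incorporate
  }

  async function resolveList(
    title: string,
    fetchList: (title: string) => Promise<SharedList>,
    seen: Set<string> = new Set()
  ): Promise<Set<string>> {
    if (seen.has(title)) return new Set();   // guard against circular includes
    seen.add(title);
    const list = await fetchList(title);
    const result = new Set(list.entries);
    for (const included of list.includes) {
      for (const id of await resolveList(included, fetchList, seen)) {
        result.add(id);
      }
    }
    return result;
  }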

Highlights and concerns:

  • Requires users to work together to decide on an individual basis whose lists to trust.
  • Nearly as simple as the personal list.
  • Permits a purely personal list as the simplest case.
  • Requires at least one person to see the images.
  • Vulnerable to vandalism on any of the incorporated pages, unless technical measures are taken to protect the lists (for example, by allowing a user to apply some new protection template to his userpages that allows only admins and himself to edit them).
    • The blacklist page could be protected like watchlist, .js, or .css pages, which are editable only by their owner. But it could be abused by third-party censorship tools as their database.
      • Existing third-party censorship tools already use the existing image categories as their database, so a user-defined blacklist does not make much difference – unless such a blacklist is hidden from everyone except its owner, which would make sharing blacklists between users impossible, or extremely inconvenient.
  • Also does not require a community-wide process to categorize images, beyond voluntary collaborations.
  • May also benefit from the basic functionality already present.
  • Requires logging in to create or edit one's own lists.

Develop relevant on-wiki processes

Per Alec's suggestions above: develop ways to add subjective categories, as is already done for 'good articles' or 'needing cleanup'.
Per Pavel's comments on de:wp: develop better standards for how to illustrate sensitive articles, or those where there are edit wars about image inclusion – support being welcoming to readers through a wiki process.

Highlights and concerns:

  • This is highly compatible with other wiki practice.
  • This can be slow to develop and may not reach consensus, even for very high-importance articles (e.g., artist's conceptions of Mohammed being used to illustrate the lede of his bio).
  • Wikipedians would have to invest in policy development.
  • Subjective categories will introduce POV.

Label-based filter

This option would introduce new labels to be used by the filter. Labels would be similar to the existing category system, but distinct in implementation and purpose, being expressly for filtering. For example, [[Filter label: Graphic wounds]] would be added to certain images to allow users to filter that group of images.

Highlights and concerns:

  • Keeps the labeling system (for filtering) technically and philosophically distinct from the category system (for finding and describing). This may help prevent conflicts between users working toward these two distinct purposes, make it easier to maintain good practices for applying categories, and allow us to acknowledge and possibly mitigate the negative effects of labeling.
  • Creating an effective opt-in filter would require identifying 'controversial' content in images. The Harris report recommended "that tagging regimes that would allow third-parties to filter Wikimedia content be restricted in their use."
  • Labels must be maintained by the community, which offends part of the community.
  • Wikipedians would have to invest their working capacity in building and maintaining the labels. This option would introduce the need to watch the label list and protect it from POV warriors.
  • Labeling images as "controversial" may prejudice readers against the content.
  • The two editing (as opposed to reader) communities surveyed on the German and the French Wikipedia have shown strong resistance (80%) to the idea of filtering. Thus, the filtering effort cannot count much on volunteer participation from the core editing community, at least on some Wikipedias. Therefore, a solution which requires substantial editor contribution, like special labeling, may turn out unfeasible in practice, despite the readers' desire, unless a massive restructuring of some editing communities takes place beforehand to realign them with the casual readers' expectations.

Limited list

This approach is based on an acknowledgment that labeling is an infringement of intellectual freedom because it may prejudice the reader against certain content, but holds that the infringement is justified in a few cases. Specifically, it may be justified where a label can be objectively defined and is not discriminatory on the basis of personal characteristics. (More here)

Additional highlights and concerns:

  • Attempts to strike a balance, providing a reasonably effective filter while minimizing prejudicial effects of labeling.
  • Filters provided may not be the best possible match to user expectations because their effectiveness will be hampered by the requirement of minimal discrimination.
  • Majority sensibilities would likely dominate the selection of the limited number of filters.

Open list

This option would allow the community to develop and maintain labels for filtering without restriction. Wikipedians could create a variety of specific labels and then combine them into different parent sets.

Additional highlights and concerns:

  • Users would be able to choose from, and help develop, a variety of custom-built, community-maintained filters to meet their needs.
  • Likely the most effective and flexible option, as users have considerable choice in selecting filters.
  • Where users are able to find a filter that matches their expectations, this would result in relatively few false positives.
  • Individuals might choose to create prejudicial or discriminatory filters, e.g., removing images of people based on their gender, sexual orientation, race, or political affiliation.
  • Wikipedians would need to label ten million images.
  • May be more difficult to implement performantly.
  • An open list would give the community at least a bit of control, but is also very susceptible to trolling and POV wars.

Basing the filter on the existing category system

This option would implement a personal image filter along the lines proposed in the initial example, using categories which match classes of controversial images.

Highlights and concerns:

  • In theory, this approach gives the reader a broader choice than targeted labels, as it allows filtering to be tailored even for uncommon objections.
  • Assembling a suitable initial list of categories for a given user may be a less-than-trivial task, and it may prove clunky for the casual reader. Commons alone has approximately 500,000 categories, with dozens being created and deleted each day.
  • There is less up-front work than individually inspecting and labeling millions of images, but this may come at the cost of filtering precision; see the next item.
  • Existing categories are not designed to be effective filters. To the extent we are able to maintain good classification principles for these categories, the filters may not meet user expectations. Some categories will likely be under-inclusive from a filtering perspective (e.g. covering only images primarily about nudity, rather than all images containing nudity), while others are likely to be over-inclusive (e.g. filtering out nude paintings, not just photographs). This leads us to the following concern:
    • Users wanting an effective filter would have a strong incentive to apply these categories to images as one would apply a warning label, rather than a description of aboutness (e.g. applying a nudity-related label to any image containing nudity, rather than images where nudity is central to the subject). To the extent this practice dominates, it would assist third-party censorship and reduce the usability of these categories as descriptive and finding aids.
    • The above implies the need to watch the most filtered categories and to protect them from POV-Warriors and vandals. Wikimedia editors will have to invest their working capacity in this effort.

Closed list

A global survey could identify the top 5-10 classes of controversial images, attempt to codify what is controversial about them, and identify existing categories which match those classes of controversial images.

Additional highlights and concerns:

  • May offer a better approximation of user expectation than some alternatives, though it lacks the flexibility of "open list" options.
  • A finite number of categories would be easier to maintain than an open list system.
  • Majority sensibilities would likely dominate the limited number of 'controversial' areas available.
  • Labeling 5-10 categories as "controversial" may prejudice readers against the content in those categories. This is of particular concern if the categories relate to (or disproportionately feature) identifiable groups of people.

Open list

This option would allow users to select from the existing category list to build individual filters.

Additional highlights and concerns:

  • Users could create a customized list to suit their needs.
  • Each user would need to make individual decisions about the ~500,000 existing categories on Commons as well as the categories on any wiki that allows file uploads, such as wikis with an EDP. Compiling an effective filter would be difficult.
  • By itself, this approach may be difficult to reconcile with the desires of up to 75% of respondents to the referendum, because this cohort gave the top rating to the complex question "It is important that the feature allow readers to quickly and easily choose which types of images they want to hide (e.g., 5-10 categories), so that people could choose for example to hide sexual imagery but not violent imagery?" Unfortunately, it is unclear from the question design whether the "5-10 categories" aspect was central to the voters' rating of this complex usability issue.

A mixed approach: open personal list, but offer a limited set of defaults to seed it

This option would allow users to select from the existing category list to build individual filters, but would bypass the difficulty of selecting the initial list by offering a limited number (5-100) of category lists targeted at specific audiences, for example: Muslim-sensitive, (Western) NSFW, common phobias, etc. As cultural sensitivities vary worldwide, the initial choice might even be suggested by the software using IP geolocation.

The default seed lists might even be adaptively adjusted for maximal usefulness by surveying how the majority of readers change them once they import them into their personal lists. Filter users might also be prompted from time to time to adjust their lists with useful hints. This process would be similar to how online merchants promote similar or complementary products to buyers, e.g. how Amazon builds "customers who bought this item also bought" suggestions; translate that into "readers who filtered out this category also filtered out ..."
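
A minimal sketch (TypeScript) of the "also filtered out" suggestion, assuming only anonymised, aggregated filter lists as input; the data source and all names are hypothetical.

  // Suggest categories that co-occur most often with the reader's current list.
  function suggestCategories(
    filterLists: string[][],          // one array of category names per reader
    alreadyFiltered: Set<string>,
    topN = 5
  ): string[] {
    const counts = new Map<string, number>();
    for (const list of filterLists) {
      if (!list.some(cat => alreadyFiltered.has(cat))) continue;  // no overlap
      for (const cat of list) {
        if (alreadyFiltered.has(cat)) continue;
        counts.set(cat, (counts.get(cat) ?? 0) + 1);
      }
    }
    return [...counts.entries()]
      .sort((a, b) => b[1] - a[1])   // most frequently co-filtered first
      .slice(0, topN)
      .map(([cat]) => cat);
  }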

Additional highlights and concerns:

  • More complex than the other options.

Post mortem for the vote/discussion process

  • Standards are needed for defining "polls / RFCs / referendums". This was not what would normally be called a referendum – unify terminology and set expectations properly.
  • Standards are needed for designing questions in polls, and selecting participants, for statistical utility. There are specific variations on standard practice in sociology that apply to sites where one wants to distinguish veterans / newbies / readers (or casual users).
    • In particular, a sampling of respondents should be surveyed to estimate the expected population responses to questions, so that they can be phrased to capture the full range of opinions expected from a normal distribution of respondents, so that most of the questions do not result in confusing bimodal responses.
    • Should the Foundation hire a professional survey statistician?
  • The place of the research committee in such workflows should be well defined.
  • Having public discussion and comments as with a typical RFC is quite nice. We should bring this back. It can be done while also allowing for private ballots.
    • Most languages had their "discuss" page redirect straight to the English-language discussion page. This may have discouraged non-English speakers from participating in discussion. In future, make "Discuss" link to per-language pages that include a prominent notice pointing editors to active discussions in other languages. However, non-English discussions usually happen elsewhere (in local language-only wikis or on mailing lists).
  • The more data you release, the more we can learn about ourselves and our values. (Obviously, keep voter identity and comments confidential, per the secret vote.) But minimally redacted ballot dumps should be a standard part of poll/referendum/voting results releases – the community needs to know as much about itself as possible, and data dumps can tell us a lot more about ourselves than any histogram ever could.
  • We should consider giving voters fair warning not to include personal information, then releasing all vote information immediately at the end of polls, without any delay for redaction or examination. The comments could be placed on ordinary Wiki pages and redacted, perhaps even revdeled, but only after the fact (which is what we do at the Wikipedia Refdesk fairly often).
    • We could also add a "public comment textbox" to the existing ballot so voters will have both options – just in case someone really does want their comment to be secret.

See also

References