Talk:Controversial content/Brainstorming

Right mouse click menu

Someone suggested using a blurb, filtered, triggered or untriggered by something in the right mouse click menu. That had better be a browser add-on people have to download if they want it. People fucking with my right-click menu are one of the reasons I categorically don't allow JavaScript across the net. --94.134.216.218 05:41, 11 October 2011 (UTC)

So you are against using the right mouse click for anything related to controversial image handling. I agree, but whether or how to use a mouse to trigger something is a very minor usability detail. We are looking for the general strategy instead. -- JakobVoss 17:43, 11 October 2011 (UTC)
It's relevant in that people can get very upset about bad minor details. I'd suggest keeping implementations out of the main description - David Gerard 18:06, 11 October 2011 (UTC)

Apparent misstatement

Part of this page currently says, "If an image is controversial in most contexts, it should be deleted from Commons like it is done now." At the very least, that is a misstatement of current Commons policy. Commons has no policy of deleting images because they are "controversial". For example:

  • Félicien Rops' 19th century drawing of Saint Theresa masturbating is certainly "controversial in most [all?] contexts"; it's also an important work of art, in the public domain, and one we should certainly keep.
  • I would hope that any picture of a present day person in a Nazi uniform would be "controversial in most contexts", but if that were a well-known individual, it would almost certainly be an image to keep.
  • The Abu Ghraib prisoner abuse photos are certainly controversial, I presume "in most contexts" -- really, in any context not relating specifically to abuse of prisoners, and even in some contexts that do relate to abuse of prisoners -- but they are certainly well within Commons' scope.

So what, if anything, does this passage mean to say? - Jmabel 23:32, 11 October 2011 (UTC)

By "controversial in most contexts" I meant that an image is controversial in almost any context, without providing any educational value. The examples you provided are not of that kind, but if you know rotten.com, there are better examples. -- JakobVoss 18:46, 12 October 2011 (UTC)
But that's misleading; we wouldn't delete them because they're controversial, but because they're out of scope (i.e., not educational). LtPowers 00:48, 13 October 2011 (UTC)

Controversial content - what's that?

Read this: A personal appeal from Rillke:

Every single individual will tell you something different about what "controversial" is (a Muslim, some fanatic WASP, a GOP member, you with regard to what your children should or should not see, ...). I think there is no problem at all - at least not with Wikipedia. If you click on an article about a subject, you have to accept that you will get an illustrative image with it. Otherwise you can turn off images in your browser. It is complete nonsense to attempt to filter! Just another crappy extension which causes more work and unnecessary server load and cost. Come down from the ivory tower, do something useful, and switch off these annoying big banners!

An der "Abstimmung" habe ich auch nicht teilgenommen, weil ich inzwischen alles was durch riesige Banner angekündigt wird, ignoriere.

Come down from the ivory tower! This nonsense has annoyed me since the start of the debate. -- Rillke 10:56, 12 October 2011 (UTC)

This page is not for discussion for or against any treatment of controversial content at all, nor for discussing how to advertise or not advertise it, but for brainstorming how controversial content could be handled. I agree that different people find different images disturbing, and I think it also depends on time, mood and other context. -- JakobVoss 18:50, 12 October 2011 (UTC)
There is no agreement on making a filter for controversial content. Therefore, this brainstorming page can be used for discussing whether we need a filter. --Lin linao 21:41, 14 October 2011 (UTC)
How strange that a person who does so much on Commons would give such a Wikipedia-centric complaint.
Perhaps our problem is largely cultural: those of us with our own personal computers, high-speed data connections, and plenty of space and privacy probably have no idea what difficulties our images pose to people whose only internet access is at a public cafe. It's all very well for us to say "don't read the article if you don't want to see the pictures", but imagine a girl in a poor, morally conservative country who has been raped and needs to know more about sex. Her only computer access is in a very public internet cafe. Should we wealthy people insist that this poor girl's computer display easily identifiable images, so that any person who walks past her knows what she's reading about? Should we tell her, "Honey, if you don't want the world to know you're reading about sex, then you should be wealthy like me, so you can take your computer into your own room to read, or just not read the articles." WhatamIdoing 16:04, 24 October 2011 (UTC)
People with poor bandwidth normally turn off images, as I do when connecting with a 56K modem. The next step is hiding images in my anatomy atlas. BTW: examples are bad. The "little girl" is only one of thousands of examples. If parents do not tolerate images, I think they will not tolerate the text either. As I said elsewhere: if we have to hide images from Wikipedia articles, we are doing something wrong. Much good may it do you.
Concerning the "strange thing that I dare to give a WP-centric comment": I know that WP is much more popular than Commons, and I don't think the "little girl" will come to Commons in order to read something. So the filter focuses on WP, and I just followed suit. -- Rillke 07:21, 25 October 2011 (UTC)
None of the proposals for dealing with controversial images are sufficient for parent-directed control. If a parent wants to keep images away from their kids, then they need something irreversible, not a "click here to show the image" option.
I think it's important for us to remember that there are people who have legitimate, rational uses for such filters. The example I gave is a person attempting to preserve her right to privacy in a public setting, not a person with limited bandwidth. How would you preserve that person's privacy? Do you think, if she wants to read about sex at the only internet connection available to her, that we ought to force her to display all images—or to require her to see absolutely no images at all, even images that she would like to see or that would not compromise her privacy? WhatamIdoing 20:05, 25 October 2011 (UTC)
Parent-directed control is not our problem. That is a problem for the parents, and it applies not only to Wikipedia but to any other website as well. To filter only Wikipedia for that purpose makes no sense at all.
Privacy in a public setting? Disable images and don't let anyone read the bold, big headlines. That's all you need to keep your privacy intact. But again: why do you only need privacy while browsing Wikipedia? Have you never used Google to search for something and been left surprised with a page where tits wiggle and cocks dance in circles? --Niabot 12:12, 26 October 2011 (UTC)
I agree that the parental control issue is the parents' problem.
Google has offered an image filter for years. It's called "Safe Search". If you're worried about what's going to appear on your screen, then you enable their image filter. Also, the vast majority of web searches at Google produce only text, not pictures, so there's no need to worry about images. WhatamIdoing 16:20, 28 October 2011 (UTC)
Google does it for textual content as well as for images, but the result is questionable. Firstly, it can be misused and is misused. This applies to proxy software that uses Google's rating to hide pages, as well as to claims by users (page owners) to exclude pages/images from the search. Secondly, it quite often fails at filtering. A simple search for "futanari" delivers so-called explicit content, even if strict filtering is on. A typical search term for the category: I don't know what it is, I want to read about it, but I don't want to be offended. Something that never works, since no one knows the preferences of a user before he has seen something he dislikes or likes. Filtering of this kind is always built upon assumptions, which is generally imperfect and, in the case of knowledge, something evil. That means: if you want to learn about something new, you have to accept that you may be offended. --Niabot 08:35, 29 October 2011 (UTC)
A simple search for futanari, even with filtering turned off, produces zero images on Google. The same is not true if you run that simple search on Commons or Wikipedia. WhatamIdoing 16:37, 29 October 2011 (UTC)
A joke? [1] --Niabot 12:55, 30 October 2011 (UTC)

Perhaps Google produces a parallel internet universe, Niabot, and we are both in the other one ;-). I just searched Google Images for "futanari" - and got more than 24,400,000 (!) results... --Alupus 15:14, 30 October 2011 (UTC)

And how many images did you see on that results page? The question here isn't about "results" or "words". It's primarily about what is normally called "photographs". WhatamIdoing 02:16, 31 October 2011 (UTC)

Guys, why are you comparing Google to Wikipedia? Google is a search engine, like Bing and so many others. They don't have a mission to be neutral and present unbiased information to everyone; we do. Google and other search engines focus on maximizing revenue, offering the same features to their users as other engines. They also have a reach further than most other websites, including a physical presence in several dozen countries, and as such have to conform to decency laws or national requests in many countries, no matter how detrimental it might seem. Their mission is maximizing revenue and being accountable to their shareholders; ours is not. Wikimedia is accountable to the donors and the contributors who keep editing and donating for the goal of gathering the sum of all human knowledge, not the sum minus controversial content. Theo10011 02:35, 31 October 2011 (UTC)

Niabot apparently believes that the existence of web search engines, which sometimes produce unexpected results, is a good excuse for the WMF to make zero effort to reduce its significant tendency to produce unexpected results.
Also, Google is a good comparison, because both the WMF and Google have a significant, similar goal: both want to keep their dominant position in the global marketplace. Google's actions indicate that the WMF's goal of acquiring and retaining readers will not be accomplished by refusing to give the end user any control at all (however partial or imperfect) over the display of controversial images.
Google could have taken Niabot's advice and told people that if they didn't want to see astonishing images when performing benign searches, they shouldn't be searching images at all. But that cost them users and generated complaints, so they implemented an optional image filter. The WMF could take Niabot's advice, but the Board has discovered that Niabot's approach is costing us users and generating complaints, so they have decided to implement an optional image filter. WhatamIdoing 15:17, 31 October 2011 (UTC)
I beg to differ. Google and the WMF are completely different: one is a for-profit multi-billion-dollar corporation with shareholders, the other is a non-profit organization based out of San Francisco that generates donations to keep the projects online; the WMF is expected to provide support to the existing community and projects. The WMF doesn't have to meet any legal requirements for meeting decency standards; Google does. Google has a physical presence in a dozen countries and offers multiple features to its users, one of which happens to be an optional image filter. Google's identity is based on innovation and creating new features for products it owns. The WMF just has to maintain operations and support the community that actually runs the projects, not take decisions that go against it. They are completely different things. I am not sure why you need to single out Niabot in this? Theo10011 15:34, 31 October 2011 (UTC)
Google and the WMF both have to comply with exactly the same U.S. decency laws. It will apparently surprise you to learn this, but there are no special exemptions for non-profits under the U.S. decency laws.
Also, under US federal law, the WMF is absolutely required to support its charitable purpose, not "the existing community". "Supporting the existing community" is not a charitable purpose. WhatamIdoing 15:52, 2 November 2011 (UTC)
If the WMF isn't able to comply with the basics of "how to support the authors in writing an encyclopedia" and to comply with US law, then it has two options: 1) stop claiming to support the creation of an encyclopedia, or 2) move to another country that values free speech and educational content. --Niabot 16:08, 2 November 2011 (UTC)
Who said I was talking about U.S. decency laws? I was talking about decency laws in other countries, where these organizations have a physical presence. What part of that was not clear? The WMF was incorporated for the purpose of managing the projects; the WMF doesn't provide any of the content, ergo its purpose is to support the accumulation of content. You can read about the goals here: WMF Goals. The stated purpose is maintaining and providing the content - content which it does not generate; the community does, the last time I checked. "The existing community", as you quoted me, which I assume you are a part of, should be left alone when it comes to editorial and content decisions. Theo10011 16:31, 2 November 2011 (UTC)
Both Google and the WMF have their primary presence in the US; therefore, those laws are the ones that are most relevant to both organizations. In fact, both Google and the WMF are declaring not just the USA, but specifically the State of California to be the relevant legal jurisdiction. The decency laws of other countries are consequently unimportant—almost irrelevant to everyone, actually, and entirely irrelevant to anyone not located in those other countries.
Google does not provide an image filter to 100% of users worldwide merely because it makes a few people in Dubai less unhappy with them. They provide it because the users want the option—just like many WMF users want the option. If they were trying only to comply with Dubai's laws, then that service would be available only to users in Dubai.
On your other point, the WMF's charitable purpose is education. It happens that the community has been (so far) a very effective means of promoting education, but if the Board ever determines that the community is preventing the Board from fulfilling its charitable purpose of providing education, then the Board has exactly two legal options:
  1. choose to fulfill their educational purpose, even if that means alienating or eliminating the community, or
  2. donate all of the WMF's assets to another charity and re-register as a tax-paying, for-profit corporation.
(In practice, I don't think they are ever likely to face a true either-or choice over this issue, because I personally believe that the community's other contributions towards fulfilling the WMF's educational purpose would outweigh even active obstruction of some education-promoting activities, but those are the actual rules, if push comes to shove. A non-profit is not allowed to prioritize its stakeholders over its charitable purpose.) WhatamIdoing 17:37, 4 November 2011 (UTC)
First, you are making the board and WMF out to be this external appendage that came up and inherited this great tome of knowledge and all the responsibilities that came with it. WMF was incorporated to continue supporting Wikipedia and other projects, and providing access to the content. It's a vehicle to steward the information, it raises money in the name of Wikipedia, not the purpose of spreading educational content, without Wikipedia, WMF and its board have no identity. Your impression makes it seem like WMF is an external organization that came up to promote education, and that purpose is so paramount that it would severe it's ties with the community or anyone else, legally and philosophically, promoting educational content is one aspect of the larger picture. It is an encyclopedia, they are just expected to steward it and keep it online. Wikipedia can not exist if they severe the community. You are honestly focusing too much on the "promoting educational activities" part. You are refusing to look at the practical picture, and instead focusing on the legal narrative.
Second, I'm not sure if you know what kind of filtering goes on in Dubai or the PRC. They restrict content not in cooperation with Google or any large website, but around them. That is a separate argument from the one I was making. I was suggesting a couple of things: first, the scope - the WMF operates on a much smaller scale than Google, which has a physical presence in multiple countries. Second, Google's content filter represents a feature to avoid shock and make the results less shocking. (to be expanded later) Theo10011 00:34, 14 November 2011 (UTC)
IMO the legal narrative matters here, because the WMF cannot support the community at all if it ceases to exist, and it cannot support the community effectively if it ceases to be a non-profit organization. WhatamIdoing 20:12, 14 November 2011 (UTC)

An activated filter may propagate an aura of safety for the visitor - but it's unlikely to work perfectly all the time. Could that be a reason, in some country, for somebody to file a potentially successful charge against the Foundation? Alexpl 19:36, 15 October 2011 (UTC)

No, and being exposed to a controversial image normally doesn't count as "harm" to a person anyway. Bad experiences cost us readers and editors. Besides, when the new Terms of Service are finally put in place, lawsuits against the WMF will need to be filed in California, which is simply going to laugh at any such claim. WhatamIdoing 15:56, 24 October 2011 (UTC)
California being the top producer of porn in the USA, if I recall correctly. :-0 ASCIIn2Bme 10:14, 24 November 2011 (UTC)

CEO's Hanover statement

I'm pasting it from the pdf for ease of reference:

"Some German editors have told me that they cannot accept an image filter of the type we originally designed. Its introduction would make them want to leave the projects. I take that very seriously. The Wikimedia Foundation will not impose an image filter on German editors that editors strongly oppose. ¶ Having said that: I am not promising that nothing will ever change on the German Wikipedia without consensus agreement. ¶ As a movement, we need to be able to be bold, and to experiment freely. The image filter is different though: it is not an ordinary feature, and so it required special, serious advance discussion. ¶ Does the Wikimedia Foundation intend to go ahead with its original plan to build a category-based image filter? ¶ The Wikimedia Foundation does not intend to build a category-based image filter. It was clear in the referendum results and the discussion afterwards that a category-based filter system would be unworkable and unacceptable to many editors. ¶ Therefore, we will not build it. ¶ So: what will happen next? ¶ The Board has not rescinded its equest to me to build a personal image hiding feature, and so I intend to do it. ¶ To that end, the Wikimedia Foundation will work in partnership with editors, engaging in discussion, until we figure out a solution that will work for everyone. ¶ Right now some ideas, such as a general images on/off switch, seem to have broad general support. A few proposals are in development, and being discussed. ¶ My hope is that we can have a good, rich, open conversation about acceptability and usefulness of different ideas, and figure this out in a way that works for everyone. I hope you'll participate on pages like Controversial content/Brainstorming and talk through some of the options there. ¶ We wanted to do something; we asked you what you thought of it. You raised serious objections, and we listened to you. Now, we're rethinking. ¶ This is how it's supposed to work."

ASCIIn2Bme 10:06, 24 November 2011 (UTC)

Hanover statement and blacklisting

It was clear in the referendum results and the discussion afterwards that a category-based filter system would be unworkable and unacceptable to many editors. Therefore, we will not build it.

That seems pretty clear and unambiguous, and the WMF is pretty much bound by it.

But does anyone else get the feeling that it leaves very few options open?

Some of the proposals are based on blacklisting as a workaround. But it seems to me that the above statement only permits blacklisting so long as you maintain a single blacklist. As soon as you have two blacklists, your system is "category-based filtering". That effectively means you can't distinguish porn from violence from pictures of prophets - all you can do is assign a zero, a one or some other score value, so that your filter will remove images across hypothetical categories (see the sketch after this comment).

Does anyone not see it that way? FormerIP (talk) 00:35, 10 August 2012 (UTC)
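A minimal sketch of the distinction drawn above, in TypeScript. All names here (singleBlacklist, topicalBlacklists, isHidden) are hypothetical illustrations, not part of any actual proposal: a single blacklist carries one scalar judgment per image, while splitting the same data into named lists reintroduces categories in everything but name.

    // Single-blacklist model: one score per image, no topical information.
    const singleBlacklist = new Map<string, number>([
      ["File:Example-a.jpg", 1], // hidden for readers who opt in
      ["File:Example-b.jpg", 0], // not hidden
    ]);

    // The moment the same data is keyed by topic, the system is
    // "category-based filtering", whatever the lists are called.
    const topicalBlacklists = new Map<string, Set<string>>([
      ["nudity", new Set(["File:Example-a.jpg"])],
      ["violence", new Set(["File:Example-b.jpg"])],
    ]);

    // With a single list, an opted-in reader can only say "hide everything
    // on the list"; she cannot say "hide nudity but show violence".
    function isHidden(file: string, optedIn: boolean): boolean {
      return optedIn && (singleBlacklist.get(file) ?? 0) > 0;
    }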

By "Category-based", I believe the statement is referring to the MediaWiki Category system, not any categorization scheme that someone might come up with. LtPowers (talk) 13:47, 10 August 2012 (UTC)Reply
The problem is that the community opposed any category-based approach, while the WMF seems to understand it as referring only to the existing Commons categories. In fact the community opposed any category-like filtering based on non-neutral categories (directional vs. prejudicial). --Niabot (talk) 17:54, 10 August 2012 (UTC)
Are there any previous discussions/statements that would help me to understand this? FormerIP (talk) 00:24, 11 August 2012 (UTC)
I'm not sure where a good example would be. Perhaps this will help: The members of the American Library Association all agree that the correct categorization for Hustler magazine is "pornography". You can legitimately label this porn magazine as being a porn magazine, because that's an accurate description, even though you know that this label provides information that some readers will use to avoid the magazine. But you can't, in their way of thinking, label it "items in our collection that children should not view", because you're then telling some library users that they aren't welcome to use the full resources of the library.
So we have Commons:Category:Photographs of sexual intercourse. You can legitimately label these images this way, because that's an accurate description, even though you know that this label provides information that some readers will use to avoid the images (e.g., by never clicking on the cat name at Commons and therefore never seeing the complete list of images in it while browsing Commons). But you can't, according to the Wikimedian objectors, put that label on a list of "stuff we know that people who freely choose a 'no sex images on my computer screen, please' option don't want to see", because making a list of categories like this one, for the purpose of letting people avoid those images, is inherently an act of censorship. (These people probably also believe that the en:MediaWiki:Bad image list, which has been operating at both en.wp and de.wp since at least 2005, is also a type of censorship that the community would never accept.)
I don't agree with the rationale given by Wikimedians for this, but I think this is an accurate description of their concern. WhatamIdoing (talk) 17:35, 11 August 2012 (UTC)
The main point of the bad image lists is to stop the use of images for heavy (unstoppable) vandalism or other misuse (for example, copyrighted images in the German WP with PD-US, e.g. Albert Einstein photographs). They are not meant to exclude images just because they are controversial. At least that is the intent stated in the header of the lists, even though some images are listed without any such reason. --Niabot (talk) 18:00, 11 August 2012 (UTC)
  • For starters, the poll was about global/universal solutions. These cannot be introduced, since projects differ. That's why project-dependent labeling is a much more workable solution, and also in line with the poll's spirit. Pundit (talk) 11:01, 11 August 2012 (UTC)
  • OK. So my reading of the situation is that there's a faction of "anti-categorist" Wikimedians who take a view such as described by WAID above. WMF has sought to appease this faction with a promise that no filtering system based on categories would be built.
    That's somewhat problematic, because the anti-categorists are likely to view that promise as bankable. They may well use it, probably with a high chance of success, to block any proposal that goes against the promise. In fact, they may not have to, because WMF may not need reminding to stick to its promises (they won't have made it lightly, and I can't see why it would be in their interests to renege).
    However, some editors in these discussions believe that the promise is not as (pardon the pun) categorical as it appears, because the WMF has since clarified that it applies only to categories within the MediaWiki "categories" setup, and that categories supported by a new technical apparatus will be fine. Alternatively, we are in a fundamentally different game now, so promises previously made are null and void.
    The really, really important question, though, is: have WMF actually made any such clarification, or are we just sticking our heads in the sand?
    Or, is there something else I am not getting? FormerIP (talk) 21:00, 11 August 2012 (UTC)
The problem is that the opposition (me included) thinks that category-based filtering is bad. This includes any non-neutral categories (like NSFW, which could mean anything depending on viewpoint). The WMF seems to have trouble understanding this point and turned it into "no existing Commons categories", which is wrong, or at least was not the intention of the opposition. Two facts are being mixed together:
  1. The current categorization scheme on Commons is based on directional labels/categories, which aren't really usable for filtering, because they have a different intention. (For example: a picture in the category violence does not have to depict violence, but has to be related to violence.) Both sides, the WMF and the opposition, agree that using the existing categories would not be good: inefficient filtering and a bad influence on the existing categorization scheme.
  2. Arbitrary, additional categories/labels/tags (like child-safe or NSFW) are not welcome to the opposition, because they would discriminate against content (violating NPOV) and would expose our (the community's) judgment on topics. This is the part which the WMF (or at least Jimbo[1]) still ignores.
[1] Jimbo opposed the resolution until a FAQ was appended to it which excludes point 2, directly continuing with the NSFW filter discussion/proposal. --Niabot (talk) 21:58, 11 August 2012 (UTC)
The WMF seems to have trouble understanding this point and turned it into "no existing Commons categories" - when did they do this? Do you have a link? FormerIP (talk) 23:59, 11 August 2012 (UTC)
Sue said on her talk page that a category-based solution is off the table [2], but Jimbo asked for a one-category-based (NSFW) filter [3]. For me it is either an indication that Sue and Jimbo have different positions, or that Sue's comment has to be understood as "no Commons categories, but others are welcome". --Niabot (talk) 08:38, 12 August 2012 (UTC)
Yes, because Commons categories would be a global solution (one size fits all), while labels are not. Pundit (talk) 12:05, 12 August 2012 (UTC)
But the community opposed any kind of category-based approach, globally or locally, since every language has minority viewpoints. That is the discrepancy. --Niabot (talk) 12:45, 12 August 2012 (UTC)
Technically, a significant minority of the community opposed a filter that assumed the contents of Commons:Category:Photographs of sexual intercourse was something that readers ticking the 'no sex photos, please' filter would not want to see. We shouldn't fall into the trap of assuming the small group of people who were upset enough to comment in English and German at Meta are representative.
Using the Commons cats does not have to be a global solution. Zh.wp could decide that Commons:Category:Photographs of sexual intercourse should be on the list of images filtered and de.wp could decide that it should not.
The idea of adding "child safe" or "NSFW" labels was rejected as being too convenient for external censorship in the original report that the WMF commissioned. The very mistakes that the Commons filter would produce (e.g., not every image in [Category:Violence] actually being violent) were thought to discourage its use for censorship. And I remind you that on WMF sites, the filter is supposed to be instantly, individually reversible, so that if you have a filtered image, and the caption is "Logo for The Anti-Violence League", then you are one click away from seeing the image anyway. It's really just a question of whether your page initially loads with the logo or initially loads with a placeholder that says "click here to see this image". WhatamIdoing (talk) 15:48, 12 August 2012 (UTC)
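A rough sketch of the reversible placeholder described above, as client-side TypeScript. The function name and the placeholder text are assumptions for illustration, not the actual MediaWiki implementation:

    // Replace each filtered image with a placeholder button; one click
    // swaps the original image straight back in.
    function applyPlaceholders(filtered: Set<string>): void {
      document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
        if (!filtered.has(img.src)) return;
        const button = document.createElement("button");
        button.textContent = "Click here to see this image";
        button.addEventListener("click", () => button.replaceWith(img));
        img.replaceWith(button);
      });
    }

The design point is that the filter only changes what the page initially loads with; the image itself is never more than one click away.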
I was talking about the 86% and 83% of contributors who opposed the filter at DE and FR for exactly the reasons I stated. A full list of arguments why the communities held their own poll and why they opposed the filter can be found here. Please don't mix the referendum results with the local polls, because the referendum asked very different questions. --Niabot (talk) 06:29, 13 August 2012 (UTC)
It is far more important to me that they asked different people. The more a person contributes, the less the person wants a filter. The de.wp poll asked top-level contributors for their opinions. The de.wp top-level contributors have then pretended that the opinions of the top-level contributors are exactly the same as those of non-contributors, which is wrong. WhatamIdoing (talk) 15:36, 13 August 2012 (UTC)
You repeat my words. Both polls had different audiences, different questions and different introductions. So let's compare what we have:
The referendum:
  • The audience mostly consisted of editors from various locations
  • It asked whether developing a filter was important
  • The introduction only described what the filter could look like and stated only advantages
The German poll:
  • The audience mostly consisted of editors from Germany, and the poll reached a respectable number of votes. Most other polls have a much lower vote count.
  • It asked whether a filter should be implemented or whether there should not be any kind of filter.
  • The introduction featured the arguments of both proponents and opponents, and anyone was free to add arguments during the poll creation process.
Now you will have to agree that neither "poll" is sufficient to give a clear or good answer. The referendum asked about importance and did not mention the implications (the talk pages are full of them). The German poll was in itself better organized, but was aimed at a regional group of editors. Can we agree that throwing in the poll results does not make much sense, because they are so different and none of them asked the readers in a fair way? --Niabot (talk) 15:55, 13 August 2012 (UTC)
If I am repeating your words, then why does my point not appear in your list? What is important to me is this:
The referendum:
  • Most of the people replying had made very few edits.
The German poll:
  • People who had made <200 edits to de.wp (>99% of users) were completely banned from participation.
The German poll is like allowing only millionaires to vote for the government. It does not produce a fair, representative view of what everyone wants. WhatamIdoing (talk) 14:59, 14 August 2012 (UTC)
That's why I asked multiple times for a real poll (one with clear questions and a complete introduction) directed at all readers. That would give results we could build upon. But until now we only have two polls which have different results and big flaws. Under these circumstances none of them can be seen as representative. I guess we had better forget about them. --Niabot (talk) 16:12, 14 August 2012 (UTC)
Good idea, although polls should be organized across projects, as different cultures differ in sensitivity - it does not make any sense to have just one large global poll. Pundit (talk) 16:27, 14 August 2012 (UTC)
  • Let's be clear about this: the community NEVER expressed any disapproval of label systems developed based on local projects' preferences, without influencing other projects (besides Commons, as a repository for such labeling, but transparent to browsing). It is also clear that the global community would never force local projects against this anyway. Pundit (talk) 10:14, 13 August 2012 (UTC)
That may or may not be true, Pundit, but I think its relevance is limited.
AFAICT, what Sue Gardner said above hasn't been rescinded, so it doesn't matter whether you have a good argument why it isn't valid, or whether you bold it, unless you can convince the anti-categorists.
I just think a bit of a reality-check is in order to prevent editors going completely off-track and spending a lot of time and energy on proposals that don't have a chance of succeeding.
The reality of the situation seems to be that the WMF board has abandoned all support for a filter. It has given Jimmy the go-ahead to continue trying to build a consensus around the idea, but it has seriously circumscribed him by stating that any proposal has to be "not controversial and supported by a vast majority of community members". I would take that to mean that the proposal would need to have virtually no opposition. Any proposal that we can very well guess will be bitterly opposed by a faction of editors is not worth wasting time on. The solution, if there ever is one, will be pedestrian and will have carefully considered and accounted for objections raised along the way.
Incidentally, I personally favour category-based filtering, so I'm not trying to paint a picture in support of my own position here. I just think we need to keep it a bit more real. FormerIP (talk) 12:35, 13 August 2012 (UTC)
I am not promoting categories. I have previously described a system of labels (there are many differences: labels cannot be browsed, are invisible unless edited, and can be project-dependent). I am against categories mainly because they have global effects. Also, I believe that local independent labels should not be as visible as categories would be - a casual Commons viewer should not be immediately informed that image X is considered as requiring an advisory on project Y. Pundit (talk) 12:51, 13 August 2012 (UTC)
This is really the type of thing I'm talking about. Most people would consider labelling to be a means of categorisation, rather than a fundamentally different thing. You might be able to convince yourself differently but, at the end of the day, it is not yourself you will need to convince. FormerIP (talk) 15:07, 13 August 2012 (UTC)
I have said it multiple times already, but you don't seem to listen. Labeling, tagging and categorizing are more or less the same thing. All are meant to create groups of items that have a common property (in a broad sense). It is no problem for us to have neutral categories, labels or tags. The problem is "properties" that are in themselves a value judgment (moral, religious, ...). That's what most opposed, independently of the question whether the filter would be global or local. --Niabot (talk) 15:39, 13 August 2012 (UTC)
They might be morally the same, but they are not technologically the same. Actually, I think that using the existing category tree is less morally objectionable than creating a separate system especially for the filtering work, because the existing category tree has a neutral basis, and a filter-specific system would not. WhatamIdoing (talk) 15:43, 13 August 2012 (UTC)
That's right. Technically they are different, but morally they are more or less the same. But since we are speaking about a moral issue, we should agree that the technological difference doesn't really matter. Yes, the existing categories are better than what was presented inside the "category"-based proposals, but they don't really suit the needs of a filter. That is, ironically, how the categories should look. They should be free from moral issues and only a "technical", neutral help for users and the community. Building a filter on top of that has at least two implications for me. If the filter uses the categories as they are right now, or as they should be, then it would be a bad filter. So there are two basic ways to improve the filter: either by creating additional moral pseudo-categories (including labels/tags/groups of categories/...) or by changing the existing category system to suit the needs of a filter. Such pseudo-categories are often seen as discrimination, since they embody the moral point of view of their creator, and changing the existing category system to suit a filter goes strongly against the NPOV principle. --Niabot (talk) 16:10, 13 August 2012 (UTC)
No, it wouldn't be "a bad filter". It would be "a non-specific filter". It would filter most of the target images as well as some non-target images, just like a medical test that misses some people with a disease, and some people get a false-positive. We could simply accept that level of imperfection, rather than trying to create a non-neutral system for tagging or trying to change the existing categories to be perfect for the filter. WhatamIdoing (talk) 15:05, 14 August 2012 (UTC)
And as I have said multiple times already (and you seem not to listen ;) labeling is FUNDAMENTALLY different from categorizing. Categories are visible to people browsing, they allow browsing per category, and they are global. Labels, as I propose them, are not visible to visitors, do not allow browsing, and are local. What is "the same thing" or not is up to the community, but I am 100% certain that the majority opposing global categories would not object to project-dependent local labels. Pundit (talk) 16:09, 13 August 2012 (UTC)
I'm not an expert, but on a technical level, wouldn't it be that, whether you choose to call it "labelling" or "categorisation", you are doing precisely the same thing: adding an item of metadata with a value to be read by a parser (or whatever it would be called)? FormerIP (talk) 16:06, 13 August 2012 (UTC)
Yes, but the effects, unlike with categories, would be different on different projects. Also, labels would not be as visible as categories (they would show rather like interwikis) and would not be browseable. Pundit (talk) 16:19, 13 August 2012 (UTC)
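One hedged reading of the label proposal as a data model, in TypeScript. The field names (categories, labels) are invented for illustration; the point is that a label would be per-project metadata consulted only at render time, never exposed as a browsable index:

    interface FilePage {
      title: string;
      categories: string[]; // global, visible, browseable by anyone
      labels: Record<string, string[]>; // keyed by project, e.g. "ar.wikipedia"
    }

    const example: FilePage = {
      title: "File:Example.jpg",
      categories: ["Photographs of X"], // one tree shared by all projects
      labels: { "ar.wikipedia": ["advisory"] }, // local effect on one project only
    };

    // A project consults only its own labels; other projects are unaffected.
    function isAdvisory(page: FilePage, project: string): boolean {
      return (page.labels[project] ?? []).includes("advisory");
    }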
Please explain to me the difference between hiding an image that is in some kind of local category and hiding an image that has some kind of local label. --Niabot (talk) 16:12, 13 August 2012 (UTC)
There are no local categories, so your question is poorly phrased. The community objected to creating a one-size-fits-all category system for all projects. Again, can you explain to me under what principles the global community could FORBID a project from using a local labeling system? Pundit (talk) 16:17, 13 August 2012 (UTC)
The common principle of NPOV. And you did not answer my question. --Niabot (talk) 16:55, 13 August 2012 (UTC)
Enforced globally onto local projects? Good luck with that. Not to mention that not all projects even agree on the NPOV principle. As for your question, I don't know how to be more clear: since there are no "local categories" of the kind you ask about (because Commons categories serve all projects and have global impact), your question does not make any sense and cannot be answered. In some hypothetical world, if all projects could create categories separately for their own purposes, we could discuss this, but what sense would it make? Categories, unlike advisory labels, are not culture-dependent and can easily be used globally. I don't see why we would want to waste time discussing local categories for images, if we all know this is completely futile. The only possible outcome is adding more text to the discussion and deterring more disputants just by the mere weight of the debate. Pundit (talk) 17:03, 13 August 2012 (UTC)
Give me a break. You have your viewpoint, I have another viewpoint. I tried to explain the goals of the projects and why every project follows NPOV in some way. Commons does not have an NPOV policy for pictures, but it indirectly has one for categories. The same thing applies to any other project that does not have an NPOV policy. You can't agree with that, and I'm fine with it. But until either you or I change viewpoints, a discussion about this point makes no real sense. --Niabot (talk) 18:20, 13 August 2012 (UTC)
It's possible that the path forward is to pick one of the Asian, African, or Middle Eastern Wikipedias where a filter was strongly supported, and do the implementation work there. Then when something's working and the people most likely to use it are happy with it, it could be offered to individual projects the same way that any other feature is offered. WhatamIdoing (talk) 15:43, 13 August 2012 (UTC)
Missionary work, you mean. FormerIP (talk) 15:55, 13 August 2012 (UTC)
If you have a look at my labeling proposal, it relies exactly on the principle that each project can make independent decisions for itself (and each can decide HOW it wants to use labels, if it uses them at all). I can't actually see why the global community would want to meddle. Pundit (talk) 16:10, 13 August 2012 (UTC)
That's great, but I think you're missing the point. Your proposal might be brilliant in a number of respects, but you can't get around the problem of objections to categories by using a different word. FormerIP (talk) 18:21, 13 August 2012 (UTC)
I'm not getting around the problem, I'm just trying not to increase the confusion by using the word "category", which has a very precise meaning on Wikimedia projects, for something which in many respects is utterly different (most importantly, it is not global, and this was the crucial objection; also importantly, it is not visible to anybody just browsing, and the stigmatizing effect of categories was also criticized; and finally, it does not allow searching within categories, which are used for cataloging). Local labels, created separately by different projects, address the main issue here: that some cultures have different sensitivities, and where we as a global community don't see any problem, they and the majority of their local readers do. Pundit (talk) 19:58, 13 August 2012 (UTC)
The objections to using "categories" were generally that the opponents believed the category tree at Commons was not functional for this purpose. Opponents believed that it would produce too many false positives and false negatives.
There are objections to using a new, separate labeling system. The principal objection is that it is not technically possible to (1) make the label truly invisible and (2) make the label functional. You can keep most users from browsing through the list, but you cannot keep users from finding out whether an individual image has been tagged (all you have to do is load the page/image once with the filters on, and once with the filters off; see the sketch after this comment). Consequently, you cannot prevent an automated process from discovering which images have been tagged as objectionable, and thus making a list of images to censor (or to turn into a shock site, but that doesn't concern our community).
The imperfections in the Commons categories (e.g., the logo of an anti-violence organization [wrongly] appearing in Category:Violence) are what prevent the existing cat tree from being useful for this purpose. It's too approximate to be handy for a censor, and additionally it's something that the censor could do right now anyway. WhatamIdoing (talk) 15:14, 14 August 2012 (UTC)
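The detection argument above can be made concrete with a short sketch: render the same page twice, once with the filter off and once on, and diff the image sets. The ?filter= URL parameter is purely an assumption for illustration; no such parameter exists:

    // Collect the image URLs a given page serves.
    async function imageSet(url: string): Promise<Set<string>> {
      const html = await (await fetch(url)).text();
      return new Set([...html.matchAll(/<img[^>]+src="([^"]+)"/g)].map((m) => m[1]));
    }

    // Anything present without the filter but absent with it must be tagged,
    // however invisible the label itself is kept.
    async function taggedImages(pageUrl: string): Promise<string[]> {
      const unfiltered = await imageSet(pageUrl + "?filter=off"); // assumed parameter
      const filtered = await imageSet(pageUrl + "?filter=on");    // assumed parameter
      return [...unfiltered].filter((src) => !filtered.has(src));
    }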
You're right - the category tree that was objected to is another difference from the labels system. When you say that "there are objections", you mean that you have them, right? Just making sure you're not referring to some prior discussion. It is possible to make a label totally invisible for people who do not click "edit" - just like an interwiki. Of course, ultimately you can find out which image is labelled - after all, this is also a function of social control. And of course you're right that by knowing this, one could create a system for filtering the labelled content out. But this is clearly a problem outside of our community (btw, we have no way of preventing third parties from creating any filters they want on any Internet content; even now it is very easy to use some Commons categories for this, and the same principle of course also applies to text articles. There is nothing we can do if *somebody* wants to block our content, although if we disagree with the principle we can easily exert pressure with a take-it-or-leave-it approach, by forbidding third-party filtering under the penalty of cutting off Wikipedia content). Also, please note that if we really introduce labels based on local projects' preferences, each image may be tagged totally differently across projects. Pundit (talk) 16:03, 14 August 2012 (UTC)
Actually, by "there are objections", I mean "this system was explicitly rejected in the report that the WMF commissioned". See 2010 Wikimedia Study of Controversial Content: Part Two#Explanations under Recommendation #10.
The CC-BY-SA license prohibits us from forbidding third-party mirrors and forks from removing material that they consider objectionable. The only choice we have is how easy and convenient we make it. Adding labels or tags that permit them to automatically and with minimal effort remove potentially objectionable material was deemed incompatible with the WMF's values. WhatamIdoing (talk) 15:43, 15 August 2012 (UTC)
If you read the link and the point you refer to carefully, you'll see that it does not refer to our own labeling systems; it objects to third-party filters. Adding our own labels WAS NOT deemed incompatible with the WMF's values in recommendation no. 10. Let me quote: ...We are adopting the position that Wikimedia projects stand against censorship, by and large, no matter who is doing the censoring, and that it is Wikimedia editors who should decide what restrictions, if any, should be placed on Wikimedia content, not people outside the organization. With this we all agree. As I wrote before, if there is a problem with third-party filters, we can exert pressure by limiting access to Wikipedia. The fact that there are zillions of mirrors of Wikipedia is irrelevant - it is Wikipedia that viewers want, and no Internet provider would risk losing access to it, because they'd lose their clients. Pundit (talk) 16:02, 15 August 2012 (UTC)
  1. We cannot "limit access" to Wikipedia's contents, and if an ISP decided to censor images based on our labeling of them—to "forget" to load a tagged image and its caption—there would not necessarily be any way for their customers to know that they were doing this.
  2. The whole point of recommendation #10 is to oppose RDF tags and similar labeling regimes that would make it easier for third parties to filter or censor WMF websites. See the third sentence: "it is possible for us to make that job easier or more difficult, by adopting regimes to tag our content in accordance with increasingly widely-used systems of content management (e.g., RDF tags), or not." The report recommends not making it easy for third parties to censor our content, whether that is direct censorship ("forgetting" to load tagged images when someone reads en:Penis or giving a fake "file not found" error if someone goes to Cat:Photographs of sexual intercourse on Commons) or indirect censorship (forking the content to a SFW version). WhatamIdoing (talk) 17:59, 17 August 2012 (UTC)
All true - we don't want to make filtering easier without a reason. This does not mean that if we, as a community, have a reason to use tags/labels, we shouldn't use them because of third parties. Pundit (talk) 12:56, 18 August 2012 (UTC)
Return to "Controversial content/Brainstorming" page.