Controversial content/Brainstorming
Brainstorming ideas go here!
Just brainstorming. More on the talk page.
Useful ideas will meet the minimum requirements set forth in the WMF Board Resolution on controversial content. These requirements include:
- The filter must "enable readers to easily hide images hosted on the projects that they do not wish to view" (no complex, obscure, or technically challenging solutions).
- The filter must be available when "first viewing the image or ahead of time through preference settings".
- It must never permanently remove any image.
- It must be "visible, clear and usable on all Wikimedia projects" (not just your home project; remember that no projects have identical policies and specifically that NPOV has been rejected by several).
- It must be usable "for both logged-in and logged-out readers" (cannot depend on services like page creation or Special:Preferences that are only available to logged-in readers on some projects).
In addition, Sue Gardner, Executive Director of the Wikimedia Foundation, has indicated that the implementation of this resolution will not involve the creation of a category-based image filter.
Category-based filtering
mw:Personal image filter provides an idea to use a category-based system to hide images on a per-user basis. In all fairness, if this is truly an open site, why would content be hidden from the public? Surely there is no shame in openness about one's own sexuality.
Geography-based filtering
If 90% of the people in your area have hidden a particular image, individual users could set a personal threshold at which they would want images to be hidden by default for themselves. For example, a user who geolocates to a predominantly Muslim region of the world might have images of Muhammad automatically obscured by default. The idea is based on the principle that people close to you will have similar views to your own. Users who didn't want to opt in could simply not use the feature.
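A minimal sketch of the threshold decision described above, assuming an opt-in setting and some source of per-region hide statistics (the interface and function names here are invented for illustration, not an existing MediaWiki API):

```typescript
// Illustrative only: hide an image by default when the share of readers in the
// viewer's region who have hidden it meets the viewer's personal threshold.
interface RegionalStats {
  region: string; // e.g. a geolocated country or region code
  hides: number;  // readers in this region who hid the image
  views: number;  // readers in this region who viewed the page
}

function hideRateFor(stats: RegionalStats): number {
  return stats.views === 0 ? 0 : stats.hides / stats.views;
}

// threshold is the opt-in personal setting, e.g. 0.9 for "hide if 90% of my region hides it"
function shouldHideByDefault(stats: RegionalStats, threshold: number): boolean {
  return hideRateFor(stats) >= threshold;
}

// Example: 9,000 of 10,000 readers in the region hid the image, threshold 0.9 -> hidden
console.log(shouldHideByDefault({ region: "XX", hides: 9000, views: 10000 }, 0.9)); // true
```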
Per-image per-user filtering
Allow individual users to easily disable specific images on an individual basis.
Per-use-of-image-in-context filtering
If an image is controversial in almost any context, it should be deleted from Commons, as is done now (if not, it is a problem for Commons alone, to be solved by the Commons community). But most images are controversial only in a specific context. This context cannot be reduced to a geographic area; it is the wiki page on which the image is used. If an image is shocking for a non-negligible number of readers of an article, it should either not be used in that article or it should be hidden. Authors and readers of an article can best discuss the right use of an image, as they already do. If an image is too controversial to be shown directly, but still relevant enough to illustrate the article, the authors can decide to show a pixelized version, with a simple MediaWiki function to be implemented:
[[image:nameofimage|pixelize]]
If authors act responsibly, they will use this feature for controversial images (if authors do not act responsibly, then the Wikimedia community has a problem that cannot be solved by technical features anyway). A random reader will first see a pixelized version. With a click on the image the reader can see the original version. Registered readers can choose not to see pixelized images at all. An example:
(Example images: the pixelized version, and the full version shown after one click.)
The example image may be appropriate and uncontroversial in an article about Streaking, but appropriate yet controversial in an article about demonstrations.
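A rough client-side sketch of how the proposed pixelize behaviour could work, assuming the parser marked such images with a CSS class (the class name "mw-pixelize" is invented here, and a CSS blur stands in for true pixelation or a server-side pixelated thumbnail):

```typescript
// Illustrative only: blur images marked by the (hypothetical) parser output for
// [[image:...|pixelize]] and reveal the original on a single click.
function initPixelizedImages(): void {
  document.querySelectorAll<HTMLImageElement>("img.mw-pixelize").forEach((img) => {
    img.style.filter = "blur(12px)";      // stand-in for a real pixelated thumbnail
    img.style.cursor = "pointer";
    img.title = "Click to show the original image";
    img.addEventListener("click", () => {
      img.style.filter = "none";          // one click shows the original version
      img.title = "";
    }, { once: true });
  });
}

document.addEventListener("DOMContentLoaded", initPixelizedImages);
```

A registered reader's "never pixelize" preference could simply skip this initialisation altogether.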
All-or-none filtering
Allow individual users to enable or disable all images on a per-site basis.
- This would be fantastically useful for me personally. I envisage images being placeholders until clicked upon and loaded. - David Gerard 23:32, 10 October 2011 (UTC)
- This should be implemented anyway as a MediaWiki feature, separate from an image filter. --ErrantX 13:33, 11 October 2011 (UTC)
- I was hoping it would be in the new mobile interface, but it isn't - David Gerard 13:46, 11 October 2011 (UTC)
- Don't most modern browsers have this option already built-in? LtPowers 00:37, 13 October 2011 (UTC)
- Doing it at the browser level would affect browsing of all sites; we are looking at filters for the Foundation projects. --Sir48 01:22, 13 October 2011 (UTC)
- Yes, but it's not at all clear why. If someone is concerned about images on Wikipedia, oughtn't they be equally concerned about images anywhere on the web? LtPowers (talk) 22:07, 15 June 2012 (UTC)
- bugzilla:32138 --MZMcBride 02:25, 2 November 2011 (UTC)
Weak "All images hidden by default" filtering
- A weak "all images hidden by default" option - where all images are greyed out as placeholders and individual images can be clicked to load if a user wants them - seems like the simplest form of filter. It has several advantages over any other system: it raises no "censorship" worries, and unlike a personal filter it is "preventative" - users who enable it and don't want to see particular images run no risk of seeing them before they can filter them. To be effective on a wiki project, users will need accurate image descriptions, but that is a goal of Wikipedia anyway. Personally, I would like to see a more flexible solution, but I am not that optimistic about a more flexible solution gaining consensus, so this is my preferred "if all else fails" option. Ajbpearce 11:18, 29 November 2011 (UTC)
Revived: w:User_talk:Jimbo_Wales/Personal_Image_Filter#keep_it_simple 75.166.200.250 21:03, 19 July 2012 (UTC)
An abandoned project
I encourage anyone interested in ideas for implementation to look at Virgo, an abandoned project due to filter hosting restrictions. With the talk of integrating the filter into the interface, Virgo may be obsoleted altogether, but I do have a fairly long list of image URLs that can help kick off the "tagging" or "blacklisting", whichever is used (tagging has been rejected on numerous occasions for several legitimate reasons). I think it would be great to see something like this, only easier to use and browser-independent. Also, addons should ideally not be required. Bob the Wikipedian 22:58, 10 October 2011 (UTC)
- Note-- this idea operates on blacklisting instead of tagging. Bob the Wikipedian 22:59, 10 October 2011 (UTC)
Mode-based filtering
Establish various situations in which a user could be browsing and automatically tailor filtering to the appropriate one - for example, "school mode", "home mode", "work mode". Perhaps combine this with an age-based filter to make modes such as "school mode" more accurate.
Age-based filtering
Store user's age and allow images to be tagged with "no one under X years old should see this by default".
- Crosses the line into subjective filtering - and unfairly censors based on whatever prejudice. If a parent wants to remove nudity for their child then that is their right. But we don't impose it on others. --ErrantX 13:31, 11 October 2011 (UTC)
- COPPA gets in the way here. If a website in America knowingly serves a child in America who is under 13, the website is not allowed to store or make available any information that enables someone to contact them directly, which would mean we'd need to disable talk pages and emailing for these users. Best not to ask their age. Bob the Wikipedian 17:55, 11 October 2011 (UTC)
Do nothing
Always an option.
- The best option. What counts as controversial varies greatly between individuals and cultures, and almost anything could be filtered because it will bother someone somewhere. A tool like this will help "sensitive readers" little and will help a great deal those who want to hide what they don't want others to see. It surprises me that this page is discussing how it will be done when there is no agreement to do it at all, and there is even strong opposition. (Translated from Spanish.) --Lin linao 21:55, 14 October 2011 (UTC)
- I concur. / That's right! Pandora, keep your box closed. Agathenon 22:08, 24 October 2011 (UTC) (de/from de:WP)
Support, unless it has been proven that there is a problem of concerning extent, what it looks like, and that a filter of any kind would be a suitable solution. Catfisheye 00:54, 25 October 2011 (UTC) (please, not another opinion essay like the Harris report.)
Support also by me. A project that has dedicated itself to free knowledge for all people should regard the controversy its content occasionally provokes as a distinction, not as a call for anticipatory obedience. (Translated from German.) --Alupus 15:53, 26 October 2011 (UTC)
- Good idea. This approach has worked fairly well during the last ten years and seems easy to implement, low on server load and quite uncontroversial. -- Carbidfischer 18:08, 26 October 2011 (UTC)
Support. While there have been loads of filter opponents lately, I have yet to see people demanding filtering. ---<(kmk)>-
- The Board of Trustees has demanded image filtering, so it's going to happen. Also, if you look at the per-language results from the image filter referendum, nearly all Asian and Middle Eastern projects have put a very high priority on offering image filtering.
- At this point, our only realistic option is to influence how it will happen, not whether it will happen. WhatamIdoing 16:30, 29 October 2011 (UTC)
- That link might interest you: Link --Niabot 16:38, 29 October 2011 (UTC)
- WhatamIdoing: "nearly all Asian and Middle Eastern projects have put..." - and what? OK, so let them go on filtering; we have nothing against it. But the second biggest community and project opposed the idea by 86%, so please respect this and do not argue that the Germans must accept the filter because a project in the Middle East wants it. You risk very much here when you argue this way. -jkb- 16:54, 29 October 2011 (UTC)
- As far as I'm concerned "how it happens" encompasses "which projects we enable it on". This page isn't about "how to do image filtering on de.wiki". It is about "how to deal with image filtering".
- The fact is that (so far) the Board has said this will happen on all projects, not just the projects that want it. Under the current resolution, Sue's power includes the ability to delay implementation for de.wiki (it's normal for such things to be rolled out slowly, a few projects at a time, and Sue could declare that de.wiki will be the very last on a very long list), but she does not actually have the authority to refuse the Board's direction that it happen everywhere. The most she really can do is to give you time and assistance in changing the Board's decision. WhatamIdoing 17:05, 29 October 2011 (UTC)
- WhatamIdoing, you wrote: "The fact is that (so far) the Board has said this will happen on all projects...", and you gave my answer, which could be shortly expressed as "So what?", to this statement in your last sentence. Even the Board of Trustees of the WMF should be able to change a resolution. --Alupus 19:21, 29 October 2011 (UTC)
- Yes, the Board (and only the Board) has the power to change the resolution. In particular, employees like Sue cannot change the resolution, no matter how much we scream at them.
- I believe that the likelihood of the change you want actually happening is less than 5%. As a result, I believe that anyone who doesn't want the worst-possible filter implemented should be making positive suggestions for ways to mitigate the possible damage, rather than letting only the pro-filter people design the filter, and pretending that the filter will never affect any of the anti-filter projects.
- This is a normal response to a threat: You should both try to stop the threat from happening and do what you can to reduce the possible damage from the threat, just in case you fail to stop it entirely. WhatamIdoing 01:57, 31 October 2011 (UTC)
- There will only be a threat if the Board decides to implement the filter against community consensus. If this happens, then the Board creates its own threat. Your wording is just a simplistic play to suggest a compromise that isn't a compromise. --Niabot 14:40, 31 October 2011 (UTC)
- In my opinion, based on what I know about organizations in general and the pressures on this Board in particular, the Board is highly likely to implement the filter on 100% of projects.
- I realize that there are people who value their "moral purity" so highly, and the effects on people in pro-filter projects so lowly, that they cannot bring themselves to be involved. But for those who believe that everyone deserves a "least evil" filter, even people at ar.wiki, rather than a maximalist filter designed solely by filter proponents—and for those who can see the handwriting on the wall—I recommend providing positive, constructive suggestions in addition to engaging in a political campaign to (possibly) change the Board's mind. WhatamIdoing 15:11, 31 October 2011 (UTC)
- Well, I guess that the Board members very likely know better how organizations in general work and how pressure on organizations works. They might remember what happened to Shell's turnover when they intended to sink the Brent Spar. They might also look into the planned introduction of the E10 fuel in Germany and how that plan grandiosely failed. And I guess they're clever enough to draw the right conclusions. Actually, they very likely already drew those conclusions by letting Sue declare some days ago that nothing will be implemented before January 2012. Very clearly the Foundation became aware that a culmination of the discussion during the fundraiser could develop into a PR disaster. --Matthiasb 17:46, 31 October 2011 (UTC)
- I'm sorry, but this made me laugh. The Board "let" Sue declare that a software feature that has long been expected to take at least three months, and probably six, to create, will not be implemented in less than two months? The thing hasn't even been designed yet! Of course it's not going to be running next month!
- If you'll go look at the discussions from months ago, I also "declared" that unavoidable fact back at the end of the summer. Anybody who can count as high as three pages on a calendar could have "declared" that whenever they wanted. Pointing out that there are fewer than three months left in the calendar year does not constitute a change of plans. WhatamIdoing 15:58, 2 November 2011 (UTC)
- @Matthiasb: Please don't blow up the strawmen, and remember that sometimes you can't expect to find anything other than bent arguments. ;-) --Niabot 16:12, 2 November 2011 (UTC)
Implement an existing content rating standard
Implement one of the existing content rating systems, like PICS (obsolete) or POWDER (officially a W3C Recommendation since 2009), in MediaWiki. Every revision/file would have associated content ratings in various categories; transcluding a page or displaying an image would automagically bump up the content rating of the outputted page to whatever is higher in each category.
Nobody actually uses these content tags, and virtually no browsers support them, but it's a start (and if Wikimedia projects started using POWDER, it might finally catch on). This shifts the burden of actually filtering to the user, and lets the user choose exactly which categories they want to filter.
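A small sketch of the "bump up to whatever is higher" rule described above; the category names and numeric levels are invented, and a real deployment would map them onto whatever rating vocabulary (PICS, POWDER, or otherwise) was chosen:

```typescript
// Illustrative only: the rating of a rendered page in each category is the maximum
// over the page's own rating and the ratings of everything it transcludes or displays.
type Rating = Record<string, number>; // category -> level, higher = more restricted

function combineRatings(page: Rating, included: Rating[]): Rating {
  const result: Rating = { ...page };
  for (const part of included) {
    for (const [category, level] of Object.entries(part)) {
      result[category] = Math.max(result[category] ?? 0, level);
    }
  }
  return result;
}

// Example: an unrated article transcludes media rated { nudity: 2 } and { violence: 1 },
// so the rendered page ends up rated { nudity: 2, violence: 1 }.
console.log(combineRatings({}, [{ nudity: 2 }, { violence: 1 }]));
```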
A new group right/flag to review images
I am mostly in the "Do nothing" camp on this issue, but as long as we're only talking about enwp, a viable alternative that does not rely blindly on the categorization system would be implementing a new "image reviewer" flag on en.wp and maybe on Commons. This method would create a list of reviewed images that can be considered objectionable and that could be filtered/blacklisted. The difference is: 1) a similar system already works for "article reviewer", 2) it does not rely on the existing categorization system, and 3) it would create a new process that won't be fool-proof but will probably be harder for vandals to exploit. The technical implementation of this would probably be easier too, and the community can decide on offensiveness on its own through a request for review or something similar, in case of contentious decisions. Whether other projects can have this should of course remain their decision; they can choose to completely opt out of this flag, similar to "article reviewer", and for that very reason the enwp community should vote on this itself - not random readers, but a straightforward vote on wiki. Theo10011 01:31, 11 October 2011 (UTC)
- How do you recruit enough of these reviewers to categorise 11 million images? How do you identify the images that some might consider objectionable? And how do you avoid placing another burden on the volunteers? Any system that places the burden of reviewing all images on the uploaders and volunteers will lead to criticisms that the community had failed to tag certain images within a reasonable time, and arguments as to whether certain images should or should not be in certain filter blacklists WereSpielChequers 11:27, 11 October 2011 (UTC)
- The process could be semi-automated, taking its lead from existing categories, and only presenting as yet unclassified images to the reviewer. Many categories are very unlikely indeed to contain any media that might fall within the purview of the filter. What is vital is to have good category definitions. These should be neutral and descriptive, focusing on actual media content rather than any potential offence caused. And the definitions have to be formulated in such a way that they can reliably distinguish between http://commons.wikimedia.org/wiki/File:Michelangelos_David.jpg and http://commons.wikimedia.org/wiki/File:Erected_penis.jpg – the second is probably a candidate for filtering, and the first isn't. Beyond that, the rule should be "When in doubt, leave it out", unless filter users complain. We only want clear cases to be filtered; being over-inclusive will just mean that users will find the filter more trouble than it's worth, as they end up having to click time and again to reveal pictures they do want to see. --JN466 14:47, 11 October 2011 (UTC)
- I manually add images to my image filter daily as I casually browse Wikipedia and wouldn't mind doing this for the world instead of myself; assigning proper metadata or adding it to appropriate blacklists would take slightly longer, but it's doable. Bob the Wikipedian 17:58, 11 October 2011 (UTC)
- WereSpielChequers, 11 million is the number of images on Commons; I meant enwp only in my example. The burden has always been on the volunteers, whether with the category system or something new. This flag system already has a proven track record on en.wp through a similar article reviewer flag; image review, on the other hand, would require even less of a commitment than article reviewer. People would just need to click on an image and it could be automatically added to a daily list; any contentious decision can then be disputed and voted on for 7 days, after which the image would be included in the filter or not. I doubt we'd have a shortage of people willing to review and boost their own edit count, if we make this process as semi-automated as possible. Theo10011 19:31, 11 October 2011 (UTC)
- Would you entertain using objective descriptions (photos of real-life genitals, photos of murder, images of Muhammad, etc.) rather than using a subjective criterion of offensiveness? Objective descriptions have the advantage that the user can decide what they might find offensive, and tailor the filter accordingly. --JN466 10:25, 12 October 2011 (UTC)
Trust the editorial judgement
If pictures are not educational at all, delete them. If pictures are not an important part of an article, don't include them. If you don't want to be shocked by penis pictures, don't read penis-related articles.
For the very few cases where educational material can be too shocking, editors should have the option to use a blurring or click-to-view feature.
- One person's idea of "not educational" may not be the same as another's, so deletion is not the answer. Or rather, deletion won't be seen as a solution by those who want to censor unless you delete anything which anyone finds grossly offensive, and by that stage most people will see deletionists as the equivalent of dark-age vandals defacing statues they deem idolatrous. But the last bit - the "rule of least surprise" - is probably uncontentious until you see the implications. Did the DE community see a vulva as an unsurprising thing to put on their main page? The rule of least surprise is very difficult to apply across cultures. WereSpielChequers 11:29, 11 October 2011 (UTC)
- The vulva incident was a single case, and it was more than a year ago. It did not happen again and, as Achim Raschka pointed out, the article was on the front page because of its quality. There was no tangible effect whatsoever: no school banned Wikipedia because of this, nor did the media scandalize it.
- The "rule of least surprise" is something I wonder about. If you want to spread wisdom and information, you have to surprise the unknowing. It's an inherent quality of an encyclopaedia. As a rule I would look for something like "don't shock people to make a point" or "remember that kids watch this". The *least* surprise is nothing you should hope for or aim at. --81.173.133.84 12:09, 11 October 2011 (UTC)
- The German main page appearance of the vulva image is a red herring. The fact is that this was not a problem in Germany, because Germany has a different attitude to nudity (while actually having stricter youth protection laws than the US when it comes to pornography vs. nudity), and German schoolchildren see images like that at school. What is a real problem is that if I search for electric toothbrushes in Commons, the second result shows a woman masturbating with one. If I search for cucumber, the first page shows me a woman with a cucumber stuck up her vagina. If I search for pearl necklace, the first image that comes up is of a woman with sperm on her neck. And some of the illustrations in the Wikipedias are more explicit than some people might like to see (e.g. [1], [2], [3]), and more explicit than most (if not all) reputably published educational sources would show. --JN466 13:41, 11 October 2011 (UTC)
- Well, wouldn't that suggest that the wiki search function actually isn't optimal? Googling the cucumber on commons.wikimedia.org does not show anything obscene within the first four or five result pages (and I didn't try further). Indeed I ask: are we having this discussion because our search function isn't very good? --Matthiasb 14:56, 11 October 2011 (UTC)
- Good question. What criterion does the Commons search function use to order search results? Anyone? --JN466 15:07, 11 October 2011 (UTC)
- Matthias, I've dropped a mail to the Commons list: [4] --JN466 15:24, 11 October 2011 (UTC)
- I remember reading about "What Links Here" being sorted alphabetically-- up to a certain date (don't remember what date), and all indices added after that date are sorted by the date they were indexed. I wouldn't be surprised if the image search works in a similar fashion. Bob the Wikipedian 18:02, 11 October 2011 (UTC)
- I suspect that our search software is less sophisticated than Google's. If you look at the descriptions the first hit has a description that starts "Pearl Necklace" subsequent hits start "A white" or "a golden". Change the description and you probably change the sequence. WereSpielChequers 18:49, 11 October 2011 (UTC)
- I changed the description, but looking at the other examples I am doubtful that is the reason. We'll know in a few hours. I think it more likely that Maarten has it right. Answering this conclusively really needs a developer's input. --JN466 23:30, 11 October 2011 (UTC)
Make this logged in users only
Some IP addresses spend months only being used by one person, some are shared by many. The same applies to PCs. Unless and until we can find an alternative way to restrict an image filter to an individual decision, any image filter needs to work for logged in accounts only. Otherwise we would be allowing some people to decide what others can see. WereSpielChequers 11:28, 11 October 2011 (UTC)
- That largely defeats the object (i.e. a reader tool). But then again the proposed technical solutions aren't to save settings by IP - but via cookie. So it is on a per computer basis :) You could change IP and have the same settings. --ErrantX 13:29, 11 October 2011 (UTC)
- Per-browser, per-domain, per-protocol basis, actually. For example: Firefox cookies won't be set for Internet Explorer; secure.wikimedia.org cookies won't be set for en.wikipedia.org; https cookies won't be set for http (currently?). --MZMcBride 13:38, 11 October 2011 (UTC)
- I don't see a problem with this idea; it can become one of the perks to registering a username. We only require that anyone can edit, not that anyone can browse safely. Bob the Wikipedian 18:05, 11 October 2011 (UTC)
- Just to note that the problem with the idea is that it will not work unless the Board alters its resolution, as it requires that "the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers". --Moonriddengirl 12:35, 15 October 2011 (UTC)
- Readers are welcome to create accounts - we don't delete accounts that don't edit. So this would be viable, and while we know that many editors don't share PCs or cookies, there are some countries where the Internet culture is largely based in Internet cafes. WereSpielChequers 18:45, 11 October 2011 (UTC)
A cookie is a form of login (w/o an explicit account); and existing categories may work well enough
A cookie-based solution would work for everyone, logged in or not. It may require minor changes to the wmf:Privacy policy, but since this is their resolution, let them worry about that. I suspect that keeping state on the servers for every reader (because you wouldn't be able to fit many categories in the cookie itself) may require a fair bit of up-scaling of the WMF hardware, but let the WMF-paid developers worry about this.
If the filter were based on existing categories, instead of requiring new ones, it would not be a big deal for editors. For example, if a user wants to hide the commons cat "Depictions of Muhammad" and all its subcategories from his own computer, let them. I would not support special categories for "controversial" stuff, as creating and maintaining those would be an editors' nightmare. However, because WMF bought the idea from the Harris report that they can be like Google, if WMF wants to implement their own categories for controversial stuff, and to pay WMF staff to maintain those either manually or via software (as Google does for their "safe search"), then more power to them.
A basic filtering-out of images based on existing categories will work well for many people, especially if they could preload some profile with certain categories in it. E.g. a user from an Arab-country IP shows up with no cookie set: ask him/her on the first page visit whether they want to load a WMF-recommended "safe images" filter for such countries, which would exclude human-like images of Muhammad. You could have another one for China, and perhaps a generic one for NSFW stuff (anything remotely related to nudity, sex, etc.) to offer anyone on a first no-cookie visit. Using existing categories, the filter will likely exclude more stuff than strictly necessary, but you can't have it both free (on the WMF side) and extremely discerning (on the user side); that's not the Google way. The WMF may be able to hire wannabe censors for almost nothing to create profiles based on existing categories if they put up the right ad. ASCIIn2Bme 02:42, 8 November 2011 (UTC)
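A minimal sketch of what the cookie side of this could look like for logged-out readers. The cookie name and the shape of the preferences are invented here, and, as noted above, a full category list would not fit in a cookie, so a real system would more likely store a profile id or a key into server-side state:

```typescript
// Illustrative only: persist an opted-in reader's filter choice in a cookie.
interface FilterPrefs {
  profile?: string;           // e.g. a preset recommended for the reader's region
  hiddenCategories: string[]; // a short list of additional category names
}

function savePrefs(prefs: FilterPrefs): void {
  const value = encodeURIComponent(JSON.stringify(prefs));
  // Scope follows the cookie limits discussed earlier: per browser, domain and protocol.
  document.cookie = `imgFilterPrefs=${value}; path=/; max-age=${60 * 60 * 24 * 365}`;
}

function loadPrefs(): FilterPrefs | null {
  const match = document.cookie.match(/(?:^|;\s*)imgFilterPrefs=([^;]*)/);
  return match ? (JSON.parse(decodeURIComponent(match[1])) as FilterPrefs) : null;
}

// Example: a reader accepts the suggested profile and hides one category group.
savePrefs({ profile: "default", hiddenCategories: ["Depictions of Muhammad"] });
console.log(loadPrefs());
```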
Create the categories/tags in such a way that they are inaccessible to people outside the Foundation
Would alleviate the concern that the filter category/tagging system makes censorious countries'/ISPs' job easier than the already-present categorisation system does. (Was part of the original Harris recommendation.) --JN466 13:31, 11 October 2011 (UTC)
- I'm not sure the Foundation needs another job to do; you'd want to delegate this responsibility to a task force similar to the flagged revision checkers. Bob the Wikipedian 18:06, 11 October 2011 (UTC)
- Indeed, that is what I had in mind. What I meant was simply that outsiders should not be able to see the tags. But given that the site and the user's computer have to exchange information about this, so the site can decide which version of the page to send to the user's computer, I am doubtful that it would be possible to perform this information exchange in such a way that outside censors could be barred from detecting and manipulating the relevant data (e.g., ensuring that all users within a country or ISP receive the version with the image hidden, and then disabling the ability to display the image). So unless there were some clever encryption, say, I fear censors could adapt the system to their own uses. On the other hand, let's not forget that the existing category system already lends itself to the same kind of censorship, as do our publicly accessible "bad image" lists, and that those who want to censor us do so already. --JN466 23:27, 11 October 2011 (UTC)
Refine the existing category system so it forms a suitable basis for filtering
We have many existing categories that could serve as the basis for a filter system -- categories like http://commons.wikimedia.org/wiki/Category:Facial_cumshot or http://commons.wikimedia.org/wiki/Category:Oral_intake_of_semen
It may be possible that this is all we need to define a small number of categories (each containing a group of existing Commons categories) that users might want to filter. (Also see #Category-based_filtering above.) Again, using existing categories would alleviate the concern that we are creating a special infrastructure that censors could exploit. --JN466 15:03, 11 October 2011 (UTC)
- Commons would become the go-to site for "adult" entertainment. I don't like it. Bob the Wikipedian 18:07, 11 October 2011 (UTC)
- It already is. Many of the most viewed media files are sexual media, and I think the required changes to the category system would really be minor. --JN466 23:37, 11 October 2011 (UTC)
- Some categories would work that way, others wouldn't. But if you go down the category route you place an unfair burden on the volunteers - people will get criticised for uploading an image with a penis without properly categorising it, or for categorising an image without adding some of the categories that a censor might want but an encyclopaedist might consider irrelevant to that article. WereSpielChequers 18:54, 11 October 2011 (UTC)
- I think it has to be clear that no filter would be perfect, and no user (newbies especially) should be hit over the head for failing to categorise a file correctly. We simply cannot guarantee users of the filter that they will never come across an image they would have preferred to be filtered. Neither can Google – even with Strict Search enabled, our "jumping ball" still is the first result Google shows. --JN466 23:37, 11 October 2011 (UTC)
- Definitely not the way to go. That would lead to the creation of categories for possibly offensive material, which I'm strongly opposed to. It also does not solve the problem. It only divides one question ("Is this image offensive?") into two questions:
- Does the image belong inside the category?
- Does this category need to be filtered?
- --Niabot 19:49, 11 October 2011 (UTC)
- No. I am not talking about creating new categories, but looking at the existing categories like http://commons.wikimedia.org/wiki/Category:Facial_cumshot or http://commons.wikimedia.org/wiki/Category:Oral_intake_of_semen which can be quite neutrally described, and the user given a choice of whether they want to see them or not. --JN466 23:37, 11 October 2011 (UTC)
- So, if I had deep concerns about some images I don't want children to see, I would start manipulating the category system on Commons - move the subcategory "Bikinis" from "Women's clothing" to "Nudity", for example. Those edit wars will be entertaining, but it is obvious the idea won't work. Alexpl 08:04, 12 October 2011 (UTC)
- Frankly, I don't think people will bother that much about something that only affects users who have opted in to using the filter, does not change everybody else's viewing experience, and at any rate can be overridden by the user. Questions like moving "bikini" to "nudity" aren't a realistic problem either. The community makes a decision whether bikinis are nudity or not (I'm pretty sure the vast majority would say they're not), and anyone who edit-wars against consensus is blocked. Simple, and no different from any other editorial dispute. Categories that would need adjustment would be categories like http://commons.wikimedia.org/wiki/Category:Nudity which conflate real-life nudity with nudity in art. We don't want to offer an option that filters Rembrandt. --JN466 10:06, 12 October 2011 (UTC)
- I am smart and wouldn't edit-war against consensus - I am a highly motivated religious activist* and would call in help from some real-life supporters, let them get Commons accounts, and coordinate my actions with them there. The handful of volunteer workers who are really and truly active on Commons can only slow me down, but they can't stop me. Yesterday I was isolated - tomorrow, with a Commons-based filter system, I can force my medieval morals on the whole world, and all I need is five or six supporters. (* not really)
- I just want to be sure you get the global scale of this enterprise and consider all its consequences carefully. This is not about questionable characters who are turned on by the idea of uploading photos of their private parts to Commons. They could be scared away easily by a rule requiring them to identify themselves while uploading such images. Alexpl 11:07, 12 October 2011 (UTC)
- Re: "We don't want to offer an option that filters Rembrandt." Actually, the only real user-centered way is to allow filtering out any categories, including Rembrandt. ASCIIn2Bme 02:51, 8 November 2011 (UTC)
A proposal
I've drafted a possible compromise at User:WereSpielChequers/filter. Feedback welcome. WereSpielChequers 20:33, 11 October 2011 (UTC)
- Most interesting approach so far. It does not touch Commons and prevents people from building some sort of "censorship infrastructure" there. Instead, users create local blacklists with image URLs, which are compared with other lists. As soon as a user decides to activate the filter, the images listed on most of those blacklists are "banned", but the ones listed on only a few are displayed anyway. So the lists of "extremists" wouldn't have any significant effect. I don't like the idea of filtering in general, but this is good work. Alexpl 22:52, 11 October 2011 (UTC)
- Thanks, though it doesn't work on a straight ratio of filter and permit decisions on each image, but on similarity of preference (a rough sketch of this kind of matching follows this thread). So if lots of filterers add photos of Everton players to their filter, but you are one of the few who accepts those images while hiding images of Liverpool players, then the system would learn that you want it to ignore the filter settings of those who want to filter out Everton players, but give extra weighting to people like you who block images of Liverpool players. WereSpielChequers 23:37, 11 October 2011 (UTC)
- Hm, it seems impossible to create some sort of menu where people can predetermine almost every type of preference - so the system has to learn every individual's preferences by analysing their blacklist. I'm no tech guy, but it sounds like a mathematical challenge, based on a comprehensive database for every logged-in user, consuming a considerable amount of computing capacity - is that realistic? Alexpl 06:34, 12 October 2011 (UTC)
- Yes it avoids the whole menu approach, and thereby sidesteps the difficulties of defining porn, violence and various religious filters. It does require processing power that a few years ago would have been expensive, but isn't so unreasonable nowadays. The difficulty in scoping the hardware is that it is difficult to estimate how many people would opt in to this. At one extreme it could just be a small percentage of our currently active editors, at another it could be a significant proportion of our hundreds of millions of readers. One possibility is that it would prompt tens of millions of readers to create accounts, and creating an account probably makes you more likely to donate time or money to the project. WereSpielChequers 07:39, 12 October 2011 (UTC)
- It would not work for Sue's idea of "least astonishment", since new images, as bad as they may be, would not be banned until they appear on a list. But the idea is worth investigating further. Alexpl 09:28, 12 October 2011 (UTC)
- Actually there is an option to hide all images, and there are options that would default new images to hidden if unfiltered, but it might be best not to introduce them until we had a reasonable knowledge base in the filter system - remember, when and if this goes live, all images will start off as "new". As for least astonishment, I see that as more of an editorial decision: don't place a potentially offensive image where people would not reasonably expect it. WereSpielChequers 20:46, 13 October 2011 (UTC)
- You asked for feedback, so: my first impression is "it sounds complicated." In your proposal, basically some sort of software in a black box would do the "magic" of identifying "similar filter preferences". I understand the reasoning behind your idea: the "filter categories" would be managed 100% by a feedback-triggered algorithm and not by any humans - they would be nameless, "objective", adaptive, and no manpower would go into their maintenance. That's an aspect I like. However, I'm sceptical as to how well it would work in practice. One crux might be: your proposal relies strongly on users' "filter this" input. In practice, I think, most users would not get to give a lot of such input - because they don't spend their lifetime in our wikis. They only look up articles here and there, and only very occasionally would they come across an image they don't want to see. (Arachnophobes don't usually visit a lot of spider articles.) So I fear maybe too little user input would be available for the proposed system to work. --Neitram 09:13, 25 October 2011 (UTC)
- The coding and the matching algorithms would probably wind up a little complex, but the user experience should not be. I very much doubt that it would completely fail for lack of people willing to opt in and use it, but actually that scenario would mildly amuse me. What is likely to happen is that some casual users who never normally come across anything contentious would find that this doesn't give them the option they want: to tick a couple of boxes and then not see all sorts of images except the particular type that offends them. For those users we would have to explain that this is a big complex world and we can't assume what particular filter they would want, but if they set their options to block all images and do a few searches for the sorts of things that offend them, then they won't need to block many images of each sort of thing they don't want to see before the system learns their preferences. Of course that does mean they need to see the file names, captions, and alt text to decide whether it was the sort of image they'd want to block, but how else could you set personal filter preferences? WereSpielChequers 19:09, 7 November 2011 (UTC)
Even for someone with Google's money I would not propose something like this. I can summarize your proposal in one word: utopian. The WMF servers can barely keep up serving contents. ASCIIn2Bme 02:56, 8 November 2011 (UTC)
- Ten years ago that would have been a fair assessment; now it would depend on the efficiency of the code, the number who opted in to it, the size of the various filter lists, and the speed with which it performed the matching. I would anticipate that many millions of our images would effectively fall into one group - images that no one objects to - and that this would make the processing rather more practical than if every image were objectionable to some but not others. As for the actual server load, well, the worst-case scenario is that we postpone the matching part of the feature until Moore's Law has brought computing costs down to the point where it can run. The options "never show me this image again" and "hide all images" would not use much IT resource. I doubt that would be necessary, but I'll try and get a comment from one of our devs as to whether they think this is beyond our current capabilities. WereSpielChequers 14:14, 8 November 2011 (UTC)
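A rough sketch of the "similarity of preference" matching discussed in this thread (an illustration only, not the actual User:WereSpielChequers/filter design): weight other filterers' decisions about an image by how closely their past hide/show choices agree with the current user's, then hide the image only if the weighted vote to hide wins.

```typescript
// Illustrative only: a crude "people whose choices match yours count for more" vote.
type Decisions = Map<string, boolean>; // image name -> true = hide, false = show

function agreement(a: Decisions, b: Decisions): number {
  let shared = 0;
  let agree = 0;
  for (const [image, choice] of a) {
    const other = b.get(image);
    if (other !== undefined) {
      shared++;
      if (other === choice) agree++;
    }
  }
  return shared === 0 ? 0 : agree / shared; // 0 = no overlap, 1 = identical taste so far
}

function predictHide(me: Decisions, others: Decisions[], image: string): boolean {
  let hideWeight = 0;
  let showWeight = 0;
  for (const other of others) {
    const choice = other.get(image);
    if (choice === undefined) continue;   // this user has not judged the image
    const w = agreement(me, other);       // similar users get more influence
    if (choice) hideWeight += w; else showWeight += w;
  }
  return hideWeight > showWeight;         // default to showing on a tie or no data
}
```

An image nobody has judged stays visible under this sketch, which matches the concern raised above that new images would not be caught until they appear on someone's list.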
Commons search
- As I said on the proposal's talk page, it's an interesting concept. One deficit I see is that it does not answer the needs of casual (not logged-in) users, who are unfortunately likely to represent the vast majority. Their experience of searching for electric toothbrushes or cucumbers on Commons would remain unchanged. I should be able to do a "safe search" on Commons as an anonymous user, just like I am able to on Google. --JN466 10:17, 12 October 2011 (UTC)
- The search on Commons itself was never meant to be filtered, and the community was never asked if it should be. So let's focus on the local Wikipedias. For any Commons problem - address it on Commons :) Alexpl 10:39, 12 October 2011 (UTC)
- You are mistaken. Commons was a big part of the 2010 Wikimedia Study, and "The personal image hiding feature would give logged-in and anonymous users the ability to easily hide images they do not wish to view on Wikimedia projects." --JN466 11:43, 12 October 2011 (UTC)
- Ah, I was misled by the arguments for a filter presented in previous chats and mails, since those were focused on Wikipedia only - but I guess WereSpielChequers' proposal would work on Commons as well. To protect every visitor on Commons, regardless of how he got there and on which subsite he starts, from any possible astonishment is impossible - unless you force every visitor to use a filter automatically, which, well, would make many people unhappy. Alexpl 16:40, 12 October 2011 (UTC)
- I am more worried about the potential unhappiness of people who are looking for an image of a kid's toy on Commons and see first of all one inserted up a woman's vagina. [5] Or people who are looking for an image of a toothbrush / Zahnbürste, and see one used for masturbation: [6]. Are you happy with this, or if not, how would you suggest addressing this? --JN466 20:25, 12 October 2011 (UTC)
- It seems - at least I gather that from your comments on the German Wikipedia - that you assume massive filtering on Commons would increase the quality of the English Wikipedia, because editorial judgement tends to fail there in some cases. As I said earlier: address it on Commons, to make sure sexual acts are only found via search if you really do look for them. This won't help much with the problems you see on en.wp, but I guess it can be done without an active filter, just by modifying the search function/parameters.
- Can we agree to go on with WereSpielChequers' proposal now? Alexpl 06:57, 13 October 2011 (UTC)
- As far as I can tell, implementing something like Google "moderate safe search" on Commons would require categorisation. As things are, for any given search term, the algorithm simply looks for matches in (1) file names (2) file description pages. It can't (and never will be able to) tell what the image shows.
- Sexual images on the German Wikipedia, by and large, indeed seem to be more conservative – less explicit – than the ones on the English Wikipedia, but that does not really affect this discussion here. --JN466 12:11, 13 October 2011 (UTC)
- Nobody on this planet, except for yourself, can label or categorize inappropriate material to match your taste. With 194 states and legal systems, and even more sociological societies, one moral-based category system on Commons that naturally works for all of them is inevitably going to fail. There will be no progress in discussing this topic, as you may have learned by now. This proposal is the only one, except for a full image block, which has the potential to come close to a solution. So please stop throwing in red herrings like cucumbers and toothbrushes. We got it. Alexpl 13:49, 13 October 2011 (UTC)
- It doesn't have to work for everyone. If it's good enough for Google, it's good enough for us. I find Google safe search useful, and wish we had an equivalent function in commons. (Try searching for "drinking".) --JN466 19:35, 13 October 2011 (UTC)
- I see the search issue as a different one to the filter issue, and I'm currently focussing on the filter issue. But anyone concerned about the search anomaly could try amending the description of some of those things that come high or low on searches. I'm pretty sure that the image which tops the Pearl necklace search does so at least partly because our description starts with the words Pearl Necklace. Improving descriptions is an easy thing to do. WereSpielChequers 20:07, 13 October 2011 (UTC)
Tagging contents off-limits should be off-limits
I am against any kind of system that allows the tagging of any content as off-limits. I am the father of two children, who search Wikipedia and the web almost daily. My wife uses the Internet too. Everybody in this family is aware that when you search you can stumble onto things you didn't want to find. It happens, and there is an easy way out: use the small back arrow. Children are best protected by using the web with their parents. As history shows us, no wall has ever stopped a determined foe (or child, in this case).
Tagging files of any kind is very useful for a potential Big Brother. Many a government would love to be able to say "Hey, man, why didn't you flag Tiananmen, Dalai Lama, democracy or lesbian off limits?" I know that's not the purpose of the present proposal. But tagging makes the coming of such a thing easier. Remember that Google couldn't resist some governments' pressure.
I've been looking at the categories cucumber and toothbrush. Nothing unusual there. The category nude women includes 89 files. Google gives 29.7 million answers to the same search. My conclusion is that Commons turns out to be a very bad porn provider but a very useful resource of knowledge - for my family and for everybody else. If you're offended by this statement, use the back button. B25es 16:15, 12 October 2011 (UTC)
- Wilde says that "scandal is gossip made tedious by morality. [···] A man who moralises is usually a hypocrite". I agree with that. I do believe there is a problem, but it's not our problem. Moralists have to assume that this project will not accept content censorship, no matter the shape it takes. I won't. Never. Wikisilki 22:13, 12 October 2011 (UTC)
- If you had looked yesterday, you would have found the image in the electric toothbrushes category; it was removed from the category because of discussions going on in various places [7]. --JN466 00:03, 13 October 2011 (UTC)
- That's a problem of entitling and categorization, not of the content of the image itself. Wikisilki 00:38, 13 October 2011 (UTC)
- @Jayen466: Well, your comment looks offensive and out of place to me, given that this talk page was opened for users to express respectfully what they think. Should your comment be deleted, then? --Andreateletrabajo 01:16, 13 October 2011 (UTC)
- Reworded. Pardon my levity. --JN466 12:15, 13 October 2011 (UTC)
@Jayen466: This example clearly shows that the real problem is with the categorization. But censoring content can't fix it, nor achieve anything. It only reminds me of an old joke. Surely, it is urgent to improve the categorization of Commons and to ensure compliance with the editorial policies the projects already have. These are the real problems. To solve them, we don't need an image filtering procedure, nor do we need to delete anything. That would not only be against the fundamental principles of Wikipedia and the other projects, but it also seems very naive. Now the joke: A man finds his wife having sex with her lover on the couch. His best friend asks him, very worried: – "What a problem! What will you do?" – "I've already solved the problem", says the man – "I've sold the couch". Mar del Sur 14:49, 14 October 2011 (UTC)
- This reminds me that an editor on de.WP has proposed a context-dependent image filter without categorisation; there's a translation of it here. This seems another avenue worth exploring; it's very simple and elegant, leaves the individual communities in charge, and addresses the question of non-logged-in users. (He's said he might start a thread about his proposal here over the weekend). --JN466 23:24, 14 October 2011 (UTC)
- Done, see below. --Neitram 15:20, 15 October 2011 (UTC)
- Great, thanks! --JN466 22:00, 15 October 2011 (UTC)
Buttons to switch images off and on
An OFF/ON button on each project page could allow readers to switch off the display of images (except GUI elements). Readers could then first read the image captions and would no longer be surprised by any image content. It should be possible to switch individual images, and all images, back on. By default all images would be shown, but users should be able to choose a general switch-off in their preference settings; for readers without a login, a similar solution could be provided by a - permanent? - cookie. Commons would not have to anticipate and categorize content as "controversial". --Martina Nolte 19:45, 12 October 2011 (UTC)
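As a rough illustration of how such a global OFF/ON button might behave on the client side, here is a minimal TypeScript sketch. It assumes, purely for illustration, that content images carry a class "content-image", that the button has the id "toggle-images", and that the choice is stored in a cookie so it also persists for readers without a login; none of these names are existing MediaWiki features.

// Minimal sketch: a global "images off" toggle persisted in a cookie.
// "content-image" and "toggle-images" are illustrative names, not existing MediaWiki hooks.
const COOKIE_NAME = "imagesOff";

function imagesOff(): boolean {
  return document.cookie.split("; ").some((c) => c === `${COOKIE_NAME}=1`);
}

function setImagesOff(off: boolean): void {
  // A long-lived cookie so the choice persists for logged-out readers too.
  document.cookie = `${COOKIE_NAME}=${off ? "1" : "0"}; path=/; max-age=31536000`;
}

function applyPreference(): void {
  const hide = imagesOff();
  document.querySelectorAll<HTMLImageElement>("img.content-image").forEach((img) => {
    img.style.visibility = hide ? "hidden" : "visible";
    // Captions stay readable; clicking a hidden image reveals that one image again.
    img.onclick = hide ? () => { img.style.visibility = "visible"; } : null;
  });
}

document.getElementById("toggle-images")?.addEventListener("click", () => {
  setImagesOff(!imagesOff());
  applyPreference();
});

applyPreference();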
- You cannot search Commons for images and have all images switched off. As I said above, people who are looking for an image of this kid's toy on Commons see first of all one inserted up a woman's vagina. [8] People who are looking for an image of a toothbrush / Zahnbürste see one used for masturbation as the second result: [9]. Do you think this is okay? And if not, how would you suggest addressing it? --JN466 20:29, 12 October 2011 (UTC)
- Those are typical worst-case scenarios. If I type in things like dog, cat, ball, etc., then I don't find anything like this unless I go very deep into the search results. But does it help to filter such content, or would it be more appropriate to add a disclaimer mentioning that the search will find anything related? Consider a simple example at Google. Search for "Futanari" and turn on "strict filtering". What you will find are images with penises as big as the figures they belong to. It will happen anyway, filtered or not. Such rare cases can't be taken as a rule of thumb for anything else. --Niabot 18:08, 13 October 2011 (UTC)
- Well, someone who looks for futanari knows what they're looking for. This is different from entering a non-sexual search term and getting sexual material in return. As we've seen, this happens for searches like cucumber, Zahnbürste, toothbrush, electric toothbrush etc.; it also happens with others like Black, Caucasian, Asian; Male, Female, Teenage, Woman, Man; Vegetables; Drawing, Drawing style; Barbie, Doll; Demonstration, Slideshow; Drinking, Custard, Tan; Hand, Forefinger, Backhand, Hair; Bell tolling, Shower, Furniture, Crate, Scaffold, and galipette (French for somersault), to name but a few more examples of searches which return sexual media of varying explicitness in their search results. --JN466 00:18, 17 October 2011 (UTC)
- The question is: is this a real problem, or something to be expected? Sexually related images mostly get a whole lot of attention. One of the reasons is rightfully failed deletion requests, which often lead to detailed, unique descriptions. That is something many pictures lack in comparison. Additionally, they often get renamed and have a "title" which contains the search words. That gives them a fairly high rank inside the search engine. So by treating this kind of picture differently, we actually increased their rating in searches. A search-related Streisand effect occurs.
- With "drawing style" you picked an example with an image of my own. That is an great example for high rating due to deletion requests. It lead to a fairly detailed description and now it shows up before other "non explicit" images, which also contain "drawing style" inside the description. I doubt that it would be ranked so high without the attention it got. Would it have been treated like anything else, it would be far down. In the end "we have a problem" because we made it problem. Congratulations on that. ;-) --Niabot 07:07, 17 October 2011 (UTC)
- I guess that would be "poetic justice" :), but having looked at the files in question, it has little to do with the problem. The files (even yours) I looked at had the names and descriptions found by the search engine when originally uploaded. It's not surprising that the search function finds them; we can't have thousands of sexual images, images of porn actresses etc. and expect them never to show up in searches. It's in the nature of things that they will. --JN466 17:01, 17 October 2011 (UTC)
thumb/hidden
Here is my idea, translated from [10]:
- There will be no central categorisation of all images in different filter categories.
- Instead, a new attribute "hidden" will be introduced in Mediawiki for the embedding of an image into a page. "hidden" has the following effects:
- Unregistered users, by default, will see the image "hidden", meaning it is not visible.
- One click on a show/hide button will display the image, another click will render it hidden again.
- For registered users, there will be a new option for "hidden images" in the user preferences: a) Show all images, b) default, c) hide all images. b) means: show all images except those with the "hidden" attribute.
- For unregistered users, a similar option could be offered via a cookie mechanism.
- There will be no separate categories.
- One and the same image can be "hidden" in one article, and normal (not hidden) in another. (Principle of least surprise)
- The same image can be "hidden" in an article in one language version (e.g. Arabic Wikipedia) and normal (not hidden) in an article in another language version (e.g. French Wikipedia). Each Wiki community can decide upon the use of the attribute according to their own policies. This allows better consideration of cultural aspects. It also continues the principle we've always had, that each wiki community is free to decide case-by-case as well as make general guidelines for the inclusion or exclusion of images in articles.
- This solution will leave it to the individual wikis to decide which images are encyclopaedically relevant (informative, illustrative) – but still "critical" – in which articles. Images of spiders could be handled in the same way as images of Muhammad, sex or violence.
- The "hidden" feature could also be made such that individual wikis can enable or disable it.
- The presentation of images outside of an article context – e.g. in galleries of Commons categories or Commons search results – would require a separate solution, perhaps to be implemented in a subsequent phase.
My personal opinion: I prefer the status quo, which has served us fine for 10 years, to any new feature as dramatic as the global image filter that has been planned by the WMF, which opens doors to all kinds of abuse of the feature. However I do present the above solution for discussion. I think it is less dangerous and closer to the principles upon which the Wikimedia projects were founded -- namely, autonomy of the independent wiki communities over their contents, what to show to the reader and what not. The "hidden" feature will open a third option between the inclusion or exclusion of an image in an article, at the place where the image is used, and that's all it does. --Neitram 11:46, 15 October 2011 (UTC)
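To make the proposal above more concrete, here is a minimal client-side sketch in TypeScript of how a per-use "hidden"/"clicktoview" attribute could be rendered. It assumes, as an illustration only, that the parser wraps such images in a container with the class "mw-clicktoview" and that the reader's preference is one of the three values listed above; it is not an existing MediaWiki implementation.

// Sketch: rendering of images embedded with the proposed "hidden" attribute.
// "mw-clicktoview" is an assumed class name, not existing MediaWiki output.
type HiddenPref = "showAll" | "default" | "hideAll";

function shouldHide(markedHidden: boolean, pref: HiddenPref): boolean {
  if (pref === "showAll") return false;   // a) show all images
  if (pref === "hideAll") return true;    // c) hide all images
  return markedHidden;                    // b) hide only images with the "hidden" attribute
}

function renderClickToView(pref: HiddenPref): void {
  document.querySelectorAll<HTMLElement>(".mw-clicktoview").forEach((box) => {
    const img = box.querySelector("img");
    if (!img) return;
    let hidden = shouldHide(true, pref);
    const button = document.createElement("button");
    const update = () => {
      img.style.display = hidden ? "none" : "";
      button.textContent = hidden ? "Show image" : "Hide image";
    };
    button.addEventListener("click", () => { hidden = !hidden; update(); });
    box.prepend(button);
    update();
  });
}

// An unregistered reader would get the default behaviour:
renderClickToView("default");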
- That's the same concept: a filter for content = censorship. This just changes the place where the filter is applied, to the Wikipedias instead of Commons. Wikipedians already decide which images to put in the articles, what to show to the reader and what not, so there's no need for another filter to hide some of them. Wikisilki 14:47, 15 October 2011 (UTC)
- Yes, Wikipedians do decide which images to put on the articles. The above idea is the same kind of "censorship" (or call it "editorial judgement") we have always done -- just with an additional third option, a middle way, between the inclusion and exclusion of an image in an article. --Neitram 15:42, 15 October 2011 (UTC)
- Another note: the above idea is not an opt-in solution -- because neither is the status quo. Currently, if Wikipedians decide to exclude an image from a page, the users will not see it when they visit the page. They do not even get any hint that there was, perhaps, an image which has been removed from the page, unless they study the page history. This means editors currently make a very rigorous choice about what the visitors will or will not see. I think if we make the "hidden" feature active by default (opt-out), it will allow us more liberty in the inclusion of images. We can then dare to include, "hidden", images that we currently exclude because we think they are too controversial. --Neitram 16:11, 15 October 2011 (UTC)
- I'd like to propose another possible name for this attribute: not "hidden", but "clicktoview". This name, "clicktoview", would emphasize the fact that the purpose is not so much to hide images from viewing, but to require an extra click to view the image. --Neitram 08:17, 17 October 2011 (UTC)
- Yes, that would be even better. --JN466 17:02, 17 October 2011 (UTC)
- This proposal is technically workable, but it leaves the community with an area of ambiguity. Firstly, how does the community identify, especially without invoking one's own POV, that an image is offensive to enough people to merit hiding? And conversely, if one person says they find it offensive, how does one dispute that and say it isn't sufficiently offensive to merit hiding? Secondly, this proposal has the drawback that it is public, and therefore the filtered images could be screened out by a censor. Also, context depends on the surroundings: would you differentiate between an image that is appropriate within an article and that same image in the abstract of that article as viewed from the main page? WereSpielChequers 14:19, 20 October 2011 (UTC)
- As to the first question, the decision would be made collaboratively in the same way that editors currently decide on the inclusion or exclusion of an image on a page. We would follow our current processes: the wiki principle, talk page discussion, project guidelines, if necessary admin action to stop vandalism or edit wars, etc.
- As to the second point: yes, an institution willing to censor could collect a blacklist of all images that are, or have ever been, used with the 'clicktoview' attribute anywhere. Or they could try to block the 'Show' button, maybe by hacking into our CSS or JS. However the result would be a 'sweeping blow' censorship because, lacking filter categories, it would hit religious, sexual, medical, botanical, anatomical, etc. media alike. They could already now create 'better' blacklists by using the category system on Commons, if they wished to.
- As to the third point, yes, that's one advantage of the proposal. The community could decide, for example, that a vagina image should be displayed 'normal' in the Article "Vagina", but 'clicktoview' when included in a featured article teaser on the main page. --Neitram 09:37, 21 October 2011 (UTC)
Opt-in version of this proposal
To satisfy the criteria in the board resolution, the proposal should be opt-in, rather than opt-out. The proposal could be amended as follows to satisfy this requirement:
2. Instead, a new attribute "hidden" will be introduced in Mediawiki for the embedding of an image into a page. "hidden" has the following effect: users will see a caption accompanying the image, saying "(You can opt out of viewing images like this. To do so click here.)" Clicking on this will hide the image, as well as all other images with the "hidden" attribute that the user might encounter, and instead display a caption saying, "(You can opt in to viewing images like this. To do so click here.)" --JN466 22:26, 15 October 2011 (UTC)
Further customisation of the "hidden" attribute would be possible as well. For example, there could be several "flavours" -- "hidden-X" for sexually explicit material, "hidden-V" for explicit violence, "hidden-M" for images of Muhammad. This would ensure that users who have opted out of viewing violent images would still be able to see sexually explicit images, etc. --JN466 23:32, 15 October 2011 (UTC)
Further refinements of this proposal are possible:
- The message displayed along with any image that has a "hidden" (perhaps we should rather call it "filter") attribute set could be simplified. All it needs to say really is: "You can opt out of viewing images like this."
- Clicking on "opt out" would take the user to a screen saying,
You have opted out of viewing [sexually explicit images/violent images/images of Muhammad/etc.]. Return to [article name]. Set up other options for your personal image filter.
- The personal image filter set-up would also be available via user preferences, in a new "personal image filter" tab.
- The preferences tab could also offer the option of suppressing the display of the "You can opt out of/in to viewing images like this" messages.
- Clicking on a hidden image would always display the image, regardless of whether the "You can opt in to viewing images like this" message is displayed or not. --JN466 17:27, 16 October 2011 (UTC)
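A small sketch, in TypeScript, of how this opt-in variant with several "flavours" could work on the client side. The data-filter attribute, the storage key and the caption texts are assumptions for illustration only; the opted-out flavours are remembered in localStorage so the choice applies to every image of that flavour the user encounters.

// Sketch of the opt-in variant: filterable images are assumed to sit in a container
// with a data-filter attribute, e.g. data-filter="X" (sexually explicit), "V" (violence).
const KEY = "optedOutFlavours";

function optedOut(): Set<string> {
  return new Set(JSON.parse(localStorage.getItem(KEY) ?? "[]") as string[]);
}

function saveOptedOut(flavours: Set<string>): void {
  localStorage.setItem(KEY, JSON.stringify([...flavours]));
}

function render(): void {
  const hiddenFlavours = optedOut();
  document.querySelectorAll<HTMLElement>("[data-filter]").forEach((box) => {
    const flavour = box.dataset.filter ?? "";
    const img = box.querySelector("img");
    if (!img) return;
    const isHidden = hiddenFlavours.has(flavour);
    img.style.display = isHidden ? "none" : "";

    box.querySelector("a.filter-toggle")?.remove();  // avoid stacking links on re-render
    const link = document.createElement("a");
    link.className = "filter-toggle";
    link.href = "#";
    link.textContent = isHidden
      ? "(You can opt in to viewing images like this. To do so click here.)"
      : "(You can opt out of viewing images like this. To do so click here.)";
    link.addEventListener("click", (e) => {
      e.preventDefault();
      const next = optedOut();
      if (isHidden) { next.delete(flavour); } else { next.add(flavour); }
      saveOptedOut(next);
      render();  // re-apply to every image of that flavour on the page
    });
    box.appendChild(link);
  });
}

render();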
- This opt-in variant sounds like the attribute should not be called "hidden" but "filterable". It doesn't really go well with the principle that the same image can be visible by default in one article and hidden by default in another article. "You can opt out of viewing images like this" suggests that the user can filter the image, when in fact the image may be unfiltered and therefore visible in another article. A more correct wording would be "You can opt out of viewing images like this in those articles where the image has been marked as filterable." It does not sound like an intuitive thing.
- The other idea to expand the attribute into a number of categories makes it worse, I feel. First, it would likely lead to the "infinitely growing number of categories" problem. Second, it could lead to abuse of the attribute because the filter information can be used for external censorship -- we had discussed this before. Third, it falsely implies that a user who activates the "opt out" feature would never see any image of that respective category anymore.
- I may be mistaken, but this variant sounds to me like a weaker variant of the original image filter idea, and also like a weaker variant of my original thumb/hidden idea. My thumb/hidden idea was not meant to be an "image filter" at all, but a completely different approach: We wouldn't start to categorize images according to certain criteria. We would look at images on an article-by-article basis, and then make a ternary, rather than the current binary, decision as to their inclusion and default visibility. It is based on the belief that all images that we, as a community, choose to keep in our repository, have encyclopedic value and are an important part of our work and our educational mission. My philosophy is not to categorize our visual contents to give users the technical means of a self-censored, image-reduced version of our wiki. I think if a user has a really serious personal problem with looking at certain of our images, they should go for an external, client-side solution that really filters images -- preferably all images from the Web, not just Wikipedia & sister sites. My philosophy is to keep the inclusion, exclusion and visibility of all our contents in the hands of the editors, and on a page-by-page basis. This is our daily work: We add. We delete. We replace. All with the conjoint aim of creating "the perfect" article. We often have great disputes about contents such as "controversial" images. That's normal. A "hidden" feature would not be a panacea, but it would offer a "third option" that might, perhaps, be useful. --Neitram 23:21, 16 October 2011 (UTC)
- Thank you. These are well-considered points. You're right that the wording "You can opt out of viewing images like this" would potentially be misleading, if the same image were considered essential and unfilterable in another article. On the other hand, I can't foresee this being a frequent occurrence, and perhaps it's something that could be explained on a "WP:Personal image filter" page, or on the user preferences tab. I also agree that the attribute would have to be called "filterable" (I said as much above). I need to think about the rest of what you said, but can say now that I really appreciate what you're contributing here, Neitram. :) --JN466 00:06, 17 October 2011 (UTC)
Simple personal category-based filtering
A variant of this filter proposal, designed for greatest possible simplicity:
- There will be no "Content Filter" categories. Only existing categories will be used.
- Registered users can specify a personal list of categories on Commons they want to filter.
- The number of categories in this list will not be limited. (If necessary for technical reasons, limit it to 100 or 1000.)
- Mediawiki will check for all images whether they are in any of the user's filtered categories or subcategories of them. If so, the image will not be displayed. In its place, a "show/hide" button will be displayed. Clicking this button will show the image, clicking it again will hide it again. For a sketch demo of the interface, see to the right.
- There will be no Flagging mechanism.
Pros:
- Very straightforward and simple.
- Using the existing categories means little extra work for the feature.
- No issues with limited "made for filtering" categories, little danger of abuse by external institutions.
- Each user can tailor their personal filter very flexibly by picking from the myriad of Commons categories those that match their personal filter needs.
- If a user's filter needs are not well covered by existing categories, they can add a new (sub)category, split categories, etc. -- according to the rules at Commons.
Known issues:
- The feature is fully opt-in. All users would by default see all images. If the "we have an opt-in image filter now, so we can include more controversial images" effect happens, the boomerang will hit.
- Using the filter feature requires registration. Unregistered users would suffer the boomerang most. It's therefore not a good solution for expansion in non-western cultures.
- Creating the filter means work for the user. Possibly unpleasant work as category names can not always be guessed; they have to browse Commons for the kind of material they don't want to see. (Possible solution: offer "commonly used filter templates" somewhere for copy&paste.)
- Images, especially newly uploaded images, are not always placed in the correct categories.
- People might start moving images in and out of categories more than they are doing now. We might get to see "category edit wars".
- When categories are renamed or moved about in the category tree, will the users whose filters are affected by it be notified?
- How to expand the filter feature to images on local wikis, in addition to those on Commons?
--Neitram 22:42, 20 October 2011 (UTC)
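For the subcategory check described above, a minimal sketch in TypeScript: walking upward from a file's categories towards their parents and testing membership in the user's filter list is equivalent to asking whether the file sits in a filtered category or one of its subcategories. The small "parents" map is invented sample data standing in for a slice of the Commons category graph.

// Sketch of the check "is this file in any of the user's filtered categories or their
// subcategories?". The "parents" map is invented sample data; a real implementation
// would read the category graph from the database or the API.
const parents: Record<string, string[]> = {
  "Electric toothbrushes": ["Toothbrushes"],
  "Toothbrushes": ["Oral hygiene"],
  "Sex toys": ["Human sexuality"],
};

function isFiltered(fileCategories: string[], filterList: Set<string>): boolean {
  const queue = [...fileCategories];
  const seen = new Set<string>();
  while (queue.length > 0) {
    const cat = queue.shift() as string;
    if (seen.has(cat)) continue;            // the category graph can contain cycles
    seen.add(cat);
    if (filterList.has(cat)) return true;   // the file sits under a filtered category
    queue.push(...(parents[cat] ?? []));    // keep walking up towards parent categories
  }
  return false;
}

// A user filtering "Human sexuality" would have the first file hidden behind a
// show/hide button, while the second stays visible:
console.log(isFiltered(["Sex toys"], new Set(["Human sexuality"])));              // true
console.log(isFiltered(["Electric toothbrushes"], new Set(["Human sexuality"]))); // false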
- Yes, that seems like a workable proposal as well, especially if there is a list of commonly used filter categories. It could also be used to adapt the results of the Commons search function to the user's requirements, by filtering out the categories the user has no wish to see (cf. Controversial_content/Problems). --JN466 05:59, 21 October 2011 (UTC)
- Neitram, could you have a look at User_talk:WereSpielChequers/filter#Thanks_for_this_proposal.2C_WereSpielCheqrs? How would you address the concerns Sue expressed there? They weren't specifically related to this proposal, but do impinge upon it. Best, --JN466 14:58, 21 October 2011 (UTC)
- I think Sue is right, both in her workload argument and in her point that our current categories are not ideal for filtering purposes because they were not made with a filtering use in mind. Say, for example, that a user has a light form of arachnophobia -- they don't want to see spiders with hairy legs, but they are fine with smooth-legged spiders. Our current categories would not allow such a fine-tuned filter because "hairy legged spiders" is not an existing category. True, such a category could be created, but we should employ mechanisms that keep the categories in a system as consistent as possible and within reasonable, manageable numbers.
- I think however that having separate "categories of objectionable material" just for filtering purposes would be an even worse solution: however they are implemented, they would come with a substantially higher risk of abuse by external censors. The categories we currently have are already a considerable risk. That's why I suggested the above as a possible solution that would be relatively easy to implement (I think), means little additional workload for the community, and tries to minimize new risks -- and accepting that it means suboptimal filtering. --Neitram 10:14, 22 October 2011 (UTC)
- I agree with you, and was thinking along similar lines in proposing #Refine_the_existing_category_system_so_it_forms_a_suitable_basis_for_filtering above. --JN466 15:28, 22 October 2011 (UTC)
- That approach is entirely unacceptable. Changing the category system to introduce prejudicial labels will never ever get my approval. That can rightfully be called censorship through the backdoor. It does not only influence the view of people who use the filter, but also the view of people who don't see any need for filtering at all. Touching the categories for a filter purpose shall never happen. --Niabot 17:40, 22 October 2011 (UTC)
- I tend to feel similarly. I too worry that if we adapted or grouped our existing categories for filtering purposes, we would increase the danger of abuse for censorship. The better our categories reflect the filter preferences of a potential censor, the greater the risk. --Neitram 09:37, 25 October 2011 (UTC)
- I'm rather inclined to agree with Niabot that a filter which uses our existing categories is unacceptable, though my main concerns about it are quite different. I note that both Sue Gardner and her deputy have now accepted that an image filter based on our current categories won't be acceptable to the community. What do you guys think about this alternative? WereSpielChequers 15:24, 23 October 2011 (UTC)
- I replied above. (better to discuss each proposal in its own section) --Neitram 09:45, 25 October 2011 (UTC)
- My main concerns regarding your proposal are: 1. there is no easy start-up, such as just selecting from a list what I feel I can do without seeing; 2. there is no support for non-registered users. These are the main points we should try to address to make the proposal stronger.
- Neitram above mentions the boomerang effect, which is actually a serious concern. If we end up with a (filterable) Commons video of someone defecating in w:defecation, a (filterable) w:crush film in w:animal cruelty, etc. – because "Wikipedia is not censored and anyone who doesn't like it can filter it out" – we won't have achieved a more professional, educationally effective and user-friendly presentation, but rather the reverse, because the presence of such media, even if filterable, will absorb all the attention, and no one will read the text.
- There is a fundamental problem with the belief that it doesn't matter for Wikipedia whether a media file is upsetting or offensive, the way it matters to reliable sources who "have to make a profit" and so on. That little conceit that "we are not censored and they are" is just that – a conceit. Reliable sources do include explicit media. We just had an example in the news, the videos of Gaddafi's capture and killing (shown with viewer discretion warnings, but shown nonetheless). Historians, sexologists and writers of medical reference works use explicit images. Scholars don't restrain themselves when it gets in the way of their educational purpose. So by relying on precedent in reputable sources for the style of illustration, just like we rely on reputable sources for all of our text, we are not forced to forego the use of explicit images. Our sources use them too. But if we think we can do "better" than our sources, by adding media that no self-respecting published source would use, we are actually more likely to degrade our work.
- It's a matter of reader psychology. If we feature a video or explicit image of, say, defecation, urination, ejaculation, sexual intercourse, animal cruelty, a killing, torture or rape in an article on these topics, we are hitting the sensory apparatus of the human animal who reads our article at a very primal level. It has an immediate and involuntary impact on their state of mind. If a reader comes to a page and their first reaction is Whoa?!!!, and the next thing they do is leave, or click on the discussion tab to complain about a media file we have included, then we have failed to educate that reader, because our editorial judgment ends up attracting more attention than the article content. When we observe reliable sources omitting types of media that we may have available in Commons, it is a warning sign that we should not ignore lightly. It is not necessarily a sign that the sources were "afraid" to include an image, because of possible repercussions, lack of press freedom, or fears of complaints. ("They have to listen to complaints because they need to make money. We don't!" is really the most perverse and educationally counterproductive rationalisation imaginable in this context.) It may rather be indicative of the way experts like to present their topic, because they know what works in an educational context, and avoid what doesn't. Disregarding educational expertise when it comes to illustration is just as dangerous as original research is in text. We know we are vulnerable in this area, due to our demographics, and if at the end of this we end up with yet more offensive media, only that now they're filterable, they will still be offensive, and will still attract opprobrium and drama. --JN466 23:46, 23 October 2011 (UTC)
- Whether media in general make sense to support an article is not the point here. If it were, you could just vote for a "hide all images" button and we would be done. Alexpl 09:25, 24 October 2011 (UTC)
Re: "Using the filter feature requires registration." No it doesn't; see my comment above. I agree however that using existing categories is the only cheap way for the WMF to implement this. Anything else will require paying their own staff, because the editing communities are largely opposed to image filtering, so the WMF is unlikely to gain support/manpower for such efforts there. ASCIIn2Bme 03:32, 8 November 2011 (UTC)
Re: "Creating the filter means work for the user." Not necessarily. One can come up with a few good enough profiles containing some category bundles; I've also addressed this in my comment in the other section. They will not be perfect, but perfect is the enemy of good. And the WMF servers can even offer to enable specific profiles for IPs from specific regions (thanks to IP geolocation). ASCIIn2Bme 03:36, 8 November 2011 (UTC)
And after further consideration, it is possible to use one element proposed by WereSpielChequers above: if a user adds a category to his filter, then it is possible to provide correlation-based hints as to what other categories the user might desire to filter out. Online merchants like Amazon etc. commonly do this for purchases. (But you can't base the entire filter on this trick, nor can you realistically make it all happen without user intervention.) ASCIIn2Bme 04:28, 8 November 2011 (UTC)
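A sketch of such correlation-based hints in TypeScript, using invented co-occurrence counts (how often two categories appear together on users' filter lists). Real counts would have to be aggregated and anonymised server-side; the numbers and category names here are placeholders.

// Sketch: suggest further filter categories from co-occurrence counts. All data is invented.
const coOccurrence: Record<string, Record<string, number>> = {
  "Nudity": { "Sex toys": 40, "Erotic art": 25, "Spiders": 1 },
  "Graphic violence": { "Executions": 30, "Surgery": 12 },
};

function suggestCategories(added: string, limit = 2): string[] {
  const related = coOccurrence[added] ?? {};
  return Object.entries(related)
    .sort((a, b) => b[1] - a[1])   // most frequently co-filtered first
    .slice(0, limit)
    .map(([category]) => category);
}

// A user who adds "Nudity" to their filter might be offered:
console.log(suggestCategories("Nudity")); // ["Sex toys", "Erotic art"]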
- The Foundation has stated that it will not build a category-based image filter; that idea is off the agenda. [11] --JN466 12:31, 21 November 2011 (UTC)
- Agreed. A personal private image filter could be implemented with or without using the category system. There would be some minor gains in efficiency and an increase in complexity if the ability to pick categories were added to such a system; but on the downside, all the reasons why it would be a bad idea to use the Commons categories in an image filter would also kick in. Obviously we can't stop people reopening the issue, but I would suggest that anyone who wants to propose that we use the existing Commons categories within the system first deals with at least 0.1% of the categorisation backlog on Commons. WereSpielChequers 09:48, 11 December 2011 (UTC)
- At the moment, that means about 200 files. I've done more than that since discovering that there was a problem a few months ago, so it's an achievable goal. It's actually kind of fun. I recommend going to commons:Category:Media needing categories and picking one of the larger subcategories (the small cats often have just difficult files left) and turning on HotCat. WhatamIdoing 16:41, 12 December 2011 (UTC)
- commons:Category:Images from the Geograph British Isles project needing categories by date has over a million images, so 0.1% of just that part of the backlog would be more than a thousand images. After the images stored on projects other than commons this could be our biggest current backlog. The whole backlog is of course somewhat larger and if our GLAM strategy succeeds is likely to continue to grow rather than reduce. With a combination of Hotcat and catalot you can do a fair bit, I've done thousands of them. My estimate is that anyone who tackles a thousand uncategorised images will realise that the total task is already quite substantial, and getting it cleared and keeping it so would be a tall order even if the community strongly supported a category based filter, had the categorisers to promptly categorise new images and could review the existing 11.6 million to identify contentious images. WereSpielChequers 17:16, 12 December 2011 (UTC)
- Then perhaps those images are not tagged normally. The "Uncategorized" template is present on fewer than a quarter million images. WhatamIdoing 22:10, 12 December 2011 (UTC)
Trilemma
I'm starting to feel we've hit on a core dilemma (or trilemma). Any solution we implement will make image filtering for certain categories of "controversial content" either:
- opt-in for registered users only: boomerang threat, doesn't solve many issues
- opt-in for everyone, including unregistered users: same boomerang threat, plus censorship threat
- opt-out for everyone: unacceptable, de facto censorship
My current conclusion: We might just have to realize that image filtering, as such, is a bad idea for a project such as Wikimedia. The solutions I think worthy of following are all going away from the image filter idea:
- do nothing (still my preference), and explain to people who complain why we don't implement an image filter. There are good reasons. Our entire project is based on the philosophy of making free content available to everyone, not on handing people tools to block their own access to our content.
- client-side filter (opt-in, but reduced censorship risk compared to server-side filtering)
- no filtering, but addressing the reported issues (e.g. with articles, search function) in other ways
- uncategorized show/hide feature (blurred, pixelated or greyed out) -- opt-out, but possibly acceptable when easy for users to disable. Low censorship risk when uncategorized.
- spin-off wiki projects for minors, Muslims, readers sensitive to sex/nudity, etc.
--Neitram 16:37, 25 October 2011 (UTC)
- It's a good summary, though I'm less pessimistic with regard to the censorship threat in option 2. Our present categorisation system in Commons already gives anyone so inclined everything they need to pick out a few hundred relevant categories in a couple of afternoons. The boomerang threat however applies to option 2 just as it does to option 1 (even more so). --JN466 22:42, 25 October 2011 (UTC)
- Yes, true, thanks. I added it above. --Neitram 08:37, 26 October 2011 (UTC)
- I'm not so convinced that there would be a boomerang effect. If we had an image filter we might gain some editors who aren't comfortable being here without one, and they would tend to be more prudish than our existing editors are at present. Whether this trend would cancel out any boomerang effect is pure speculation; what I suspect will be bigger than both effects is the ongoing greying of the pedia. Our editor base is rapidly getting older, and in my experience older editors are generally more likely to be pragmatic about such things; sometimes they may even have got wiser with age. I believe that if there is any change at all, it will be towards greater concern as to whether our use of images in articles leans towards the way academic sources would illustrate an article or the way tabloids would cover the topic.
- But in either event, provided the image filter choices were personal and private I don't think that an opt in filter would be likely to alter discussions as to what images should be used in an article. WereSpielChequers 20:21, 14 November 2011 (UTC)
- You may be right (I certainly hope so). The wider question whether our approach to illustrations should as far as possible reflect the approach used by reputable sources, or whether the community should be able to formulate its own editorial standards for illustrations, based purely on talk page consensus, and without reference to sources, is not resolved, at least not on en:WP. That, along with the impression that at least some of our illustration habits depart from standards applied in the literature, is part of the underlying tension. --JN466 12:33, 19 November 2011 (UTC)
- What you suggest is to look at reputable media and copy their kind of illustration behaviour. I guess that this won't be an option, since there is no common editorial standard for illustrations in reputable media. It varies strongly, not only from publisher to publisher, but even between different works from the same publisher. While mass-media publications often tend to exclude graphic content (to please all readers), you will definitely find more illustrations in specialized articles/books/papers (not meant to suit any reader, but to represent the truth as closely as possible). Which way should Wikipedia try to go? Are we going for mass media, to reach as many readers as possible (possible donors, but excluded as contributors[1]), or do we try to depict the world as it is, even though it might be beautiful and ugly at the same time?
- [1] Under the assumption that easily offended readers aren't ready to understand the purpose of an encyclopedia and don't make good authors anyway. --Niabot 15:12, 19 November 2011 (UTC)
- I think the "follow the sources" is a reasonable, achievable model. It doesn't tell us exactly which image to pick, but it does help us by giving broad guidelines. The goal behind the "reputable sources" approach is to filter out the most extreme, not to provide narrow standards.
- For example, a picture on Commons was described in a discussion at Meta earlier. It shows a woman using a cucumber (Gurke) as a sex toy. We are "not censored", but we don't put that image in the articles about the vegetable either. One way we know this photograph is likely to be a less-than-ideal illustration for an article about a vegetable is that 100% of reputable sources about cucumbers refuse to use such images to illustrate the vegetable. Since no reputable sources use this type of image, our readers would be astonished if we did. WhatamIdoing 19:55, 19 November 2011 (UTC)
- That is a bad example, because it is obvious even without any sources. You should change the article to "masturbation" and then ask the same question again, while comparing sources that are aimed at a broad audience with sources that are aimed at a specialized audience. Now go a step further and choose other topics from war, religion or ethnicity. Does this help you to understand what the problem is, and which decision the (usually well-informed, specialized) author has to make? --Niabot 09:02, 20 November 2011 (UTC)
- It's a good example, actually. And it works for articles on wars or religion as well. Take the images in w:Rape of Nanking or w:Holocaust. These images can be found in quality sources, and if someone complains about them, this can be pointed out to them. In a case like w:Muhammad, sources are split between those which show only calligraphy, mosques etc., and those that (also) include figurative images. So to reflect the POV balance neutrally, we should mainly focus on calligraphy, mosques, etc., but include a couple of figurative images as well. When it comes to something like w:Goatse, you'll quickly notice that sources writing about it don't include the picture, but just give a URL. We can do the same. It's doable. We balance sources like that all the time for textual content. We always strive to reflect sources neutrally, representing viewpoints in proportion to their prominence, and our approach to illustration should aim to do the same – broadly reflect approaches taken by reputable sources. --JN466 16:17, 20 November 2011 (UTC)
- Niabot, you keep assuming that people already have a basic grasp of what is appropriate and encyclopedic. So it's a bad example if I'm trying to teach you something new, because the simplest examples are already "obvious" to you. You're looking for advice on the subtle, complex questions. However, it's a very good example if I'm dealing with the kind of people who actually need to have this general approach explained.
- I can give similar examples for every category you've named:
- War: Combat troops aren't always under fire, and we have photographs of them sitting around laughing and drinking beer. Is that representative? No. How do I know that it's not generally considered representative? Because the reputable sources don't use this as their general illustrations.
- Religion: The scandal of priests sexually abusing young people is horrifying. Is a photograph of a weeping abuse victim an appropriate image for en:Catholic Church? How do I know that it's not generally considered representative? Because the reputable sources don't use this kind of image as their general illustrations for this religious group.
- Ethnicity: The black descendants of slaves from Africa are members of an en:Native American tribe, the en:Cherokees. Would I choose an image of a black-skinned person as a picture to show what a Cherokee Indian looks like? No. How do I know that it's not generally considered representative? Because the reputable sources don't use this as their general illustrations.
- The model in general works quite well (and says that articles about Mohammed probably (but not definitely) ought to include at least one image of him, but that such inclusion is not actually mandatory, by the way: reputable sources differ significantly in their choices, and so we may choose to follow any of the models they offer). WhatamIdoing 17:30, 20 November 2011 (UTC)
- Sorry, but both of you missed the point behind my words. I was talking about the differences between reputable media. We have so-called "mass media" material, which is usually directed at a broad audience, even if reputable. On the other side we have media from reputable sources that go into great detail: so-called "specialized/professional literature". In the latter you will definitely find more graphic, objectionable material, because it deals with the sensitive topics. As an example: the common medical book for the general reader usually doesn't show much blood. But medical literature that deals with accidents or serious illnesses will contain somewhat shocking images, since these illustrations/photographs are essential for the understanding.
- This applies to many topics, and you could say: the more specialized and the better the literature is, the more shocking imagery you will find. Not because shocking images make good literature, but because they are often essential for the understanding.
- That's why I was asking which way to go: do we aim for the standards of mass media or do we aim for the standards of specialized literature? I'm convinced that our _aim_ is to write articles at a professional level. But wouldn't it be contradictory to exclude the necessary illustrations just because our standard for images is taken from mass media? --Niabot 20:47, 20 November 2011 (UTC)
- It's commonly accepted that the most reputable, most qualified sources carry the most weight in establishing NPOV. So a scholarly publication and its approach to illustration would carry more weight than an article in a tabloid. The only thing to watch out for is that the sources should be about what the article is about, rather than a narrow subtopic. So illustrations for an article like w:knee should not give undue prominence to images of knee injuries or operations for example. --JN466 12:12, 21 November 2011 (UTC)
- You follow the same kinds of sources for picking images that you follow for writing the text of the article. In addition to the general preference that JN summarizes, you also need to consider the specific subject of the article. If the article is en:Knee, then you're probably using mostly general anatomy text books, and you should mostly use images like those seen in typical anatomy text books. If the article is the narrower en:Anterior cruciate ligament reconstruction, then you're probably using mostly surgery books and peer-reviewed medical papers, and you should mostly use images like the images used in those sources.
- In neither article would you be using many sources about the knee as the object of a sexual fetish, or as a weapon in hand-to-hand combat, so you should in neither article have images that look like pornography or that show violence.
- This is exactly the same process that you use for text. It's not nearly as difficult as you're thinking it is. WhatamIdoing 16:33, 21 November 2011 (UTC)
- And what if we switch "knee" against w:pornography? Would you apply the same rule as well? Or better said: Why isn't it applied in the same way you propose? --Niabot 09:40, 22 November 2011 (UTC)
- It is. We'd use the same principle, broadly reflecting the way reputable sources present the topic. --JN466 11:05, 22 November 2011 (UTC)
- Yes, it's the same principle, no matter what the subject is.
- I'm not sure why you think it is not being used on the article en:Pornography. I'm not familiar with the sources, but it appears reasonably likely that they are following this model. For example, the sections talking about the history of pornography include images of the kinds of historical works that I would expect to find in a reputable source about pre-modern pornographic works. (Compare it to this university web page.) Similarly, it does not contain the kinds of images that I would be surprised to find in a typical reputable source. WhatamIdoing 17:11, 22 November 2011 (UTC)
Provide a Wikipedia edition without illustrations at all
The Foundation should provide a Wikipedia edition in all languages made of pure text. That is very easy to achieve and would not only be helpful for people who are easily astonished, but also for people with a slow internet connection. (And it would be easier to fork by migrating the content and launching another project with the old content included.) Sargoth 18:09, 31 October 2011 (UTC)
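As a crude illustration of how cheap such a text-only edition could be to produce, a TypeScript sketch that strips image markup from already-rendered article HTML. The regular expressions are deliberately simplistic, and a separate no-images front-end is only an assumption drawn from the proposal above, not an existing service.

// Sketch: remove <img> elements and figure/thumbnail wrappers from rendered article HTML.
// Regex-based HTML handling is crude; a real front-end would transform the parser output.
function stripImages(html: string): string {
  return html
    .replace(/<figure\b[^>]*>[\s\S]*?<\/figure>/gi, "")  // drop thumbnail wrappers first
    .replace(/<img\b[^>]*>/gi, "");                       // then any remaining image tags
}

// Example
const article =
  '<p>A cucumber is a vegetable.</p><figure><img src="Cucumber.jpg"><figcaption>A cucumber</figcaption></figure>';
console.log(stripImages(article)); // "<p>A cucumber is a vegetable.</p>"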
- Nothing against the proposal per se, but as a solution for the personal image filter, that's throwing out the baby with the bathwater. I still would like to see a poll of the readership, as opposed to the editorship. --JN466 23:59, 1 November 2011 (UTC)
- Can this section be merged with #All-or-none filtering? I don't see any noticeable differences. --MZMcBride 02:18, 2 November 2011 (UTC)
- Sargoth, do you mean a feature to disable/enable all images (as suggested above), or do you mean a separate copy of Wikipedia as a fork? I don't really understand what you mean in your last sentence. --Neitram 16:05, 2 November 2011 (UTC)
- Neitram, the #All-or-none filtering would certainly slow down page loading, for technical reasons. I've rather thought of something slim, like the mobile version without illustrations. I should mention that we're talking about a copy of Wikipedia for readers only, so there is no use for an editing tool. A simple copy without images, updated in real time. URL noimages.wikipedia.org or similar. --Sargoth 23:37, 2 November 2011 (UTC) Well, actually, having read the bug report: if the image is not loaded it's quite similar; if it's loaded and merely not displayed, it's not.
- Thanks for clarifying. I would call that not a "copy of Wikipedia", but a different front-end to Wikipedia. One wiki, different front-ends (like the mobile front-end, or like the different skins we have). So your suggestion is a text-only and read-only front-end. Isn't that what we already have via [12]? --Neitram 13:19, 4 November 2011 (UTC)
- You can enable images there. But yes, why not disable images in general, give it another name and use it. --Sargoth 15:09, 7 November 2011 (UTC)
MZMcBride, I think the proposal at #All-or-none filtering? would "Allow individual users to enable or disable all images on a per-site basis", while a "wikipedia edition without illustrations at all" would not allow this per-site choice.
I'm not convinced that the Founding principles allow Wikimedia to create its own Wikipedia fork without images without first asking the Wikipedia authors who have put the images into the articles, since in my opinion such a fork without images would not comply with:
- "3. The "wiki process" as the final decision-making mechanism for all content"
Also, the founding principles demand that almost every article of a Wikimedia project be editable:
- "2. The ability of almost anyone to edit (most) articles without registration."
But readers of the proposed edition without images would not be able to edit these articles. --Rosenkohl 14:55, 25 November 2011 (UTC)
- FYI: Wikimedia proposes Wikipedia Zero "In an effort to increase its mobile presence, the Wikimedia Foundation has reached out to mobile carriers, who it hopes will see value in allowing free access to a "lite" version of the encyclopedia. The lite version will contain all of Wikipedia's textual content, but no images or other media, reducing the cost to a mobile carrier of supplying the service to users. In return, mobile carriers will hope to "lure in" potential web users with tasters such as Wikipedia. The WMF is following in the footsteps of Facebook ..." See also bugzilla:32001, Mobile Projects/features#Carrier solutions. Not a bad idea, but I'm sorry to see that their community involvement on this is also ... zero. --Atlasowa 16:43, 25 November 2011 (UTC)
Page-specific "images off" button
- (I guess a better name for this proposal would be "Personal filter lists". Users generate a list of files they don't want to have displayed by default, by clicking images individually as they come across them, or ahead of time through Commons categories. I'll leave the heading as it is for now though, because it's linked to.) --JN466 15:15, 22 November 2011 (UTC)
- Note that this proposal has developed a bit since discussion of it began, and a more up-to-date summary is below. --JN466 16:08, 24 November 2011 (UTC)
- Each project page includes a button to "switch images off".
- The choice is remembered and stored in the user's preferences, or a cookie.
- After that, the user can reveal individual images they do want to see by clicking on them; that choice too is remembered.
- The next time the user visits the page they will only see the images they want to see.
- If the user opts out of viewing a particular image, they won't see it when it is used in another article either. --JN466 14:27, 21 November 2011 (UTC)
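(Illustrative aside: a minimal client-side sketch of the mechanics listed above, assuming a hypothetical "images-off" button, plain browser APIs, and localStorage standing in for the cookie or preference storage mentioned in the proposal. It is a sketch of one possible approach, not a proposed implementation; the element ids, storage key and styling are invented for the example.)

  // Hypothetical sketch: hide all images on the page, let the reader reveal
  // individual ones, and remember the revealed ones across visits.
  var STORAGE_KEY = 'pfl-revealed';                       // invented storage key
  var revealed = JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');

  function imageId(img) {
    // crude identifier: the last path segment of the image URL
    return decodeURIComponent(img.src.split('/').pop());
  }

  document.getElementById('images-off').addEventListener('click', function () {
    document.querySelectorAll('#content img').forEach(function (img) {
      if (revealed.indexOf(imageId(img)) !== -1) return;  // already opted to see it
      img.style.filter = 'grayscale(100%) blur(12px)';    // grey out rather than remove
      img.addEventListener('click', function onReveal(e) {
        e.preventDefault();
        img.style.filter = '';                            // reveal on click
        revealed.push(imageId(img));                      // remember the choice
        localStorage.setItem(STORAGE_KEY, JSON.stringify(revealed));
        img.removeEventListener('click', onReveal);
      });
    });
  });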
- Feedback: I like it, but I don't think much is gained by having two visibility criteria: a) hidden by page selection and b) unhidden by individual image selection. Why not modify this idea as follows:
- Users who wish to use the feature can opt into it somehow (via user settings or a cookie).
- When the feature is active, every image is visible by default, but has a hide/show feature.
- The system remembers the user's choice. Any image a user has chosen to hide will be hidden for them on any page, wherever it is used.
- What I like about the idea:
- no categorization in any way
- the workload is entirely on the individual users, not on the editors
- the burden of decision is taken off the editors' shoulders
- every user can apply their personal criteria completely at their liberty
- no risk of censorship (hopefully)
- no risk of boomerang (hopefully)
- Possible issues I see with the idea (in either your or my variant):
- lots of work for users to maintain their individual filters (we have millions of pages and millions of images, and counting)
- lots of work is done redundantly
- the system has to store a lot of data for each user's individual filter
- unregistered users will lose their filter whenever they clear cookies or change to another computer
- Possible help: feature to share filters or copy/paste filter templates? --Neitram 17:20, 21 November 2011 (UTC)
- Thanks, Neitram. I agree on the advantages.
- Another potential benefit is that this could also be made to work in Commons, either for individual images or for Commons categories, via SUL (single unified log-in).
- As in Wikipedia, users could opt to hide individual images in Commons, or apply that choice to all images included in a Commons gallery/category.
- So if I've chosen not to see an image in Commons, I won't see it displayed in Wikipedia (unless I click on it to reveal it); and vice versa, if I've said in Wikipedia I don't want to see it, it will appear greyed in Commons and other projects as well.
- This could also address the Commons search problem (see Controversial_content/Problems). The relevant images would still show up in searches, but they would be greyed until clicked upon. And if one slips through, it can be hidden with a click.
- As for the issues, –
- While we do have millions of images, most users only ever come across a tiny proportion of them. Clicking them off as you come across them might be okay.
- Yes, the data storage volume could be significant, depending on how many users use the function. On the other hand, the data volume is probably not much bigger than that involved in storing watchlists.
- Unregistered users losing their preferences when changing to another computer, or deleting their cookies, is a drag. I don't see a way round it. It might encourage them to register an SUL account though.
- Sharing filter lists or templates might be useful, but may also marginally help censors, by giving them prefab lists of files to block.
- On the whole though, I don't think this kind of solution would do much for censors. As implemented on Wikimedia sites, each image would still be displayable by clicking on it. That would simply not be enough for a censor. So if they did want to use a variant of this function to block image access completely, they would (a) have to compile or obtain a list of files to block, which is not hard to do in Commons anyway, and (b) be required to modify the code, to disable the reveal function. That is just how censors can and do work today, so nothing much changes for them. And perhaps our programmers could do some work to make it extra hard to mess with the click-to-display function. --JN466 00:50, 22 November 2011 (UTC)
- A manually editable filter list (manually editable just like the user's watchlist) would enable users to add images or categories (e.g. Depictions of Muhammad) without first having to view examples. --JN466 16:43, 22 November 2011 (UTC)
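(Illustrative aside: the "manually editable filter list" could be as simple as a raw text box, in the spirit of the raw watchlist editor. Below is a small sketch of parsing such a list into individual files and whole categories; the one-entry-per-line format and the example entries are assumptions made for the sketch, not part of the proposal.)

  // Hypothetical sketch: parse a raw, manually edited filter list into
  // file entries and category entries. The line format is an assumption.
  function parsePfl(rawText) {
    var entries = { files: [], categories: [] };
    rawText.split('\n').forEach(function (line) {
      line = line.trim();
      if (line.indexOf('File:') === 0) {
        entries.files.push(line);
      } else if (line.indexOf('Category:') === 0) {
        entries.categories.push(line);
      }
    });
    return entries;
  }

  // usage with the kind of entries discussed above
  var pfl = parsePfl('File:Example_spider.jpg\nCategory:Depictions of Muhammad');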
- As a post-viewing choice, this doesn't deal with a sufficient number of complaints. In practice, the user experience is this:
- go to the page
- see at least the images at the top, and
- be unhappy about seeing the images (and so turn them off).
- Removing already-viewed images is too late: we need something that turns off the images before viewing, not after someone is having a panic attack. WhatamIdoing 16:58, 22 November 2011 (UTC)
- The Harris Report was clear that the filter should be an opt-in filter, and that the default viewing experience should be unfiltered. That's also Sue's and (I believe) the board's view. Having said that, there could be a button at the top of any Wikipedia page, enabling users to switch all images on that page off. They could then switch individual images back on that they would like to see, based on the caption. That would work for w:Muhammad. Or users could go to Commons and add categories to their list there; you don't have to look at a category's content to do that. Or they might be able to manually edit their filter list, just like you can manually edit your watchlist if you want to. As you know, there are two ways you can add and remove pages to or from your watchlist: you can go to the page and click on Watch/Unwatch, or you can open your watchlist and manually add and remove pages. The filter list could work the same way, allowing individual users to add images (or Commons categories) to their personal list – in this way they could filter an image or category without having to see the media first. I guess someone could even post a list of all the Muhammad images/image categories somewhere, so Muslims could simply copy and paste it into their personal filter lists. The beauty is that this would also work in Commons search listings: if I know that Commons contains lots of masturbation media and feel quite certain that I usually don't want to have these pop up in random searches I perform, I can simply add the masturbation media categories to my filter list. If I then look for an image of an electric toothbrush on Commons, the masturbation image will still come up, but it will be greyed, requiring another click to reveal. I think this could work. --JN466 18:05, 22 November 2011 (UTC)
Exactly Jayen466, some people would publish, share and recommend their favorite Wikimedia filter lists, either on their project user pages or on external webpages. So the individual "filter list" you propose would work like filter categories and in fact they would be filter categories, --Rosenkohl 21:35, 22 November 2011 (UTC)
- If I understand the idea properly: not quite like that. Sure, you can click a whole category off - but it is a category that already exists on Commons (and changes every day anyway), not a category like "good images" and "bad images" categorized by someone from the WMF or the board. If I understand it well. -jkb- 23:29, 22 November 2011 (UTC)
- It's something that readers may choose to do, Rosenkohl. There is no prefab solution provided by the Foundation, no Foundation endorsement of any cultural or religious position. The Foundation, and the editor community, remain completely neutral and out of it. What readers choose to put in their personal filter list – spiders, Muhammad images, masturbation videos, images of corpses – is their very own, private and personal decision, just like it is their decision which articles they put in their watchlists, or which articles they choose to read in the first place. --JN466 00:12, 23 November 2011 (UTC)
- The Board resolution says that readers should be able to "easily hide images hosted on the projects that they do not wish to view, either when first viewing the image or ahead of time through preference settings". "Default off" means that brand-new users should have all images displayed, not that all images must be displayed on the initial viewing of each page or each day.
- It is unclear to me whether the Board means "readers should have both options" or "either of these, but not both" here. Both logical OR and XOR are valid interpretations. However, we do know that complaints on first viewing are significant, and if you want to stop complaints, then the only solution is some sort of preference setting created (by the user) in advance.
- As for the category list, I don't think you're grasping the scale of the problem: How long would it take you to just read the names of half a million categories? How much time would it take you just to read the names of the categories that are added or deleted each week? (That's "only" on the order of 500.) Commons is enormous. No individual human is going to be able to do this. WhatamIdoing 21:23, 23 November 2011 (UTC)
- "Default off": How long would it take you to just read the names of half a million categories, whem I'd like to see them at first viewing? -jkb- 21:41, 23 November 2011 (UTC)
- You and I would probably not ever disable any images at all. But we have many thousands of casual and unregistered users who want all of these features:
- the ability to suppress such images in advance, without ever viewing them (and preferably without even loading them)
- without reading through half a million category labels or undertaking any other time-consuming task
- without doing anything technically challenging (like manually editing a watchlist)
- when they encounter an image that they dislike, they want to suppress all similar images (e.g., all photographs of spiders, not just the particular one at the top of this page)
- I'm not seeing a proposal here that meets their stated needs. I'm only seeing proposals that meet our desire to not be bothered with letting them have what they want. WhatamIdoing 19:06, 25 November 2011 (UTC)
- I agree that Commons is enormous, but disagree that the number of categories that people might want to filter is comparably enormous. I mentioned this to Robert once, who surveyed Commons at the time, and he seemed to agree. People interested in filtering would have to -- and I believe would -- compile lists that users can install when they first create an account, and which they can then subsequently add images to and remove images from, just by clicking images or placeholders they come across. So users would have the ability to hide images "either when first viewing the image or ahead of time through preference settings", as per the board resolution. I share your understanding that "brand-new users should have all images displayed, not that all images must be displayed on the initial viewing of each page or each day." Once a file is in the filter list, it would never be displayed again, even if it occurs on a page the user has never been to before, unless they remove it from the filter list by clicking on the placeholder and revealing the image. And just like you can add something to your watchlist and remove it again a minute later, readers could click on an image to see if they're comfortable with it, and if not, click on it again to hide it and add it back to the filter list.
- Of course Commons receives new files each day, and new categories are created each day. If a user selects an entire category to filter, the filter list could keep a record of both the individual file names and the category name. Regular automatic updates could add any new images added to a Commons category to the user's filter list, update file or category names after renames, etc. Compilers of filter lists may create updates that they make available publicly.
- It became very clear during the discussions over the past few months that tagging files for the image filter, or creating image filter categories, is not something the community as a whole wants to become involved in. Various people suggested that the Foundation should do nothing, and leave the creation of image filters to third parties. This is a middle ground – the Foundation provides the software capability to create and maintain personal filter lists, and it is then up to a separate crowdsourcing effort by those who want to have a filter. This is consistent with the overall Wikimedia crowdsourcing approach, and really just an extension of it.
- Under this proposal, the entire informational infrastructure for filtering would reside in readers' individual filter lists. The wiki itself does not change at all, contains no filter tags, and no specially created filter categories. Even if it should turn out that my guess is wrong, and that crowdsourcing of filter lists does not happen, readers will at least have the possibility to dispatch any image they come across in their normal surfing with a single click, with the knowledge that they will not encounter this image ever again unless they expressly change their minds. They will also have the possibility to add known types of images that they care most strongly about to their filter list beforehand. At worst, they could start at a high-level Muhammad category, a high-level spider category, a high-level nudity/sex category or indeed any other high-level category they choose, like Category:Women if they are ultra-conservative Jews who don't want to see photographs of women, and add all that content to their filter, with the option to subsequently reveal individual images they come across in Wikipedia that they do want to see after all. Even that would be more than they are able to do at present. --JN466 02:42, 24 November 2011 (UTC)
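(Illustrative aside on the "regular automatic updates" mentioned above: a periodic job could resolve a watched Commons category into individual filenames and merge them into the stored list. The sketch below queries the standard MediaWiki API's categorymembers list on Commons; the merging into localStorage, the 500-result cap and the lack of continuation handling are simplifications for the example.)

  // Hypothetical sketch: fetch the current file members of a Commons category
  // so that new arrivals can be merged into a personal filter list of filenames.
  function fetchCategoryFiles(categoryName) {
    var url = 'https://commons.wikimedia.org/w/api.php' +
      '?action=query&list=categorymembers&cmtype=file&cmlimit=500&format=json&origin=*' +
      '&cmtitle=' + encodeURIComponent('Category:' + categoryName);
    return fetch(url)
      .then(function (resp) { return resp.json(); })
      .then(function (data) {
        // each member has a title of the form "File:Example.jpg"
        return data.query.categorymembers.map(function (m) { return m.title; });
      });
  }

  // usage: merge any new files from a watched category into the stored list
  fetchCategoryFiles('Depictions of Muhammad').then(function (files) {
    var pfl = JSON.parse(localStorage.getItem('pfl') || '[]');
    files.forEach(function (f) { if (pfl.indexOf(f) === -1) pfl.push(f); });
    localStorage.setItem('pfl', JSON.stringify(pfl));
  });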
- I agree that no one is going to tick all of the Commons categories, but the fact is that you cannot make a decision about which ones to put on your personal list without actually looking at them.
- Tagging images as being controversial (whether through a special category or through a separate class of label) was rejected by the Harris report, and so far as I know, has never been seriously entertained by anyone. IMO the most workable solution (although still significantly imperfect) is still to use a handful of groups of content (not "offensiveness") categories: So commons:Category:Masturbating with dildos goes on the anti-sex list, but "Category:Images that are objectionable because they involve sexual acts" never even gets created in the first place.
- Fundamentally, I don't get the "crowdsourcing" meme. What do you think we've been doing for the last ten years, if not crowdsourcing? Telling the usual crowd of editors that they can make a few lists that readers can trivially click to enable is what was originally proposed. You're proposing, well, that we tell the usual crowd of editors that they can make a bunch of lists that the readers can go to a lot of trouble to enable, assuming that the brand-new reader magically knows the lists exist. How does this constitute an improvement? WhatamIdoing 19:16, 25 November 2011 (UTC)
- Many in the community took strong exception to the idea that they should be doing this, and warned that disputes, edit wars and vandalism about what to categorise where would be interminable. It was also considered to be a lot of additional work, given that backlogs abound as it is. In my view, someone with a particular POV can more effectively produce a filter list that reflects their vision, and if there are several such efforts, the user can then select the one that matches their own vision best and adopt it for themselves. Once such filter lists exist, I believe word would get around -- not immediately, but in time -- and the documentation for the image filter could mention that third parties may offer such lists. --JN466 21:25, 26 November 2011 (UTC)
- Yes: Many power users took strong exception to the idea that readers, casual editors, and people with different values are part of the Wikimedia community, or that a system could be created that did not require the power users themselves to dirty their hands in supporting it.
- In my opinion, the WMF should create a system that works (more or less), and anyone who wants to help classify images for it is welcome to do so. (The rest of us, naturally, can ignore it.) "The WMF created" has no connection to "and so we power users must support it". And, yes, that means that if nobody happens to classify the images... well, the WMF made a good effort, right? WhatamIdoing 02:22, 27 November 2011 (UTC)
That is an eminently reasonable proposal. I fully support it. — Coren (talk) / (en-wiki) 04:48, 24 November 2011 (UTC)
Jayen466,
I did not say that filter lists are not something readers may choose to do. Of course, any kind of personal image filter would be something that readers may choose to do.
What I said is that filter lists could be shared and that such filter lists would effectively work as filter categories.
Every user can already easily create private filter lists with their personal CSS (.css) and JavaScript (.js) pages if they want to. Or they can install image filter software on their own computer if they want to.
So actually, the Foundation does not have to provide any software capability for filter lists, because readers can do it on their own. On the other hand, by installing an official filter feature in the Wikimedia user interface, Wikimedia would endorse and control filtering, which would include keeping statistics on how often this feature is used by readers. --Rosenkohl 14:15, 24 November 2011 (UTC)
- I do not think that "easily" is a fair description of the current process. Editing CSS and JS files is not "easy" for average readers. WhatamIdoing 20:29, 25 November 2011 (UTC)
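(To make the preceding exchange concrete: a sketch of what such a do-it-yourself filter in a personal .js page might look like today. The filenames, the substring matching and the greying approach are assumptions for the example; as WhatamIdoing notes, expecting average readers to write something like this is unrealistic.)

  // Hypothetical personal script: grey out specific files wherever they are embedded.
  // The filenames below are placeholders; a real user would list the files they dislike.
  var myHiddenFiles = ['Example_spider.jpg', 'Example_autopsy_photo.jpg'];

  document.querySelectorAll('img').forEach(function (img) {
    var unwanted = myHiddenFiles.some(function (f) {
      return img.src.indexOf(f) !== -1;                  // crude substring match on the URL
    });
    if (unwanted) {
      img.style.filter = 'grayscale(100%) blur(12px)';   // obscure rather than remove
      img.title = 'Hidden by personal filter - click to reveal';
      img.addEventListener('click', function () { img.style.filter = ''; });
    }
  });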
Reformulation of the Proposal: Personal filter lists
Since the beginning of the discussion above, this proposal has developed a bit from the original idea, and some details have changed. I've reformulated the description so it reflects that. The wording below is a copy of what I posted to the Foundation list.
The idea is to give users the option of creating personal filter lists (PFL). The structure and interactivity of these personal filter lists would be comparable to those of editors' personal watchlists.
The way this would work is that each project page would have an "Enable image filtering" entry in the side bar. Clicking on this would add a "Hide" button to each image displayed on the page. Clicking on "Hide" would then grey the image, and automatically add it to the user's personal filter list.
Any image added to the PFL in this way would appear greyed on any subsequent visit to the page. It would also appear greyed on any other project page where it is included, and (given an SUL account) any page containing the image in any other Wikimedia project such as Commons itself – including Commons search result listings. In each case, the user would always retain the option of clicking on a "Show" button or the placeholder itself to reveal the picture again, and simultaneously remove it from their PFL. Of course, if they change their mind, they can add it right back again, by clicking on "Hide" again. It would work like adding/removing pages in one's watchlist.
Apart from enabling users to hide images and add them to their PFL as they encounter them in surfing our projects, users would also be able to edit the PFL manually, just as it is possible to edit one's watchlist manually. In this way, they could add any image file or category they want to their PFL. They could also add filter lists precompiled for them by a third party. Such lists could be crowdsourced by people interested in filtering, according to whatever cultural criteria they choose.
It became very clear during the discussions over the past few months that tagging files for the personal image filter, or creating image filter categories, was not something the community as a whole wanted to become involved in – partly because of the work involved, partly because of the arguments it would cause, and partly because it would not be possible to do this truly neutrally, given different cultural standards of offensiveness. Various people suggested that the Foundation do nothing, and leave the creation of image filters to third parties altogether.
This proposal occupies a middle ground. The Foundation provides users with the software capability to create and maintain personal filter lists, just like it enables users to maintain watchlists, but it is then up to a separate crowdsourcing effort by those who want to have a filter to find ways of populating such lists. This is consistent with the overall Wikimedia crowdsourcing approach, and a natural extension of it. Even if this crowdsourcing effort should unexpectedly fail to take off, readers will still gain the possibility of hiding images or media as they come across them with a single click, with the assurance that they won't ever see them again anywhere on our projects unless they really want to. That in itself would be a net gain. Users who don't want to have anything to do with filtering at all could switch any related screen furniture off in their preferences, to retain the same surfing experience they have now.
Under this proposal, the entire informational infrastructure for filtering would reside in readers' personal filter lists. The data structure of the wiki itself does not change at all, just like adding pages to a personal watchlist affects no one apart from the user whose watchlist it is. There are no filter tags, no specially created filter categories, and no one has to worry about defining, creating or maintaining them. The filter users do that for themselves.
For unregistered users, their PFL could be stored in a cookie. However, they would be encouraged to create an SUL account when they first enable image filtering, so they can retain the same surfing experience even after changing computers, or after accidentally deleting the cookie. --JN466 16:06, 24 November 2011 (UTC)
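(Illustrative aside on the cookie option for logged-out readers: a minimal sketch of storing and reading a short filter list in a cookie. The cookie name, the one-year lifetime and the JSON encoding are assumptions; a real deployment would also have to respect cookie size limits, which caps how long such a list can get.)

  // Hypothetical sketch: persist a short personal filter list in a cookie
  // for readers without an account.
  function savePfl(fileNames) {
    var value = encodeURIComponent(JSON.stringify(fileNames));
    document.cookie = 'pfl=' + value + '; path=/; max-age=' + (60 * 60 * 24 * 365);
  }

  function loadPfl() {
    var match = document.cookie.match(/(?:^|;\s*)pfl=([^;]*)/);
    return match ? JSON.parse(decodeURIComponent(match[1])) : [];
  }

  // usage
  var pfl = loadPfl();
  pfl.push('File:Example_image.jpg');   // placeholder filename
  savePfl(pfl);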
- Unless I'm missing something the differences between this proposal and User:WereSpielChequers/filter are:
- This proposal has an extra line in everyone's sidebar as opposed to the image filter being a user preference option.
- This proposal doesn't have the option of anonymous matching of preference lists. Instead it has the option of importing preference lists from elsewhere.
- This proposal encourages those who want to censor to publicly share filter lists instead of making them a personal and private matter. Presumably we'd then have userboxes such as "this user browses Wikimedia using Acme church's "Sodom and Gomorrah" filter" and endless arguments as to whether we should use image x because 13 of our 60 most heavily used filter lists block it out.
- This proposal includes IPs via a cookie, but I'm not clear how in such circumstances you make sure that that cookie is unique to one individual. My understanding was that the only way to prevent one person using this to censor another's image viewing was to make this logged in users only.
- In processing terms this would take less computation and coding than User:WereSpielChequers/filter, as it skips the complex matching. It is closer to what the board wants because it includes IP users, and I suspect it would be much easier for those who want to advise others what to filter, as they could publish filter lists and instruct their supporters to import them (though once those lists get into the tens of thousands I suspect an automated option of subscribing to particular filter lists would be required). However, from the point of view of those who are concerned about the idea of image filters it has some significant drawbacks. By including IPs it allows some people to censor others' viewing. By encouraging those who want to censor things to create and share filter lists it makes some of these lists public and thereby assists those who would use this to censor our projects. By making these lists publicly shareable it will become public information that particular images are on particular filter lists - with all the consequent POV-related discussion as to whether particular images should be on particular filter lists and which editors can and can't edit particular filter lists. If we can find a way to make session cookies unique to one user session in an Internet Cafe then I'd happily add that option to User:WereSpielChequers/filter, and I like the name "personal image filter". But I'm uncomfortable about the other differences; in particular I don't see how we can have public image filter lists without the drawbacks of them being publicly built and discussed on wiki. WereSpielChequers 17:32, 24 November 2011 (UTC)
- Yes, but WereSpielChequers, you write on User:WereSpielChequers/filter#Privacy, referring to your own proposal:
- "Filterers are free to disclose their use of the feature and discuss their experiences, but there is no obligation to do so."
- Of course, and it is the same with any possible personal filter system. So as far as I can see, the situation would be quite similar with your proposed system, too. No one can stop people from talking about how they use a filter feature, which includes talking about which collections of images they filter, or which collections of images they suggest be filtered.
- Since it is a public discourse, I actually don't see a problem when people start to talk about which images they filter, or when they share filter lists. Greetings --Rosenkohl 20:03, 24 November 2011 (UTC)
- The question of who is and isn't allowed to edit filter lists shouldn't be an on-wiki concern, as I would envisage these lists being compiled and edited off-wiki by those wanting to use a filter. If anyone wanted to provide one on-wiki, it should be in their user space (if that), as a reflection of their personal viewing preferences. They would have no official standing or endorsement of any kind on-wiki. Beyond that, they would only be present in users' personal filter lists, which are as confidential as watchlists. --JN466 21:24, 24 November 2011 (UTC)
- Another possibility would be to create a separate wiki, not under the aegis of the Foundation, where those so inclined could collaborate to create various filter list templates catering to different viewing preferences that users can then make use of if those preferences match their own. My first instinct is to say that the Foundation should not get involved in that at all, and make zero resources available for it, because these filter lists are pure reflections of personal POV. They have no place on-wiki, other than in an individual user's own PFL affecting only himself and no one else. --JN466 21:37, 24 November 2011 (UTC)
- I see a difference between disclosing that one uses this system and publishing a public filter list. My understanding of the debate a few weeks ago is that this was one of the concerns of the opponents, and to a great extent I share the concern. I suppose we could comply with that by saying they must be off-wiki, or by allowing them in userspace; but neither solution is very satisfactory. Encouraging off-wiki POV groups to collaborate with each other to build a filter whilst hoping they don't develop into POV canvassing groups strikes me as overoptimistic. And allowing POV lists on wiki is also problematic, especially when you get into arguments as to how low-cut a top needs to be before it goes into a particular filter list. I would prefer a system that didn't encourage factionalisation. WereSpielChequers 12:33, 25 November 2011 (UTC)
- If we think that allowing people to add lists compiled by someone else leads to problems, there is no need to make available a functionality that allows PFLs to be populated by copy-paste. It might make things more convenient for the filter user, but the proposal works (if a little less well) without it. It's hard to predict how things would develop. I am sure you'd get some individuals wanting to be helpful and offering people various options: "some nudity allowed", "no nudity and no Muhammad", "no spiders", etc. That would be part of the crowdsourcing instinct. You might get moral guardian organisations of various philosophies making them available on their sites, and putting in the effort to maintain them, with users notifying them of images that should be included according to whatever world view they are guided by. I would think these people would be more focused on serving their constituency by giving them a filter that meets their needs well than on trying to change things on-wiki (because adding an image to the filter already removes much of the motivation for doing so). Or none of the players might consider it worth their while to bother with it at all, because any filtered image can after all still be instantly revealed, just by a click. They might prefer to invest their resources elsewhere, in bolt-on filters that actually block images, rather than trying to map the shifting sands of a wiki. That's the sort of Sisyphean task that appeals more to Wikimedians. Basically, I find it hard to predict what would happen, and whether any resulting factionalisation would be worse than what we have already, or whether more attention on some obscure and obscurely illustrated articles (and some obscure nooks and crannies of Commons) might actually be a good thing. :) --JN466 14:14, 25 November 2011 (UTC)
- Do I understand you correctly, Jayen466, that this proposal picks up my modification suggestion from 17:20, 21 November 2011 (UTC) (no filtering by page, only by image) and drops your idea of adding categories to the filter list from 16:43, 22 November 2011 (UTC)? In that case, I still see the issue that I raised above: lots of work for users to maintain their individual filters (we have millions of images, and counting). As new images are uploaded continually, people's filters will go stale quickly and they will need to keep updating them. I personally wouldn't mind this drawback as I don't intend to ever use an image filter -- but I'm not sure such a solution would be acceptable to the pro-filterers and to the WMF. --Neitram 12:59, 25 November 2011 (UTC)
- The first question: Yes, no filtering by page, just individual images that users can "click away" once they "enable image filtering". The way this could look is that a little box with an X is displayed in the top right hand corner of each image while image selection for the PFL is enabled, and clicking on that switches the image "off" (grey). When the user is done adding images to their PFL, they would switch the display of these little boxes off again. As to your second question: No, I still think it should be possible to add categories, like Depictions of Muhammad, and that new images arriving in these categories should automatically be added to the PFL (periodic automatic updates). They should probably be stored as individual filenames in the PFL though rather than as categories (so a joker who removes the cat from prominent Muhammad images e.g. wouldn't affect filterers). --JN466 13:51, 25 November 2011 (UTC)
- Jayen466, I agree that "the Foundation should not get involved in that at all, and make zero resources available for it, because these filter lists are pure reflections of personal POV. They have no place on-wiki". But you are proposing just that: that the Foundation should provide this functionality, use its resources for it, implement it on-wiki and give room for public POV filter lists in the user space. I strongly oppose that. I see several problems with this proposal, some of which have already been pointed out by others (e.g. WereSpielChequers); I'll try to write this up later. --Atlasowa 13:40, 25 November 2011 (UTC)
- To be honest, I'd be more comfortable with it if there were an outright ban on posting filter lists on-wiki. They should be posted off-site. But it feels absolutely fine to me for a user to have a personal filter list matching their personal preferences that defines their viewing experience. No one else sees it, and it only affects the user her- or himself. If you think about it, a watchlist is just the same – pure personal POV. Cheers, --JN466 13:55, 25 November 2011 (UTC)
- Who cares if such a list is "pure personal POV"? Have we all forgotten again that NPOV is not mandatory, and is explicitly disclaimed by several projects? Try not to impose your Wikipedia POV on the anti-NPOV projects like WikiBooks and Commons, and then think about how you might implement a filter for one of the POV-embracing projects. That might help you see other useful paths. WhatamIdoing 17:26, 26 November 2011 (UTC)
- Fair enough. I was indeed thinking Wikipedia-centrically. --JN466 20:54, 26 November 2011 (UTC)
- Whether or not we have a POV-based filter for WikiBooks, if we introduce a filter on Wikipedia it needs to fit in with Wikipedia policies on NPOV. Apart from "hide all" and "hide this image on this page", the other options involve a lot of work to vet over 11 million images already on Commons, many more on other wikis, and the probably much greater number that will be uploaded in future. So to my mind it makes sense that if we have a filter we have one system, and if that one system is released for use on Wikipedia we do need to make sure it complies with NPOV. WereSpielChequers 06:43, 29 November 2011 (UTC)
- To my mind that means that either we cater for all POVs or none, and if we do this it is best to do this in a way that has zero impact on those who choose not to participate in the filter. WereSpielChequers 11:21, 1 December 2011 (UTC)
- As an opponent of the vast majority of the "filtering" proposals, I have nonetheless suggested that something very similar to this would be acceptable, as it does not require users to debate the rating or categorization of any file. (See [13]) I think there is a chance that a mechanism of this type can be implemented using only user-level custom .js programming, but I'm not sure; in any case some new content-neutral features would be useful for it (like the ability to search a page for whether a string occurs in it, or a web form where you can type a category name and get back a list of all the files that are in that category or any subcategory of it). Wnt (talk) 09:21, 9 June 2012 (UTC)
German-language discussion
Here is a description of the above proposal, in German:
- On every Wikipedia page there is a "Filter images" button with which the reader can display a "Hide" icon next to each image.
- The system "remembers" hidden images and does not show them again on the next visit to the page.
- Hidden images can be revealed again at any time with a single click. That choice is also remembered.
- The same happens in Commons. If I indicate that I want the ability to filter images, a filter icon appears next to each image and on every gallery or category page.
- Example: if I want to illustrate a post in my tennis blog with a dynamic shot of a backhand, and the corresponding Commons search first presents me with three merrily bouncing masturbation videos, I can click them away and will not see them again next time. In future they appear only as a grey rectangle (with a reveal icon in case I do want to see them after all).
- One could also offer an icon right in the Commons search results to hide all images in the relevant Commons category at once and add them to the list of filtered images. Then I don't have to click three masturbation videos away individually, but can remove them from my screen with a single click.
- If one of these media files is embedded in a Wikipedia page or a page in another project, I will by default see it there only as a grey rectangle as well – but always with the option of revealing it again.
This creates no extra work whatsoever for the community. The entire workload lies with the reader. Over time, each user effectively builds up a list of media files (similar to a watchlist) that they will only see if they really want to. These can be spider or Muhammad images, or whatever else upsets the reader. There is no central categorisation. The whole thing is stored in a cookie for unregistered users (not recommended), or tied to the account (recommended), much like the watchlist. SUL (single unified log-in) is required for this to work across projects, i.e. so that my choices in Commons are also taken into account when pages are rendered in Wikipedia.
What do you think? --JN466 12:36, 22 November 2011 (UTC)
- What do the developers say about this with regard to performance breakdowns? -jkb- 14:22, 22 November 2011 (UTC)
- I have asked Brandon Harris. [14] --JN466 15:07, 22 November 2011 (UTC)
FYI: blog post on this -- southpark (too lazy to log in right now)
- Yep, thanks, -jkb- 08:58, 24 November 2011 (UTC)
- Completely superfluous and dispensable, just like the original proposal. For logged-out users it isn't feasible at all, because cookies are very short-lived. Some people delete them automatically after every browser session, especially if it wasn't their own computer, or at regular intervals. And/or they use more than one browser, and both mobile and desktop machines. At best, the logged-out user in an internet café then gets the image filters of the previous user or of the café owner (and similarly in the workplace). The logged-in user a) has better things to do than follow months-long, kilometre-long discussions about how this so-called *******-image filter might yet be introduced through the back door in modified form, which will then presumably be sold as a compromise between the WMF and the community, and b) does not need a collection of personal data about what he does not want to see. That is nobody's business and does not need to be stored in a cookie or anywhere else on the servers. The data already accumulating on global activity, plus the watchlist, are quite enough. Given the scale of the discussion, a Meinungsbild (community poll) should be held on de.wp before any introduction there. --Blogotron 23:33, 27 November 2011 (UTC)
Reformulation of the proposal: Personal filter lists
Since the beginning of the discussion above, the proposal has developed somewhat, with a few changes at the level of detail. Here, therefore, is a more up-to-date description. The wording corresponds to what I posted on Foundation-l.
The idea is to give readers the option of creating personal filter lists (PFL). The structure and interactivity of these personal filter lists would be comparable to those of personal watchlists.
It would work like this: on every project page there would be an "Enable image filtering" entry in the sidebar. Clicking on it would add a "Hide" button to each image on the page. Clicking on that would grey the image out and automatically add it to the reader's personal filter list.
An image added to the PFL in this way appears greyed on all subsequent visits to the page. It likewise appears greyed when encountered on any other project page, and also (assuming an SUL account) on any page containing the image in any other Wikimedia project, including Commons itself and Commons search results. In every case, users would retain the option of revealing the image after all by clicking on the placeholder or a "Show" button, which would simultaneously remove it from their personal filter list. Of course, they can then change their mind again: a click on "Hide" greys the image out once more and re-adds it to the PFL, just as an article can be removed from and re-added to a watchlist with two clicks.
Apart from being able to hide images they encounter while surfing our projects and add them to their PFL, users could also edit their PFL manually, just as it is possible to edit a watchlist manually. In this way they could add any image file or category they like to their list. They could also adopt filter lists provided by third parties. Such lists could be crowdsourced by those interested in a filtering capability, according to whatever cultural criteria they choose.
The discussions of the past few months have made it clear that tagging files for an image filter, or creating filter categories, is something the community as a whole does not want to be involved in – partly because of the work it would entail, partly because of the disputes it would cause, and partly because, given differing cultural norms, it is simply not possible to do this work truly neutrally. Various people expressed the view that the Foundation should do nothing at all and leave the creation of image filters to third parties.
This proposal occupies a middle ground. The Foundation gives users the software capability to create and maintain personal filter lists, just as it enables them to maintain watchlists; but working out what should go into those filter lists is a crowdsourcing task for the people who would like to use a filter. This is consistent with Wikimedia's basic crowdsourcing approach and really a natural extension of it. Even if, contrary to expectations, this crowdsourcing should not happen, every reader at least gains the ability to hide images and media they come across with a single click, with the certainty of never seeing them again – unless they really want to. That alone would be a net gain. Users who want nothing at all to do with filtering could switch the corresponding screen elements off in their preferences and keep their accustomed surfing experience.
In this filtering model, the entire informational infrastructure for filtering resides in users' personal filter lists. The data structure of the wiki itself does not change at all, just as adding a page to a personal watchlist makes a difference only for the user whose watchlist it is. There are no filter tags, no specially created filter categories, and no one has to worry about defining, creating or maintaining them. The filter users do that for themselves.
For unregistered users, the PFL could be stored in a cookie. However, anyone enabling the image filter would be encouraged to create an SUL account, so that they can keep the same surfing experience even when working on a different computer or after accidentally deleting the cookie. --JN466 17:57, 24 November 2011 (UTC)
- A "sneaky" question :-): where (externally, internally...) can one read how the board members would view this model? Regards -jkb- 18:48, 24 November 2011 (UTC)
- It meets the requirements of the board resolution. According to Sue, the originally intended category-based approach is no longer under discussion. I assume the board shares this view. --JN466 21:13, 24 November 2011 (UTC)
- OK, it meets the requirements... it would be good if it stayed that way. Regards -jkb- 23:56, 24 November 2011 (UTC)
- Note: This question was about whether this idea would be acceptable to the board. --JN466 01:31, 25 November 2011 (UTC)
- Interesting idea. Two follow-up questions: 1) Would the emergence of filter lists – in effect a kind of downloadable file blacklist – significantly increase the censorship capabilities of states (or, technically, of the providers there)? Or have they already compiled such lists themselves and are using them? 2) How large would such a filter be with, say, 1 million files? I assume that anyone who voluntarily wanted such a cookie would have to hide files on that order of magnitude (at least looking ahead). Could that still be done with acceptable performance? --Port(u*o)s 00:42, 25 November 2011 (UTC)
- 1. I don't think it would increase them significantly. Whatever a few amateurs manage, a state censorship apparatus can do with ease. Google manages it too, with SafeSearch, and the censors in Saudi Arabia have professional software solutions. 2. I think 1 million files is too high an estimate; that would be roughly every tenth image on Commons. Have a look at what comes up in Google (with the filter switched off): [15] Among the 250 images on that whole page I don't see a single one that would be worth filtering. The functionality is, moreover, not unlike that of a watchlist. Quite a few people have several thousand pages on their watchlist, and the watchlist still displays without any noticeable delay. That is not to say the performance aspect doesn't need to be considered – it certainly makes sense for the developers to think about it – but I would be surprised if it were not feasible.
- Note: Port(u*o)s is asking about two things: whether precompiled filter lists might make censors' job easier. (In my view, no, as this would be amateur work, and censors already have professional software solutions.) The other is performance -- if users end up having very large filter lists, containing thousands (millions?) of files against which any page has to be checked, would filtering affect performance? (I doubt it, given the size of editors' watchlists, which do not seem to impose time delays in displaying recent changes. I also doubt anyone would want to filter every tenth image in Commons, but it is worth examining, especially as Commons is growing continuously.) --JN466 01:29, 25 November 2011 (UTC)
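(Illustrative aside on the lookup cost raised in the comments above: on the client side, the size of the filter list matters less than one might expect, because membership checks against a hash set take constant time per image. The numbers and filenames below are invented; server-side caching effects, raised further down, are a separate question.)

  // Toy sketch: even a very large filter list can be checked quickly per page,
  // because a Set gives constant-time lookups. All values here are invented.
  var filterList = [];
  for (var i = 0; i < 1000000; i++) {                     // pretend list of a million filenames
    filterList.push('File:Example_' + i + '.jpg');
  }
  var filterSet = new Set(filterList);

  var imagesOnPage = ['File:Example_42.jpg', 'File:Harmless_photo.jpg'];
  imagesOnPage.forEach(function (name) {
    console.log(name, filterSet.has(name) ? 'hidden' : 'shown');
  });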
- I already asked about the performance effects above; an answer has been requested. I don't think the problem would be storing the lists – they are text files that take up next to nothing; rather, the load from the traffic, the constant retrieval of the personal lists while surfing the wiki, could be more significant. -jkb- 09:20, 25 November 2011 (UTC)
- Assuming there really were as much demand as Andreas K. keeps suggesting, we would genuinely bring the servers to their knees with this. Our nice caching only works so well because most readers request the same, unchanged page. If it worked with cookies, as floated here, we would first of all have a great many lists (with a readership in the millions that adds up quickly, even if people only try it out once), and above all we could switch the Squids off right away. So the effort would be quite considerable.
- There still remains the problem that somebody has to compile the lists. Everyone doing it purely for themselves is pointless in principle, since everyone would first have to look at an image in order to block it. By the time they have everything on their personal index, the articles will contain different images again anyway, which makes the tool an expensive toy. External lists have now been suggested here. For these to come into being, their creation would have to be encouraged and they would have to be made public (for copying). But I fail to see where the difference to the yuck/disgust/gag categories is supposed to be. --Niabot 20:08, 26 November 2011 (UTC)
- The difference is that we do not categorise anything. :) If somebody wants to compile a list of files and categories they consider worth filtering, they can do that, and they and others can then filter those media. But that is their business, not ours. Let those who want to filter – or who want others to be able to filter – do the work. I also would not say that everyone doing it for themselves is pointless. Many people visit the same page again and again. If an image there bothers them, like the sawn-open skull in the meningitis article, they only need to see it once, not over and over. If the PFL supports selecting Commons categories, new arrivals in those categories could be added to the PFL automatically by bot. --JN466 21:41, 26 November 2011 (UTC)
- How about swapping list sharing for the ability to add categories to the list? That is, lists could no longer be exchanged externally, but similar images could still be added quite quickly and conveniently ("hide image", or "hide this image and images in the same category X"). That guards against misuse and would still support the individual user. Optionally, this could be complemented with the blur method (see further below). I just don't want categories or lists that can be viewed and exported from outside, because they could then also be used to feed the bad filters. That way everyone does their own personal thing, and there is no reason to categorise images by sensitivities. --Niabot 20:43, 27 November 2011 (UTC)
- I would have no problem with that either; but the decision on this lies with Sue and the board, if they want to take up this proposal. (The ability to add categories is already part of the proposal.) Note: Niabot is saying that enabling users to add categories to PFLs would be better than allowing public list sharing, and that the blur method could be used in display. He'd rather not see categories/lists made public that users can import, as they could also be used by other censors. --JN466 21:11, 27 November 2011 (UTC)
- Tausche doch das Listen-Sharing gegen das Hinzufügen von Kategorien zur Liste aus. D.h. das man sie nicht mehr extern tauschen kann, aber dennoch ähnliche Bilder recht schnell/bequem hinzufügen kann (hide image oder hide this image and images in the same category X). Das beugt dem Missbrauch vor und würde den einzelnen Nutzer unterstützen. Ergänzen könnte man das noch optional mit der Blur-Methode (siehe weiter unten). Will halt eben keine Kategorien/Listen haben die von außen her gesehen und exportiert werden können, da man sonst mit diesen auch die bösen Filter füttern kann. So macht jeder sein persönliches Ding und es gibt keine Veranlassung Bilder nach Empfindlichkeiten zu kategorisieren. --Niabot 20:43, 27 November 2011 (UTC)
- I already asked about the performance effects above, and an answer has been commissioned. I don't think the problem would be storing the lists – those are text files that take up next to nothing; the load caused by traffic, the constant retrieval of the personal lists while browsing the wikis, could be more considerable. -jkb- 09:20, 25 November 2011 (UTC)
- OK, it meets the requirements... it would be good if it stayed that way. Regards -jkb- 23:56, 24 November 2011 (UTC)
- It meets the requirements of the Board resolution. According to Sue, the originally intended category-based approach is no longer under discussion. I assume the Board is of the same opinion. --JN466 21:13, 24 November 2011 (UTC)
- A brief interjection on exchanging the lists: of course the lists must not be accessible, not even to admins, CUs, stewards and others – just like watchlists. But if I want to, I can still quite easily give my watchlist – or parts of it – to someone. That is my decision (well, with my own watchlist I wouldn't do it...). -jkb- 22:48, 27 November 2011 (UTC)
Has anyone approached Google
and asked if we can hitch a ride on their safe search categorization? They are constantly categorizing the media on Commons. --Anthonyhcole 13:44, 24 November 2011 (UTC)
- Google is not perfect: [16] :)
- It's an idea; a downside is that Wikimedia would be tied to Google's "one-size-fits-all" judgment. Crowdsourcing filter lists is more flexible, allowing different cultural norms to be expressed by those sharing them. Best, --JN466 16:38, 24 November 2011 (UTC)
- So you're not aware of any discussion between the Foundation and Google on this point? On being tied to Google, presumably we could drop them if their filtering becomes a problem. --Anthonyhcole 00:32, 25 November 2011 (UTC)
- No, not aware of any discussions on this. I remember Jimbo saying though that the relationship between WM and Google is very good. He might be a better person to ask. --JN466 01:38, 25 November 2011 (UTC)
- Google's "safe search" is based on unknown criteria, is completely non-configurable and covers only a subset of issues. I don't think they include in their filter spiders, Mohammeds, or whatever else our users might want to filter. --Neitram 09:48, 25 November 2011 (UTC)
- On the other hand, if they exported their list to us once, it would give us a baseline to work from, and we could adjust it from there.
- I've considered approaching other companies (makers of web proxies, for example) for similar purposes, but Google hadn't occurred to me. WhatamIdoing 20:33, 25 November 2011 (UTC)
Blurred images by user preference
This was developed some time ago by a German user and presented on the Foundation mailing list and on de:Wikipedia:Kurier#Verschwommene Bilder statt Filter. There are some references to it here on this page. As you can see on the screenshot (click for full size/blurring), it blurs all the images on the chosen Wikimedia project if the user enables it (not the text, obviously). The blurred image is revealed when the user hovers over it with the mouse. How does it work?
"To try this out you would have to copy or import this code
- http://en.wikipedia.org/wiki/User:BlurredImages/vector.js
- http://en.wikipedia.org/wiki/User:BlurredImages/vector.css
into your own skin.js and skin.css files which are available e.g. under
- http://en.wikipedia.org/wiki/Special:MyPage/vector.js
- http://en.wikipedia.org/wiki/Special:MyPage/vector.css"
That is very easy. I (Firefox 8.0) have tried it on both en.wikipedia and Wikimedia Commons and it worked very, very well. For more information and some interesting observations on how this blurring affects the reading experience, read here and here. Some minor problems (like small logos, or the little chess piece images on the chessboards in en:chess that are only revealed one at a time) could easily be fixed by blurring only images larger than 50x50 pixels. And it did not blur the Jimbo donation banner. But it really works well, especially on Commons, to evade the apparently omnipresent bad toothbrush image ;-). It is very fast: just moving your mouse away makes the image blurred again, no need to find a close-image button and click it. --Atlasowa 15:41, 27 November 2011 (UTC)
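For readers who want to see roughly how such a gadget can work without installing the linked scripts, here is a minimal illustrative sketch in plain JavaScript. It is not the actual User:BlurredImages code: the 50x50 pixel threshold and the blur radius are assumptions taken from the comment above, it relies on the CSS "filter" property (which older browsers such as Firefox 8 may not support), and "#content" is simply assumed to be the id of the article area in the Vector skin.

// Minimal sketch of a blur-on-hover gadget (NOT the original User:BlurredImages script).
// Assumptions: CSS "filter: blur()" support, and a 50x50 px threshold to skip icons/logos.
(function () {
    var MIN_SIZE = 50;       // skip small images such as logos or chess pieces
    var BLUR = 'blur(10px)'; // assumed blur radius

    function applyBlur(img) {
        if (img.width <= MIN_SIZE || img.height <= MIN_SIZE) {
            return; // leave small decorative images untouched
        }
        img.style.filter = BLUR;
        img.style.transition = 'filter 0.2s';
        // Reveal while the pointer is over the image, re-blur when it leaves.
        img.addEventListener('mouseenter', function () { img.style.filter = 'none'; });
        img.addEventListener('mouseleave', function () { img.style.filter = BLUR; });
    }

    // Blur every image in the article content area once the page has loaded.
    window.addEventListener('load', function () {
        var images = document.querySelectorAll('#content img');
        Array.prototype.forEach.call(images, applyBlur);
    });
}());

Pasted into Special:MyPage/vector.js, something along these lines would blur large article images and reveal each one only while the pointer hovers over it.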
Personal filter lists v Personal private filter lists
My concern about maintaining public image filters on wiki, as with the collapsing idea, is that it legitimises POV comments about content on wiki. Now that would be fine if everyone shared my biases and gut reactions to particular images, but it is a big, wide, complex if not multiplex world and there will be all sorts of biases and prejudices in play. Since we don't all share the same prejudices, the only way we can collaborate is if we all agree to leave our prejudices, opinions and original research at the door. Mine's that hulking big pile next to the steaming pile of mammoth poo; I'm sure if you get close you can tell the difference.
If we allow people to decide to hide particular images, we get into discussions as to which religious groups are so important that we should hide that which offends them and which groups we can classify as unimportant and ignore, while public filter lists mean public discussions as to how much cleavage would offend one group or another. I'm OK with an opt-in image filter that is genuinely private and personal, but not one that involves on-wiki discussions as to where you draw the line between what goes into the softcore porn filter and what goes into the beachwear filter. To my mind that would institutionalise too much of a distraction on wiki; it also risks creating an expectation that the volunteer community will adopt and maintain something that isn't encyclopaedic. All filter proposals have to answer the question "how, in six years' time, will your filter identify the offensive imagery loaded that day and predict that I will find it offensive?" If the scheme relies on me loading up a new filter list every time I log in, then you need to automate that. If that filter list is publicly maintained on wiki, even in userspace, then the community needs to be aware that if our readers use it they will expect those lists to continue to be maintained long after the creators have gone. That shouldn't be a problem if the filter is maintained by its own users. But we should avoid ones maintained publicly by subsets of the community; I see a direct slippery slide from "filter lists maintained on wiki" to "Wikimedia has a backlog of x00,000 images that need to be checked to see if they need to be added to any of y filter lists, please help" to "we don't have enough volunteers to screen all new images against all the filter screens, we have to make it the responsibility of the uploaders", with, somewhere along the way, the press reporting that our systems have failed.
There is also the argument that if the filter lists are publicly available censors will use them. For both those reasons I think we should ensure that if we implement an image filter the contents are kept as private as a watchlist. WereSpielChequers 06:59, 29 November 2011 (UTC)
- The comment above makes it sound like the religious groups are the ones pushing for nudity-related filters. They aren't. They're pushing for very directly religion-related filters: They don't want pictures of their own specific, sacred objects to be displayed (to themselves, at least). Figuring out which ones to include is easy: any actual, organized religion whose prohibition on public display of religious objects can be verified in proper reliable sources can have their objects added to the religion filter. So far, I believe that a whopping three such religious groups have been identified worldwide. WhatamIdoing 21:13, 29 November 2011 (UTC)
- The personal filter lists proposal was designed to avoid exactly that – by leaving it to either the user himself, or to third parties, to compile filter lists for various needs, and to ensure that there are no on-wiki discussions or endorsements on what should belong into what list. I still think it's an elegant solution – like-minded people can get together and work on compiling and maintaining these filter lists according to their own criteria, and there is no on-wiki strife about categorisation. Instead, everyone can individually do, or subscribe to, their own thing. However, note counter-arguments from Erik on Foundation-l. --JN466 16:06, 30 November 2011 (UTC)
- Hi WhatamIdoing. My experience has been that religious groups are very much involved in codes as to how much of the human body can be visible; your experience may be otherwise, but I would suggest you avoid sounding so definitive in such a complex situation. For example, I'm in the UK, and here we've had Christian groups that have publicly complained about the showing of penises on the BBC; Islam also has rules about nudity. But if we were to disallow nudity rules and restrict this to objects that particular religious groups consider should not be displayed on the Internet, then yes, that could be a small and easily definable group suitable for the collapse system. However, I suspect it will be rather more than three religions, and if we start we won't be able to restrict it to objects. I'm no theologian, but Latter-day Saints temple garments, some high-level Scientology material, and the sacred objects of religions that keep things secret even from their own worshippers, such as the Druze, would be picked up by that rule. Of course you'd then have arguments as to whether Masonic regalia and Aboriginal items should have the same protection - in the latter case the less organised nature of Aboriginal religion has resulted in some treating that as merely a cultural preference. But the slope isn't just slippery in terms of what you count as a religion. I think it would be very difficult to accept that some religions could have their religious objects protected by Wikipedia whilst having no restrictions on images of certain specific people; the Baha'i and Islam are two religions with very strong views about images of their founders. Then of course you will get the arguments about sacred places, and there are countless thousands of them. It's also slippery in terms of the use to which images can be put: a lot of religions would be rather keen on a "no disrespectful derivatives" rule, which would not be compatible with our licensing. I think that a broad "religious" filter would be a mistake as it would inevitably involve overkill - people might respect one religion's rule, but few will respect more than one. Collapsing makes me nervous because if we do it I don't see where we can draw a clear line short of complying with the requests of every religion that contacts us. Requiring reliable sources to report that a particular religion is offended by us having images of things they consider sacred and not to be shown on the Internet is practically a guarantee that various groups will be holding demonstrations and inviting the press to the demonstration. Personal private filters should be able to cater for the complexity of this, and so far that's the only option that could do so without disrupting the pedia, or at least without impacting editors who don't object to the images in question. WereSpielChequers 11:13, 1 December 2011 (UTC)
- I don't doubt that religious groups have views on nudity, just like many non-religious groups do. For example, there was a minor pro-nudity meatpuppeting incident on en.wiki recently, organized by a nudist/naturist discussion forum. Why should a religious group's views on the appropriateness of public nudity be treated any differently than a nudist group's views on the appropriateness of public nudity?
- But the point behind the "religious" filter in every single proposal so far, from the Harris report on, was specifically to deal with images of sacred objects. LDS temple garments and images of Mohammed are two of the examples given. And the goal is "you personally don't have to look at images that your religion prohibits the public display of", full stop. There's no slippery slope behind that line: no restrictions on licensing, no need for demonstrations, no endless lists of special objects, no hiding items from anyone except the (probably very few) individual people who voluntarily turn on this filter for themselves.
- This is actually the easy case. The actual prohibitions are so rare that they're famous, and they are tightly defined. For example (so far as anyone has been able to make out), there are zero religions in the world that prohibit the display of photographs of sacred geographic features. We really don't need to worry about this case. WhatamIdoing 17:03, 1 December 2011 (UTC)
- From religious groups not having views on nudity to them having views on nudity is a big change. I suspect you'll find those tight definitions also changing as more and more religions come online and assert any rights we give to religious groups under the collapsing proposal. To me it looks like an awfully slippery slope and I see a direct line from "we don't allow photography in our temples/at that sacred place" to "that photo shouldn't exist/be displayed". WereSpielChequers 17:56, 1 December 2011 (UTC)
- No, there's no change here. The originally proposed "religion" filter has nothing at all to do with nudity. It has never been intended to cover anything except images of sacred objects. Religious groups get exactly the same input on the non-religious filters as anybody else, which is this: anybody in the world, not just any atheist in the world, is welcome to participate in discussions about nudity, sexuality, violence, etc. Nobody has ever proposed a "nudity filter for people who object specifically on religious grounds".
- If a religious group says that photos of some sacred object/temple/person shouldn't be displayed, then fine: we can put those photos in the "religion" filter, and they can voluntarily turn on that filter for their own browsing. I'm actually not aware of a single religion that prohibits photographs of their sacred places. (The official website for the LDS church, for example, contains photos of every single temple.) More importantly, the "religion" standard is pretty clear-cut: these are organized groups with specific rules that have been discussed in reliable sources. Unlike, say, whether an underwear photo is "nudity", it's not a matter of personal opinion. All you have to do to figure out whether an image falls into the "religion" filter is read the reliable sources about that religion's rules on public display of sacred objects. So let me suggest this to you: before you worry any longer about some hypothetical religious group saying that it's impermissible to display photographs of a geographic location, how about you see whether you can find a reliable source that says such a group even exists? And if you can't find any such source, then how about you quit worrying about this particular non-problem? WhatamIdoing 21:21, 2 December 2011 (UTC)
- Well, the best known is probably about some of the sites around Uluru. But it's actually quite commonplace for there to be rules about photography in sacred places. I've visited two temples in London that have a policy of not allowing photography. Then of course there are the FCO guidelines relating to Islam. But I think the main difference between us is that you are thinking of currently articulated, well-reported concerns by "organised groups with specific rules", whilst I'm thinking of the concerns that I anticipate would arise if we create a mechanism to cater for them. Perhaps my perspective on these things is overly influenced by personal experience of being threatened off an archaeological site by a change in local tribal policy about that sacred site. It's the less formally organised groups, with rules that may not currently seem very specific to us but very pronounced and clear views on sacred spaces, that concern me - especially if we create a policy that enables any tribal council to get Wikipedia to hide some images just by passing a resolution, issuing a few press releases and telling the press they've invoked the local gods to put pressure on Wikipedia. WereSpielChequers 21:54, 8 December 2011 (UTC)
- Now, how exactly does it hurt you to have such images (assuming the prohibition can be documented in a reliable source) listed in a filter that someone else either voluntarily turns on, or doesn't? Why do you care if some religious group wants a particular picture listed in the filter, so that their own adherents can avoid looking at it? WhatamIdoing 17:40, 9 December 2011 (UTC)
- If they are going into an opt in NPOV filter such as the scheme I've been advocating then obviously I have no concerns about that. My concern is if we go down the route of hiding such images for everyone in certain articles. WereSpielChequers 10:43, 10 December 2011 (UTC)
- They're not going to do that. The Board specifically instructed them to develop something that works user-by-user, not as a universal one-size-fits-none approach. Have you not bothered to read the Board resolution? Look for phrases like "We support the principle of user choice" and "we support access to information for all". There's not a single line that could be construed as "Alice" getting to decide what "Bob" gets to see. It's all about Alice deciding what Alice sees, and Bob deciding what Bob sees—and WereSpielChequers deciding what WereSpielChequers sees. WhatamIdoing 17:49, 10 December 2011 (UTC)
- Hi JN466, I can see that your personal filter lists proposal would be less contentious if the list compiling could somehow be kept off wiki. Conceptually it is simpler than Controversial content/Brainstorming/personal private filters. But whilst one proposal is easier to get off the ground, the other is easier to maintain. With a purely list-based system you need to continually import updated lists to cater for new images. With a matching-based solution, once you've expressed enough preferences for the system to match you to people with similar preferences, it can quietly go on applying to you the filter decisions those people make on new images. I think one way round that, to get the best of both systems, would be to have the option to seed personal private lists - this wouldn't require ongoing maintenance of the external filter lists, nor would it require comprehensive filter lists to be publicly available. But it would enable new users to quickly and perhaps even easily set their preferences. WereSpielChequers 10:27, 2 December 2011 (UTC)
- Yes, that might work, using a few personal filter lists as seeds for the matching system. Perhaps we manage to build the pedal-powered hot-air balloon after all. :) Cheers, --JN466 23:17, 2 December 2011 (UTC)
Clustering for search results on Commons
In the light of recent attacks from FOX against Wikipedia there were some discussions about the image search on Commons. Since I'm no friend of special tagging purely for the purpose of hiding content from the user, and since I share the viewpoints of the ALA, I came up with an approach to tackle multiple problems at once. The main problems are:
- The current search reacts to every keyword and might give surprising results. For example: a search for "cucumber" delivers not only a cucumber, but also its use as a sex toy.
- When terms collide you won't find what you want to find. For example: If you search for "monarch" you will get hundreds of images of a butterfly, but very few results concerning monarchy.
That's why I made the following proposal for the search:
- The search works as usual and grabs all results by keyword.
- It looks at the categories of the results. If it finds multiple images from different parts of the category tree, it will split the results into groups, labelling them after the lowest parent category. This means that it would form clusters, using the categories to group the results.
- Instead of showing a list of images, it would display these groups, which can be expanded.
I know that this is a very rough description. That's why I appended this illustration to show how it could look. In Mode 1 you see more or less the same as the current search, but it also shows the groups. This lets the user refine his search request based on the category tree. This would guide the user to find what he wants, and we could actually make use of the categories we are working on. This would circumvent the second problem. A user searching for "monarch" would have the option to choose from the clusters/groups "butterfly" and "politics" (there may be more, and deeper, groups in an actual implementation), which gives him a new tool to find what he is looking for.
But what about problem number one? That's why Mode 2 exists. In Mode 1 you would still have random results pop up out of nowhere until you make a decision, which wouldn't satisfy FOX and some of our users. Mode 2 changes the way the search displays the results. It groups them up first and only shows the names of these groups. Then it is up to the user to decide what he is actually looking for by opening the groups. At this point no one could complain about what he sees if he chooses to look at the category/group/cluster "sex toys" while searching for "cucumber".
Jayen466 even proposed a third mode which combines Mode 1 and Mode 2. He would allow a cluster to show a preview (as illustrated for Mode 2) if it contains most of the images.
The good things
- The search itself would be improved by taking advantage of our category tree
- No one could complain anymore that he looks at unwanted results
- No extra work for Commons or "discrimination of content" by favoring or excluding results (by us and not the user himself)
- The approach could be easily expanded/modified and used for the search on Wikipedia itself
The open problems
- It has to be implemented
That's all so far from my side. For some more background information you may visit Commons:Requests for comment/improving search - A little bit of intelligence. --Niabot (talk) 20:25, 3 March 2012 (UTC)
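To make the clustering idea above a little more concrete, here is a rough illustrative sketch (plain JavaScript, not an actual MediaWiki implementation) of the grouping step: each result is placed into one cluster per category it belongs to, Mode 2 would show only the cluster names, and Mode 1 would show them expanded. The example result data and category names are invented.

// Illustrative sketch: group keyword search results into clusters labelled by category.
// The result data below is invented; a real implementation would use the search index
// and the Commons category tree.
var results = [
    { title: 'Cucumber on a market stall.jpg', categories: ['Vegetables'] },
    { title: 'Cucumber salad.jpg',             categories: ['Vegetables', 'Salads'] },
    { title: 'Still life with cucumber.jpg',   categories: ['Art works'] },
    { title: 'Cucumber used as sex toy.jpg',   categories: ['Sex toys'] }
];

// One cluster per category; a result may appear in more than one cluster
// if it sits in several relevant categories.
function clusterByCategory(searchResults) {
    var clusters = {};
    searchResults.forEach(function (result) {
        result.categories.forEach(function (category) {
            if (!clusters[category]) {
                clusters[category] = [];
            }
            clusters[category].push(result.title);
        });
    });
    return clusters;
}

var clusters = clusterByCategory(results);
// Mode 2 would first show only the cluster names ...
console.log(Object.keys(clusters));   // e.g. ["Vegetables", "Salads", "Art works", "Sex toys"]
// ... and only expand a cluster when the user opens it.
console.log(clusters['Vegetables']);  // the images inside the chosen cluster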
- I really like this proposal, which I think would significantly enhance the usability of Commons in general, well beyond solving – almost as a side effect – the Prince Albert, toothbrush and cucumber problem. --JN466 04:11, 4 March 2012 (UTC)
- Visualizing groups sounds like a constructive and far-reaching search improvement. Should the discussion continue here or on the Commons page? –SJ talk 04:35, 4 March 2012 (UTC)
- I've dropped a link to this section to the Foundation list as well. [17] --JN466 05:07, 4 March 2012 (UTC)
- Do you envision just one secondary keyword or multiple? The diagram focuses on a primary keyword (e.g., "cucumber") and a single secondary keyword (e.g., "vegetable", "art works", "sex toy"). Is there any idea whether this is extendable to a depth of more than one? There could be multiple associations to a single keyword (a painted mural of a cucumber in a vagina would be "cucumber" + ("art works" + "sex toy")), right?
This sounds more and more like image tagging with some intersection capability, which has been discussed quite a lot in the context of Commons. Thoughts? --MZMcBride (talk) 23:16, 4 March 2012 (UTC)
- Let's say you search for "cucumber". Then you will get all results containing this keyword/tag, separated into clusters (if it makes sense, e.g. more than 20 results or so). That means that you would already find all images for "cucumber". If you then entered the cluster "art works", it would have the ability to create new clusters from this result set. If we actually have some images inside both categories, it would be very likely that the cluster "sex toy" would be available again. That way you would be able to create a "cucumber" + ("art works" + ("sex toy")) query. Additionally there could be an "and" and "or" connection for keywords. This would effectively work like tagging ("cucumber vagina" or "nail + hammer"), while the clusters help to narrow down the results. I would call this "hierarchical tagging". ;-) --Niabot (talk) 00:17, 5 March 2012 (UTC)
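A similarly rough sketch of the refinement step described in the comment above, again with invented data: opening a cluster narrows the working set to images carrying all the categories chosen so far, and the next round of clustering runs on that smaller set, which is what makes a query like "cucumber" + ("art works" + ("sex toy")) possible.

// Illustrative sketch of iterative refinement ("hierarchical tagging"): each chosen
// category narrows the result set, and the remaining categories form the next clusters.
// Data and category names are invented for illustration.
var results = [
    { title: 'Cucumber on a market stall.jpg',   categories: ['Vegetables'] },
    { title: 'Still life with cucumber.jpg',     categories: ['Art works'] },
    { title: 'Mural of cucumber as sex toy.jpg', categories: ['Art works', 'Sex toys'] }
];

// Keep only results that belong to every category chosen so far.
function refine(searchResults, chosenCategories) {
    return searchResults.filter(function (result) {
        return chosenCategories.every(function (category) {
            return result.categories.indexOf(category) !== -1;
        });
    });
}

// "cucumber" + ("art works") ...
var step1 = refine(results, ['Art works']);
// ... + ("sex toy"): refine the already narrowed set once more.
var step2 = refine(step1, ['Art works', 'Sex toys']);
console.log(step2.map(function (r) { return r.title; })); // ["Mural of cucumber as sex toy.jpg"]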
Now on Bugzilla, as Bugzilla:35701 - Clustering for image searches. Rd232 (talk) 17:30, 4 April 2012 (UTC)
- This seems like it would be a big improvement, although I am not techy enough to know anything about implementation.
I think it might be improved further if data linking categories was incorporated so that the search knew what the primary category for a given search term is. If it knew that "cucumber" related most strongly to "vegetable", it could default to showing the files categorised under vegetable, then give options to see the results in the other categories. "cucumber sex toy" could then default to the sex toy category. FormerIP (talk) 22:49, 14 April 2012 (UTC)
- We won't need to set such defaults/hints manually. If there is only one major category/group/cluster that contains a certain threshold (for example 90%) of the results, then it could be used in the way you describe. To improve such a search you would have to adjust the ranking parameters for clustering, which are themselves independent of the search terms. That is the basic idea behind clustering: let the algorithm decide which groups can be built and how important they are. --Niabot (talk) 21:53, 15 April 2012 (UTC)
- The sensible primary category might not always be the one with the most images though. I guess it wouldn't be the end of the world if "monarch" didn't take you straight to kings and queens, but how would an automated system handle "human male"? FormerIP (talk) 02:14, 16 April 2012 (UTC)
- At first it would look for all images that contain both words inside the description (not the title). This is basically what it currently does. Then it would start to create clusters depending on the categories the images belong to. Its goal would be to create up to 10 clusters, depending on the distance metric. Distance is measured as image-to-image distance. A very simple approach would be: "find the shortest path through the category tree from one image to the other and count the steps needed". Based upon that data you can create many small clusters (the smallest would be "images inside the same category"). Then you start to combine clusters which are close (another distance rating) to each other, step by step, until there are fewer than 10 clusters left. That way you don't exclude any outliers, but you have the option to exclude them with a simple click: all it takes is to go back to the point before that cluster was merged. So in the case of "human male" you would most likely have the category "Male" or multiple subcategories as the result, together with categories which aren't directly related but contain the words. If you then clicked on "male" or "male anatomy" you would get their respective clusters, all together with the option (your choice) to view a preview of the category content. For the user it is a divide-and-conquer approach. You get everything, but only grouped and balanced as well as our categories allow. Thoughtful tuning of the distance measurement can help to avoid worst-case scenarios for a badly arranged category tree. --Niabot (talk) 16:26, 16 April 2012 (UTC)
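For illustration only, here is a small sketch of the distance idea in the comment above: the distance between two categories is the number of steps on the shortest path through the category tree, and single-category clusters (i.e. the groups of images sharing a category) are merged, closest pairs first, until at most a target number remain. It assumes a simple tree with one parent per category, whereas the real Commons category graph allows multiple parents, so a production version would need a proper graph search; the miniature tree below is invented.

// Illustrative sketch: category-tree distance plus greedy agglomerative merging.
// The tiny category tree is invented.
var parentOf = {            // child category -> parent category
    'Male humans': 'Humans',
    'Male anatomy': 'Male humans',
    'Female humans': 'Humans',
    'Statues of men': 'Male humans'
};

// Shortest path length between two categories, walking the tree via parent links.
function categoryDistance(a, b) {
    function ancestors(cat) {
        var chain = {}, depth = 0;
        while (cat !== undefined) { chain[cat] = depth; depth += 1; cat = parentOf[cat]; }
        return chain;
    }
    var fromA = ancestors(a), fromB = ancestors(b), best = Infinity;
    Object.keys(fromA).forEach(function (cat) {
        if (cat in fromB) { best = Math.min(best, fromA[cat] + fromB[cat]); }
    });
    return best; // Infinity if the categories share no ancestor
}

// Start with one cluster per category, then merge the two closest clusters
// (single linkage) until at most maxClusters are left.
function agglomerate(categories, maxClusters) {
    var clusters = categories.map(function (c) { return [c]; });
    function linkage(c1, c2) {
        var d = Infinity;
        c1.forEach(function (a) { c2.forEach(function (b) { d = Math.min(d, categoryDistance(a, b)); }); });
        return d;
    }
    while (clusters.length > maxClusters) {
        var bi = 0, bj = 1, bd = Infinity;
        for (var i = 0; i < clusters.length; i++) {
            for (var j = i + 1; j < clusters.length; j++) {
                var d = linkage(clusters[i], clusters[j]);
                if (d < bd) { bd = d; bi = i; bj = j; }
            }
        }
        clusters[bi] = clusters[bi].concat(clusters[bj]);
        clusters.splice(bj, 1);
    }
    return clusters;
}

console.log(agglomerate(['Male anatomy', 'Statues of men', 'Female humans'], 2));
// e.g. [["Male anatomy", "Statues of men"], ["Female humans"]]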
- FWIW, I think this is an excellent strategy. Everyone who agrees should go vote on bugzilla:35701 (even though the devs say they disregard votes). John Vandenberg (talk) 04:59, 16 May 2014 (UTC)
Flickr system
A lot of Wikimedia content is transferred from Flickr. Consider using Flickr-type content flagging, search function, and account types (with children's accounts unable to access restricted material). --JN466 06:52, 4 June 2012 (UTC)
- "It is these same settings which are used to filter content for users in Singapore, Hong Kong, India, and South Korea, where users are only able to see photos deemed ‘safe’ by Flickr staff. German users may only view ‘safe’ and ‘moderate’ photos." This is from a report by the OpenNet Initiative in 2010 which doesn't yet include mobile internet providers that may be "filtered" too. And how does Flickr define content that is "not safe"? By (1) forcing their users to apply very vague criteria: "not sure whether your content is suitable for a global, public audience" (moderate) or "content you probably wouldn't show to your mum" (restricted) and by (2) "reviewing" if those users are "good self-moderators" whose content is deemed "safe".
- Jayen466, you are advocating here for a model that is really not an "optional" filter, but a censorship and self-censorship tool for vague "controversial" content in several, entire countries. IMHO, this is unacceptable. --Atlasowa (talk) 20:26, 6 June 2012 (UTC)
- Yes, in a way it's funny. German users aren't allowed to view Flickr's restricted section, because of Germany's strict Age Verification System requirements. And the envisaged solution seems to be to transfer Flickr's restricted section to Wikimedia's US servers, so German users get to see it there.
- I think I prefer the personal filter lists solution above. But I would prefer the Flickr-type solution to having nothing at all. --JN466 02:06, 7 June 2012 (UTC)
- P.S.: These "same settings" referred to in your quotation are applied by Flickr themselves, not by ISPs or government censors. It's entirely a corporate decision by Flickr, and mutatis mutandis it would be a corporate decision by Wikimedia whether or not to make particular categories available in a country. Right? --JN466 02:12, 7 June 2012 (UTC)
- Who, then, is the person who defines what is controversial and what isn't? It is definitely not the user or the child's parents. Flickr is a company that does not have the same goals as our projects, and it dictates its own rules to maximise profit. That is the opposite of what the Wikimedia projects are aiming for. Comparing them (Flickr, Google, MS, ...) with Wikipedia or Commons is like comparing capitalism and communism and concluding that both have the same goals and approach. --Niabot (talk) 11:04, 7 June 2012 (UTC)
- Well, in the Personal Filter Lists proposal above, it is the individual user who gets to define what they want to filter. Are the users evil capitalists too? --JN466 12:08, 7 June 2012 (UTC)
- Publicly shared filter lists already exist and/or could already be created, since "Childcare Software" already does that (I quoted it for a reason). But it doesn't make sense to implement such a system on our own and to create our own lists. It has no ability to protect children, since there is http://other.websi.te and a child could just create an account to circumvent the system. Now look what is left. You might want to list the true benefits, and I will list the issues that come along if you want to realise those benefits. --Niabot (talk) 12:29, 7 June 2012 (UTC)
- The same is true about Flickr. In practice, few users do circumvent the system (it takes knowledge, initiative and a preparedness to lie that few younger children have), and casual users searching Flickr without logging in do not see any restricted media. It's a good system. Not good enough for German youth protection law, to be sure, but it would be a step forward from where Wikimedia is now. --JN466 14:20, 10 June 2012 (UTC)
- Flickr is a commercial site that can choose a particular population segment to target, and its filter can then be tuned for that market. We have a global mission and a filter that imposed one culture on the world would be incompatible with that ethos. You may think that a simple limited amount of censorship would be easier to sell to the community than a more complex system, but please consider the opposite argument, a more complex filter that is compatible with our ethos may be easier to sell to the community than a simple system which is incompatible with our ethos. There is also the issue of how you identify the content that you are going to filter. Flickr does this by putting the onus on the uploader to flag any content that they consider should be filtered. That is unlikely to be acceptable to the community, not least because for some uploaders you would be telling them to filter according to the mores of a different culture. Also it doesn't address the question of the > 13 million images that we already have. WereSpielChequers (talk) 13:07, 11 June 2012 (UTC)
- Flickr is crowdsourced just like Wikimedia. Its graded scale – safe, moderate, restricted – allows some flexibility. Someone in one community might decline seeing moderate and/or restricted material, another person might choose to see everything. The US in particular are culturally highly diverse. --JN466 13:51, 11 June 2012 (UTC)
- Yes, it is partially crowdsourced, but its potential crowd is not the whole world, hence a simplistic, almost binary filter may work for Flickr but would not work for us. Safe, moderated and restricted means just one axis of concern: an image of a woman in a low-cut dress that I deem safe might be considered moderate by some and restricted by the mores of someone in Medina. Another image that I consider should be moderated due to violence might be considered safe in Medina. If we were to try and implement a simplistic scheme that only allowed for safe, moderated and restricted, then my concern is that the idea of any sort of image filter would be off the agenda for a long time. To have a realistic chance of getting the community to agree to an image filter it is essential that we only propose options that are compatible with the ethos of our site. Getting the community to agree to an image filter will not be particularly easy, but if we include an option that is incompatible with NPOV then we are wasting our time even trying. WereSpielChequers (talk) 15:17, 11 June 2012 (UTC)
"Report this picture"
Can I propose a filter idea? Every image has a "report this picture" button that takes the reader to a list of issues to select from:
- nudity
- sex
- violence
- religious offense
When the user clicks "save" on their offensive-image report, the image is assigned to their IP's vault (a personalised list of unwanted images).
The number of such offense reports an image gets, compared to the number of views, determines whether the image is classed by the project as unsafe. That ratio is set globally. When a reader selects "safe search", all images classed as unsafe and all images in their IP's vault become invisible, replaced by the alt text or caption and a "show image" button.
Admins may protect an image. That is, if we notice that a benign image of Justin Bieber is mysteriously ranking as unsafe, we can step in.
Please forgive me if this has already been discussed. --Anthonyhcole (talk) 15:48, 9 June 2012 (UTC) Updated 14:47, 1 July 2012 (UTC)
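To show what this proposal implies in terms of bookkeeping, here is a minimal illustrative sketch: each image accumulates views and reports, a globally set ratio decides whether it is classed as unsafe, and each reader (or IP) keeps a personal vault of images they have reported. The threshold, the data structures and the admin "protected" list are all invented; nothing here is an existing MediaWiki feature.

// Illustrative sketch of the "report this picture" bookkeeping. All thresholds,
// counters and the vault structure are invented; nothing here exists in MediaWiki.
var UNSAFE_RATIO = 0.01;   // assumed global threshold: 1 report per 100 views
var ISSUES = ['nudity', 'sex', 'violence', 'religious offense'];

var images = {};           // image name -> { views, reports }
var vaults = {};           // reader/IP -> personal list of hidden images

function recordView(image) {
    images[image] = images[image] || { views: 0, reports: 0 };
    images[image].views += 1;
}

function reportImage(reader, image, issue) {
    if (ISSUES.indexOf(issue) === -1) { return; }      // only the four listed issues
    images[image] = images[image] || { views: 0, reports: 0 };
    images[image].reports += 1;
    vaults[reader] = vaults[reader] || [];
    vaults[reader].push(image);                        // always hidden for the reporter
}

function isUnsafe(image) {
    var data = images[image];
    return !!data && data.views > 0 && data.reports / data.views >= UNSAFE_RATIO;
}

// "Safe search" hides an image if it is globally unsafe or in the reader's own vault;
// an admin-protected image is never hidden, and a reset simply empties the vault.
function isHidden(reader, image, protectedImages) {
    if (protectedImages.indexOf(image) !== -1) { return false; }
    if (isUnsafe(image)) { return true; }
    return (vaults[reader] || []).indexOf(image) !== -1;
}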
- There are several drawbacks to this particular proposal:
- Everyone gets a "report this picture" button. Some people can live with or even strongly support an opt-in system for others, and some can live with an opt-out system, but this proposal puts everyone into the same system. I think we can get consensus for an opt-in system if it is carefully designed. You have a second stage of opting in to "safe search", but it seems that everyone will have the "report this picture" button.
- Any system we introduce has to consider how easily gamed it would be. Obviously if a certain site were to direct its members to object to all Commons images of rabbits or that lasciviously portray rainbows then we could with some effort block the accounts involved and extract those images from the filter. But if they were more subtle and claimed to be vertigo sufferers? We have an awful lot of images taken from the tops of tall buildings. Now you could just say that particular reason for a filter isn't on your list, but nudity is, and nudity for some people means a female face or a wisp of hair. I would hope that an easily gamed system that relied on large numbers of admins making consistent and sensible judgements would be a non-starter. Aside from whether we want to make that work for ourselves, with RFA broken and the number of EN Wiki admins in decline there would be a serious difficulty in manning this. If you made it generic and relied on Commons admins to enforce it you face the problem that some of them are opposed to this in principle. In my view any system we introduce needs to generate minimal work for the community, and also be hard to game.
- If we include a list of issues to select from, we have to decide which sorts of things can be filtered out and which can't. That then raises the issue of why some concerns should be catered for and not others, and how you decide to filter out things that cause religious offense but not those that upset people with vertigo or arachnophobia. That would not be a problem if we were a commercial site focussed on a market such as "mainstream native English speaking", but it is a problem for us.
- You propose a list of things that might offend people, but then a single on/off status of filtered or not filtered. We are a global site with a mission to offer information to the whole world. Commercial sites that cater for mainstream western culture can try a simple binary filter, but if we introduce a filter it needs to cater for different cultures. Some people might want to filter violence and nudity, others nudity and things that offend their religion.
- definitions of nudity vary by culture, and any system we introduce has to cater for all cultures. But one binary choice re nudity means that we have to decide where we draw the line over nudity. I did toy for a while with the idea of giving people a pair of mannikin images and a sliding bar so that they could decide their personal definition of nudity, but even that becomes unworkable when you consider some of the images that we have and the cultural mores involved. For example, I consider that a photograph of a marble statue and a photograph of a drunken streaker are very different things even if the same parts of the anatomy are shown, others would disagree. In order to get consensus I believe that any system we introduce has to be sufficiently tunable as to work for editors in Madrid, Mississippi and Medina.
- An IP level filter would be a personal filter for some of us, but in many countries the Internet culture is still largely in Internet cafes. I think we can get consensus for a personal filter that is as private and personal as your watchlist, but if you go to IP level you are enabling some people to censor other people's viewing. For many people here that is problematic.
- Publicly available ratings as to how offensive an image is have various drawbacks, especially in a US election year. I for one do not want media stories as to which candidate's visage is deemed most offensive "by Wikipedia". A personal private filter gets round that by having the filter preferences as personal as a watchlist and almost as private. You don't need to let anyone else know what is in your filter, and the system could have you matched with editors whom you detest but whose nudity tolerance is at the same level as yours.
- The common factor between our proposals is that they both start with the idea of allowing editors to say "don't show me that image again". But from thence they go very different ways; sorry if my critique sounds harsh, and I'd welcome your frank review of my own proposal. WereSpielChequers (talk) 12:54, 11 June 2012 (UTC)
- I'm addressing the nature of the filter, not whether it's opt-in or -out.
- There are four categories: violence, nudity, sex and religious. Vertigo isn't in there. If an image from a skyscraper busts the ratio for sex complaints, we protect the image. You don't know whether gaming is going to be an issue; only a trial would show if that's a problem.
- I chose the four categories from the working group report behind the foundation's resolution on controversial images. Those are the classes of controversial image they identified as needing a filter. Vertigo and arachnophobia weren't in the report.
- You're right. The filter should allow the reader to cherry pick from the four categories when they opt in or out.
- The global filter doesn't have to be something that satisfies a mullah in Medina and a Madrid lingerie model. This filter will arrive at a global mean, and individuals can adjust their personal settings. If the crowd relegates a Michelangelo nude to unsafe, those using "safe search nude" can just click "display this image".
- Per WhatamIdoing, below.
- Per WhatamIdoing, below; and the IP's vault needn't be visible to anyone but the IP, if that; and I'm not proposing publishing the details of the global vault; and images of Mitt Romney or Obama won't fit any of the four categories.
- --Anthonyhcole (talk) 08:07, 14 June 2012 (UTC)
- Thanks for agreeing to allow users to treat those four categories separately. It does complicate the user interface a bit, but the alternative is over simplistic.
- The controversial content report which came up with four categories was a study commissioned by the WMF, but as far as I'm aware it wasn't a consensus decision of the community or a comprehensive survey of the potential users of a filter. That said it isn't a total deal breaker for me, though it may well be for others. I prefer a system where anyone can privately filter their personal viewing for any reason that causes them to want to filter. Perhaps it would be best to think of the four as a minimum requirement from the Board, and a more flexible system which allowed for Arachnophobia and others as being preferable and less contentious (in all these debates I've seen several people argue against setting those four criteria and no others, but I've only once seen someone say it is good to show spiders to arachnophobes).
- Where it will be harder to get agreement is that in your design the four categories (violence, nudity, sex and religious) are all binary choices. I see three of them as continuums where different cultures will want to set different thresholds. Taking nudity as an example, definitions of nudity vary dramatically around the world. If we were designing a filter for a website targeted at the mainstream of the English-speaking world, then we might define nudity for females as anything less than an opaque bikini. But our aim is a culturally neutral global website, and to me that means that if we are going to introduce an image filter, it has to work both for people whose definition of nudity requires somewhat less coverage than a bikini and for those whose definition requires dramatically more. With nudity there is also the issue of acceptable levels of nudity... Religious offense is not a continuum, but rather several unconnected things: images deemed offensive by Islam might be inoffensive to Latter-day Saints and vice versa. In a personal preference system those are not problems, but in a binary system they are. The possible solutions to that range from picking one culture's standard on nudity and setting it at a level that some will consider excessive and others insufficient, to, for religions, accepting that if anyone says an image offends their religion then it goes in the filter. But in any event a binary filter system is guaranteed to provoke endless discussions as to whether particular images should or should not be in particular filters. It is also guaranteed to have both overkill and underkill. Also, under your proposal it will be publicly known which images are in which filters; this could have a chilling effect on editorial decisions, with people potentially using cropped photos of statues to illustrate an article on a statue so as to avoid using images that are in the filter. Another objection that some have cited to public filters is that they would make it easy for other people and organisations to censor Wikimedia for people who don't want it censored. WereSpielChequers (talk) 17:04, 14 June 2012 (UTC)
- Sorry, I'm being unclear; I'm actually referring to the working group of board members appointed by the Board to evaluate the Harrises' report. It's their recommendations I was referring to. The Harrises excluded religiously offensive material from their proposal; the working group deliberately earmarked religiously offensive material for filtration. So it's that working group report, and the Foundation resolution which endorsed its recommendations, that I was drawing on for the four categories.
- I'm not convinced we need more filter categories for Wikipedia. We're doing the sensible thing on en:Arachnophobia without any fancy automated filtering. We won't be abandoning sensible and sensitive image curation there once we adopt a filter. But finer filtering may make sense at Commons.
- I can't follow your last paragraph. It's late so I'll look with clearer eyes in the morning. --Anthonyhcole (talk) 19:29, 14 June 2012 (UTC)
- I agree with this critique. In (literally) under a second my thoughts went straight to reporting Rick Santorum's picture as offensive. Note that this is not merely some abuse of the system - the image actually is displeasing to look upon, so why shouldn't I call it that? Philosophically, your safety valve of having the admin step in means that you're dismissing that notion of offensive, while saying that, say, someone who objects to John Collier's painting of Lilith by comparison has a rational reason. So rejecting my reaction is to say that you're really going by some unspecified set of rules of your own after all; but if you accept it, then obviously, we'll have dueling political parties classifying each other's candidates. Wnt (talk) 18:25, 11 June 2012 (UTC)
- Santorum's picture is neither nude, sexual, violent nor religiously offensive. I confess I didn't understand the rest of your response. --Anthonyhcole (talk) 08:07, 14 June 2012 (UTC)
- Well, the biggest objection to the image filter is that people don't want you deciding for them what is religiously offensive, etc., which is the point I meant to make. Wnt (talk) 17:46, 22 August 2012 (UTC)
- We have to provide something at the "IP level". One of the minimum requirements set forth by the Board resolution is that unregistered users be able to use it. The only thing needed to keep user #1 from accidentally "censoring" user #2's experience at the Internet café is a reset button. This isn't difficult. WhatamIdoing (talk) 16:42, 13 June 2012 (UTC)
- We don't have to provide something at IP level. The Board resolution is something that staff need to consider, but it hasn't been entirely acceptable to the community. If the Board were to go forward with the attitude that that resolution has to be implemented regardless of community feedback, then I for one would back off from the image filter as being overly contentious and unlikely to happen. The community has made it pretty clear that it doesn't want a system that lets different people in an Internet café censor each other's viewing. A reset button that somehow reset things for each different human using the same IP would be cool, but how would such a reset button know that there was now a different set of fingers on the keyboard? By contrast, accounts are free, easy to set up, and each relates to one unique individual. It would be OK to do something at the IP level if it were possible to distinguish different humans at the same IP; currently the only way we have to do that is by encouraging those people to create accounts. WereSpielChequers (talk) 06:33, 14 June 2012 (UTC)
- An internet cafe user selecting "safe search nude" won't be bothered by a blanked Michelangelo image, especially if they can either click reset and erase the ip's vault or, more likely, just click "show this image". --Anthonyhcole (talk) 08:07, 14 June 2012 (UTC)
- We have to provide something at the "IP level". One of the minimum requirements set forth by the Board resolution is that unregistered users be able to use it. The only thing needed to keep user #1 from accidentally "censoring" user #2's experience at the Internet café is a reset button. This isn't difficult. WhatamIdoing (talk) 16:42, 13 June 2012 (UTC)
- WSC, I hope this doesn't surprise you, but the WMF employees have to do what their employer tells them to do, not what (one small fraction of) "the community" (as if there were only one) says.
- But I think your objection is a bit overblown: The reset button will know that it's time to reset the filters when someone clicks the reset button. The fact that images are being filtered is supposed to be visible, and the means for changing the filter is supposed to be obvious. It doesn't actually matter if the human is the same person or a different one. The Board requires that it be trivially reversible for anybody who is sitting at the computer. WhatamIdoing (talk) 16:40, 14 June 2012 (UTC)
- I think the Board and the WMF are looking for a workable compromise. If you read Sue Gardner's comments on my proposal, she certainly wasn't saying that there was no room for compromise. In fact she seemed to accept that if a system came in it would not be category based, though she didn't comment on the IP issue. As for IPs and image filters, you seem to be thinking in terms of a shared computer with different browsers sequentially on it. I'm thinking in terms of Internet cafes and other institutions where there could be hundreds of users concurrently on the same IP - in one case I think there is an entire Gulf state behind one IP. So yes, one user could refresh the filter - but that might then mean that the person who has just carefully set it would find it unset. With three or four people browsing Wikipedia at the same time and resetting the filter for each other, the results could be problematic. Especially if the person who'd set the filter was doing so before viewing an article that contained images that were illegal to view where they were. If it was set by the network then you'd have the problems of people censoring each other. If you don't mean IP but instead it would be each PC within a network, then it starts to slip beyond my technological grasp. WereSpielChequers (talk) 17:27, 14 June 2012 (UTC)
- Me too. I was thinking an IP address identifies each individual device, but I know nothing at all about networking, so perhaps I'm mistaken. Just to be clear, I'm not really pitching this to the community. I'm putting it before the community for your critique, but I'm pitching it to the Foundation. We can make our recommendations but it's their decision. As WhatamIdoing points out, it is their duty, and they have the power, to implement the controversial images resolution. --Anthonyhcole (talk) 18:50, 14 June 2012 (UTC)
- The IP address is the connection to the Internet. I have WiFi at home and have just checked: both my notebook and my home PC have the same IP address. OK, they are both in the use of one person, but if I have a guest staying they may use my WiFi. For businesses and other institutions it can be much greater than two machines - that's why we usually only block schools for 31 hours, as you are often blocking a whole school. There are some institutions with very large numbers of people sharing one IP address; that's why IP address based filtering is so problematic, both for people who suddenly find themselves opted into a filter and, perhaps worse, anyone who relies on that and finds that someone has opted them out. There is also a privacy issue, as the filter options you set for the IP will presumably be visible to other users of that IP address. Much simpler and cleaner in my view would be to make this a free service that requires registration.
- As for pitching this to the Foundation rather than to the community, I'd suggest you read Sue Gardner's comments on my proposal. As of last October the Foundation was looking for a workable compromise, and it is in that spirit that I'm trying to work up a scheme that would enable people to filter whatever they want to filter from their own viewing but with minimal impact on others. Given those comments last year I would be greatly surprised if the Foundation was to implement a scheme that hadn't been written with a view to encompassing as many of the objections as possible. WereSpielChequers (talk) 10:51, 15 June 2012 (UTC)
- If you want to make it device-specific rather than IP-specific, then you could implement it with cookies. Some people will prefer one, and others the other. People who want all of their home devices to automagically have the same settings will prefer an IP-based solution. People who want different settings depending on whether they're using Firefox or Safari, or whose IP changes every day, will prefer a cookie-based solution. It's also possible to do both, either as a both-necessary or as an either-sufficient solution. WhatamIdoing (talk) 17:16, 15 June 2012 (UTC)
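For illustration only, here is a minimal sketch (in Python, with purely hypothetical names; nothing here is existing MediaWiki code) of the two ways the cookie-based and IP-based stores mentioned above could be combined:

def effective_hidden_categories(cookie_settings, ip_settings, mode="either"):
    """Combine device-level (cookie) and connection-level (IP) preferences.

    mode="either": a category is hidden if either store hides it (either-sufficient).
    mode="both":   a category is hidden only if both stores agree (both-necessary).
    """
    cookie_settings = cookie_settings or set()
    ip_settings = ip_settings or set()
    if mode == "either":
        return cookie_settings | ip_settings
    return cookie_settings & ip_settings

# Example: the browser cookie hides "violence", the shared IP hides "nudity".
print(effective_hidden_categories({"violence"}, {"nudity"}, mode="either"))  # both categories hidden
print(effective_hidden_categories({"violence"}, {"nudity"}, mode="both"))    # nothing hidden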
- I'm happy to make it account-specific; we know how to make that work. IP would work for some people, but it would not work for others, and crucially there is no way to implement an IP version that differentiates between shared and non-shared IPs. If it was possible I can't see any serious objection to implementing an IP solution for those instances where one IP equates to one person, but I'm pretty sure there is no way to do that. As I've explained above there are fundamental problems with this if it was deployed at IP level. Cookie-based solutions would be a whole different ball game, and would partly depend on whether you were talking session cookies or not. I think the idea was briefly toyed with on the Foundation mailing list and quickly rejected last year. Before you consider reviving the cookie idea I'd suggest reading those archives. For example, I'm not sure how common it is for Internet cafes to clear session cookies between clients. WereSpielChequers (talk) 19:27, 15 June 2012 (UTC)
- I've read through your proposal, WereSpielChequers, and it seems a lot more complicated for the reader than mine, and only useful to logged-in readers. I don't know the stats but would guess 95% of readers don't have, and won't bother to create, an account. I like mine better :) --Anthonyhcole (talk) 12:17, 17 June 2012 (UTC)
- If you use the slightly higher bar of ever doing anything with the account after creating it (like, say, changing the prefs or even logging in after the initial default expires), then I believe that 99% of the users are just "readers". I assume that's why the Board insists that whatever solution is implemented be one that could be used without logging into an account. WhatamIdoing (talk) 03:53, 18 June 2012 (UTC)
- It would be conceivable, however, for someone to enter an account name in some sort of "Use display settings for..." box. That way, the account holder could have personal, independent choice of what to show and what to hide, and the reader could simply follow that. Someone might set up some accounts with certain short, memorable names solely for this purpose, to make it easier for them. Wnt (talk) 00:04, 1 July 2012 (UTC)
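As a rough sketch of the "Use display settings for..." idea above (hypothetical data and names only), assuming that only filters whose owners have opted in to sharing would be visible to such a lookup:

# Filters that account holders have explicitly chosen to share (assumed store).
shared_filters = {
    "FamilyFriendlyDefaults": {"nudity", "sexual acts", "violence"},
    "NoViolence": {"violence"},
}

def settings_for(account_name):
    # Return the named account's shared filter; accounts that have not opted
    # in to sharing simply yield no filtering.
    return shared_filters.get(account_name, set())

print(settings_for("NoViolence"))  # {'violence'}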
- Do you really want any person on the planet to be able to find out what your display settings are, simply by saying "Use display settings for Wnt"? WhatamIdoing (talk) 23:19, 2 July 2012 (UTC)
- @WhatamIdoing. Currently readers have few reasons to create an account unless they wish to edit. If we add the incentive that an image filter is available if you create an account then we can expect more people to create accounts. As long as creating an account is free and requires no personal information then I consider it a perfectly reasonable offer. And it avoids all those nasty anomalies that make all the IP based proposals so problematic. WereSpielChequers (talk) 00:24, 22 July 2012 (UTC)
- I do not see any connection between this most recent comment and my concern about your penultimate proposal ("It would be conceivable, however, for someone to enter an account name in some sort of "Use display settings for..." box", to which my question is "Do you really want any person on the planet to be able to find out what your display settings are, simply by saying "Use display settings for Wnt"?").
- So, again, do you really want any person on the planet to be able to find out what your display settings are, simply by saying "Use display settings for Wnt"? Or do we agree that letting any person on the planet see your filter settings would be a significant privacy invasion? WhatamIdoing (talk) 18:57, 24 July 2012 (UTC)
- I think that's a comment for Wnt rather than me, but the indentation implies that you are responding to me. My initial proposal was to have personal filters private; however, I can see the advantages of having bespoke filters available, so that people could opt in to using filters that other editors have chosen to share. Of course, just because you've opted in to using a shared filter doesn't mean you need to know what is in that filter, and with tens of millions of images involved it wouldn't be easy to work that out by seeing which files it hid from you. WereSpielChequers (talk) 22:14, 27 July 2012 (UTC)
- Sorry, I forgot about this page. At w:User:Wnt/Personal image blocking where I laid out my scheme I mentioned the usefulness of a privacy setting, but as that would require dev-assistance I have little hope that it would be implemented. Otherwise I think it is possible to implement an image filter with a mere user script. Wnt (talk) 17:46, 22 August 2012 (UTC)
Cultural dependency
One of the problems pointed out by WereSpielChequers in a discussion of my (lengthy) personal essay on the topic is the cultural dependency of all norms defining what is and what isn't NSFW. If the categories system is to be used, it can be applied only to cases that are cross-culturally NSFW (this could be a good solution, since it would be minimalistic). Otherwise, we could use a system of tagging/labeling following the interwiki code logic (marking images as NSFW for specific projects, i.e. en-NSFW). Pundit (talk) 18:26, 21 July 2012 (UTC)
- The better proposals (measured as receiving fewer complaints) don't say "Not Safe For Work", but instead say things like "violent image", "sexual acts", etc. Then you can decide, for example, that you like seeing photographs of sexual acts, but that you don't like seeing photographs of fistfights. There is less controversy with identifying images this way, too, because people can generally agree that an image of someone gouging out someone else's eye is "violent" even if they disagree over whether that is an image that they want displayed on their own computers by default/on first page loading. Also, these could be useful labels, e.g., if it's your job to find images of sexual acts for an educational presentation or to find violent images for a psychology study. Labeling something as "NSFW" isn't really useful to anyone. WhatamIdoing (talk) 19:05, 24 July 2012 (UTC)
- I agree. More specific advisory is better. I'm not a fan of "NSFW" in its wording; if any general label is to be used, I'd rather go for "advisory", or something more neutral indeed. Pundit (talk) 21:32, 24 July 2012 (UTC)
- One problem with a minimalistic approach is that not all offensive imagery is cross-cultural; much will be very specific to particular cultures. Where I come from most people really don't like to see images of roast dog; other cultures differ. If you limit the filter just to things that are universally offensive you will be opposed by those who want to filter things that most find inoffensive and those like me who support them, but also by those who see a sub-optimal system as the thin end of the wedge - start by filtering the things that almost everyone can agree to filter and then steadily extend it. I think it would be better and of course easier to start with a system that allows anyone to filter out the things that they object to, without imposing any unwelcome filters on anyone. WereSpielChequers (talk) 21:27, 27 July 2012 (UTC)
Instead of global filters, local labels
I would like to discuss with you a proposal to introduce "local" (project-dependent) labels instead of global filters. The proposal relies on the assumption that any global filtering is fiercely opposed by some projects and users. Therefore I believe it would be good if, instead of filtering tools, we just offered a possibility to label images as requiring advisory, and this possibility would be project-dependent (e.g. en-wiki could decide differently than de-wiki). The way I see this working would be not by using categories, but through separate tags, similar to interwiki links. Each image could be tagged for "advisory" for the purposes of a given project, by adding a tag (e.g. nsfw:en or similar). "Advisory" would only mean that, according to the best knowledge of the tagging user, the image may be disturbing for some viewers, nothing more (of course, separate kinds of advisories could be developed if needed, as discussed here before). Anyone could tag an image as requiring advisory, but such tagging could be removed by any user, by starting a discussion in a special namespace in the talk pages of the article (e.g. /talk/nsfw/en/). These discussions would be conducted in the language of the discussed project, and would only affect tagging for this particular project itself. Final decisions would be based on consensus (and a lack of consensus would mean that the tag cannot be applied). The benefits of such a solution:
- no global filtering would be pushed down the projects' throats (the issue has already caused a stir and threats of forking),
- all projects would have the right to decide how they want to use this labeling (some would not use it at all, some would allow filtering, some would introduce textual warnings, etc.). With our size, it would be difficult to agree on a common solution for a filter, but we don't have to have a one-size-fits-all approach. Different cultures have different needs.
- Individual logged-in users would also be able to individually use those tags for filtering (for example, someone could decide that they don't want to see images tagged for advisory by en-wiki and fr-wiki users). In this sense, this would be a step complementary to Jimbo's proposal of a personal image filter, but it would allow for more flexibility for the projects (some projects may decide that their vast majority of users, who do not log in, may want warnings/filters/etc.).
As long as we do not force the projects to introduce filters, this will be widening the scope of their freedom. Also, this solution helps in fighting preventive censorship (on some projects images are not used because they may be too controversial, but they would be if there was a satisfactory warning label). In fact, while I perfectly understand why some projects (notably, de-wiki and fr-wiki) strongly protest against global filtering solutions, I can't see a reason why they would forbid some form of labeling to other projects, if these other projects want it themselves. Perceptions of censorship, free speech, balance with the well-being of the viewer etc. are also culturally dependent, and each project should be able to decide about its rules on its own. Comments? Pundit (talk) 23:36, 21 July 2012 (UTC)
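To make the tagging scheme above a little more concrete, here is a small illustrative sketch (Python; the tag format "nsfw:<project>", the file name and the helper are assumptions, not an existing MediaWiki feature):

# Advisory tags stored with the image description on Commons, one entry per
# project that has decided to flag the image (illustrative data only).
image_labels = {
    "File:Example_controversial.jpg": {"nsfw:en", "nsfw:fr"},
}

def is_flagged(filename, project_code):
    """True if the image carries an advisory tag for this particular project."""
    return f"nsfw:{project_code}" in image_labels.get(filename, set())

# en-wiki might collapse or blur the image; de-wiki, having added no tag,
# would render it normally.
print(is_flagged("File:Example_controversial.jpg", "en"))  # True
print(is_flagged("File:Example_controversial.jpg", "de"))  # False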
- I think that the variation in what people want is probably not an exact fit to the language-based projects. There will be some Christian Arabs who are not averse to seeing photographs of Mohammed and some German speakers who are more prudish than the consensus seems to be on the project there. So whilst this is clearly less centralist than Jimmy's proposal and the withdrawn WMF one, I see it still resulting in a lot of timewasting discussions as to where each project wants to draw the line, and some minorities will be less well served by this than by a personal private filter under which filtering decisions are no one's business but the individual's. Also there is the practical issue re Commons: I suspect that a large number of editors use both Commons and their home Wikipedia, and that such editors would like to be able to make one filtering choice that covers them on both. WereSpielChequers (talk) 00:42, 22 July 2012 (UTC)
- You're absolutely right. Yet, I believe that projects are ultimately the most local units we operate on. Also, it is not about being personally averse to some images, but about recognizing the fact that they MAY be controversial to some of the readers of the local projects. Thus, I would expect that Christian Arabs would understand the need not to publish drawings of Mohammed much better than the general public would. Regarding the practical issue with Commons: in theory, nothing about the filtering applications of a label prevents it from being used both on the local projects and on Commons itself. In fact, since the labels would be saved in the image article space (on Commons), they could be used in ANY application where the image is used. The only difference is that each project would (a) decide about tagging images independently, by adding their own tag to the image and (b) be able to make independent decisions on how to use the labels, if they use them at all. Applications could include not only filters, but also textual warnings, different blurrings, etc., and all these would be discussed by the projects. With this solution, at least projects which have strong objections to introducing any solutions would not interfere with the ones which desire them. Pundit (talk) 00:59, 22 July 2012 (UTC)
- I accept that my Christian Arab example wasn't ideal - the Arabic Wikipedia is probably far less likely to host content that would offend Muslims than some other languages. But your system relies on storing these tags on Commons, and Commons is one of the most vociferously anti-censorship WMF wiki communities. WereSpielChequers (talk) 01:12, 22 July 2012 (UTC)
- Yes. But I don't think that the Commons community would dispute the fact that they serve the needs of the projects. If there was a consensus for storing info about projects' choices about each image, Commons would not object. Commons editors could, of course, dispute each case for each project (in the language of the project). Pundit (talk) 01:17, 22 July 2012 (UTC)
- Why put the label on commons? Put it on the link to the image on en-wiki and let de-wiki and fr-wiki make their own decisions. --Guy Macon (talk) 01:37, 22 July 2012 (UTC)
- That definitely is a possibility too, but I'm thinking that it would be useful, for example, for smaller projects to use de-wiki or en-wiki choices. Also, keep in mind that the same image can be used many times in a project. Keeping a label on Commons allows for tagging it just once. Pundit (talk) 19:04, 22 July 2012 (UTC)
- By now this is out of the question. We had a resolution against such tagging. It would be no different from adding categories like "Category:NSFW" to the category tree of Commons, which most participants opposed. Since it would affect all projects using images from Commons, this is out of the question. See the FAQ to the resolution and its front page. --Niabot (talk) 10:08, 23 July 2012 (UTC)
- Thank you for your feedback, but I don't think you read the proposal at all. This impression is based on two facts: (1) labels are NOT similar to categories at all (categories allow browsing) and (2) it WOULD NOT affect all projects at all. It would only affect projects which decide to use labeling for their own purposes, and each project would develop its own labeling (unless it decided to blindly trust some other project's judgment). I'm rather inclined to say that forbidding projects to create labels for their own purposes is out of the question. Commons, after all, is our shared repository. No project should force other projects not to introduce solutions which do not affect anything globally. Pundit (talk) 15:36, 23 July 2012 (UTC)
- Categories and labels/tags are very similar to each other. The only difference between categories and labels/tags is that categories are structured in a tree-like fashion, while labels/tags are a flat structure. Of course it would affect all projects that are utilizing and contributing to Commons if labels/tags are placed inside the description pages. What if two projects (languages/wiki projects/...) have different opinions on whether a label should be placed or not, given that it is visible to all viewers? In this case both projects interfere with each other. That works against the centralized approach you describe and makes it only more complicated. A silly idea. A simple solution would be something else. Aside from the fact that such tagging/labeling is a violation of the NPOV principle that even Commons applies to its categories. --Niabot (talk) 21:07, 23 July 2012 (UTC)
- As emphasized previously, one more important difference is that you can browse categories, but not labels. It does not affect projects globally at all. Just the visibility of the fact that one project decided to use a label and some other did not has nothing to do with affecting them. "What if two projects have different opinions?" - well, nothing, because labels only acknowledge this already existing fact, without pushing anything. Projects do not interfere with each other in any way (except for possible voyeurism of users interested in other projects' decisions, not different in any way from checking if some article is considered notable on some other wiki). There is no violation of NPOV of any sort - seriously, from the way you describe it, one would actually have to start believing that ANY decision made by a project (e.g. about notability, required sources, etc.) is unavoidably POV and we should just shut everything down. While I appreciate your honest judgment of the idea as "silly", I don't see how it is constructive. Also, when you write that "a simple solution would be something else" but refrain from sharing your insight, it is difficult to evaluate what your simple solution might look like. cheers Pundit (talk) 22:09, 23 July 2012 (UTC)
- I give up. I doubt that you will ever understand what prejudicial labeling/tagging is and why I and many others are so opposed to it. Talking to a wall is more effective. It doesn't talk back, but at least there is the possibility that it is able to listen and to understand what I'm saying. --Niabot (talk) 23:45, 23 July 2012 (UTC)
- Again, thank you for your feedback and for assuming good faith. I understand that you are concerned, and why. In the posted material I believe the sentence "When labeling is an attempt to prejudice attitudes, it is a censor’s tool" is particularly important, and in principle I agree with it, with emphasis on the keyword: "when". I hope you at least are able to assume that the discussed proposal is not an attempt to prejudice attitudes. Labels on Commons, just like interwiki links, would not even have to be visible without looking into the code. Also, please recognize that Commons is a repository diametrically different from a book library. With our projects, users stumble upon unexpected materials all the time, while they barely ever do so in book libraries, and this makes a huge difference. I have nothing against people reading legal porn in libraries, and I don't personally mind images from Commons; I'm only trying to find a solution serving those who do. Again, you or one of the unspecified "many others" can divulge the details of "better solutions" instead of calling other proposals silly. I'll be really more than happy to endorse something which works, if you come up with it. Pundit (talk) 00:01, 24 July 2012 (UTC)
- Can you understand that your wording is highly provocative? You pick out the word "when" (best starting point ever) and tell me that it is not an attempt to prejudice attitudes in the case that these labels are used to display discouraging warning messages on top of articles or, in the worst case, to display images together with an advisory message, since we only add something more or less invisible to Commons. Directly after this farce you continue with the next one. I quote it: "users stumble upon unexpected materials all the time". This is a blatant lie to strengthen the argument based on a Bad Faith Assumption (BFA).
- To be clear: I have no intention to create a working filter/advisory/whatever system, since it already exists. It is the reader himself who chooses the articles and who chooses Wikipedia as his information source. If someone searches for "human penis", gets an article about the human penis and then blames the writers that there is a depiction of a "human penis", then he has chosen the wrong source. It is the same rude behavior as trying to buy bananas at a sex shop or asking for a dildo in the next vegetable store. This whole discussion suits the catch-phrase introduction of Bodo Bach, "Ich hätt’ da gern mal ein Problem, ...", with which he started phone calls pranking people. Roughly translated: "Hello, I'd really like to have a problem..." instead of the usual German phrase "Hello, I have a problem..." ("Hallo, ich habe da ein Problem..."). --Niabot (talk) 02:10, 24 July 2012 (UTC)
- Since I prefer to stick to discussions following AGF and other good Wikipedia policies, I think I'm not going to comment on that. Thank you for your feedback, nevertheless. Pundit (talk) 16:51, 24 July 2012 (UTC)
Proposal to ignore all comments from Niabot
Given the insulting tone adopted by the user in the above thread, I intend not to read any of their future contributions, and recommend that course of action to others. Niabot, if I don't respond to your comments, please don't mistake that for assent. --Anthonyhcole (talk) 06:21, 27 July 2012 (UTC)
- Sorry if my words are sometimes harsh and may not suit your opinion, but I see the proposal from Pundit as a violation of our policies, if we need to give up the neutral point of view. That's what I tried to explain in the above discussion. Not reading my comments doesn't improve the situation. If you can provide a method that does not need to utilize prejudicial labels, then I would not oppose the proposal. --Niabot (talk) 17:14, 27 July 2012 (UTC)
- This is a contentious subject, and not an easy one on which to agree a consensus solution. I still believe it is possible to design a filter that delivers the filter functionality that some people want whilst avoiding most of the objections that have been made to the category based proposals. However, to achieve such a solution we need to get the opponents to be clear as to which aspects of the previous proposals they object to, and those of us who support an image filter need to be flexible to try and resolve as many of those objections as we can. My understanding is that Niabot is objecting to public filters - not an uncommon objection and one I myself largely share. We could implement a filter where the contents of each filter were as private as a watchlist. I believe that such a proposal has a chance of getting community consensus. In light of the Board's recent U-turn on filters I would hope that those who previously took the view that a filter was coming regardless of community objections will now try to understand and accommodate those objections. My belief is that those who object in principle to all and any sorts of filter are few, and certainly fewer than those like Niabot who are objecting to specific ways of implementing a filter. Which is a very longwinded way of saying: Anthony, please try to resolve Niabot's concerns even if you don't like his tone. WereSpielChequers (talk) 18:12, 27 July 2012 (UTC)
- The issue our community as a whole has with global filters is largely due to the fact that the decisions would not be made on the project level. Projects' independence in decision-making is of high value. But it works both ways: we, as a collective community of all projects, should not usurp the right to forbid any separate project from using its own labels. Also, we should be constantly reminded that 99% of our readers don't log in and never will, even if it allowed them to use filters. They do, however, strongly and fiercely object to controversial images. They are upset, angry, offended - just ask people working on OTRS to get the right picture. This is why the labeling proposal avoids the pitfalls of global filtering, and still addresses the issue where it will make an actual difference (while a filter for logged-in users will not, and will again be a global solution, not adjusted to local projects' needs). Regarding Niabot's comments and repeated accusations of prejudice, I don't think there is anything more to be said. Clearly, for Niabot book libraries (operating under state funding, politicians', and many other pressures), where people take books from the shelves, are just the same thing as an electronic image repository (run by an independent foundation and ruled by the community of editors), where images are used for various purposes and in articles where the readers do not have to expect the visualizations at all. We just disagree there and there is not much to be done about it. I would, however, really appreciate trying to keep to a civilized discourse, and AGF. Pundit (talk) 18:23, 27 July 2012 (UTC)
- That some people want an image filter is, or should be, common ground to everyone who is trying to design one that would both work and be acceptable to the community. The idea that people want to opt in to a filter but would be deterred by it requiring them to create a free account is one of the things that divides us, and I'm not sure that there is an easy resolution to that divide. Except perhaps a survey of OTRS complainers - asking them whether they'd prefer a free image filter that they could tune to their sensibilities, or an option to go with one that was tuned for the mainstream in each language. Also I would point out that it is the only practical solution for people who use Internet cafes or other shared IPs; otherwise they could have carefully configured the filter before opening a page where they anticipated seeing an image they didn't want, only to see that same image because someone else in the same cafe had just reset the filter to their preference. As for the idea of only doing this at the project level, that might work for the less prudish of us, but it wouldn't be much help to a strict, German-speaking Muslim. WereSpielChequers (talk) 19:18, 27 July 2012 (UTC)
- The problem of shared IPs doesn't really need to result in us refusing to support unregistered editors entirely. Sure: it will be imperfect in some limited situations, like multiple computers in simultaneous use through the same IP address. Occasionally someone will have to click a button to toggle it on or off, if the previous user's settings were different from what the current user wants. Perhaps someone would have to click the button twice, if they use two different computers or two different browsers. But that's really a small problem, and not one that needs to deter us from serving 99% of users.
- I believe that the real push for getting users to have accounts is because the fully personalized proposals can't be done any other way. We can serve up a quick choice for "Do you want to see violence/sex/sacred Mormon underwear?" We can't serve up a quick choice that stores lists of individual images, one by one, that the user has previously viewed and would prefer not to view again. So "let's ignore 99% of the users" sounds to me like a backhanded way of saying "please implement only my preferred style". WhatamIdoing (talk) 22:03, 27 July 2012 (UTC)
- Of course, a personal filter is common ground, but clearly it addresses less than 1% of those who want image advisory. Technically, it could probably rely on JavaScript installed on a single computer, so it is not so much about IPs as about registering. Pundit (talk) 03:24, 28 July 2012 (UTC)
Anthonyhcole's proposal
- We can have a project-specific filter that (1) allows IP readers to choose not to see images in specific, pre-selected categories (say, nudity, sex, violence, religious) for a session, and allows them to hide a specific image for a session, and that (2) allows the logged-in reader to indefinitely hide categories of images and indefinitely hide a specific image.
- And we can do this while allowing the readers to determine what is and isn't offensive, not a few individuals.
- Briefly, I'm proposing a per-project filter. Each thumbnail in an article has a "Hide this image" button. When the reader clicks it they're asked "Is the content violent, sexual, religious, nudity, none of those?" Once they've selected one or more categories they can click "Hide", and the thumbnail image is replaced by a blank place-holder with the caption or alt text and a "Show this image" button. If the reader is not logged in, their selection is stored in a cookie for that session. If the reader is logged in, it is added to their personal project-hosted blacklist.
- At the top of each page is an "Image filter" tab, where a user can select from "Filter nudity", "Filter sexual images", "Filter images of violence" and "Filter religious images." Selecting one of these categories will hide images in that category for that session for IP users, and indefinitely (until the user changes their preference) for logged-in users.
- The category of an image (uncontroversial, nudity, sex, violence, religious) is determined by the ratio of the number of times the image is hidden on the project (en.Wikipedia, de.Wikipedia, etc.) to the number of times it is viewed on that project. Images that are frequently hidden are automatically added to the relevant blacklist, hosted by the project. By categorising images in this way, we're not allowing a few individuals to decide what readers want to filter; we're letting the readers themselves tell us, which I hope will allay Niabot's concerns about POV labeling. --Anthonyhcole (talk) 15:59, 28 July 2012 (UTC)
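As a rough illustration of the hide-ratio mechanism described above (the threshold, names and data structures are invented for this sketch and would need tuning and anti-abuse measures in practice):

from collections import defaultdict

HIDE_RATIO_THRESHOLD = 0.10  # e.g. hidden at least once per ten views; purely illustrative

views = defaultdict(int)                       # image -> view count on this project
hides = defaultdict(lambda: defaultdict(int))  # image -> category -> hide count
blacklists = defaultdict(set)                  # category -> auto-blacklisted images

def record_view(image):
    views[image] += 1

def record_hide(image, categories):
    # Called when a reader clicks "Hide this image" and picks one or more categories.
    for cat in categories:
        hides[image][cat] += 1
        if views[image] and hides[image][cat] / views[image] >= HIDE_RATIO_THRESHOLD:
            blacklists[cat].add(image)

record_view("File:Example.jpg")
record_hide("File:Example.jpg", ["violence"])
print(blacklists["violence"])  # {'File:Example.jpg'}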
- I have read similar proposals before and they come really close to something anyone could agree with. Giving the power of choice to the reader is always a good point to start with. Letting him define what he wants to see is the right way. But I see some smaller devils in the details we might want to discuss to improve this/your proposal. So I will start to list some things that came to my mind after reading your proposal the second time:
- EN-WP is read by people from various places. I was surprised some time ago to find out that more readers from Arab countries read EN-WP than the local-language wikis. (I will search for the statistics if you want to take a look at them.) My guess is that the average (per-project) rating of images would not be representative of the taste of such minority reader groups.
- You mentioned a ratio between "views and rating". We somehow have to define a good ratio that works. Any suggestions how it should be defined/implemented?
- Besides the ratio definition, we would need to gather trustworthy data. For example, we can't assume that someone who rated an image once would only open the article once. He might vote once but read/load the article multiple times without voting again.
- How do we prevent/recognize rating manipulations without gathering much private data (IP, timestamp, MAC for gateways with many users sharing the same IP)?
- How can we implement this feature without increasing the server load significantly? So far most readers see the cached results of the same page.
- Which tags should be predefined? Is giving users more flexibility to define their own tags a better solution? For example, "religion" is such a widespread topic with different viewpoints that many images could be blocked by religion A while religion B has no problem with them, and the other way around.
- I hope we can have a constructive discussion on how to tackle at least some of the mentioned issues. If we find even more issues, we could also give it a try to find something else which would have the same effect. --Niabot (talk) 17:10, 28 July 2012 (UTC)
- OK. You make some good points. It's way past bedtime so I'll respond when I've had some sleep - provided you don't start insulting people. --Anthonyhcole (talk) 17:55, 28 July 2012 (UTC)
- My answer to the questions above is to rely fully on the wisdom of the specific projects. The projects know the cultures they operate in best, and the sensitivity of their readers. If some Arabic readers go to en-wiki, they surely realize that the sensitivity of en-wiki may be different from that of their own project. Pundit (talk) 19:02, 28 July 2012 (UTC)
- Niabot, my first response to your comments is that the server shouldn't be unduly burdened by this solution. If it adds undue cost to the project, then of course it shouldn't be implemented. Another important question, the crucial one in my opinion, is where to draw the line: the ratio of the number of "hides" to the number of page views that determines an image's category.
- We can let the reader decide the ratio. On a slider, like a volume control. I'm still thinking about this, though, and your other comments. --Anthonyhcole (talk) 13:01, 30 July 2012 (UTC)
- A quick idea is that we could count only the views and votes from users that activate the feature (logged in or not logged in). This would exclude the views and votes of users that don't consider the feature useful at all. This would connect the feature closer to the target audience and might also decrease the server load considerably. But at the same time we have to expect (by design) that this change affects the rating significantly. I expect a lower number of viewers and votes, and also a bias toward filtering.
- But that still does not answer the question of what a good ratio is. Given that we have good numbers of views and votes (one vote and multiple views by the same person count as 1 view and 1 vote), I would expect a 50% ratio to be a good solution: 100 views, 51 votes = filtered/collapsed/... by default.
- If we had data on how many of our readers activate the feature, then we could let all users participate in the votes, by shifting this ratio accordingly. It might be good to have everyone participate, but it might also be more open to manipulation. I would expect that 4chan & co. could rate the picture of the pope as sexually offensive, or fans of football teams would vote down other teams, and so on. So I guess that it would still be the best solution to limit the voters to the users of the feature. In short: why would you vote if you don't care to use the feature?
- So far my thoughts on how we could come closer to a solution. Of course still under some idealistic assumptions. --Niabot (talk) 13:24, 30 July 2012 (UTC)
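A minimal worked sketch of the counting rule suggested above, restricted to readers who have activated the feature and using the 100-views/51-votes example (function and variable names are hypothetical):

def collapsed_by_default(filter_user_views, hide_votes, ratio=0.5):
    # Each person counts at most once as a view and at most once as a vote.
    if filter_user_views == 0:
        return False
    return hide_votes / filter_user_views > ratio

print(collapsed_by_default(100, 51))  # True: 51% of filter users voted to hide
print(collapsed_by_default(100, 50))  # False: exactly 50% does not cross the ratio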
- (after EC) Letting the reader decide the ratio would also be a viable option. But some value still has to be the default, if we don't want to force the reader to adjust it manually every time. From previous statistics (not waterproof; my own and selective topics) I expect that less than 1% of all users would vote for a picture like the ones found at hentai, while more than 50% of the filter users would vote for it. Just to give a rough relation on the numbers I expect. --Niabot (talk) 13:24, 30 July 2012 (UTC)
- A slider is a nice idea. I'd expect a significant majority (definitely more than 51%) to default a filter. Again, making results project-dependent is much better than relying on a global system. Pundit (talk) 13:34, 30 July 2012 (UTC)
- If this proposal is ripe and is implemented then we can check the real numbers. I expect much lower numbers than you do, but who knows. This isn't a really important factor at the moment. The current assumption is that it is used by one project. Still we have to assume that one language consists of various interest groups. The better a solution fits such a scenario, the better it is, no matter if the cultural differences are small or big. --Niabot (talk) 14:24, 30 July 2012 (UTC)
- The slider is at the top of the page. There are four of them. When a reader opts in to filtering they're presented with four categories, each with a slider with "strict" at one end, "no filtering" at the other. As they slide the bar up they are altering the ratio of "hides" to views for the category (nudity, sex, violence, religious) which they would like to filter. First step on the slider (minimal filtering) would filter images that get (say) a thousand or more "hides" for every ten thousand views; step two would filter all images that get 900 or more, step three, 700 or more, and on up the continuum to very strict filtering at, say, five or more "hides" per ten thousand views. We calibrate the filters so that most nude images are filtered at the strict end of the nudity slider, most violent images are filtered at the strict end of the violence slider, and likewise for sex and images of Muhammad and Mormon garments. --Anthonyhcole (talk) 16:48, 30 July 2012 (UTC)
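For illustration, a sketch of the slider calibration described above; the step values of 1000, 900, 700 and 5 hides per ten thousand views come from the comment, the remaining step is a placeholder, and everything else is hypothetical:

SLIDER_STEPS = [None, 1000, 900, 700, 300, 5]  # index 0 = "no filtering"; 300 is a made-up intermediate step

def hidden_at(slider_step, hide_count, view_count):
    """True if an image with this many hides per this many views is filtered
    at the given slider position for one category."""
    threshold = SLIDER_STEPS[slider_step]
    if threshold is None or view_count == 0:
        return False
    return (hide_count / view_count) * 10_000 >= threshold

print(hidden_at(1, hide_count=1200, view_count=10_000))  # True: 1200 per 10,000 >= 1000
print(hidden_at(5, hide_count=6, view_count=10_000))     # True at the strict end: 6 >= 5
print(hidden_at(1, hide_count=6, view_count=10_000))     # False at minimal filtering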
- What basis would you prefer: views and votes from all users, or views and votes only from users that use the feature? Some explanation would be nice. --Niabot (talk) 16:56, 30 July 2012 (UTC)
- I don't see a problem with the ratio being the number of "hides" on that project to the number of views on that project. I realise that some people will view a page more than once, but that should apply roughly equally to people who hide an image and those who don't. As for images of Justin Bieber mysteriously being filtered as religiously offensive or similar, admins could protect obvious anomalies like that. --Anthonyhcole (talk) 17:04, 30 July 2012 (UTC)
- I just don't want it to become a burden to the admins if this becomes a real problem that steals the time of our admins. --Niabot (talk) 17:58, 30 July 2012 (UTC)
- Me too. If that kind of thing becomes a common problem, we'll have to deal with it somehow. I don't actually envision it becoming unmanageable though. --Anthonyhcole (talk) 18:10, 30 July 2012 (UTC)
- We can let the reader decide the ratio. On a slider, like a volume control. I'm still thinking about this, though, and your other comments. --Anthonyhcole (talk) 13:01, 30 July 2012 (UTC)
- Anthonyhcole, perhaps you should create a separate section with this proposal? I think it is workable, it effectively includes the labeling solution I've suggested, and also has the beauty of serving only those who want it, who at the same time are the ones who decide about the level of advisory needed. Pundit (talk) 20:14, 30 July 2012 (UTC)
- I've created a section heading. This is a slightly modified version of something I suggested further up the page. --Anthonyhcole (talk) 01:42, 31 July 2012 (UTC)
Niabot, addressing your last bullet point, I got those four categories - nudity, sex, violence, religious - from the board working group's advice to the board concerning controversial content. I haven't read it for six months but I think those are the categories they suggested. I've thought about it a bit and can't think of a missing category or another I'd want to include. But the choice of categories is, of course, up to the community. --Anthonyhcole (talk) 12:28, 31 July 2012 (UTC)
- I believe it is a very good proposal. Also, since categorizing into advisory labels can be crowd-sourced, people would theoretically be able to decide what kind of consensus they need for a filter to work (for instance, I could decide that I would only like to hide violence pictures marked on average as 8 or higher on a scale of 1-10 by viewers). By the way, we should include a "medical" category as well; many complaints are about pictures of diseases, body deformations, etc. On the other hand, nudity and sex can possibly be combined (nudity as a lower range on a general sex category sounds reasonable). Pundit (talk) 14:40, 31 July 2012 (UTC)
- I think you want "disgusting" rather than "medical".
- Also, the server load isn't our problem. Figuring out how to do something that doesn't kill the servers is the purview of the MediaWiki developer community, not the content editors. We should tell them our ideal and let them see whether the ideal is feasible. Otherwise, our imperfect guesses about what's feasible may result in us getting less than what's possible. WhatamIdoing (talk) 01:06, 1 August 2012 (UTC)
- The exact categories will take a bit of time to pin down. What do you think of this proposal as it presently stands, WhatamIdoing, in principle? Is it something you could support? --Anthonyhcole (talk) 16:16, 1 August 2012 (UTC)
- Ignoring for a moment the IP issues, including the sliders makes this better than the WMF proposal, but you are still taking decisions as to what gets a filter - porn, violence etc. I'd prefer it if that were entirely up to the filterers. More seriously, I think that porn is more complex than a simple sliding scale, and religious offence doesn't lend itself to a simple scale at all. But my biggest reservation is its propensity to be gamed. How do you tell the difference between an image of a girl in translucent clothing that is being rated as more pornographic than a similar but completely nude male because a bunch of kids from a trolling site are deliberately doing so, or because one image is linked to an article that is disproportionately read by religious people and the other to an article disproportionately read by fans of soft porn? WereSpielChequers (talk) 18:50, 1 August 2012 (UTC)
- As I explained to Niabot above, I took those categories from the working group's advice to the board and they seem to cover the main bases. As Pundit and WhatamIdoing point out, they may require tweaking, and that's up to the individual project of course. Having more than a handful of options would make this proposal unworkable, in my opinion. There will be some images people would rather not see that don't fit into those categories but, frankly, I can't think of many. And are you saying that because this system won't be perfect, we shouldn't try it? I'm not sure how to respond to your claim that the sliding scale, from no to strict filtering, won't work; could you please elaborate? As for people gaming an image of Justin Bieber into the religiously offensive category, I guess it might happen occasionally but we won't know how often or how much of a problem it'll be until we try it. You might be right. You might be wrong. Only one way to find out.
- What are the IP issues? --Anthonyhcole (talk) 19:19, 1 August 2012 (UTC)
- I can support many proposals in principle, so long as they're usable by 100% of readers (not just those with registered accounts) and can be reversed at will.
- I strongly prefer options that don't require users to first look at the images that they don't want to see, which means having some sort of system for identifying groups of images. I prefer "don't ever show me the picture of that guy gouging out the other guy's eyeball", rather than "now that I'm already going to have nightmares for a month, don't show me the picture of that guy gouging out the other guy's eyeball again". WhatamIdoing (talk) 00:11, 2 August 2012 (UTC)
- That's what this solution does; it offers logged-out users per-session graded filtering (strict to none) on several major categories (violent, sex/nudity, religious, disgusting, whatever), and enables them to selectively hide (per session) individual images, at the same time "voting" for that image's offensiveness. If you would like the logged-out user's filter settings (individual images and categories) to be indefinite, that can be done by the developers setting the cookie session to indefinite. So, once a user has set their filter categories and chosen individual images on a particular browser, on a particular device, those settings remain in place until the user changes them or deletes their cookies. --Anthonyhcole (talk) 06:00, 2 August 2012 (UTC)
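A rough sketch of how that per-browser persistence for logged-out readers might work, assuming the settings live in a long-lived cookie; the cookie name and data format here are invented for illustration, not an existing MediaWiki mechanism:

```typescript
// Hypothetical sketch: persist a logged-out reader's filter settings in a
// long-lived cookie so they survive across sessions on that browser/device
// until the reader changes them or clears cookies.

interface FilterSettings {
  sliders: { [category: string]: number };   // e.g. { nudity: 3, violence: 0 }
  hiddenImages: string[];                     // files the reader hid individually
}

const COOKIE_NAME = "imageFilterSettings";
const TEN_YEARS_IN_SECONDS = 10 * 365 * 24 * 60 * 60;

function saveSettings(settings: FilterSettings): void {
  const value = encodeURIComponent(JSON.stringify(settings));
  document.cookie = `${COOKIE_NAME}=${value}; max-age=${TEN_YEARS_IN_SECONDS}; path=/`;
}

function loadSettings(): FilterSettings | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${COOKIE_NAME}=([^;]*)`));
  return match ? (JSON.parse(decodeURIComponent(match[1])) as FilterSettings) : null;
}
```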
- I guess WhatamIdoing meant that users of a filter should not need to hide images themselves. I guess that is achievable if your filter runs long enough. It requires users to look at such content at first (startup), but over time it will improve itself, so that users won't have to tag images manually in the future, because there is already enough statistical median data (the ratio). The only thing your proposal can't handle very well right now is the difference in culture/religion. I gave the example above that religion A finds the images A* offensive while religion B finds the images B* offensive. The median would favor majority groups or the main religion. So it will happen (assuming A is the majority) that images from A* have a higher ratio than images from B*. If a user belonging to B adjusts the filter for his needs, then he has to choose a low ratio to effectively filter out images belonging to B*. But if he does so, then he will automatically block all images belonging to A*, B*, C*, D*, ..., even though he isn't necessarily offended by images belonging to A*, C*, D*, ... The ratio for images in B* is low, because there are many readers (views) not offended by the images (no vote), belonging to a different culture or religion. In short: minorities, more or less the groups we want to address, get rather poor results, but the majority does well. --Niabot (talk) 08:04, 2 August 2012 (UTC)
- I agree. It will take time for robust "hide/report" data to accumulate for an image before it begins being automatically blocked by the category filters. It's a flaw, and one I can't see a simple way around, but I don't believe it's a fatal or even important flaw.
- And your second point is also true. If we take images of Muhammad and Mormon sacred garments as examples, it may be that an image of Muhammad is hidden/reported 300 times per 10,000 views, while a Mormon garment is only hidden/reported 20 times per 10,000 views. In that example, when the Mormon sets her filter to block out Mormon garments (strict filtering) she will necessarily also hide images of Muhammad, so, if she encounters an image of Muhammad while she's strictly filtering that category, she'll need to click "view this image". It's an imperfection but, again, not a very significant imperfection in my opinion. --Anthonyhcole (talk) 12:58, 2 August 2012 (UTC)
- I think that my last point is very significant. The majority view is usually already expressed inside the articles (even if it ideally shouldn't be). But this filter is meant to reach out to minority groups. At least that is the cool story the WMF told us some time ago: "We need a filter to reach out to more readers and contributors". Do you agree? -- Niabot (talk) 18:12, 2 August 2012 (UTC)
- I don't think that we should expect perfection: some things will be over-hidden and others under-hidden. It's even possible for images to have different levels of complaints based on which articles they're in, or how they are presented. en:Pregnancy had an art nude in the lead until last year that drew lots of complaints, but since it moved out of a "decorative" slot and was given a detailed caption pointing out several visible features, I don't believe that there has been a single complaint about it. So I don't think that we can expect perfection. I think we can only expect to show that we've made an effort to address a common reader complaint.
- It might be possible to seed the filter initially by adding a handful of images. That would give people an idea of what we expected to be included in each filter group, e.g., religious filter should include the Mormon temple garments. Then we could see how it develops from there in real use. It's also possible that a certain amount of hard-coding would be appropriate. The same system that keeps a headshot of GW Bush from being filtered as "violent" could be used to keep the Mormon temple garments labeled as "religious" (assuming a community discussion determined that it was appropriate for any given image). WhatamIdoing (talk) 23:15, 2 August 2012 (UTC)
- No I don't agree, Niabot. It isn't a big deal that a Mormon who selects strict filtering of controversial religious imagery has to click "Show this image" to view an image of Muhammad, or that a Muslim who selects moderate filtering has to look at Mormon garments on one or two Wikipedia pages. --Anthonyhcole (talk) 00:15, 3 August 2012 (UTC)
- It isn't a big deal, but it would be a "nice to have" feature.
- One other aspect: how do you think the slider (page creation) should be implemented? There is the resource-intensive way (no caching possible) of creating pages depending on the settings, and there is the low-cost alternative of exposing the current rating directly in the HTML code, so that JavaScript can be used to hide the images on load. The first would keep the rating more or less private (polling with different settings would still allow one to approximate the rating, but that is complicated), while the second approach would directly expose the rating (it could even be made visible with an additional user script). --Niabot (talk) 01:33, 3 August 2012 (UTC)
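For the second, low-cost variant mentioned above (ratings exposed in the markup, hiding done client-side), a sketch might look like this; the data attribute name and the hiding behaviour are assumptions for illustration only:

```typescript
// Hypothetical sketch of the "low cost" variant: the server emits the current
// hide/view ratio (per 10,000 views) as a data attribute on each thumbnail,
// and a script hides images on load according to the reader's threshold.

function applyFilterOnLoad(category: string, maxRatioPer10000: number): void {
  const selector = `img[data-filter-ratio-${category}]`;
  document.querySelectorAll<HTMLImageElement>(selector).forEach((img) => {
    const raw = img.getAttribute(`data-filter-ratio-${category}`);
    const ratio = raw === null ? NaN : Number(raw);
    if (!Number.isNaN(ratio) && ratio >= maxRatioPer10000) {
      // A real version would swap in a "show this image" placeholder instead.
      img.style.display = "none";
    }
  });
}

// Example: hide all "nudity" images with 700 or more hides per 10,000 views.
document.addEventListener("DOMContentLoaded", () => applyFilterOnLoad("nudity", 700));
```

As the comment above notes, this approach makes the rating readable to anyone inspecting the page source, which is the trade-off being discussed.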
- Implementation is Not Our Problem. How to implement our recommendation is 100% up to the MediaWiki developer community.
- When the editor/contributor community wants a new feature, we make a request that tells them what we'd like to be able to do. We don't tell them how to make that happen. (We also don't tell them that they must make it happen or give them deadlines; they're mostly volunteers, just like us, and we need to respect that.) WhatamIdoing (talk) 15:16, 3 August 2012 (UTC)
- Before taking it to developers, I'd like to be sure there is support for the idea. Should I run it by Jimbo first, who has asked developers to create a light, category-based per-project filter at http://en.wikipedia.org/wiki/User_talk:Jimbo_Wales/Personal_Image_Filter ? I was going to propose it on that page but (a) they're talking developer-speak, and (b) Jimbo specified using pre-existing Commons categories. --Anthonyhcole (talk) 15:24, 3 August 2012 (UTC)
- I've asked FormerIP to chime in. I haven't read all of their concerns about the previous filter proposals, but those I've read were well-reasoned and valid. I'd like to know whether this proposal addresses any of those concerns, and which ones, if any, it does not. --Anthonyhcole (talk) 16:09, 3 August 2012 (UTC)
- @WhatamIdoing: Thinking about the ways it could be implemented is also important. It isn't a good idea to write down a proposal that causes more problems for the developers than necessary. I have quite a lot of programming experience and I'm always interested in a more detailed description, because the exact description would be the algorithm itself. The closer an idea is to a reference implementation, the easier/quicker/cheaper the way to a working result. --Niabot (talk) 20:37, 3 August 2012 (UTC)
This seems to me like the bare bones of a good proposal and, provided no-one comes up with anything better, I think I would support it with the following addition.
There should be (in fact, if you think about it, there would pretty much have to be) an ability for admins to override inappropriate blacklistings. This could be requested by talkpage consensus, as we do now for, for example, page moves. Projects will develop guidelines as to what constitutes a valid request for de-blacklisting. These will vary from project to project but, to aid understanding of what I am getting at, I would expect en.wp to have:
- a presumption against philistinism (so that, for example, Michelangelo's David and the sleeve to Never Mind The Bollocks, Here's The Sex Pistols are not blacklistable);
- a presumption against social or cultural prejudice (rendering unblacklistable non-explicit depictions of homosexuality, images of people who are disfigured, images of women who are bare-breasted because they are in traditional dress and so on).
FormerIP (talk) 00:20, 4 August 2012 (UTC)
- I agree with enabling de-blacklisting. Just what is and isn't overridden would be up to the individual project, though. Thanks for taking the trouble to review this. I really appreciate it. --Anthonyhcole (talk) 02:30, 4 August 2012 (UTC)
- As long as it is not a 0-1 decision, but rather a sliding scale of advisory, and as long as it is crowdsourced, there is no need to "de-blacklist". Every user who cares to evaluate an image on a scale of 0-10 (no advisory - strong advisory) is actually also voting and influencing the way the image is displayed. All users can also decide at what level of crowdsourced advisory, and in which of the advisory categories, they care to hide images. Pundit (talk) 20:23, 4 August 2012 (UTC)
- Pundit, I'm not proposing readers rate individual images, I'm only offering them the options to hide the image and select a category. --Anthonyhcole (talk) 16:18, 10 August 2012 (UTC)
- So what would you suggest in the case where a twitter campaign gets Mitt Romney's infobox photo blacklisted as "disgusting"? FormerIP (talk) 23:45, 4 August 2012 (UTC)
- Good point. We can definitely make ratings dependent on logged-in users' choices only, and then treat ridiculing the system accordingly. But all in all, if some pictures get hidden, it will be a problem only of those, who decide they don't want to see them anyway. Also, see Anthony's point below. Pundit (talk) 22:09, 5 August 2012 (UTC)
- Personally, on en.Wikipedia, I can see room for two sets of permissions: one for protecting individual images, another for adjusting the sensitivity of a filter. I'd favour allowing admins to protect obvious anomalies like the Romney example, but if Botticellis and swimsuit models are too often being blocked at the "minimal" or "moderate" end of the nudity/sex filter, there is something wrong with the sensitivity setting for that filter, and adjusting that should be, in my opinion, a fairly special permission. An elected committee governed by a community guideline is an option there. Probably, the sensitivity of the nudity/sex filter should be set so that Botticellis and Rubenses are almost never blocked on the "minimal" setting and almost always blocked at the "strict" end - if we're going to offer the reader full choice in the matter. --Anthonyhcole (talk) 01:11, 5 August 2012 (UTC).
- There's the rub. We're not going to offer the reader a full choice in the matter. No way. No-one has a right to view en.wp and hide Botticellis. The very suggestion is more obscene than any jpeg could ever manage. FormerIP (talk) 01:42, 5 August 2012 (UTC)
- Why? --Anthonyhcole (talk) 02:12, 5 August 2012 (UTC)
- Wrong question. Why should they? FormerIP (talk) 02:15, 5 August 2012 (UTC)
- No, Anthony's got the right question. It's my computer and my eyes. Why shouldn't I be able to control what I'm looking at?
- Let me give you a scenario: A teenager is growing up in a household with an abusive nutcase. The kid will get beaten if s/he views a nude image, no matter how inoffensive you might think it is. Do you really think that this kid should have zero access to Wikipedia, including to articles about child abuse, because you personally think it's "obscene" that some nutcase would object to the contents of some old paintings? Because that's what you just said: if, for any reason, including your personal safety, you don't want to view those paintings, then you should not be permitted to read anything at all on the English Wikipedia. WhatamIdoing (talk) 02:30, 5 August 2012 (UTC)
- That is a really bad example. If the parents are that extreme, then I doubt that they would not be aggressive if they found out the title of the article, which is displayed in multiple places (browser window header, task bar, address bar, headlines in articles, ...). But I have to agree with FormerIP that any manipulation of the rating done by the community would be as bad as if the community had set up the filter itself (tagging). The only advantage of the proposal (rating by readers and not by a third party) would be nullified.
- So far I still see problems with the proposal:
- The rating can be easily manipulated/misused
- It only suits the majority group of the readers, not the minorities that are complaining about the article images (an abused teenager belongs to a minority and would have a hard time finding any usable setting; there are multiple minorities with very different viewpoints)
- Do we use a sliding-window average (older views/votes are forgotten, adjusting to the current situation) or an absolute average (the statistic will still include everything even five years later)? See the sketch below.
- --Niabot (talk) 09:34, 5 August 2012 (UTC)
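One way to read the "sliding window" option above is a rolling count over a fixed recent period, so older views and votes expire; a minimal sketch under that assumption (the window length, daily bucket granularity, and all names are arbitrary illustrations):

```typescript
// Hypothetical sketch of a sliding-window ratio: views and hides are kept in
// daily buckets, and only the most recent N days contribute to the ratio,
// so older data gradually drops out.

interface DayBucket {
  day: string;      // ISO date, e.g. "2012-08-05"
  views: number;
  hides: number;
}

function windowedRatioPer10000(buckets: DayBucket[], windowDays: number, today: Date): number {
  const cutoff = new Date(today.getTime() - windowDays * 24 * 60 * 60 * 1000);
  let views = 0;
  let hides = 0;
  for (const b of buckets) {
    if (new Date(b.day) >= cutoff) {
      views += b.views;
      hides += b.hides;
    }
  }
  return views === 0 ? 0 : (hides / views) * 10000;
}

// Example: only the last 90 days count towards the ratio.
const buckets: DayBucket[] = [
  { day: "2012-01-01", views: 5000, hides: 400 },  // old, ignored
  { day: "2012-07-20", views: 3000, hides: 30 },
];
console.log(windowedRatioPer10000(buckets, 90, new Date("2012-08-05"))); // 100
```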
- I think it's a good example. FormerIP is making an absolutist statement without considering the undeniable fact that some of our readers are living in desperate circumstances. Child abuse is a fact of life, and idiotic zero-tolerance policies are also a fact of life. We have people who, because of extreme circumstances in their lives, might be harmed if they view what you and I call an inoffensive old painting. I think those people ought to be able to access WMF websites without fear. FormerIP either hasn't thought about this, or doesn't think that those people deserve access to any information on WMF websites. WhatamIdoing (talk) 21:08, 5 August 2012 (UTC)
- You ignored the point I was trying to make. If people are under such high pressure then a filter would not be a big help. Most of these "oppressors" are also not happy with the textual content. That's why filtering only images can't really help until the "victim" can ensure his own privacy. --Niabot (talk) 09:14, 6 August 2012 (UTC)
- "Most of" is not "all". FormerIP said he would restrict even those for whom only images were a risk. And while getting beaten might seem quite extreme to you—most of us here live in countries where beating children is no longer generally acceptable—I suspect that you could imagine milder punishments, like losing certain privileges at home if such images are displayed while young siblings are in the room, being asked to leave a library or public access place if a complaint is made by a child, or getting in trouble at school for violating a poorly written "no naked pictures" zero-tolerance policy (the intended nature of a zero-tolerance policy, after all, is that you use no discretion, and therefore treat hard-core porn the same as the old masters, when the rule is "no naked people in pictures"). WhatamIdoing (talk) 14:41, 6 August 2012 (UTC)
- I doubt that you would have to leave a library if a child complained. The only thing a library would ask of you is to take your material to a private place. That happened in San Francisco recently and is common practice. If schools push a zero-tolerance policy then they have failed entirely. I would never send my children to a school with such a policy. Education and tolerance are inseparable. One can't exist without the other. --Niabot (talk) 15:16, 6 August 2012 (UTC)
- Thinking about the placement of computers in the library nearest me, there are no "private places". San Francisco might have a different arrangement, but that does not seem to be typical. And how nice for you that you have a choice of where to send your children to school. I understand that, in practice, most people don't have enough money to make such a choice. WhatamIdoing (talk) 15:00, 7 August 2012 (UTC)
- Every library I know has private reading places, even if they are not called that. Mostly they are single rooms you can go to for true silence, to read or work undisturbed. --Niabot (talk) 21:00, 7 August 2012 (UTC)
- Perhaps you have only experienced very large libraries, such as university libraries. I understand that the German library system is far more limited than the American one. Even quite small towns in the US often have a free circulating library open to the general public, many of them no bigger than an apartment flat. The smallest that I've heard of is in Arkansas: it is a freestanding building of just 170 square feet (15.8 square meters).
- The one nearest me amounts to one large room, with a couple of partial walls to divide up the space and two glass-walled meeting rooms. There are no computers in the meeting rooms, but even if there were, anyone walking by can easily see what you're doing. There are three banks of computers: screens facing the children's section, screens facing the teens section, and screens facing a row of tables. The result is that the contents of all computer screens in the building can be seen by anyone walking by. There is literally no place in the entire library except the restrooms where your choice of reading material could be unobserved, regardless of whether that reading is done on a computer screen or on paper. In my experience, this is absolutely typical of neighborhood libraries in the USA. WhatamIdoing (talk) 19:14, 8 August 2012 (UTC)
@FormerIP, you seem to be saying that I don't have a right to filter paintings of nude people on my computer. If that's the case, can you explain the background to that idea, the philosophical, ethical or political ideas supporting it? Or can you point to the harm it could do? --Anthonyhcole (talk) 15:32, 5 August 2012 (UTC)
- As far as I'm concerned, you have a right to do absolutely anything you like with your own computer, so long as you don't smash anyone over the head with it. But you don't have a right to expect websites to provide you with an absolutely exhaustive set of filtering options so that you can feel secure and supported in your (note: non-personalised use of "your") excessive prudishness, racism or whatever it happens to be. FormerIP (talk) 17:52, 7 August 2012 (UTC)
- You seem to go along with readers being able to filter anal fisting images (or have I got that wrong?) but draw the line at them being able to filter bathing suits and art nudes on their computer. If that is the case, can you tell me the reasoning behind that distinction? I'm looking for a philosophical basis (out of ethics or rights, I suppose), or just a clarification of the harm it would do. --Anthonyhcole (talk) 09:58, 8 August 2012 (UTC)
- If we go philosophical, we should consider our role: it is subservient to our readers. If a significant part of them want filters, if introducing them is technically easy, and if it does not affect other readers, it is an imperative that we do so. It does not matter whether they are anal fisting images or pictures of movie androids. Pundit (talk) 12:30, 8 August 2012 (UTC)
- Well, I have to be honest, I'm finding it difficult to cite a philosophical/ethical authority that goes against the complete reasonableness of facilitating the filtering of movie androids or Mitt Romney's face.
- Seriously, you guys need to think about this on a practical level. Such an extremist proposal is not likely to ever be implemented, on en.wp or probably anywhere. You need to be willing to accommodate reasonable suggestions, or you may as well stop talking about it altogether. FormerIP (talk) 12:44, 8 August 2012 (UTC)
- I will go along with whatever en.Wikipedia wants to do with the filter. What I've proposed is an infinitely gradable filter. If the community wants to restrict the range of filtering available to the reader, I'm fine with that; really. Ultimately, we'll need to argue that out at en.Wikipedia, before the filter is implemented; and if I were a betting man, I'd bet en.Wikipedia will support your view on this, FormerIP. Do you think this is a proposal worth putting to en.WP?
- The proposal should make it clear that two questions are unresolved: (1) where to draw the line at the strict end of filtering and (2) how much of a problem, if at all, gaming will be. We can resolve (1) before implementation but (2) will probably only truly be resolved by implementation. --Anthonyhcole (talk) 19:18, 8 August 2012 (UTC)
- Whatever ends up being proposed needs to be the full package. It has been made clear that the technology will not be significantly different for different projects, so guidelines about how to use it are all that can vary. With that in mind, you need not only a complete proposal, but a proposal that you can realistically expect to be implemented at least somewhere, or what's the point? If you don't think your proposal will go down well on en.wp, what makes you think it will be accepted on any project? My guess is that it will be rejected by most projects as too extreme, and rejected by the very conservative projects in the family as too permissive. FormerIP (talk) 20:47, 8 August 2012 (UTC)
- I'm suggesting we put this as a design in principle to en.WP and let that project decide where to draw the line on strict filtering. Once that's agreed, then we'll know it's worth handing to the developers for implementation. If en.WP can't agree on that, we don't take it any further. I'm not going to arbitrarily assume, a priori, what en.WP wants for strict filtering, and lock it into the proposal. It's not up to me, or you, to decide that. We just tell them, "You can set this thing so a reader who selects strict automatic filtering of sex/nudity will block most Botticellis and Sports Illustrated photos, or disallow that level of filtering, beginning strict filtering at most images of realistic nudity and sex. (Or somewhere else.) What would you like?" This is something we have to put to the community. --Anthonyhcole (talk) 22:00, 8 August 2012 (UTC)
- OK, the problem with this is that your proposed system doesn't actually have levels of filtering called "Sports Illustrated" or "Botticelli". All it has is numbers of votes. So the only way to take a Botticelli out of filtering is through a user-right, which is why I proposed that above. FormerIP (talk) 23:30, 8 August 2012 (UTC)
- Actually, we have some data about user preferences from previous surveys, and en.wp ranks in the middle. Users at Asian- and African-language projects want more restrictive settings. The users in northern Europe want no filters at all. The cultural difference is largely about whether the speaker's or the listener's wishes are assumed to be more important. English, Spanish, and French, being geographically widespread, seem to get a mix of users. WhatamIdoing (talk) 23:16, 8 August 2012 (UTC)
- It would be interesting to see that. Is it on-wiki? FormerIP (talk) 23:55, 8 August 2012 (UTC)
- I have a big problem with that data from the last "referendum". I just read the Japanese translation and it is bad, really bad. In some points the wording is so different that we could call it the "Japanese referendum", asking different questions. The same might apply to other languages as well. The next issue is that it never asked whether "the reader needs a filter to read on Wikipedia" (a question directed at the reader/voter himself and not a "do you think that others somewhere in this universe..."); it circumvented that by asking "how important it is for the Wikimedia projects to offer this feature to readers". Additionally, not a single argument against a filter was given in the introduction, which made the whole questioning extremely biased. A more complex, detailed argument for why the referendum produced useless data was given by a worker at a polling agency on the talk page. But I'm too lazy to go through the entire page and its archives at the moment. --Niabot (talk) 06:52, 9 August 2012 (UTC)
- PS: A good hint at how inaccurate the results are can be found if we compare the local (language) polls with the results from the referendum. For example, the German average in the referendum was 4.0−4.5, but in the local poll it was 86% against, which would correspond to an average of roughly 1.4. French: 5.5−6.0 vs 2.0. A whole different story, which indicates how strongly the result is influenced by the questioning/presentation itself. --Niabot (talk) 07:04, 9 August 2012 (UTC)
- Niabot, that's very sloppy of you. The official survey allowed basically all registered users to provide an opinion. The de.wp RFC was explicitly limited to experienced editors. All that shows is that people with 200+ edits at the German-language Wikipedia have a significantly different opinion than the people who read, but do not usually edit, the German-language Wikipedia. WhatamIdoing (talk) 17:34, 9 August 2012 (UTC)
- Yes, you're right about that, FormerIP, the sex/nudity filter won't actually be graduated with "anal fisting" at the bottom and Botticelli at the top, but as the reader slides the filter from bottom to top, they'll be gradually filtering more and more. We can't set the strictest grade at one hide/report per ten thousand views (1:10,000) because that would sweep up pretty much any image anyone has ever hidden just to test the feature. So a decision needs to be made as to what is the ratio for the strictest setting. Above, I proposed an elected committee, guided by a community guideline, should make that decision. The community guideline can be decided in the RfC that adopts the filter.
- As I said above, I'm in favour of admins protecting obviously uncontroversial images (Mitt Romney eating cake) that are being gamed into one anomalous category or another, but that's a very different thing from allowing admins to force the display of a provocative image in "strict" mode because it isn't nude enough.
- I don't envisage you, me or some other editors micromanaging individual images. The whole point of this proposal is we take categorisation out of the hands of a few editors and allow community behaviour to determine an image's category. If we settle for a ratio that displays most images of swimsuit models and Botticellis in the strictest setting, there will still be some outliers that are filtered. If that happens too often, the solution is to adjust the ratio. --Anthonyhcole (talk) 08:50, 9 August 2012 (UTC)
- OK, so to summarise, the idea you are interested in is one that you know can never be implemented, but you are not interested in adapting it so that it might be. What's the point of this discussion, exactly? FormerIP (talk) 12:49, 9 August 2012 (UTC)
- I want en.WP to adopt this filter, because it takes the estimation of offensiveness out of the hands of a few, and it is done automatically according to how many readers have hidden/reported the image compared with how many have viewed it. If en.WP decides to allow admins to manually protect images from filtering (and under what circumstances), that's up to the project. I would prefer admins to only protect anomalies (like Mitt Romney being hidden when the reader selects "Filter sex/nudity"), and that they don't engage in forcing the display of provocative images because they're not nude enough or whatever; and I would prefer the sex/nudity filter hides most nude Boticellis when it is set to "strict". The first, admins forcing the display of images they deem are not provocative enough to warrant filtering, I'm likely to win. The second, where to draw the line for strict filtering, I'm less confident.
- As to the point of this discussion, most of what you and I are discussing now can move to en.WP; we're discussing settings and permissions, and that's up to individual projects. --Anthonyhcole (talk) 13:28, 10 August 2012 (UTC)
- Do we really need an "always show" option? In a worst-case scenario you could set the ratio to 1:infinity and only see the images that are deemed non-controversial by the community. At the start this might not be an issue, but vandals are relentless and sooner or later more and more images will carry something like a "safe" flag instead of a "NSFW" flag. This is then somehow equivalent to NSFW tagging by the community itself. --Niabot (talk) 10:31, 9 August 2012 (UTC)
- I don't fully understand your question, Niabot. By "always show", do you mean where an inoffensive image is protected from filtering by admins because it's been gamed into an offensive category? --Anthonyhcole (talk) 13:28, 10 August 2012 (UTC)
- Yes. That admins label a picture as "inoffensive" is practically the same as labeling another image as "NSFW". This might not be obvious as long as only some of the images are tagged like this, but over time this list/divide will grow. Looking at it from a different angle opens up a second case: how do we handle images that might receive a moderate number of votes (for example the images inside ecchi), but are voted down massively by vandals? "Always show" (excluding from filtering) would not work in this case, but doing nothing ruins the quality of the results. My assumption is that there has to be a method (like with views on YouTube: the magical 301 views) that recognizes such irregularities and stops counting (both views and votes) during such a period; see the sketch after this comment.
- One additional question: will the vote count be separated for an image that is used in multiple articles? I am thinking of the scenario where a picture suits one article much better than another. -- Niabot (talk) 16:42, 10 August 2012 (UTC)
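A rough sketch of the "stop counting during irregularities" idea above, under the assumption that hides are tallied in daily buckets and a spike is defined relative to the recent average; the threshold factor and minimum are arbitrary:

```typescript
// Hypothetical sketch: pause counting when the daily number of "hide" votes
// for an image spikes far above its recent average, similar in spirit to the
// YouTube "301 views" hold mentioned above.

function isSuspiciousSpike(
  recentDailyHides: number[],  // hides per day over, say, the last two weeks
  todaysHides: number,
  factor = 10,
  minimumHides = 50
): boolean {
  if (recentDailyHides.length === 0) return false;
  const average =
    recentDailyHides.reduce((sum, n) => sum + n, 0) / recentDailyHides.length;
  // If today's hides exceed the recent average by the given factor, stop
  // counting both views and votes until the spike subsides.
  return todaysHides > average * factor && todaysHides > minimumHides;
}

// Example: an image normally hidden ~3 times a day suddenly gets 400 hides.
console.log(isSuspiciousSpike([2, 3, 4, 3], 400)); // true: counting would pause
console.log(isSuspiciousSpike([2, 3, 4, 3], 6));   // false: normal fluctuation
```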
- To be clear, I'm not proposing the reader votes for or against an image, or rates an image. The reader will not know that when they report/hide an image as sex/nudity they are changing its ranking in the sex/nudity filter at the top of the page. As I said to FormerIP, below, I'd prefer admins only deblacklist obvious jokes - Mitt Romney being hidden on the sex/nudity scale for example. If too many innocuous Botticellis and bikini models are being hidden in the "moderate" sex/nudity range, the overall setting of that parameter (the ratio for "moderate") will need to be adjusted, rather than an admin manually forcing each image to display.
- In the example you give, I think a well-set filter should hide those images from a reader who sets the sex/nudity filter near the high ("strict") end, and display them in the mid- to low end of the range. I see what you mean about vulnerability to vandals swarming onto the page and all hiding an image, and artificially boosting its ranking. I just don't know how best to deal with that, but talking about these issues is the way forward. As you say, perhaps we can scrounge an algorithm from Google that recognises that kind of anomaly. I'm expecting the feature only to gather one data-point per image, per IP. So I don't expect an individual user to be able to repeatedly hide an image to boost its ranking.
- On your last point, I'm thinking site-wide views and site-wide hides for an image would be used. As WhatamIdoing pointed out above, where an image is placed, on the project or even within an article, will affect how readers respond to it, but the alternative, different rankings for the same image on different pages, seems an order of complexity we may not need. But we can certainly put it to the developers and see if it's doable. --Anthonyhcole (talk) 14:08, 11 August 2012 (UTC)
- "I'm expecting the feature only to gather one data-point per image, per IP. So I don't expect an individual user to be able to repeatedly hide an image to boost its ranking.": That would have to be discussed with the developers since some providers (for example the Deutsche Telekom, which is mainly used in Germany) assign you a new IP every day or after every dial in, which is still the standard procedure even for broadband access. With IPV6 this might apply to many other users as well. There really needs to be a serious effort on the developer side to collect representative data. Thats why i would propose to have a discussion with developers beforehand to ensure that it is possible to do and does not violate privacy concerns (stored IP of readers, not editors). If it is a good voting system then we should not have much problems with vandals and possibly wouldn't need to bother admins to adjust the filter in special cases.
- The only thing that isn't really pleasant to me is the fact that the rating (views vs votes) will somehow be exposed. Anyone could easily write a simple program/bot that searches for the most controversial pictures. It would open up the possibility of using proxies for hard censoring (in the sense that the reader isn't able to control the feature himself). --Niabot (talk) 15:33, 11 August 2012 (UTC)
- The safest way is to leave actual voting/decisions to the logged-in. Pundit (talk) 16:55, 11 August 2012 (UTC)
- But most readers don't log in. Any ideas? --Niabot (talk) 18:04, 11 August 2012 (UTC)
- Just as with decisions about porn (the community of logged-in users decides about it), I think it will be enough to rely on local projects here. Pundit (talk) 19:53, 13 August 2012 (UTC)
- I'm not convinced that the rating really has to be directly exposed (to anyone without Toolserver access, at least). I'm also not convinced that exposed ratings are a significant problem, because the ratings could be used for non-censorship purposes, like duplicating the system on your mirror or researching what qualities make an image get complaints (e.g., whether lead images are more frequently flagged, whether better-written or more controversial pages get fewer complaints, etc.). A system that can be used to find images just as easily as it can be used to avoid images is no different than the libraries filing the pornographic magazines under "pornography". WhatamIdoing (talk) 15:54, 12 August 2012 (UTC)
- Niabot, I agree with WhatamIdoing, we can hide the ratings for individual images, and make that data available only to those who need it, such as researchers.
- Pundit, anybody who hides an image, logged-in or not, will be voting (though they probably won't realise hiding an image on the page will affect its ranking on the automatic filter at the top of the page), so we can't restrict voting to logged-in users because voting is hiding an image, and we want everybody to be able to do that. --Anthonyhcole (talk) 21:17, 12 August 2012 (UTC)
- How should that be possible? I can't imagine any solution which would not directly expose the "rating" to the reader in some way. It is a rating, because there is no definition of "pornography" that the voter has to follow. A library would follow a definition; our readers don't care about definitions, they act independently on their own feelings. --Niabot (talk) 06:36, 13 August 2012 (UTC)
- The programming details aren't really our problem. There are all sorts of things right now that aren't directly or easily visible to users, so why not this, too? And if it's not possible, then the devs can tell us that. WhatamIdoing (talk) 15:33, 13 August 2012 (UTC)
- I don't see how it could fail to be visible to users. Surely the whole point is that it will hide images? If you have the gadget turned on, you will be able to see what it does to different images with different settings. FormerIP (talk) 19:40, 13 August 2012 (UTC)
- The information would have to be available indirectly (i.e., if I set the filter to "X" and load the page, then this image appears, but if I set it to "Y", then it does not), but there's no need to have it displayed in text for anyone to directly look at. WhatamIdoing (talk) 15:18, 14 August 2012 (UTC)
Impractical
I had essentially the same idea as Anthonyhcole's when I was thinking about the problem a while ago. I didn't put it forward because, in thinking about it long enough, it became clear that any practical attempt to implement it would at best satisfy no-one whilst being an enormous amount of work; at worst, it would simply fall apart under the weight of the contradictory things it's trying to achieve. Rd232 (talk) 19:09, 9 August 2012 (UTC)
- Thanks for the feedback. Can you explain to me what the main problem is, so I don't waste any more of my or anyone else's time on this? I would like to drop this if there is a fatal problem, but so far no one has pointed one out. --Anthonyhcole (talk) 12:41, 10 August 2012 (UTC)
- The fundamental problem is that (for any filter) we're trying to guess what users might want to see based on either (i) top-down analysis (ii) community discussion (iii) aggregated user preferences. For (i), a Google Safe-Search approach is at least simple - "take it or leave it, well alright, here's a "moderate" setting." But doing something similar algorithmically on Wikimedia projects is difficult and unlikely. For (ii), any solution based on community categorisation quickly becomes full of conflict over definitions and categorisation decisions; but at least they're out in the open. For (iii), any solution based on trying to aggregate user-indicated preferences (I don't want to see image X, because...) will have similar conflicts to (ii), but be subject to gaming and arguments about the validity of the collected preferences. The only escape route is to be really blunt about it and use whatever geographical data the IP provides - users in Indonesia normally don't want to see images of type X, so since you're visiting WP from there we assume that's your preference (unless you somehow tell us otherwise). Geographical profiling is perhaps more effective, thinking globally, than we sometimes credit, but it's still pretty awful. And try to go beyond geographical data in terms of profiling users' desires, and it just becomes a mess. It would have more of a chance of working if we could reasonably well tie user input to individual real people and collect demographic data, but that's out of the question in the Wikimedia model, and without that, there's no way for the aggregation of user inputs to be meaningful. .... I'm sorry, this explanation is perhaps not as good as I'd hoped, but I can only reiterate that I had a similar idea, liked it initially, and then thinking about how to do it concluded it's just not practical. Rd232 (talk) 13:57, 10 August 2012 (UTC)
- Thanks. Your (iii) appears to cover what I'm proposing. "...will have similar conflicts to (ii), but be subject to gaming and arguments about the validity of the collected preferences." Your point about gaming is valid but we can only know how much of a problem that will be, if at all, after we have the actual statistics. Your point about the validity of the collected preferences is fair, too. We can answer these questions before offering the filter, by offering the "hide/report this image" feature on its own first, and analysing the data. That will tell us what the feature will do. We can test the filters and know how fine or coarse they are. We'll know whether editors hiding images of Mitt Romney as religiously offensive will impact the image's placement significantly, and if it does, we can discuss whether that is appropriate. (I don't know the answer to that question.) --Anthonyhcole (talk) 15:50, 10 August 2012 (UTC)
- Regarding gaming: I'm assuming we'll collect one data point per image per IP per (time - day, week?) that reports/hides it. One editor won't be able to spend 20 minutes repeatedly hiding an image to boost its offensiveness ranking. Gaming would take a significant orchestrated effort and the result would only be visible to people who have opted in to automatic filtering, and all they have to do is click "show" to see it. That strikes me as a lot of effort for very small lulz and I'm, personally, not expecting much of it. I understand that an IP may represent more than one reader but that won't affect the statistical validity of the data. --Anthonyhcole (talk) 14:56, 11 August 2012 (UTC)
- I don't think we'll be able to avoid gaming in any way other than relying on logged-in users, but that's fine (also, allows us to weed out labeling vandals). Pundit (talk) 16:14, 13 August 2012 (UTC)
- By counting only one "hide" per image per IP for the purposes of estimating the image's offensiveness ratio, we can make gaming very difficult, but nothing will stop an orchestrated campaign, should someone attempt that. I'm just struggling to imagine it's going to be much of a problem, though. Truly, we just don't know. The only way to find that out is to deploy the feature. --Anthonyhcole (talk) 20:05, 20 August 2012 (UTC)
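For illustration only, here is a minimal sketch of that counting rule, written as Node.js-style JavaScript. The function names (hashIp, weekBucket, dedupKey) and the choice of a weekly bucket are assumptions, not anything the devs have specified; the point is simply that repeat clicks from the same IP on the same image within the same period collapse into a single record.

const crypto = require('crypto');

// Hash rather than store the raw IP, so readers' addresses are not kept.
function hashIp(ip) {
  return crypto.createHash('sha256').update(ip).digest('hex').slice(0, 16);
}

// Coarse time bucket: the Monday of the current week, as YYYY-MM-DD.
function weekBucket(date) {
  const d = new Date(date);
  const daysSinceMonday = (d.getUTCDay() + 6) % 7;
  d.setUTCDate(d.getUTCDate() - daysSinceMonday);
  return d.toISOString().slice(0, 10);
}

// One record per (image, hashed IP, week); repeat hides collapse into one key.
function dedupKey(imageId, ip, date) {
  return imageId + ':' + hashIp(ip) + ':' + weekBucket(date);
}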
Updated summary
This proposal has evolved a bit over the course of this discussion, so I'll summarise its present state. This is a project-specific filter that allows readers to (1) preemptively and indefinitely filter images in specific, pre-selected categories (say, nudity/sex, violence, religious), and (2) indefinitely hide a specific image. And we can do this while allowing reader behaviour, not the opinions of a few individuals, to determine the offensiveness of an image.
Stage one
Each image has a "report/hide this image" button in the caption box. The reader who clicks the button is asked something like "Is this image violent, sexual/nudity, religious, none of these?" Once the reader has selected one or more categories they can click "Hide", and the image is replaced by a blank place-holder with the caption or alt text and a "Show this image" button. If the reader is not logged in, their selection is stored in a cookie. If the reader is logged in, it is added to their personal project-hosted blacklist.
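A rough client-side sketch of that flow, assuming a MediaWiki gadget environment: mw.cookie, mw.user and mw.Api are real ResourceLoader modules, but the cookie name, the 'userjs-hidden-images' option name and the data layout below are invented here for illustration.

// Assumes the mediawiki.cookie, mediawiki.user and mediawiki.api modules are loaded.
function hideImage(imageName, categories) {
  const entry = { image: imageName, categories: categories };
  if (mw.user.isAnon()) {
    // Logged-out reader: remember the choice in a cookie.
    const list = JSON.parse(mw.cookie.get('hiddenImages') || '[]');
    list.push(entry);
    mw.cookie.set('hiddenImages', JSON.stringify(list));
  } else {
    // Logged-in reader: append to a personal, project-hosted blacklist
    // stored as a user preference.
    const list = JSON.parse(mw.user.options.get('userjs-hidden-images') || '[]');
    list.push(entry);
    new mw.Api().saveOption('userjs-hidden-images', JSON.stringify(list));
  }
}

// Example: the reader marks an image as sexual/nudity and hides it.
// hideImage('File:Example.jpg', ['sexual/nudity']);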
Stage two
Once the above feature has been running for a while, we analyse the data. We count the number of editors who have hidden an image (only one count per IP or logged-in user) and compare that with the number of times that image has been viewed on the project. This ratio will be an index of the image's offensiveness.
At the top of each page will be an image filter with individual sliders (like volume controls) marked (say) "sex/nudity", "religious" and "violence", with "strict filtering" at the top and "no filtering" at the bottom. As the reader slides a filter, s/he is selecting the offensiveness ratio of images that will be automatically, preemptively filtered. Images that have a very high number of hides per views would be filtered at all positions along the scale except for "No filtering", and those with a very low ratio would only be filtered at the "strict" end.
That's the theory. The advantage of the design is it takes categorisation out of the hands of a few and leaves it to reader behaviour to decide. Initially, there will be a lot of "noise" as readers click the "hide" option just to test the new feature, so meaningful data collection should start a few months after the "hide" feature goes live. Even then, there will be test hides, so "strict" filtering cannot filter every image that has ever been hidden: offline trials ahead of stage two deployment should find a ratio that filters most offensive images but not images that have only been hidden in tests. --Anthonyhcole (talk) 21:14, 20 August 2012 (UTC)
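To make the arithmetic concrete, here is an illustrative sketch of the stage-two logic in JavaScript. The data shapes, function names and the exponential threshold curve are all assumptions of this sketch; the real thresholds would come from the offline trials mentioned above.

// Offensiveness index: unique hides divided by views, per category.
function offensivenessRatio(uniqueHides, views) {
  return views > 0 ? uniqueHides / views : 0;
}

// Map a slider position (0 = "no filtering", 1 = "strict") to the minimum
// ratio at which an image is preemptively hidden.
function thresholdForSlider(position) {
  if (position <= 0) return Infinity;   // "no filtering": nothing is hidden
  return Math.pow(10, -4 * position);   // stricter setting = lower threshold
}

// image.stats might look like { 'sex/nudity': { hides: 120, views: 90000 }, ... }
function shouldHide(image, sliderPositions) {
  return Object.keys(sliderPositions).some(function (category) {
    const stats = image.stats[category];
    if (!stats) return false;
    const ratio = offensivenessRatio(stats.hides, stats.views);
    return ratio >= thresholdForSlider(sliderPositions[category]);
  });
}

// Example: "sex/nudity" slider at half strength, the others off.
// shouldHide(someImage, { 'sex/nudity': 0.5, religious: 0, violence: 0 });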
- Minor clarification: by "only one count per IP or logged-in user", we actually mean "only count one per person, using whatever method the devs decide is the best way to count people". Apparently there are some more sophisticated options available, and we're not fixated on any particular method. WhatamIdoing (talk) 16:34, 21 August 2012 (UTC)
- I can see a few problems with this.
- The amount of clutter on the page: even those who are quite tolerant of the idea of a filter might baulk at the sliders and other buttons this proposes ("At the top of each page will be an image filter with individual sliders"). We can do an image filter via user preferences such that only those who opt into the filter get a little button by each image. If that raises the issue of how we promote the filter, it would be easy to market it to people - including via a central notice that they can hide once they've read it.
- Making it project-specific. The wisdom of crowds doesn't hold up well when you continually subdivide them, and many of our projects have very small groups of editors. For reasons of WikiPolitics it may be necessary to allow individual projects to opt out of this altogether, but that is a very different thing from having a thousand or so different filters. The English-language Wikipedia and a few other projects might be able to make a project-specific filter work, but I suspect many others will simply be saying "we have the same sort of diversity of opinions as many other projects, why can't we have a cross-project filter?" A language-level filter would be slightly less unworkable, as it would bring some overlapping groups together by language and reduce the filters to around 300. But one system would be far simpler and more likely to give people the service they want.
- A sliding scale, as opposed to algorithmically grouping people with similar preferences. Some things simply don't fit a sliding scale, religious offensiveness being an obvious example. Instead of a scale running from "only offensive to hardline Muslims" to "offensive to Muslims, Baha'is or Mormons", we really need a system that lets Muslims, Mormons and Baha'is each avoid things that are offensive to them without too many bizarre false positives. Porn is a similar case, especially when you consider different people's attitudes to straight and gay porn. How do you get consistency as to where a gay kiss, a straight kiss and a lesbian kiss sit on such a scale, let alone when more intimate or revealing images are concerned?
- IP-level operation. I'm yet to be convinced that we can have any IP-level filter which doesn't risk the scenario where one person carefully sets the IP filter to exclude something and then goes to the article whose images they know they don't want to see, only to find that someone else has reset the IP's settings in a way that shows them the images they planned to avoid. If we weren't a top-thousand website this would be such a rare scenario as to be a risk one might accept, but we are a top-ten website, and of course there will be lots of shared-use computers and networks where multiple readers share an IP and even a cookie. By contrast, if we make this registered-users-only we will be offering all our readers a free, opt-in personal filter. The only risk is that we may find that the 99% who never edit simply don't care enough about the filter to opt in to one by creating an account.
- Gaming. We've had to make 1.8 million blocks in the last 8 years, and after the Article Feedback Tool problems the community is not going to buy a solution that assumes people won't try to game it. The AFT had to be hastily modified because it was initially launched without ways to delete abusive comments, and it now gives the community a huge new task of patrolling comments to delete the abuse. But at least it is possible to see who is doing the abuse and, if necessary, block them. With this system people could be quietly marking hardcore porn as less offensive than softcore porn and we'd have no way to identify the people doing it. By contrast, if we use a preference-based system and someone labels a stack of faces as not what they want to see, my preferences will be sufficiently dissimilar to theirs that their filter choices won't affect mine, and we don't need to care whether those people are US teenagers trying to game the system or religious fundamentalists who deem any visible flesh pornographic.
- Context. If we rely on ratios of offensiveness then an image that has been displayed in an article on a relatively "adult" topic might well get a lower offensiveness ratio than a less explicit image in a topic that is of more interest to relatively prudish people. That might not matter much if the images continue to be used in the same articles, but we can't assume that will happen.
- Making the offensiveness score public has BLP implications, especially in a project-specific system. Someone who agreed to have their photo uploaded to Wikimedia Commons might well be annoyed to be told that "Wikipedia has categorised a photo you gave them as pornographic", and it won't be much consolation for us to say it was the Urdu and Aceh projects that did that - especially if their political opponent says it was fair comment to say that Wikipedia had classified an image of them as pornographic.
Proposal Comparison
Every solution has its flaws. The solution presented by Anthonyhcole isn't bad, but I guess it isn't a perfect solution either, at least at the moment. While it is much better than the WMF's first proposal, it tends to be easily manipulated. Maybe we should create a comparison chart of all major proposals to have a better way to look at their strong points and weaknesses.
There are very simple solutions like "hide all or nothing" which cannot be manipulated, are always fair and are very easy to implement. But if I understood correctly, they are not customisable enough, because some of us dream about a filter that shows everyone exactly what they wish to see and nothing of what they find disgusting, while requiring as little input (configuration) as possible.
Wouldn't it make sense to at least try a "hide all or nothing" proposal until we have figured out a system that would at least work nearly flawlessly in theory? As explained, such quick solutions might not be enough. But they are uncontroversial, quickly implemented and allow us to gather more data on how many people would like to use the filter tool. (We could combine them.) Later on we could improve it, if there is a need for an adaptive filter. --Niabot (talk) 19:59, 1 August 2012 (UTC)
- That's not a filter. A filter, by definition, allows some things through and blocks others. --Anthonyhcole (talk) 20:09, 1 August 2012 (UTC)
- It's the simplest filter (a switch). It lets everything through or nothing, depending on the setting. This view might differ from definition to definition. But don't you think we could gather some data that way to figure out the parameters - statistical data for a real filter by your definition - and wouldn't it be a small step forward that wouldn't cost us anything until we have a working implementation? --Niabot (talk) 21:52, 1 August 2012 (UTC)
- I agree with Anthony: that's not really a filter. Additionally, it's the one approach that the user could already implement without the WMF's help, since browsers normally have a preference tickbox that permits users not to load any images from any website. So it's kind of like offering someone their own possessions, and then pretending that you're giving them a brand-new gift. WhatamIdoing (talk) 00:16, 2 August 2012 (UTC)
- The usual argument against "Why not let the user hide the images himself?" was that this feature is hidden very deep inside the browser's configuration and is unknown to most users - the same argument that was also used against existing filter software like Adblock. --Niabot (talk) 07:43, 2 August 2012 (UTC)
- I'm sorry, but the proposal to allow users only to opt out of all images, after all the discussions we have had, seems to me rather like an attempt to ridicule and divert the discussion on filters and labels. I think that Anthony's proposal is sensible, some particulars can be borrowed from my labeling idea, and we should start working on it in a separate namespace. I'd be more than eager to work with Anthony on the proposal in his namespace to polish it and then allow a discussion on something concrete. Pundit (talk) 17:13, 2 August 2012 (UTC)
- It is the responsibility of the users, not of this website. If users don't know how to use a computer, perhaps they should not use one. People who are easily offended should not use the internet anyway, and I for one do not care what they think: our goal is the spreading of knowledge, not hiding it. Everyone who doesn't want to see something should not be looking. I'd rather remove all billboards from the side of the road, because you don't have a choice but to see them, and I am offended by them. On the internet there is always a choice. Does the board still want to push through this ridiculous plan? Zanaq (talk) 17:51, 2 August 2012 (UTC)
- If it goes according to Jimbo himself, then we can be sure to see a much worse (inherently silly) approach. --Niabot (talk) 18:06, 2 August 2012 (UTC)
- Zanaq, your attitude is neither sympathetic nor kind. Why should a reader have to give up everything, just to avoid images that frighten him? If you don't wish to see pictures of rape victims, because you were raped, should you not be able to read articles about rape without looking at the images? You say "On the internet there is always a choice". The whole purpose of this idea is to give these readers a choice, allowing them to decide for themselves whether they see photographs in these categories. WhatamIdoing (talk) 22:54, 2 August 2012 (UTC)
KISS
I think an on/off switch is the best thing we can do in the short term, and maybe even in the longer term. It's easy enough:
- put an images on/off switch at the top right of every article, defaulting to images on.
- in images off mode, leave the image space blank, and show images when hovering over them
- allow logged-in users to change their preferences, so images can be off by default.
C'mon, how hard is this? Not very. How effective is this? Arguably not very, but it's a hell of a lot better than absolutely nothing. Much of the problem (real and perceived) is forcing users to consume images along with text (fiddling with browser settings etc is beyond the average user). Give users the choice, and you've made an enormous dent in the problem for very little cost. Rd232 (talk) 19:17, 9 August 2012 (UTC)
- I suggested it as a first quick step, but the usual response is "NO, we want a real filter", while the other proposals are more or less the same as what was already opposed, just under a different name. The only thing that comes close to a compromise so far is the proposal by Anthonyhcole, which has the flaw of being open to manipulation as long as there are no effective countermeasures that don't fall back on the already-opposed approaches. --Niabot (talk) 01:03, 10 August 2012 (UTC)
- KISS solution doesn't need to prevent work on a "real filter" though. And the cost is low enough that it's not much of a distraction. I suppose people who want a "real filter" might worry that the KISS solution would solve too much of the problem, making it even less likely a "real filter" ever happens... Rd232 (talk) 13:28, 10 August 2012 (UTC)
- I Support this. It's not a filter but it's an option we should offer the reader. I support implementing it now on en.WP. And if an actual filter ever gets adopted, we can add the "hide all images" feature to it. --Anthonyhcole (talk) 13:41, 10 August 2012 (UTC)
- An additional advantage is that potentially user input data (suitably anonymised) can be collected and published for analysis (though this would be a lot of extra effort of course). That would probably help in designing a "real filter". Rd232 (talk) 14:13, 10 August 2012 (UTC)
NB There's nothing to stop us doing KISS right now - I think it can be done entirely in Javascript, and doesn't need any MediaWiki changes. It can be a gadget at first, and after a bit of testing, we can seek community approval for it to be a default gadget. Rd232 (talk) 14:13, 10 August 2012 (UTC)
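As a very rough sketch of what such a gadget might look like (the cookie name, CSS class and sidebar placement are simplifications of mine, and as discussed below it saves no bandwidth, since the images are still downloaded):

mw.loader.using(['mediawiki.cookie', 'mediawiki.util'], function () {
  // Blank the image space but keep the layout; hovering reveals the image.
  mw.util.addCSS(
    'body.kiss-hide-images #mw-content-text img { opacity: 0; } ' +
    'body.kiss-hide-images #mw-content-text img:hover { opacity: 1; }'
  );

  function setHidden(hide) {
    document.body.classList.toggle('kiss-hide-images', hide);
    mw.cookie.set('kissHideImages', hide ? '1' : '0');
  }

  // A simple on/off toggle; the proposal puts it at the top right of every
  // article, a sidebar link is used here only to keep the sketch short.
  const toggle = mw.util.addPortletLink('p-tb', '#', 'Toggle images', 't-kiss-images');
  toggle.addEventListener('click', function (e) {
    e.preventDefault();
    setHidden(!document.body.classList.contains('kiss-hide-images'));
  });

  // Default is "images on"; restore a previous choice from the cookie.
  setHidden(mw.cookie.get('kissHideImages') === '1');
});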
- Draft an RfC and advertise it widely over en.WP. Can I suggest you rule out discussion of the different technical options, as that's up to the devs and will only distract from the main question. --Anthonyhcole (talk) 14:30, 10 August 2012 (UTC)
- An option to turn images off does not address the filtering/labeling problem well, but it is good for other reasons, the cost of Internet access being just one (in some regions of the world people still pay per amount of data transferred and would welcome the possibility of turning images off by default). Pundit (talk) 11:04, 11 August 2012 (UTC)
- I assumed it would be possible to do this in Javascript, but the effectiveness of doing that from the point of view of saving bandwidth isn't clear, and doing it server-side is trickier than you would think. See en:Wikipedia:Village_pump_(technical)#Gadget_for_control_of_image_display. I'm hoping someone still does the gadget, as I don't think it's very hard for someone who knows Javascript, and the gadget would still do the image-filter-KISS job, but I'm not terribly optimistic. Rd232 (talk) 16:27, 14 August 2012 (UTC)
- Meh, this is bad news indeed. Pundit (talk) 09:09, 23 August 2012 (UTC)
English Wikipedia RFC: en:Wikipedia:Village_pump_(proposals)#KISS_image_filter. Rd232 (talk) 18:58, 29 August 2012 (UTC)