Talk:Oversight policy/Archives/2020


Problem in the Oversight policy

The Oversight policy allows or requires the suppression of threats etc. that are present in usernames. It does not provide for removing threats, harassment, calls for execution, etc. in other content, even though such content generally deserves more attention. This is a serious omission in the policy and can have a serious impact on Wikipedians and other people. In addition, not removing such content may also result in legal problems. The policy should be supplemented so that such pages can also be removed.--π–π’π€π’ππšπ²πžπ« πŸ‘€πŸ’¬ 08:44, 3 August 2020 (UTC)

@WikiBayer: inappropriate edits with life-safety concerns are normally handled by the WMF T&S team under the Threats of harm process, who can take any measures they need to address the situation - including suppression. In many cases, if a threat does not include personal information, I would think that deletion would normally suffice, and it is more readily available on projects. Are there some example scenarios you could outline where there is a gap in the response capability? (no real examples please). β€” xaosflux Talk 13:54, 3 August 2020 (UTC)
@xaosflux For example, politicians or other public figures are threatened, and LTAs write death threats against stewards/global sysops/sysops. I had previously written to a team member, who also agreed that this was a loophole in the OS policy (the occasion was a call for execution). CSteinberger (WMF) suggested that I start a discussion.--π–π’π€π’ππšπ²πžπ« πŸ‘€πŸ’¬ 14:13, 3 August 2020 (UTC)
@WikiBayer: I don't see an account for "CSteinberger (WMF)" - can you link to the account and ask for feedback? The OS policy shouldn't be hindering T&S? β€” xaosflux Talk 15:15, 3 August 2020 (UTC)
Saw your email too; I don't read that language - but is that a case where you think revision deletion isn't enough and where WMF T&S wouldn't get involved either? (i.e. something where the policy needs to be expanded to allow stewards to act where they currently cannot?) β€” xaosflux Talk 15:19, 3 August 2020 (UTC)
Sorry for the wrong account name; it is CSteigenberger (WMF). So far I have only spoken to one member of WMF T&S, and as far as I know there was no conversation with colleagues at that time, so I cannot say whether WMF T&S generally does not intervene in such cases. Yes, the policy should be expanded so that stewards/oversighters can remove any type of serious attack, threat, or call for execution.--π–π’π€π’ππšπ²πžπ« πŸ‘€πŸ’¬ 18:14, 3 August 2020 (UTC)
@WikiBayer: I understand that you want to expand the policy, but not why you think deletion is insufficient - can you expand on that? β€” xaosflux Talk 18:51, 3 August 2020 (UTC)
Threats, serious insults, etc. always have a negative impact on the person concerned; therefore, such content should be removed consistently to protect them. Wikimedia is a social project and therefore has a special responsibility to protect people. In addition, non-removal may be legally questionable in some countries.--π–π’π€π’ππšπ²πžπ« πŸ‘€πŸ’¬ 12:24, 4 August 2020 (UTC)
@WikiBayer: so you are concerned that an administrator needs to be protected from using their view-delete access to view a negative message that is about them, on their own talk page for example? Deletion already "removes" the edit for all other cases. β€” xaosflux Talk 16:35, 6 August 2020 (UTC)
I am concerned that someone will retrieve deleted content that is negative towards other people and not handle it responsibly. Even though most administrators handle deleted content well, there is a risk that such edits will be disseminated. There are several hundred people globally who have access to deleted content on at least one project. There is always a risk.--π–π’π€π’ππšπ²πžπ« πŸ‘€πŸ’¬ 14:27, 7 August 2020 (UTC)
"there is a risk that such edits will be disseminated" - but nobody is interested in reading them. Copyvios are a greater legal risk, I think. β€” Alexis Jazz (talk or ping me) 16:38, 14 August 2020 (UTC)

Expand for malicious code removal

Follow-up from phab:T202989. In the event that dangerous/malicious code pages are uploaded, suppression may be the best way to deal with them - I suppose we should RfC this to expand the use case? β€” xaosflux Talk 19:23, 17 October 2020 (UTC)

I don't believe that this is needed - "dangerous/malicious code pages" can simply be deleted normally if that falls within a project's deletion policy. I don't really understand how this has anything to do with phab:T202989 - that task will not allow admins to run any new code, just view it. We don't oversight "dangerous/malicious code" that is added to non-code pages (i.e. just displayed, and not executed), as far as I know, either. DannyS712 (talk) 20:50, 17 October 2020 (UTC)
@DannyS712: In which case there should be no reason why administrators should be prevented from viewing deleted .js/.css pages (which they currently cannot). This is contrary to the arguments against making the change requested at phab:T202989 where it is claimed that it is important admins should not be able to view deleted code pages as they might contain malicious or otherwise harmful code. The counter argument is that such harmful code can be hidden from administrators using suppression, which it clearly can be technically but not necessarily by policy. Thryduulf (talk: meta Β· en.wp Β· wikidata) 21:12, 17 October 2020 (UTC)
"In which case there should be no reason why administrators should be prevented from viewing deleted .js/.css pages (which they currently cannot)." - I agree, which is why I have sent a patch to restore the ability. "This is contrary to the arguments ... where it is claimed that it is important admins should not be able to view deleted code pages as they might contain malicious or otherwise harmful code." - I do not consider those arguments to be convincing DannyS712 (talk) 22:08, 17 October 2020 (UTC)
I agree that this is tangential to the phab task. MediaWiki (and suppression) is used beyond WMF servers, and our community policies shouldn't dictate the permissions structure for downstream users. The concern is that admins should not be allowed to view malicious code, and since oversight provides a technical solution to that problem the concern is obviated. Whether the end users want to implement that solution is their business. So I think these two discussions can happen in parallel without being bound to each other. That said, I agree with xaosflux that an RfC on allowing suppression of malicious code is a good idea. To avoid wasting time, we should probably figure out a list of criteria for "malicious" so that oversighters (who may not know javascript) can more easily evaluate requests. Wugapodes (talk) 22:53, 17 October 2020 (UTC)
What harm comes from being able to view "malicious" code, however that may be defined? As long as it is not being run, I don't see the problem (assuming malicious refers to code doing bad things, not code that includes text that is malicious, e.g. outing). DannyS712 (talk) 23:17, 17 October 2020 (UTC)
If administrators can view deleted revisions they can also copy deleted revisions, even if the page is never restored in the logs. If you can get an unwitting user to request that an unwitting admin provide a deleted script with code that does bad stuff, then the code remains a vulnerability. Currently we restrict it by default to interface administrators. The proposed solution is that we restrict it case-by-case using oversight. I think technologically it's in line with full transparency and should go ahead without delay, but the result is that we need to determine whether and how to implement it as a matter of Oversight policy on WMF wikis. Wugapodes (talk) 00:12, 19 October 2020 (UTC)
"We restrict it by default to interface administrators" - not because the intention was to restrict it to interface administrators, but because this was an unintentional side effect of removing the ability of normal administrators to edit the scripts. If someone knows to request a copy of a deleted script that would do bad stuff, the code only remains a vulnerability if the user requesting a copy decides to run it. DannyS712 (talk) 01:41, 19 October 2020 (UTC)
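A minimal illustrative sketch of the mechanism behind this concern (the page title and edit text below are hypothetical, and the action is deliberately benign): anything a logged-in user pastes into their own common.js runs in their browser with that user's full session, so a copied "deleted" script can act with the user's rights through the MediaWiki action API. A malicious version would differ only in what it does with that access.

// Illustrative only: hypothetical target page, benign action.
// Any user script runs with the logged-in user's session and rights.
mw.loader.using( 'mediawiki.api' ).then( function () {
	// Post an edit as the current user via the action API.
	new mw.Api().postWithToken( 'csrf', {
		action: 'edit',
		title: 'User:Example/sandbox', // hypothetical page
		appendtext: '\nEdit made by a script running as the logged-in user.',
		summary: 'Demonstration of a user script acting with the user\'s own rights'
	} );
} );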
I tend to agree with Danny that deletion/revision deletion is generally enough to hide bad code where needed. On the other hand, I might be persuaded to support this proposal, but only to deal with especially dangerous or damaging code (a concept to be defined, which can include e.g. code that can be used to hijack other user accounts or to violate their privacy), if only to prevent administrators from inadvertently undeleting perilous code where there is a credible fear that the code is so malicious that it needs to be restricted (if the Phabricator task gets fixed and administrators are again allowed to see deleted revisions of CSS/JS/JSON pages...). My opinion is that the proposed criterion should be as specific and as narrow as possible, with just a bit of leeway for common sense, but that in no way means that suppression is allowed to be used in all instances of bad code. Just my two cents. β€”MarcoAurelio (talk) 18:58, 18 October 2020 (UTC)
  • Already covered. I’ll ping Risker since she was instrumental in developing the Oversight policy, but as I said on our list, it’s important to remember that the OS policy is fairly unique amongst Wikimedia governance policies as it is intended to create a minimum standard for when suppression must/should occur. It is not an all-inclusive listing of the only times suppression can occur, because creating such a list would be impossible, as it’s tantamount to creating a list of all the ways humans can be cruel to one another. Whether or not a local oversighter wants to act in these cases is one thing, but they can if there’s general agreement on their team that content falls within the intent of the policy.

    Tl;dr: the OS policy is the opposite of the CU policy. The latter lists the only circumstances where an action can occur, while the OS policy lists the minimum circumstances where it should occur. TonyBallioni (talk) 23:01, 18 October 2020 (UTC)

    That may be your personal interpretation, but I believe it is incorrect. Both the CU and OS policies establish the general conditions under which the tools can be used, while allowing for judgement to be exercised by functionaries in determining whether actions fall under a given criterion established in the policy. They do not work opposite to each other. I will also say, in my capacity as an ombuds, that we have recommended that the Foundation work with the community to modernize the oversight policy to allow for some different local practices and provide for greater clarity of the criteria for suppression. – Ajraddatz (talk) 23:47, 18 October 2020 (UTC)
    They don’t work opposite each other, as their intent is the same: to protect privacy. How they do that, though, is in fact opposite: local communities cannot create a looser CheckUser policy than the global one, but they can create a more liberal local oversight policy than the global one. There’s also the routine use of suppression on projects for things not explicitly covered in the wording: self-disclosure of personal information by adults early in their time on Wikimedia projects, if they later regret it, is probably the most prominent, as that’s pretty clearly not covered by the wording (see β€œwho have not made their identity public”).

    The same could be said for suppression of information about minors that they self-disclose. That one’s trickier because of the age consideration, and it’s not unambiguously covered by either the global policy or, to my knowledge, any local one, but many (not all) projects do it anyway. We also suppress renames in very rare circumstances involving real-life harassment, suppress user details people have self-revealed if there’s a credible fear of harm, and many other things that don’t fall strictly within the wording of any global or local OS policy but are clearly within the intent of protecting people by limiting access to their data. You could say all of these are ignore-all-rules types of moves, but even if that’s the case, it’s evidence that the OS policy in practice is not applied strictly.

    The wording of the policies around abuse is also significant: the CU policy foresees use of the tool itself as a potential abuse, while the OS policy is primarily concerned about release of private data in that regard. It is certainly possible to abuse OS by using it, and the policy wording acknowledges this, but the bigger risk is the read access, which can directly impact the privacy policy. We shouldn’t be suppressing for just anything, but a lot of the work is in the grey area that’s not clearly addressed. I pinged Risker on this because she and I have had a fair number of discussions on it and she eventually won me over to a more liberal interpretation, and I think her input here would be valuable. She can correct me if I’ve overstated anything, but I don’t think a strict reading of the OS policy matches either its implementation or the sections of it where it discusses what abuse is. TonyBallioni (talk) 04:47, 19 October 2020 (UTC)

    Acknowledging this ping, but not able to respond tonight. Will do so tomorrow. Thank you for looking into this. I will note, however, that anything that involves malicious code is strictly against the TOU. Risker (talk) 05:52, 19 October 2020 (UTC)
    I agree that the policy should work the way you described. I don't think it currently does, and I think you should be caveating what you say with "in my opinion" rather than stating it as fact, because I would argue that it is not. I would personally interpret the enwiki practice of suppressing self disclosures by minors as against the current policy, though I am not suggesting that the practice be stopped. – Ajraddatz (talk) 15:40, 19 October 2020 (UTC)

Policy question/suggestion

Re the first-listed grounds for suppression, "Removal of non-public personal information", I note that en-wiki goes a step further, including: "IP data of editors who accidentally logged out and thus inadvertently revealed their own IP addresses". Has consideration been given to including this as global policy? JGHowes talk - 02:46, 26 October 2020 (UTC)

@JGHowes: as a m-w OS'er I can say that, at least here on the meta-wiki, in practice we do generally remove for that reason; however, that use case is not strongly rooted in the WMF Privacy Policy, which warns that logged-out editing may be publicly and permanently archived, among other ways in which such edits are not covered. As this page represents a global policy, I don't think we should just amend it directly while these are at odds, without a larger discussion. β€” xaosflux Talk 19:12, 26 October 2020 (UTC)