Grants talk:IdeaLab/Cumulate Likes and Unlikes to automatise harassment limitations

I disagree

"Like" and "unlike" method can not be reliable. Furthermore appreciation or disappreciation could be determined by many factors that are not necessarily related to harassment practices. For example a rude user can receive many "likes" because he is writing a lot of good articles, while a polite user can receive a lot of "unlike" because is writing articles related to biographies of persons that nobody like (e.g. politicians). --Daniele Pugliesi (talk) 03:34, 3 June 2016 (UTC)Reply

I disagree too. Per Daniele Pugliesi, but not only: the "thanks" system is already used to harass people (when one wants to show somebody that all of his or her edits are being followed); the proposed system could just be an additional tool for harassment. --La femme de menage (talk) 04:14, 3 June 2016 (UTC)

The real problem is that "like" and "dislike" are used to mean "agree" and "disagree" on literally every website that has them; only a tiny number of trained users use these buttons "properly". Titanium Dragon (talk) 08:12, 3 June 2016 (UTC)

We have edited at the same time. Please, could you read my answer at 8:12 below in "I agree"? --Rical (talk) 08:27, 3 June 2016 (UTC)

I agree

In order for the user rating system to function correctly, there would need to be two types of rating scores:
  1. An overall user rating score that shows the sum of how many likes and how many unlikes the user has accumulated over the user's lifetime.
  2. An individual article and single-page "like" and "unlike" grading system, based on each edit the user commits to that page's history.

Looking at a user's overall score and making decisions based on it is not an accurate way of judging a user: a user can have many likes from all the outstanding writing and editing he has done, yet still (for some reason) be biased about a single article and manipulate that page to his liking. That is why we need the second, more specific, page-related rating system, so that others can restrict the targeted abusive behavior.

One problem this "like" and "dislike" functionality can create is that the tool can itself become a method of harassing other users. To solve this, there can be an option for the user to request a review of his rating score. If enough people find that the dislikes the user received were incorrectly assigned to him and were an act of abuse, they can then vote to remove them. But in general the rating system will be accurate enough, unless Wikipedia has more evil abusive people than good honest ones.
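A minimal sketch of how the two scores described above could be kept together, assuming hypothetical field names (`overall_likes`, `per_page`) that are not part of the proposal:

```python
from collections import defaultdict

class UserRatings:
    """Holds both scores described above: a lifetime total and a
    per-page tally, so that page-specific abuse stays visible even
    when the overall score is good."""
    def __init__(self):
        self.overall_likes = 0
        self.overall_unlikes = 0
        # per-page tallies, keyed by page title
        self.per_page = defaultdict(lambda: {"likes": 0, "unlikes": 0})

    def rate(self, page, like):
        key = "likes" if like else "unlikes"
        if like:
            self.overall_likes += 1
        else:
            self.overall_unlikes += 1
        self.per_page[page][key] += 1
```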

The liking action can (or must) carry a comment explaining the motivation for the like/unlike. This motivation can take two forms: 1. a free comment; 2. a guided selection from a short list of classic kinds of harassment (one, or several, to select). This comment helps to understand and classify the harassment. The comment can be made mandatory, which helps the liker clarify his opinion and reduces overly casual opinions. --Rical (talk) 07:58, 3 June 2016 (UTC)
This is actually a temporal accumulation of karma. — Jeblad 22:53, 7 June 2016 (UTC)

Comments and Questions

I can see the potential advantages of making it easier to flag edits as problematic. However, I can see a few potential problems with the proposal as it stands. Caeciliusinhorto (talk) 09:17, 3 June 2016 (UTC)

Perhaps another way to reduce harassment is in task T141177, "Wikipedia main content losts sources because too reverts, try to preserve them". --Rical (talk) 05:49, 2 August 2016 (UTC)

Rename Like/Unlike

  1. Like/Unlike is confusing. It has an established meaning on popular sites such as Facebook, and trying to establish a different meaning for the words on-wiki is likely to confuse new users at best. At worst, users will simply use the like/unlike buttons as they would on Facebook, or as they would vote on Reddit, and the buttons will be very unhelpful for identifying genuinely problematic content.
  2. There is already a "thank the user for this edit" option. Having both that and a "like this edit" option is potentially confusing: what is the difference between the two?

Caeciliusinhorto (talk) 09:17, 3 June 2016 (UTC)

  • Addressing problem 1: The wording is not much of an issue. You can use Report Abuse, Agree/Disagree, or anything you can think of. You can even use up and down arrows to vote on a user's work (as at stackoverflow.com and many other websites). The point is not what wording to use; the real issue is the underlying problem of halting abusive users. User:Stomyid

Understand opinions

  1. Users might be more likely to give established/well-known/popular users the benefit of the doubt, while new users are likely to have less consideration extended to them. This is already perceived to be a problem on Wikipedia as it is.

Caeciliusinhorto (talk) 09:17, 3 June 2016 (UTC)

Thinking this through further, I suppose my fundamental issue is that I'm not sure how easy it is to democratically determine whether someone is being problematic. Sure, a single edit might be obviously problematic, and inspire enough "dislikes" that an editor gets brought up for sanction, but the aim of this proposal isn't to deal with single obviously problematic edits; it's to deal with harassment. And harassment is frequently enacted through a series of apparently innocuous acts, each of which in isolation seems okay, but which as a long-term pattern serve to drive people away. I don't see that the suggested mechanism is going to stop that. That's not to say that it's entirely a bad idea – there are use cases where it might be useful, I suspect – but I'm not convinced that stopping harassment is one of them. Caeciliusinhorto (talk) 20:44, 3 June 2016 (UTC)

Who can like/unlike

  1. Who is able to like/unlike edits? Can IPs do it? If so, it will be relatively easy to game the system. If not, that's another instance in which IP users are treated as second-class citizens, as they can have their edits judged for problematic behaviour but cannot contribute to judging others' edits.

Caeciliusinhorto (talk) 09:17, 3 June 2016 (UTC)

How to compute thresholds and delays

  1. Different language Wikipedias, and even different pages within the same language Wikipedia, have radically different viewership. How does the limiting threshold work? How many unlikes are needed to trigger a sanction against a user? If the threshold is too high, then realistically problematic edits on obscure articles will never be sanctioned this way; if the threshold is too low, then controversial but unproblematic edits on highly trafficked and controversial articles are likely to attract sanctions, and either users will be unfairly sanctioned, or much admin time will be used up deciding whether or not a sanction is appropriate.

Caeciliusinhorto (talk) 09:17, 3 June 2016 (UTC)

  • Addressing problem 5: There can be a system in place which uses an algorithm to determine "how many unlikes are needed to sanction a user's work" based on how many edits the article has. If it is a highly trafficked article, then the trigger should also have a high threshold; if it is an obscure article, then the trigger should relate to the low amount of activity and should therefore have a lower threshold. The number of dislikes needed to punish a user should grow with the amount of editing traffic an article has (a rough sketch of such a scaling rule follows below). User:Stomyid
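A minimal sketch of the traffic-scaled threshold just described; the logarithmic growth and the constants `base` and `scale` are illustrative assumptions, not part of the proposal:

```python
import math

def unlike_threshold(article_edit_count, base=5, scale=3):
    """Number of unlikes needed before a user's work on a page is
    flagged for sanction. Grows slowly with the page's editing
    traffic, so obscure pages keep a reachable threshold while busy
    pages need proportionally more unlikes."""
    return base + int(scale * math.log1p(article_edit_count))

print(unlike_threshold(10))    # obscure page: 12 unlikes needed
print(unlike_threshold(5000))  # busy page: 30 unlikes needed
```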
This will most likely work; it is a temporal karma system with global normalization.
  1. This is about a temporal accumulation of up- and down-votes, and what you call those votes is of less concern. For example, on Slashdot you can assign different terms, but the system underneath uses a single scale.
  2. Popular/unpopular users might get more or fewer votes, but that can be normalized against their votes on normal posts over a longer term and/or globally.
  3. These are simply different scales that can be folded into a single one.
  4. To score an edit you should already have earned some points (karma). Typically you earn points through what you do, like editing or patrolling other users' edits. Voting comes with a cost: if you don't have enough points, you simply can't vote. And if your votes do not conform to other users' votes globally, they get a low weight.
  5. Reported posts should go to a special page that lists them, and they should also show up in the notification center.
It should probably have sentiment analysis, and/or export learning data for sentiment analysis. — Jeblad 23:14, 7 June 2016 (UTC)
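A minimal sketch of the karma-and-weight mechanics in points 2 and 4 above, assuming hypothetical names (`Voter`, `VOTE_COST`) and constants that are not part of the proposal; how `votes_agreeing` gets updated against the eventual consensus is left out:

```python
class Voter:
    """Tracks earned karma and how often a user's past votes matched
    the eventual community consensus."""
    def __init__(self):
        self.karma = 0           # earned by editing or patrolling
        self.votes_cast = 0
        self.votes_agreeing = 0  # past votes that matched the consensus

    def vote_weight(self):
        # Conformity ratio: voters whose votes rarely match the
        # global consensus get a low weight; new voters start neutral.
        if self.votes_cast == 0:
            return 0.5
        return self.votes_agreeing / self.votes_cast

VOTE_COST = 2  # assumed cost: voting spends earned karma

def cast_vote(voter, tally):
    """Spend karma to add a weighted vote to an edit's tally.
    Returns the new tally, or None if the voter lacks karma."""
    if voter.karma < VOTE_COST:
        return None  # not enough points: the user simply can't vote
    voter.karma -= VOTE_COST
    voter.votes_cast += 1
    return tally + voter.vote_weight()
```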

How to give and record opinions

I have no experience with harassment on Wikimedia projects, but I imagine it is usually realized through some written piece of text (mostly on discussion or talk pages?). So I would propose full reification of acts of harassment, making them visible in the first place as (though disputable) evidence. Users should thus become able to tag perceived instances of harassment. As for which tags to use, the list on the campaign FAQ page is a good start: name-calling, purposeful embarrassment, threat, sexual harassment, unwanted sexual attention, invasion of privacy, publishing of private personal information, stalking, bullying.

All such cases could be listed on a special page, as others have suggested. In the list view the tagged texts themselves would be displayed, each with users' reactions and a link to the hosting page in order to provide context.

More importantly, next to each user's ID there would be two well-designed signs, one linking to cases reported about them, and another to cases reported by them. Such visibility of personal history alone may deter many from being unfair either way.

Of course, other (logged in) users would have the option to agree/disagree with each tag. Cases with more votes (or only with more agreeing votes?) could be placed higher in the list.

Third parties' feedback should be aggregated and dynamically shown in the design of the signs. With size/shape/colour codes, the signs should suggest more significance along with the following factors (a rough scoring sketch follows after this comment):

  • the severity of the types of harassment perceived (e.g. a threat is probably worse than name-calling);
  • the number of cases;
  • the net number of agreeing votes;
  • the popularity of the cases, e.g. the vote-to-tag ratio relative to the overall average ratio (i.e. the general tendency of users to vote at all on any reported harassment);
  • the recency of cases (because people – and mores – change);
  • the lack of public excuses made by the one reported (because people can have bad moments and then be forgiven).

However, I have some doubts whether it is feasible to preserve each piece of disputed content for a long time and protect it against manipulation. --Providus (talk) 23:13, 9 June 2016 (UTC)
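A minimal sketch of how some of the factors above could be folded into a single significance score; the severity weights, the exponential recency decay, and the excuse discount are illustrative assumptions, not part of the proposal (the popularity ratio is omitted for brevity):

```python
import math
import time

# Assumed severity weights per tag (higher = worse).
SEVERITY = {"name-calling": 1, "bullying": 2, "stalking": 3, "threat": 4}

def case_score(tag, net_agree_votes, reported_at, excused, half_life_days=180):
    """Significance of one reported case: severity times net agreeing
    votes, decayed by age, and discounted if a public excuse was made."""
    age_days = (time.time() - reported_at) / 86400
    recency = 0.5 ** (age_days / half_life_days)  # recent cases count more
    excuse_discount = 0.3 if excused else 1.0
    return SEVERITY.get(tag, 1) * max(net_agree_votes, 0) * recency * excuse_discount

def sign_significance(cases):
    """Aggregate score driving the size/shape/colour of a user's sign;
    log damping keeps a pile of minor cases from dominating."""
    return math.log1p(sum(case_score(**c) for c in cases))

# Example: one threat reported a month ago, four net agreeing votes.
cases = [{"tag": "threat", "net_agree_votes": 4,
          "reported_at": time.time() - 30 * 86400, "excused": False}]
print(sign_significance(cases))
```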

Translation?

It does not make sense to start translating pages with an endorsement section, as doing so effectively blocks further endorsement. Can someone please remove the translation? (A way to make this work is by transcluding a subpage that is translated.) — Jeblad 22:24, 7 June 2016 (UTC)

Grants to improve your project

Greetings! The Project Grants program is currently accepting proposals for funding. The deadline for draft submissions is tomorrow. If you have ideas for software, offline outreach, research, online community organizing, or other projects that enhance the work of Wikimedia volunteers, start your proposal today! Please encourage others who have great ideas to apply as well. Support is available if you want help turning your idea into a grant request.

The next open call for Project Grants will be in October 2016. You can also consider applying for a Rapid Grant, if your project does not require a large amount of funding, as applications can be submitted anytime. Feel free to ping me if you need help getting your proposal started. Thanks, I JethroBT (WMF) 22:49, 1 August 2016 (UTC)

Oh boy

This will not go down well on en.wiki, where "voting is (considered) evil". KATMAKROFAN (talk) 00:09, 29 December 2016 (UTC)

If voting is evil, how do you accept admins? And how do you banish trolls? --Rical (talk) 00:35, 29 December 2016 (UTC)