Grants:IdeaLab/Talk Page Moderation Assistant

Talk Page Moderation Assistant
A system to facilitate talk page moderation that can 1) list discussions that are highly likely to be toxic, encouraging volunteers to take action to discourage toxic behavior and limit further damage; and 2) make it easier for administrators to investigate reported cases of personal attacks, by providing text search over a given author's contributions.
Country: USA
Theme: personal experience
Contact email: yiqing@cs.cornell.edu
Idea creator: PaigePhault
Developer: Vegetable68
Volunteer: Iislucas
This project needs: community organizer, designer, volunteer
Created on: 15:53, 9 August 2018 (UTC)


Project idea

What Wikimedia project(s) and specific areas will you be evaluating?

Is this project measuring a specific space on a project (e.g. deletion discussions), or the project as a whole?
All English Wikipedia talk pages (to start with).

Describe your idea. How might it be implemented?

Provide details about the method or process of how you will evaluate your community or collect data. Does your idea involve private or personally identifying information? Take a look at the Privacy Policy for Wikimedia’s guidelines in this area.
The system will be based on the work “Building a rich conversation corpus from Wikipedia Talk pages”, presented at the Wikimedia Research Showcase in June 2018. That project built a pipeline that records the user actions of adding, deleting, modifying, and restoring content on any talk page, derived from the Wikipedia history data dump.
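As a rough illustration of dump-based processing (not the cited pipeline itself), the following sketch iterates over talk-page revisions in a standard pages-meta-history XML dump using the mwxml library; the dump filename, the namespace choice, and the iter_talk_revisions helper are assumptions for illustration.

```python
# Minimal sketch: iterate over talk-page revisions from a history dump.
# Assumes the public pages-meta-history XML dump and the `mwxml` library
# (pip install mwxml). The cited pipeline reconstructs finer-grained actions
# (add/delete/modify/restore) by diffing revisions; this sketch does not.
import bz2
import mwxml

TALK_NAMESPACES = {1, 3}  # Talk: and User talk:

def iter_talk_revisions(dump_path):
    dump = mwxml.Dump.from_file(bz2.open(dump_path, "rb"))
    for page in dump:
        if page.namespace not in TALK_NAMESPACES:
            continue
        for revision in page:
            # revision.text is the full page text at that revision; a real
            # pipeline would diff consecutive revisions to recover utterances.
            yield page.title, revision.id, revision.timestamp, revision.text or ""

for title, rev_id, ts, text in iter_talk_revisions("enwiki-pages-meta-history.xml.bz2"):
    print(rev_id, title, ts)
    break
```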

Building upon this, we process every new revision and score it according to the likelihood of the utterance being toxic using the Perspective API, a machine-learning model that identifies toxicity in utterances. This tool has provided significant help to the New York Times moderation process, and we believe it can help the Wikipedia community as well. Instead of using the API, we could also consider building on Jigsaw's public Kaggle toxicity dataset, but the model quality would likely be worse.
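For concreteness, here is a minimal sketch of scoring a single utterance against the Perspective API's TOXICITY attribute; the API key placeholder, the 0.9 threshold, and the toxicity_score helper are illustrative assumptions, not part of the proposal.

```python
# Minimal sketch: score one utterance with the Perspective API
# (https://commentanalyzer.googleapis.com/). PERSPECTIVE_API_KEY is a
# placeholder for a key obtained through the API whitelist process.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text, api_key):
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: flag utterances above a tunable threshold for moderator review.
if toxicity_score("example utterance", "PERSPECTIVE_API_KEY") > 0.9:
    print("queue for review")
```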

Are there experienced Wikimedians who can help implement this project?

If applicable, please list groups or usernames of individuals who you can work with on this project, and what kind of work they will do.

How will you know if this project is successful? What are some outcomes that you can share after the project is completed?

The outcome of the project will be a moderation assistant for the community. Its success could be measured in three ways:

  • The fraction of personal attacks that are identified, and the decrease in the time they remain publicly visible (e.g. following the methodology of https://arxiv.org/abs/1610.08914); a sketch of this metric follows the list.
  • Interviews with the community, compared against previous data, to see whether harassment is moderated more effectively with this system.
  • A/B testing the system to see whether the machine-provided toxicity scores help with moderation.
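The first metric could be computed roughly as follows, assuming a hypothetical log of attack cases with fields `posted`, `removed` (datetimes; `removed` may be None), and `flagged` (whether the assistant surfaced the case); the field names and sample data are illustrative only.

```python
# Minimal sketch of the first success metric: fraction of attacks identified
# and the median time an attack remains publicly visible.
from datetime import datetime
from statistics import median

def attack_metrics(cases):
    flagged = [c for c in cases if c["flagged"]]
    fraction_identified = len(flagged) / len(cases) if cases else 0.0
    visible_hours = [
        (c["removed"] - c["posted"]).total_seconds() / 3600
        for c in cases
        if c["removed"] is not None
    ]
    return fraction_identified, median(visible_hours) if visible_hours else None

cases = [
    {"posted": datetime(2018, 8, 1, 10), "removed": datetime(2018, 8, 1, 14), "flagged": True},
    {"posted": datetime(2018, 8, 2, 9), "removed": None, "flagged": False},
]
print(attack_metrics(cases))  # -> (0.5, 4.0)
```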

How would your measurement idea help your community make better decisions?

After you are finished measuring or evaluating your Wikimedia project, how do you expect that information to be used to benefit the project?
The system can provide evidence for administrators when making decisions about reported cases of online harassment, and can potentially help with wiki-hounding: users reporting such cases can easily search for evidence to help the community reach a judgement.
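As one way the per-author evidence search could work, here is a minimal sketch using SQLite's FTS5 full-text index; the utterances table, its columns, and the sample rows are hypothetical, not a schema from the proposal.

```python
# Minimal sketch: full-text search over a given author's talk-page utterances,
# using SQLite's built-in FTS5 index. Column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE utterances USING fts5(author, text, rev_id)")
conn.executemany(
    "INSERT INTO utterances VALUES (?, ?, ?)",
    [
        ("ExampleUser", "you are completely wrong about this", "12345"),
        ("ExampleUser", "thanks for the helpful review", "12346"),
        ("OtherUser", "wrong venue for this discussion", "12347"),
    ],
)

def search_author(author, query):
    # FTS5 column filters restrict matching to the author and text columns.
    match = f'author:"{author}" AND text:({query})'
    return conn.execute(
        "SELECT rev_id, text FROM utterances WHERE utterances MATCH ?", (match,)
    ).fetchall()

print(search_author("ExampleUser", "wrong"))
# -> [('12345', 'you are completely wrong about this')]
```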

Do you think you can implement this idea? What support do you need?

Do you need people with specific skills to complete this idea? Are there any financial needs for this project? If you can’t implement this project, can you scale down your project so it is doable?
Yes. Although we've already made a lot of progress in an open-source project hosted on GitHub, to implement this proposal we would still need support from Wikimedia: access to the streaming API of the database, and a pub-sub feed for suppressed revisions, so that removals of personally identifying information (PII) can be propagated into any additional databases we use.
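One candidate for the streaming access mentioned above is the public Wikimedia EventStreams recentchange feed; the sketch below consumes it with the sseclient library. A pub-sub for suppressed revisions has no public equivalent, so it is not shown, and the namespace filter is an assumption.

```python
# Minimal sketch: consume new revisions from the public Wikimedia EventStreams
# feed (https://stream.wikimedia.org/v2/stream/recentchange) using the
# `sseclient` library (pip install sseclient).
import json
from sseclient import SSEClient

STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

for event in SSEClient(STREAM_URL):
    if event.event != "message" or not event.data:
        continue  # skip keepalives and non-message events
    change = json.loads(event.data)
    # Namespace 1 is Talk:; edits there would be queued for toxicity scoring.
    if change.get("wiki") == "enwiki" and change.get("namespace") == 1:
        print(change["title"], change.get("user"))
```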

Get Involved

About the idea creator

A researcher and engineer interested in online harassment issues, who has been working with Wikipedia data for a year.

Participants

  • Volunteer: Provide support, occasional coding, product clarification, and whitelisting for Perspective API access. Iislucas (talk) 15:58, 9 August 2018 (UTC)
  • Developer: I would like to help implement the system. Vegetable68 (talk) 16:07, 9 August 2018 (UTC)

Endorsements

  • Based on a lot of existing tools, this is very doable, and I'd be happy to help too. Iislucas (talk) 15:57, 9 August 2018 (UTC)
  • I think this is very important! Tabarak yta (talk) 22:45, 9 August 2018 (UTC)

Expand your idea

Would a grant from the Wikimedia Foundation help make your idea happen? You can expand this idea into a grant proposal.

Expand into a Rapid Grant

No funding needed?

Does your idea not require funding, but you're not sure what to do next? Not sure how to start a proposal on your local project that needs consensus? Contact Chris Schilling on-wiki at I JethroBT (WMF) (talk · contribs) or via e-mail at cschilling@wikimedia.org for help!