Research:Revision scoring as a service/Sandbox

Created: 21:23, 23 August 2014 (UTC)
Duration: 2014 – ??

This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.


In this project, we are building machine scoring models that rank, sort, categorize, or otherwise evaluate revisions of MediaWiki pages. These scorer models are made available through a web service, the Objective Revision Evaluation Service (ORES). We gather the examples of human judgment used to train the models with Wiki labels, a human-computation system that integrates with MediaWiki.

Software components

revscoring

"revscoring" is the name of a python library that we use to define and extract features to use in building and training machine prediction models.

code · docs
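
As a rough sketch of how the library is typically used (following the usage pattern described in revscoring's documentation; the model file name, revision ID, and contact address below are placeholders), a trained model can be loaded from disk and applied to a revision by extracting its features through the MediaWiki API:

import mwapi
from revscoring import Model
from revscoring.extractors.api.extractor import Extractor

# Load a trained scorer model from disk (placeholder file name).
with open("models/enwiki.damaging.model") as f:
    scorer_model = Model.load(f)

# An API-backed extractor pulls the feature values for a revision
# from a live MediaWiki instance.
extractor = Extractor(mwapi.Session(
    host="https://en.wikipedia.org",
    user_agent="revscoring demo <someone@example.com>"))

# Extract exactly the features the model was trained on, then
# score the revision (placeholder revision ID).
feature_values = list(extractor.extract(123456789, scorer_model.features))
print(scorer_model.score(feature_values))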

ORES

The Objective Revision Evaluation Service (ORES) is a web service that hosts scoring models built with revscoring.

code · docs
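
As a minimal sketch of how a client talks to the service (assuming the v3 scores endpoint at ores.wikimedia.org; the wiki, model names, revision ID, and contact address are placeholders, and the exact response nesting may differ between API versions), a score can be requested over plain HTTP:

import requests

# Ask ORES to score one revision of English Wikipedia with the
# "damaging" and "goodfaith" models (placeholder revision ID).
response = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/",
    params={"models": "damaging|goodfaith", "revids": "123456789"},
    headers={"User-Agent": "ORES demo <someone@example.com>"})

# The response is nested by wiki, then revision ID, then model.
scores = response.json()["enwiki"]["scores"]["123456789"]
print(scores["damaging"]["score"]["probability"])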

Wiki labels

Wiki labels is a human-computation system that runs on top of MediaWiki OAuth to deliver a flexible hand-coding interface for gathering the human judgments used to train and evaluate scorer models.

code · docs


Sub-projects


Contact us

We're an open project team, and there are many ways to get in touch with us.

Team

Tools that use ORES

Other possible uses

See also