Grants talk:IdeaLab/Controversy Monitoring Engine
More, please.
Hi Radfordj, I love this idea, and I'd love to know more! How will you determine "words"? Also, you may want to take a look at https://meta.wikimedia.org/wiki/Grants:IdeaLab/Gender-gap_admin_training and consider how you might collaborate/support. --Mssemantics (talk) 21:01, 18 March 2015 (UTC)
- @Mssemantics: I think the words would probably come from an initial hand-curated list of the typical four-letter variety: the b-word, c-word, n-word, f-words, etc. (I don't say them here so this post never gets flagged :). But, once we're able to identify posts that contain intimidating language, an inductive list of words, weighted by the number of times they appear in intimidating posts, would probably be used. See these word clouds (language warning!) for nice examples of the words that occur around intimidating language.
- As for admin training (which I love by the way), I think this project can fold into it in two ways. The first is by wrapping in some kind of first-response training. I don't know what admins will get into with this monitoring. The monitor might pick up flame wars that spread out over weeks or it might pick up wars that emerge and die out in a matter of minutes. I don't know. Maybe using this engine would help the training in getting admins to think about how gap-conscious interventions can happen in a live-controversy context. The other connection could be in training admins on how the controversy monitor works. There's an underlying theory of controversy that the algorithm will be built on and which admins can contribute to or maybe learn more about controversies based on how the algorithm works. I'm sure there are other synergies too. Did you have some in mind? --Radfordj (talk) 12:32, 19 March 2015 (UTC)
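The seed-list-plus-inductive-weighting idea described above can be sketched in a few lines of Python. This is only a rough illustration: the seed words here are harmless placeholders (the real list would be hand-curated), and the tokenizer and weighting scheme are assumptions, not anything specified in the proposal:

```python
import re
from collections import Counter

# Harmless placeholder seed list; the real one would be hand-curated.
SEED_WORDS = {"darn", "heck"}

def tokens(post):
    """Lowercase word tokens; a stand-in for a real tokenizer."""
    return re.findall(r"[a-z']+", post.lower())

def is_intimidating(post, word_list=SEED_WORDS):
    """Flag a post if it contains any word from the curated seed list."""
    return any(tok in word_list for tok in tokens(post))

def inductive_weights(posts, word_list=SEED_WORDS):
    """Weight every word by how often it appears in flagged posts,
    giving an inductive, data-driven extension of the seed list."""
    counts = Counter()
    for post in posts:
        if is_intimidating(post, word_list):
            counts.update(tokens(post))
    return counts
```

Words that repeatedly co-occur with flagged posts would then become candidates for the weighted second-stage list.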
Eligibility confirmed, Inspire Campaign
This Inspire Grant proposal is under review!
We've confirmed your proposal is eligible for the Inspire Campaign review. Please feel free to ask questions and make changes to this proposal as discussions continue during this community comments period.
The committee's formal review begins on 6 April 2015, and grants will be announced at the end of April. See the schedule for more details.
Questions? Contact us at grants(at)wikimedia.org.
pointers and comments
hi there, I really like the idea, I would just like to learn more about certain parts of it (and have some pointers for you).
two distinct approaches to showing conflict. Maybe they help or are even reusable.
- regarding your estimates, I would think that 40 h is too little time to develop and test a robust method to identify harmful controversy (as there is of course also good controversy, apart from simple vandalism fighting). In particular, how you actually want to test this does not become quite clear in the proposal, and I think it would be good to expand on that. That is an important point if you want the tool to be widely used later on, because a) it can be quite subjective what counts as intimidating behavior, and b) low precision or recall might render the tool not effective enough for actual use by Wikipedians (especially precision, i.e. raising false alarms).
- another point, related to the last one: just showing how controversial an article is might not directly relate to how intimidating some editors are towards, e.g., women and newcomers. For a controversy to "heat up" to the degree that it produces a high conflict score, it requires at least two parties going back and forth at each other. Newcomers (and maybe women as well) might be less inclined to even enter an escalation like this, so the big "wars" would actually not be the ones you would want to detect, but rather instances where a user tried a couple of times to change something and was unilaterally reverted (without fighting back very much). Or maybe this is already included in your concept (I was not sure)?
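The unilateral-revert pattern just described can actually be detected without a full conflict score, by comparing revert counts in each direction for every editor pair. A minimal sketch, with arbitrary placeholder thresholds:

```python
from collections import Counter

def one_sided_reverts(revert_log, min_reverts=3, max_back_ratio=0.25):
    """revert_log: list of (reverter, reverted) editor pairs.

    Flags pairs where A repeatedly reverted B while B rarely reverted
    back, i.e. a unilateral pattern rather than a symmetric edit war.
    Thresholds are placeholders, not empirically tuned values."""
    counts = Counter(revert_log)
    flagged = []
    for (a, b), n in counts.items():
        back = counts.get((b, a), 0)
        if n >= min_reverts and back <= max_back_ratio * n:
            flagged.append({"reverter": a, "reverted": b,
                            "reverts": n, "reverts_back": back})
    return flagged
```

A symmetric edit war (both directions near-equal) would not be flagged here; a newcomer reverted four times who reverted back only once would be.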
just some thoughts.
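The earlier point about precision and recall could be made concrete with a small evaluation harness run against hand-labeled posts. A minimal sketch (the post ids and labels in the usage note are hypothetical):

```python
def precision_recall(predicted, actual):
    """predicted, actual: sets of post ids flagged as intimidating
    by the tool and by human labelers, respectively."""
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall
```

With hypothetical sets `{1, 2, 3}` predicted and `{2, 3, 4}` labeled, both come out to 2/3; low precision in particular corresponds to the false alarms the reviewer warns about.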
- Thanks for so many references and good questions. I've been looking at Contropedia and whoVIS along with Wiki War Monitor. These tools have been great examples for me to think through the technical and measurement decisions going into designing and implementing this engine.
- The most productive question I've asked myself is where controversy monitoring fits into Wikipedians' everyday editing. I think both WhoVIS and Contropedia do a great job of laying out the history of edit conflict in a way that's intuitively navigable. The issue I have, even with my own idea for a page modeled on stats.wikipedia, is in envisioning how Wikipedians would find these external pages and refer to them on a regular basis. Either Wikipedians would have to know they exist and be able to find the kinds of pages they're interested in, or a new community of Wikipedians (like the New Pages Patrol) would have to spring up around them. I think either vision could work, but each would take a lot of time to realize.
- Because of this, my thinking has evolved more towards implementing bot-constructed boxes like the "Current Status of Articles" box on Project Pages (see the [Feminism Project Page] for example). In this framework, the pages in a project page would be scraped and evaluated for controversy and then scored. The box might then link to the article itself or to a more in-depth analytic engine like WhoVIS for further analysis.
- Finally, to your questions about use, positive and negative controversy, and intimidation versus controversy. My hope is that the engine brings more eyes and ears to a controversy ("sunlight is the best disinfectant"). The premise is not that controversy itself is positive or negative, but that the actions people take when they disagree are destructive or constructive. Controversy can lead to harassment, intimidation, and a range of other negative behaviors, and I think the more people are watching, the better behaved people will be. In the same vein, this is not an incivility-detection engine meant to automate the classification of individuals or specific actions as intimidation or harassment, or to characterize particular pages or groups as hostile or uncivil. I'm working on this civility question in my research, but implementing and publishing incivility scores or labels for individual editors, communities, or pages seems to me to be policing by code. But, then again, maybe there's a way to define intimidation clearly enough that automatically labeling or scoring someone as intimidating would be taken as legitimate. --Radfordj (talk) 12:08, 29 April 2015 (UTC)
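The bot-constructed box idea above ultimately boils down to turning per-article controversy scores into a small block of wikitext for a Project page. A minimal sketch, with made-up scores and no actual scraping or bot plumbing:

```python
def status_box(scores, top_n=5):
    """scores: {article_title: controversy_score}.

    Renders a minimal wikitable listing a project's most controversial
    pages, roughly in the spirit of the 'Current Status of Articles'
    boxes on Project pages. Layout details are illustrative only."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    lines = ['{| class="wikitable"', "! Article !! Controversy score"]
    for title, score in ranked[:top_n]:
        lines.append("|-")
        lines.append(f"| [[{title}]] || {score:.2f}")
    lines.append("|}")
    return "\n".join(lines)
```

Each row could just as easily link out to a deeper analytic view (e.g. a WhoVIS page) instead of the article itself.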
- I forgot to address your question about validating the scale. The controversy calculation would begin with the Wiki War formula, which I take to be useful at face value (the aggregate list of controversial pages in their paper looks like what you'd expect to be controversial). The first round of testing is meant to properly scale the score. The Wiki War controversy scores were long-tailed, meaning most articles get crammed into the low-controversy end of the scale. In this step, I'd run the controversy scoring on different subsamples of articles to examine the distribution of controversiality for modest sample sizes. After that, I'd add several other features, like reversion and cross-talk on the discussion and user pages, and maybe others. But, for the most part, I imagine the Wiki War score and the Number of Disagreement Actions from WhoVIS/Contropedia should go the furthest in adding new information to the scale.
- Having said that, I'll be continuing to work on controversy scoring (and other things) moving forward. This grant project is really meant to build the infrastructure for the initial code and for presenting controversiality to Wikipedians. Changes will continue to happen after the grant is over. The grant is just covering a minimally viable implementation. --Radfordj (talk) 12:29, 29 April 2015 (UTC)
Aggregated feedback from the committee for Controversy Monitoring Engine
Feedback was aggregated in four categories:
- (A) Impact potential
- (B) Community engagement
- (C) Ability to execute
- (D) Measures of success
Additional comments from the Committee:
Inspire funding decision
This project has not been selected for an Inspire Grant at this time.
We love that you took the chance to creatively improve the Wikimedia movement. The committee has reviewed this proposal and not recommended it for funding, but we hope you'll continue to engage in the program. Please drop by the IdeaLab to share and refine future ideas!
Comments regarding this decision:
Thanks for engaging in the Inspire campaign! We’d love to see you return in a future round of Individual Engagement Grants if you’ve got other ideas, or with this idea if you are able to incorporate feedback to address some of the suggestions and concerns from the committee’s review.
- Review the feedback provided on your proposal and ask for any clarifications you need using this talk page.
- Visit the IdeaLab to continue developing this idea and share any new ideas you may have.
- To reapply with this project in the future, please make updates based on the feedback provided in this round before resubmitting it for review in a new round.
- Check the Individual Engagement Grant schedule for the next open call to submit proposals or the Project and Event Grant pages if your idea is to support expenses for offline events - we look forward to helping you apply for a grant in the future.