Grants talk:IEG/StrepHit: Wikidata Statements Validation via References
Your feedback is crucial to the project! Feel free to add your thoughts here!
Repherence -> reference?
Wikidata Annotation Tool Google Summer of Code 2014 project
In last year's GSoC (2014), Apsdehal worked on building a Wikidata annotation tool. You can review the weekly updates for the project and grab the source code on GitHub. The core idea of the Wikidata Annotation Tool was to use the semantic annotation tool Pundit (http://thepund.it) to create statements to be automatically fed into Wikidata (keeping the original source as a reference). It seems to me that this project is quite similar in spirit and goal; is my interpretation correct? I am (of course) happy to know that there are several efforts in this direction. --CristianCantoro (talk) 18:24, 7 September 2015 (UTC)
- This is very much for texts outside of WMF. Consequently the tools part is most welcome. Thanks, GerardM (talk) 09:25, 8 September 2015 (UTC)
- Thanks @CristianCantoro for the pointer! It does indeed seem to be closely related work. Unfortunately, I cannot seem to try it: the Web service does not exist and I cannot log in to Pundit using the provided bookmarklet. From what I understand from the project completion report, this tool lets a user manually annotate a text fragment of interest, while StrepHit will do it automatically. Is my understanding close to the truth?
- --Hjfocs (talk) 17:33, 8 September 2015 (UTC)
Hello Marco, congratulations, I like your draft proposal!
Your work packages and project goals are ambitious and realistic. For your soccer use case you propose collecting a corpus from a set of about 50 authoritative web sources such as The Telegraph, Encyclopedia Britannica, the DFB and Der Spiegel. 250,000 documents are to be chosen from these sources, where one document yields one reference URL. In the use case in the proposal these are two media sources, one online encyclopedia other than Wikipedia, and one "official" site.
The English Wikipedia article for Cordoba already has 10 references, so it would be useful (and maybe quite easy) to add these references to StrepHit as some primary suggestions. (Or are they already somewhere there in the primary sources tool?)
Instead of The Telegraph, I would recommend FIFA.com as a more authoritative source, e.g. http://www.fifa.com/news/y=2015/m=2/news=krankl-the-hero-of-cordoba-2524061.html : Sentence: "The Germans, for example, always enjoy recalling the 1954 World Cup Final, better known as the ‘Miracle of Bern’, …. Whenever these two nations meet, football aficionados rummage through the history books to reference Austria’s 3-2 win at the 1978 World Cup in Argentina. …. The encounter is remembered as the ‘Miracle of Cordoba’ by one set of fans and the ‘Shame of Cordoba’ by the other."
I understand that this is probably a more challenging task than the sentence "The Miracle of Cordoba, when they eliminated Germany from the 1978 World Cup". But this task could show a lot of natural language processing capabilities. (By the way, is there really a property "eliminated in"? I did not find it.) --Projekt ANA (talk) 22:59, 12 September 2015 (UTC)
- Hi @Projekt ANA, thanks for your interest and for your suggestions!
- Regarding references in Wikipedia, there is Sourcery by @Magnus Manske. It is interesting to see that reference URLs proposed by StrepHit can also be added to Wikidata through this tool (I tried with Andrea Pirlo). I think that an integration between Sourcery and the primary sources tool can be of great benefit for Wikidata.
- The source selection phase is intended to stick to the Wikidata verifiability guidelines and the Wikipedia ones. That said, I agree the FIFA source you mention may be more authoritative than the Telegraph one, since FIFA is the "official" reference for soccer. The examples were all meant to be simple, just to facilitate understanding of the idea: that is also why you found a property that does not exist in Wikidata.
- --Hjfocs (talk) 09:55, 14 September 2015 (UTC)
Frame Semantics and embodied cognition
The proposed implementation is based on Frame Semantics. Did you also consider an embodied cognition approach, like ITALK and Poeticon for the iCub? The soccer use case also deals with the human body, e.g. Player_Body_Parts has 41 lexical units in kicktionary.de. Embodied cognition and a model of human anatomy would be very useful for this use case. --Projekt ANA (talk) 12:07, 13 September 2015 (UTC)
- Hi @Projekt ANA,
- thanks again for the stimulating links; I had not considered such an approach. How do you think it could be integrated into StrepHit? Currently the focus is on verbal lexical units, which are likely to trigger factual frames and may be more suitable for Wikidata properties, while I assume human anatomy would be modeled through nominal lexical units, as in the Kicktionary example. I am not sure how to map such relations to Wikidata and am eager to hear your thoughts.
- --Hjfocs (talk) 10:27, 14 September 2015 (UTC)
- Hello @Hjfocs,
- thank you for your quick response!
- I think an integration of embodied cognition concepts into StrepHit could start by expanding the concept of lexical units, frames and scenarios, beginning with Perception_active, _body and _experience. Nominal lexical units can very often be transformed into verbs (e.g. the knee – to knee; feeling – to feel).
- Let us take the sentence again:
- The Miracle of Cordoba, when they eliminated Germany from the 1978 World Cup
- One result is
- <Germany, eliminated in, Miracle of Cordoba> (assuming such a property will exist)
- But there are also:
- <Germany, participant, Miracle of Cordoba>
- <Germany, participant, 1978 World Cup>
- <they, participant, 1978 World Cup>
- <they, participant, Miracle of Cordoba>
- <Miracle of Cordoba, location, Cordoba>
- <Miracle of Cordoba, point in time, 1978>
- <1978 World Cup, point in time, 1978>
- <1978 World Cup, location, Cordoba>
- <Miracle of Cordoba, part of, 1978 World Cup>
- And if we take
- "The Germans, for example, always enjoy recalling the 1954 World Cup Final, better known as the ‘Miracle of Bern’, …. Whenever these two nations meet, football aficionados rummage through the history books to reference Austria’s 3-2 win at the 1978 World Cup in Argentina. …. The encounter is remembered as the ‘Miracle of Cordoba’ by one set of fans and the ‘Shame of Cordoba’ by the other."
- We could also get:
- <Miracle of Cordoba, named after, Miracle of Bern>
- This would follow from the similarity of the phrases (‘Miracle of …’) and from the context of this reference, in which both ‘miracles’ are cited (whether this ‘named after’ is correct remains to be seen).
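- To illustrate the discussion, the candidate statements above can be modelled as plain subject-property-object tuples. Here is a minimal Python sketch; the property labels and the `shares_pattern` heuristic are purely illustrative assumptions on my part, not actual Wikidata properties or StrepHit code:

```python
from collections import namedtuple

# A candidate claim: subject, property, object.
# Property labels here are illustrative, not actual Wikidata properties.
Statement = namedtuple("Statement", "subject prop obj")

candidates = [
    Statement("Germany", "participant", "Miracle of Cordoba"),
    Statement("Miracle of Cordoba", "part of", "1978 World Cup"),
    Statement("Miracle of Cordoba", "location", "Cordoba"),
]

def shares_pattern(a, b, pattern="Miracle of"):
    """Heuristic: two distinct event names built on the same phrase
    pattern may hint at a 'named after' relation, to be confirmed
    by human curators."""
    return a != b and a.startswith(pattern) and b.startswith(pattern)

if shares_pattern("Miracle of Cordoba", "Miracle of Bern"):
    candidates.append(
        Statement("Miracle of Cordoba", "named after", "Miracle of Bern"))

for s in candidates:
    print("<%s, %s, %s>" % s)
```

- Any such heuristic would of course only produce suggestions for human validation, never direct edits.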
- Embodied cognition also covers emotions and feelings – enjoying a victory, but also mocking the loser – and the corresponding frame has to deal with the sarcasm of the ‘miracle’.
- May I ask what your plans are concerning the parser? Will you use TurboParser v2.3 (on GitHub)? And will you use the 1214 lexical units? And SEMAFOR, and the 1926 lexical units (en-de-fr) of the Kicktionary, including the 16 scenes? Or will you develop a different parser? Thank you very much for the information!
- Cheers --Projekt ANA (talk) 22:02, 15 September 2015 (UTC)
- Hey @Projekt ANA,
- Let me clarify that the approach I propose follows a data-driven, bottom-up strategy: the set of frames to be used will emerge from the input corpus analysis step, according to the set of top verbal lexical units (cf. point #2 in the implementation details and T3 in the work package).
- That said, I like your idea of first choosing the frames you mentioned and then developing the pipeline.
- The additional statements you pointed out can indeed be extracted, as they may all represent frame elements (depending on the frame definition). This is not made explicit in my examples just for the sake of simplicity.
- Your example does seem a challenging one, as it would require further NLP techniques: for instance, co-reference resolution may be needed to extract <Miracle of Cordoba, named after, Miracle of Bern>.
- With respect to the parser, the idea is to apply part-of-speech tagging only and to avoid syntactic parsing. Hence, we may consider reusing the TurboParser part-of-speech tagging module; a review is planned, cf. T2 in the work package. The same applies to SEMAFOR, although the initial idea is to use a supervised classifier based on SVMs.
- Regarding the lexical units, we do not plan to exploit the full FrameNet or Kicktionary, since the lexical units will emerge from the corpus analysis step. A subset of those lexical databases will then be considered.
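- As a rough illustration of the "POS tagging only, no syntactic parsing" idea, here is a minimal Python sketch that builds a context-window feature set around a verbal lexical unit; such features could then feed a supervised frame classifier such as an SVM. The tag set, feature names and example sentence are illustrative assumptions, not StrepHit code:

```python
# Given a POS-tagged sentence, build a flat feature window around a
# verbal lexical unit. No syntactic parse is needed: only the tokens
# and their POS tags. These features would then be fed to a
# supervised frame classifier (e.g. an SVM).

def frame_features(tagged, verb_index, window=2):
    """tagged: list of (token, pos_tag) pairs.
    Returns a feature dict describing the verb and its context."""
    features = {"lu": tagged[verb_index][0].lower()}  # the lexical unit itself
    for offset in range(-window, window + 1):
        i = verb_index + offset
        if offset == 0 or not 0 <= i < len(tagged):
            continue  # skip the verb itself and out-of-range positions
        token, pos = tagged[i]
        features["tok%+d" % offset] = token.lower()
        features["pos%+d" % offset] = pos
    return features

# Illustrative sentence with Penn Treebank-style tags.
sentence = [("Austria", "NNP"), ("eliminated", "VBD"), ("Germany", "NNP"),
            ("from", "IN"), ("the", "DT"), ("1978", "CD"),
            ("World", "NNP"), ("Cup", "NNP")]

print(frame_features(sentence, verb_index=1))
```

- The classifier itself would map such feature dictionaries to frame labels; the sketch only shows that the representation needs nothing beyond POS tags.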
- Cheers! --Hjfocs (talk) 10:54, 17 September 2015 (UTC)
- P.S.: why don't you consider collaborating on the project? You can do so via the blue "Join" button on the proposal page.
9/29/15 Proposal Deadline: Reminder to change status to 'proposed'
This draft is looking like it's well on its way. I'm writing to remind you to make sure to change the status of your proposal from 'draft' to 'proposed' by the September 29, 2015 deadline in order to submit it for review by the committee in the current round of IEG. If you have any questions or would like to discuss your proposal, let me know. We're hosting a few IEG proposal help sessions this month in Google Hangouts. I'm also happy to set up an individual session. Warm regards, --Marti (WMF) (talk) 20:44, 20 September 2015 (UTC)
- Thanks for the reminder @Marti (WMF), the status is updated. Looking forward to hearing @I_JethroBT (WMF)'s thoughts as well, then we can schedule a discussion. Cheers --Hjfocs (talk) 08:37, 22 September 2015 (UTC)
Clarifications as per Markus's endorsement
Hi @Markus, thanks for your endorsement!
Let me address here the points you raised:
- The project plan is very fine-grained, maybe too fine-grained for a 6 month project (speaking from experience here).
- the Work Package has been built with as much specificity and pragmatism as possible. On the other hand, I understand that the tasks may be broken down too much: the risk here would be the additional effort of reporting any changes to the subtasks. Or do you think the work package is too optimistic and I should be more conservative?
- I would like a clearer commitment to creating workable technical infrastructure here. Content (extracted facts) should not be the main outcome of an IEG; rather there should be a fully open sourced processing pipeline that can be used after the project.
- I completely agree and had not highlighted this, since I assumed it was an implicit requirement. I will stress this in the goals.
- How does the interaction with OntoText fit into the open source strategy of WMF? (As far as I recall, OntoText does not have open source options for its products.)
- Ontotext has expressed its willingness to collaborate as a partner of the European Union Multisensor research project. Given the public nature of such EU-funded efforts, I assume its outcomes will be published as open source. I will check that with @Vladimir Alexiev.
- We'd contribute effort and advice, not closed-source tools. In Multisensor we're using open source tools (e.g. SEMAFOR) and converting to RDF (an embedding of FrameNet into NIF). We're interested in advancing our FrameNet knowledge and skills, and interested in WD in general. --Vladimir Alexiev (talk) 14:04, 28 September 2015 (UTC)
- One of the main goals are 100 new active users of the primary sources tool. But how would this be measured? Since Primary Sources is still under development, it is to be expected that the user numbers will grow naturally over the next year. How can this be distinguished from the users attracted by this IEG project?
- A qualitative measure to capture StrepHit-specific success signals can be set up, using for instance a dedicated project page, a request for comment and a survey.
- From a quantitative perspective, the primary sources status API endpoint does not seem to handle dataset grouping yet. I opened a ticket in the repo and will explicitly mention this in the proposal.
Basic flaw
In this proposal it is suggested that the primary sources tool is a success. It is not. Arguably, data languishes in there and there is hardly any movement of data to Wikidata. When this is how success is measured, show me failure. Thanks, GerardM (talk) 11:04, 27 September 2015 (UTC)
- @GerardM: these seem to be value judgments that are not supported by evidence. The primary sources tool status page reports a constantly growing number of edits day by day: as of September 28th, they amount to 24,472, compared to 19,201 two weeks earlier, showing a steady progression. New top users are also regularly appearing. This suggests the tool is gaining traction even though it is still under development. See also this article for a higher-level overview and @Markus's comment above. The quantitative measures of success for StrepHit are built upon this evidence, which contradicts your criticism. Cheers, --Hjfocs (talk) 08:57, 28 September 2015 (UTC)
Eligibility confirmed, round 2 2015
This Individual Engagement Grant proposal is under review!
We've confirmed your proposal is eligible for round 2 2015 review. Please feel free to ask questions and make changes to this proposal as discussions continue during this community comments period.
The committee's formal review for round 2 2015 begins on 20 October 2015, and grants will be announced in December. See the schedule for more details.
fnielsen's endorsement feedback
@Fnielsen: first of all, thanks for your endorsement! I really like your idea of a URL + HTML tags whitelist for verifiable sources. I'm not sure whether the tool you suggest is already available or not. @Platonides maintains a spam blacklist; maybe he can point us to something relevant. Cheers! --Hjfocs (talk) 09:22, 19 October 2015 (UTC)
- @Fnielsen: This page contains a URL blacklist for the primary sources tool, although it does not seem to be as exhaustive as @Platonides's one. As a first step, we could add a whitelist page there. Cheers, --Hjfocs (talk) 09:55, 20 October 2015 (UTC)
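- As a rough sketch of how such a URL whitelist could work, here is a minimal Python example. The domains listed are purely illustrative assumptions; the real whitelist would live on a community-maintained wiki page:

```python
from urllib.parse import urlparse

# Hypothetical whitelist of authoritative domains; the actual list
# would be maintained on a dedicated wiki page, like the blacklist.
WHITELIST = {"fifa.com", "britannica.com", "dfb.de"}

def is_whitelisted(url, whitelist=WHITELIST):
    """True if the URL's host is a whitelisted domain or one of its
    subdomains (www. is treated like any other subdomain)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in whitelist)

print(is_whitelisted("http://www.fifa.com/news/y=2015/..."))  # True
print(is_whitelisted("http://random-blog.example/post"))      # False
```

- Matching on the parsed hostname rather than on a substring of the raw URL avoids false positives such as a spam page merely mentioning fifa.com in its path.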
Aggregated feedback from the committee for StrepHit: Wikidata Statements Validation via References
- I would like to thank all the reviewers for their comments.
- Let me address below the points needing additional explanations.
- The applicants should be commended for identifying a third-party basis for the proposed labor rates.
- I have included in the proposal a supplementary clarification of the budget items concerning the human resources.
- See Markus' concerns about scope.
- I would like to stress that the Work Package has been built upon previous work carried out under the Google Summer of Code 2015 program (cf. this section). That project was successful not only in terms of technical implementation, but also in terms of community attraction, even though it ran over a shorter time frame (i.e., 3 months) and at a smaller scale.
- how it will integrate into Wikimedia projects.
- StrepHit is intended to serve as a reusable tool which will generate content for Wikidata when run. Wikidata was born to become the central structured data hub for all the Wikimedia projects: currently, a large number of wikis can already be fed by Wikidata via the arbitrary access efforts, most notably 250 Wikipedia language editions, Wikisource, and Wikivoyage. Hence, StrepHit will potentially benefit all the Wikimedia projects thanks to its inclusion in what is meant to become the standard flow for content addition in Wikidata, namely the primary sources tool.
- As a final note, let me highlight a crucial point for further development beyond the IEG time frame, for which I feel I have a special interest and skills: multilingualism, as pointed out in phase 2 of the community engagement.
- English is scheduled for implementation in the 6-month scope, due to its high coverage and thus high impact. However, I would love StrepHit to support more and more languages: since I am proficient in French and Spanish, and a native speaker of Italian, these would be the next ones. Therefore, reaching out to those language communities at an early stage is an important step.
- Cheers, --Hjfocs (talk) 14:36, 20 November 2015 (UTC)
Round 2 2015 decision
Congratulations! Your proposal has been selected for an Individual Engagement Grant.
The committee has recommended this proposal and WMF has approved funding for the full amount of your request, $30,000.
Comments regarding this decision:
The Committee is excited to be a funding partner in the investigation into what has been described as a “canonical question” of the Wikimedia movement. We appreciate your work to establish key partnerships--as reflected in your endorsements--which are critical to this project’s success, and we would like to see some of these formalize into advisor positions, where possible. We look forward to the research documentation to come from this project, as well as the concrete data quality enhancements on Wikidata.
- You will be contacted to sign a grant agreement and setup a monthly check-in schedule.
- Review the information for grantees.
- Use the new buttons on your original proposal to create your project pages.
- Start work on your project!
Letters of Support
Support from the Spanish DBpedia
Many members of our research group at UPM (Universidad Politécnica de Madrid) and I are delighted with the StrepHit project. We consider that this initiative will increase the value of DBpedia, one of the most widely used datasets. Additionally, this project can also provide valuable generic infrastructure for the multilingual conversion of natural language into semantic data, applicable to areas beyond DBpedia.
Therefore, we want to express our support to the people responsible for funding this project.
The Spanish DBpedia is the second-largest DBpedia, just after the English version, in terms of the amount of semantic data generated from Wikipedia. The results of phase 1 of the StrepHit project, focused on the Italian DBpedia, are very promising, and its extension to languages such as Spanish would produce an important increase in the amount of semantic data available. These new data could be exploited not only in academia but also by companies. Analysis of the data requests made to the Spanish DBpedia shows increasing interest from companies; indeed, in terms of number of requests, even higher than that from academia.
Support from ContentMine
ContentMine and StrepHit share a very similar interest in mining and extracting information in bulk from sources and producing machine-readable output. We're interested in making any data that the community would like in Wikidata quickly and easily available. To this end, we're very excited about the work done by Marco on both the Primary Sources tool and on aligning the kind of information extracted from full-text sources with the Wikidata data model. We're hoping to start using the Primary Sources tool in the next few days/weeks as a means of making our output available.
The role of usability
It seems the grant currently focuses a lot on the data-mining part, while improving usability isn't a big part of it. I would advocate that future grants for StrepHit take usability concerns more into account and fix issues such as the page reloading every time a statement is approved. ChristianKl (talk) 08:07, 24 August 2016 (UTC)