Talk:Machine Wikipedia
Misunderstanding of Wikidata
Your proposal claims that:
Wikidata only provides structured data for one concept and the article may contain many concepts that are not included in its Wikidata item.
You seem to be assuming that a Wikidata item can only exist for a "concept" which is represented by a Wikipedia article; this is not the case. In fact, there are substantially more Wikidata items already in existence (currently about 131 million) than there are articles on all language editions of Wikipedia combined.
(Alternatively, if what you mean is that Wikipedia articles often contain facts which are not encoded in Wikidata - this is undoubtedly true, but is probably best addressed by adding these facts to Wikidata where possible, and, where that's difficult, by suggesting improvements which could be made to Wikidata to rectify that, rather than a completely new project which covers the same ground.)
Omphalographer (talk) 07:11, 19 November 2024 (UTC)
- @Omphalographer Because Wikidata is currently filled in by humans, and humans are reluctant to do so, the corresponding Wikidata item holds very little machine-readable data, even for the main concept of an article. I am surprised that automatic filling of Wikidata using NLP has not been implemented yet.
- See, Wikidata does not implement Web 3.0: it is not based on RDF Schema, so many sentences cannot be encoded using Wikidata alone (a sketch of what RDF Schema statements look like is given after this comment).
- Additionally, each article contains at least 20 sentences, and each sentence yields at least 3 or 4 RDF triples, so each article produces at least 60 triples (an illustration of this arithmetic also follows the comment). Each of those triples would then have to be inserted into the related Wikidata item. Inserting 60 or more triples per article into Wikidata is time-consuming and can introduce false data. Tim Berners-Lee did not propose using integrated databases; in fact, he proposed placing machine-readable data in the same location as the human-readable data.
- So Wikidata does not implement Web 3.0. Thanks, Hooman Mallahzadeh (talk) 05:52, 21 November 2024 (UTC)
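For context on the RDF Schema point above: the sketch below is a minimal illustration, using the rdflib Python library, of the kind of schema-level statements (class hierarchies, property domains and ranges) that the RDF Schema vocabulary expresses. The example.org classes and properties are hypothetical placeholders, not actual Wikidata entities.

 from rdflib import Graph, Namespace
 from rdflib.namespace import RDFS

 # Hypothetical classes and properties for illustration only (not Wikidata IRIs).
 EX = Namespace("http://example.org/")

 g = Graph()
 # RDF Schema can state class hierarchies and constrain properties:
 g.add((EX.SteamEngine, RDFS.subClassOf, EX.Engine))   # "a steam engine is a kind of engine"
 g.add((EX.poweredBy, RDFS.domain, EX.Engine))         # poweredBy applies to engines ...
 g.add((EX.poweredBy, RDFS.range, EX.EnergySource))    # ... and points to an energy source

 print(g.serialize(format="turtle"))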
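To illustrate the triple-count arithmetic above: the sketch below, again using rdflib, encodes one example sentence as RDF triples. The sentence, the example.org namespace, and the property names are hypothetical choices for illustration, not actual Wikidata properties.

 from rdflib import Graph, Literal, Namespace
 from rdflib.namespace import RDF, XSD

 # Hypothetical namespace and properties for illustration only (not Wikidata IRIs).
 EX = Namespace("http://example.org/")
 g = Graph()

 # "The Eiffel Tower, completed in 1889, is a wrought-iron lattice tower in Paris."
 g.add((EX.EiffelTower, RDF.type, EX.LatticeTower))
 g.add((EX.EiffelTower, EX.locatedIn, EX.Paris))
 g.add((EX.EiffelTower, EX.material, EX.WroughtIron))
 g.add((EX.EiffelTower, EX.completionYear, Literal("1889", datatype=XSD.gYear)))

 print(len(g))   # 4 triples from a single sentence

At 20 sentences per article and 3 to 4 triples per sentence, this gives roughly 60 to 80 triples per article, which is the estimate used in the comment.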