Community Wishlist Survey 2022/Larger suggestions/Support more than 1 link to each wikiprojects in each QID

Support more than 1 link to each wikiprojects in each QID

  • Problem: Currently, each Wikidata QID entry, representing a concept, is only allowed to link one article per wikiproject, e.g. per Wikipedia language edition.

    But there are, for example, Wikipedias written in languages with multiple scripts that cannot be converted mechanically, and thus require individual articles for each concept. And then there are projects like Multilingual Wikisource, Beta Wikiversity, and the Incubator, which are designed to contain versions of the same item in multiple languages.

    These currently cannot be linked to Wikidata normally; they require pre-Wikidata-era interwiki links to provide proper inter-language linking, plus the creation of Wikidata items that are complete duplicates of other existing items, via the property d:Property:P2952.

    This basically renders Wikidata useless for entries in such situations.

  • Proposed solution: Support more than 1 link to each wikiprojects in each QID
  • Who would benefit: Anyone who tries to use Wikidata expecting each QID to contain all relevant language editions of a Wikipedia article.

    Anyone who tries to access Wikipedia in languages that are written in more than one script.

  • More comments: This proposal is based on, and downsized from, Sandbox, and is a re-submission of similar proposals from the 2016 Community Wishlist as well as the 2017 Community Wishlist. A similar proposal was also made for, but withdrawn from, the 2019 Community Wishlist.

    Such situations are currently handled using the P2952 property in Wikidata, as mentioned above, in the circumstances described in the discussion that led to the creation of the property (d:Wikidata:Property_proposal/same_as_(permanently_duplicated_item)). However, this does not actually link the different articles/entities from different wikiprojects together: they still have two different QIDs, so features like interlanguage links and templates that use Wikidata data do not work properly.

    A partial fix for interwiki linking has been implemented for the Multilingual Wikisource by designating it as a multilingual wiki, so that links to it are stored differently in Wikidata. However, as far as I am aware, this workaround still cannot make a Multilingual Wikisource entry in a Wikidata QID item show up in the other-language-editions panel of the Wikisources of other languages.

  • Phabricator tickets: phab:T206426
  • Proposer: C933103 (talk) 10:09, 16 January 2022 (UTC)[reply]

Discussion

  • I have previously made my comment at phab:T54971#3569663.--GZWDer (talk) 19:22, 16 January 2022 (UTC)[reply]
    That comment isn't useful for at least cdo and hak (and probably nan?!), as their problems aren't simply "Bonnie and Clyde" articles. Though we could have some COIs about Wikisource (somewhat off-topic), I would still support such proposals to allow multiple links for one Wikidata item on selected wikis (not all, please; this really needs a blocklist by default, and an allowlist for some rarely-defined wikis per local and/or Meta consensus), but @Superchilum, ChristianKl, and Sannita: do you three still oppose such a solution? --Liuxinyu970226 (talk) 01:41, 17 January 2022 (UTC)[reply]
  • Just a note on the situation of Multilingual Wikisource: I'm not sure the multiple items problem applies there as much as for Incubator wikis, because separate Wikisource works are supposed to be separate editions and modelled as such in Wikidata (i.e. separate items). SWilson (WMF) (talk) 03:31, 17 January 2022 (UTC)[reply]
    But there are also Eastern Min and Hakka pages on mul.wikisource, and for linking them this solution would still have benefits, or linking would otherwise not be possible, right? Liuxinyu970226 (talk) 04:10, 17 January 2022 (UTC)[reply]
    @Liuxinyu970226: yes that's true, good point. SWilson (WMF) (talk) 05:18, 17 January 2022 (UTC)[reply]
    @SWilson (WMF): Not all languages get their own Wikisource project. Those that don't are designated to be stored on the Multilingual Wikisource (https://wikisource.org). And unlike the Incubator, some of those are not expected to become full wikiprojects in any foreseeable future. See the related Phabricator task phab:T275958. The Phabricator ticket on interwiki links to Multilingual Wikisource through Wikidata is now marked as "resolved" through a workaround, but it still has a few problems, including: a) the link to the project is located under the "In other projects" section instead of the "In other languages" section, so users cannot find the link when they are looking for the text in other languages, and mobile apps and the mobile web cannot include it in the "other languages" section of those frontends; and b) it still doesn't appear to allow multiple entries for the Multilingual Wikisource in the same Wikidata QID item. (For example, if there is a text in Linear Script A and another text in Linear Script B with the same content, after uploading them to the Multilingual Wikisource it still doesn't seem possible to link both of them in the same Wikidata QID item, and it would still be impossible to link to both of them from, say, English Wikisource through the "In other languages" section of the toolbar.) C933103 (talk) 07:38, 17 January 2022 (UTC)[reply]
    @C933103: You're right, it's not ideal. I think the problem is not with sitelinks though, but with the need to have two pages on one Wikisource that are of the same thing (i.e. in two different scripts). It seems like it'd be better to keep the "one page, one concept" idea (and so have singular sitelinks), and fix the other issues where they occur. That said, it looks like trying to resolve all of this is too big for CommTech to achieve within a reasonable time, so we've moved this to 'larger suggestions' (where it can still be voted on, and the relevant team will be able to act on if appropriate). SWilson (WMF) (talk) 03:12, 25 January 2022 (UTC)[reply]
    I cannot see that as a good way of organizing content on Wikisource, when Wikisource usually has even different copies of the same document in the same language on different pages. C933103 (talk) 16:15, 25 January 2022 (UTC)[reply]
  • There are three groups of wikis.
    • The first (biggest) group always has 1 page for 1 item; this is valid for the majority of wikis.
    • The second group - multilingual - is dedicated to being multilingual and usually has multiple pages for 1 item, but every page has its own language (Beta Wikiversity, Incubator, maybe Commons galleries, language-specific pages (e.g. village pumps) on Meta, Commons, Wikidata etc.).
    • And the last group are (usually) Wikipedias and mul.wikisource, with more pages for 1 item that differ in script or dialect. But these can probably be enumerated; there is probably no need for this in most main languages and large wikis. And this should be solved by adding a second (third...) language link.
      • Let's imagine there is a language XX which should have two entries for every item. For an ordinary wiki we write a langcode (en, de, cs) or a name (enwiki, dewikibooks, cswikinews). For XX we would use xx or xxwikipedia; when we need to add a second page, xx1 or xx1wikipedia (the first link would have the alias xx0). This can be easily stored; both can be written in the sitelinks list, and there should probably be a patch for recognizing [[xx0:Foo]] and [[xx1:Φοο]] as valid interwikis for xx.wikipedia. (I found a case where lad.wikipedia has three entries (2, 3), so simply use lad2.wikipedia, and hope that no more than 10 entries will exist for one item on one wiki.) JAn Dudík (talk) 09:33, 17 January 2022 (UTC)[reply]
    @JAn Dudík By my understanding, the final goal of this proposal is to deprecate Wikimedia permanent duplicate item (Q21286738), which results in too many dormant items being created. Liuxinyu970226 (talk) 10:18, 17 January 2022 (UTC)[reply]
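    The numbered-suffix naming scheme described above could be modeled roughly as follows. This is a hypothetical sketch: the suffixed site IDs and the parser are illustrative assumptions based on JAn Dudík's comment, not an existing Wikibase convention.

```python
import re

# Hypothetical convention: a suffixed site ID like "lad2wiki" addresses
# the third sitelink slot for lad.wikipedia; the unsuffixed form ("ladwiki")
# is equivalent to slot 0 (alias "lad0wiki" in the scheme above).
SITE_ID_RE = re.compile(r"^([a-z]+?)(\d)?wiki$")

def parse_site_id(site_id: str) -> tuple[str, int]:
    """Split a (hypothetical) suffixed site ID into (language, slot)."""
    m = SITE_ID_RE.match(site_id)
    if not m:
        raise ValueError(f"unrecognized site ID: {site_id}")
    lang, slot = m.group(1), m.group(2)
    return lang, int(slot) if slot else 0

print(parse_site_id("enwiki"))    # ('en', 0)
print(parse_site_id("lad2wiki"))  # ('lad', 2)
```

    Under this scheme, the single-digit suffix is what caps the number of entries per wiki at 10, as the comment notes.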
  • Oppose This is absolutely not a viable solution. The 1:1 linking on Wikidata has a reason to be, and Wikidata should not adapt to what other projects do, since it is a project with its own standing and its own purpose. What should be done is try to find a concerted solution all together - Wikidatans, Wikipedians, and Commonists - on how to overcome such a situation. Every project has its rights to its own particular situation, and so does Wikidata - it's been 9+ years of life for Wikidata, and people still do not understand this. --Sannita - not just another it.wiki sysop 11:14, 17 January 2022 (UTC)[reply]
    P.S. This, of course, does not apply to beta projects, mul.wikisource, and the likes - which all come with their own set of problems. I am merely talking about the rest of the projects and the perennial discussion about duplicated items, which should be solved by the communities by talking to each other, and not forcing solutions down people's throats. Sannita - not just another it.wiki sysop 11:21, 17 January 2022 (UTC)[reply]
    Why not? This is already a blocker for deploying phase I (sitelinks support) for the Incubator per the 2016 Wishlist. Just remember: not all people like the de facto 1:1 link scheme; there really are people who want to break it down. Or do you really know what cdowiki and hakwiki want to have?! Liuxinyu970226 (talk) 14:04, 17 January 2022 (UTC)[reply]
    The current use of permanently duplicated items is not going to benefit Wikidata more than the proposed approach. Since Wikipedia articles are being linked to multiple different entries on Wikidata, the data fields in the two different QIDs cannot be shared, resulting in data quality discrepancies. Not to mention that people outside WMF projects, even Unicode emoji proposals, are trying to adopt QIDs as identifiers for objects, and keeping QIDs non-unique per concept does not appear to be a good idea for data consumers either. C933103 (talk) 21:50, 17 January 2022 (UTC)[reply]
    I never said I am a fan of the Wikimedia permanent duplicate item (Q21286738) solution - which isn't a solution. I said that we need to overcome the problem thinking about other solutions than this one and the one proposed here. Sannita - not just another it.wiki sysop 16:06, 23 January 2022 (UTC)[reply]
    There are two articles on Wikipedia whose content corresponds to the same QID. You say you are not a fan of storing the two articles under two different QIDs, yet you are against storing them under the single QID. So what are the other possible alternative solutions? Not storing them at all? Or storing them in even more QIDs? C933103 (talk) 12:11, 24 January 2022 (UTC)[reply]
    If there is the problem of recalling the right data from the right item, it can already be done with arbitrary access. As simple as that. It's a rough patch, but it works, and it doesn't require years of destroying and reconstructing a software, just because.
    The problem with wikis such as nap, eml, cdo, hak, etcetera is that we are asking users to do the same articles all over again two or more times, just because they need to switch the script or the local variant. This to me is the real problem, and it will not be solved by violating the 1:1 rule on Wikidata, but investigating a way to make easier switching between scripts/variants.
    Will it take time? Yes. About as much time it's taking to realise the 2016 wish to make Incubator and Multilingual Wikisource linkable by Wikidata (which is based off the idea of violating the 1:1 rule of linking), but at least we'd be asking the right request to devs. Sannita - not just another it.wiki sysop 20:41, 24 January 2022 (UTC)[reply]
    The practice of hosting multiple scripts as different articles on Wikipedia long predates the creation of Wikidata, and breaking it would, by my understanding, also break the Language Committee policy except in specific situations, in addition to causing difficulty in navigation.
    There is already a LanguageConverter in the MediaWiki software, which handles situations where such conversion is possible, but Latin to Han character conversion is not among them. First of all, in addition to characters with exactly the same pronunciation, the romanized writing systems of the Chinese languages do not include tones, which multiplies the number of possible Han characters with the same romanized spelling, requiring a human to select characters for each word from a candidate list if one is typing Han characters through such a romanization scheme. It is not realistic to combine both versions of an article into a single one just to have every single word annotated with the proper character, as that would create source code for each article that no one could easily understand or edit. In addition, imported words are also treated differently between script versions. A romanized writing system will simply include the Latin-script name of a foreign word when importing it, except for some words with established localized names; however, the Han character version always has to translate these words into Chinese characters, deciding whether they should be translated phonetically or contextually, and, if phonetic translation is chosen, which characters among homonyms fit the context of the word best. This process is not something that can be automated. In some cases, depending on the word being imported, for example the word "Googling" derived from the word "Google", it would be reasonable to keep the word as-is in romanized text, but in the Han character edition it would be necessary to reword the entire sentence structure to write it out in Han script.
In other words, this is the sort of translation that even Google Translate with its machine learning cannot perfect, and I think it would be much easier for Wikidata's system to adapt to actual real-world usage by numerous wikis than to rework all the relevant wikis according to Wikidata's standard, not to mention that doing so would require something even more powerful than the best commercial translation software in the world today.
    And that is not to mention the likes of Multilingual Wikisource, whose project goal outright states that it is supposed to contain text in multiple languages. And of course, even if automated translation were available, it wouldn't make sense to use it on Wikisource, because Wikisource is supposed to keep the original form, spelling, and word choice of a text, except for translated versions.
    As for "investigating a way to make easier switching between scripts/variants", that is of course also part of the problem. Currently there is only 1-to-1 linking within each Wikipedia. Such inter-article linking was the supposed purpose of the development of Wikidata. If Wikidata can't handle this, should additional Wikibase instances be installed at every relevant wikiproject, so as to allow the affected projects to link among their own pages, with Wikidata pointing to each of those individual Wikibase instances? I don't think that is desirable to anyone, due to the complexity of operation and maintenance. C933103 (talk) 13:27, 25 January 2022 (UTC)[reply]
  • I am here to clarify what I mean: Min Dong is written in Han script (cdo-hani) and Latin script (cdo-latn), so we introduce two virtual site ids: cdo-haniwiki and cdo-latnwiki. Users can add sitelinks to these two virtual site ids. Wikibase will map them to pages in cdowiki and guarantee that one cdowiki page may only be linked once. Another example: in the Alemannic Wikipedia, you may introduce an "alswikisource" virtual site id and link pages there, which automatically map to alswiki pages.--GZWDer (talk) 20:15, 17 January 2022 (UTC)[reply]
    My concern about this approach is the technical difficulty behind it. C933103 (talk) 21:51, 17 January 2022 (UTC)[reply]
    @GZWDer: you propose xx-variant, I propose xx1; technically the same ;-) But xx-variant is better in case this feature is limited to certain wikis. xx1 is more universal, but means more mess. JAn Dudík (talk) 07:16, 19 January 2022 (UTC)[reply]
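    The virtual-site-id idea in this thread can be sketched as a small in-memory model: two virtual site ids map onto one real wiki, while each real page may be sitelinked from at most one item. The class, mapping, and method names here are invented for illustration and are not part of Wikibase.

```python
# Hypothetical mapping of virtual site IDs onto the real wiki they live on.
VIRTUAL_TO_REAL = {"cdo-haniwiki": "cdowiki", "cdo-latnwiki": "cdowiki"}

class SitelinkStore:
    """Toy sitelink store enforcing 'each real page linked at most once'."""

    def __init__(self):
        self._used = {}  # (real_wiki, page_title) -> owning item QID

    def add(self, qid: str, site: str, title: str) -> None:
        real = VIRTUAL_TO_REAL.get(site, site)  # resolve virtual -> real wiki
        owner = self._used.get((real, title))
        if owner is not None and owner != qid:
            raise ValueError(f"{title} on {real} is already linked from {owner}")
        self._used[(real, title)] = qid

store = SitelinkStore()
store.add("Q1", "cdo-haniwiki", "福州話")       # Han-script page
store.add("Q1", "cdo-latnwiki", "Hók-ciŭ-uâ")  # Latin-script page, same item: OK
# store.add("Q2", "cdo-haniwiki", "福州話")    # would raise: page already linked
```

    The point of the sketch is that the uniqueness constraint GZWDer mentions is checked against the real wiki, not the virtual site id, so one item can carry two sitelinks into cdowiki without any real page being claimed twice.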

Voting

  •   Support Opposing this also opposes the many communities that are interested in multiple scripts; this is inappropriate for at least Asian users. About some oppose comments like "this is absolutely not a viable solution", I would say that permanently duplicated items are not a solution either; they are dummy items. --Liuxinyu970226 (talk) 08:03, 29 January 2022 (UTC)[reply]
  •   Support Aca (talk) 13:14, 29 January 2022 (UTC)[reply]
  •   Oppose I would rather prefer we take the time to build a system that allows Wikimedia project pages to link to their relative Wikidata item instead of Wikidata items linking to their relevant pages using the sitelink system. That way we can stably have multiple pages link to one item. Lectrician1 (talk) 20:42, 29 January 2022 (UTC)[reply]
    @Lectrician1: Arbitrary access isn't available on some wikis due to, and affected by this block, so such a system will also not able to work well, afaik. --Liuxinyu970226 (talk) 01:28, 30 January 2022 (UTC)[reply]
    @Liuxinyu970226 I don't exactly understand? What block? Lectrician1 (talk) 03:01, 30 January 2022 (UTC)[reply]
    Such an alternative approach can only provide a unidirectional link from an individual wiki article to a specific Wikidata item, and cannot enable linking into the affected articles, making those articles similar to orphan pages. C933103 (talk) 10:27, 30 January 2022 (UTC)[reply]
    @C933103 If you want interlinks with this system, the system just shows articles on other Wikis that link to the same item. Lectrician1 (talk) 16:13, 31 January 2022 (UTC)[reply]
    Hence that's part of the reason why the current approach is not sufficient, without getting into more in-depth cases of Wikidata usage. C933103 (talk) 17:26, 1 February 2022 (UTC)[reply]
  •   Support Libcub (talk) 01:26, 31 January 2022 (UTC)[reply]
  •   Strong oppose As I already said during the discussion phase, this is absolutely not a viable solution. The 1:1 linking on Wikidata has a reason to be, and Wikidata should not adapt to what other projects do, since it is a project with its own standing and its own purpose. If there is the problem of recalling the right data from the right item, it can already be done with arbitrary access. As simple as that. It's a rough patch, but it works, and it doesn't require years of destroying and reconstructing a software, just because. Moreover, what should be IMHO done is try to find a concerted solution all together - Wikidatans, Wikipedians, and Commonists - on how to overcome such a situation. More specifically, the problem with wikis such as nap, eml, cdo, hak, etcetera is that we are asking users to do the same articles all over again two or more times, just because they need to switch the script or the local variant. This to me is the real problem, and it will not be solved by violating the 1:1 rule on Wikidata, but investigating a way to make easier switching between scripts/variants. Will it take time? Yes. About as much time it's taking to realise the 2016 wish to make Incubator and Multilingual Wikisource linkable by Wikidata (which is based off the idea of violating the 1:1 rule of linking), but at least we'd be asking the right request to devs. Sannita - not just another it.wiki sysop 12:03, 31 January 2022 (UTC)[reply]
    Note that "the 2016 wish to make Incubator and Multilingual Wikisource linkable by Wikidata (which is based off the idea of violating the 1:1 rule of linking)" is also part of the wish here, as it doesn't seem to be fully implemented. C933103 (talk) 17:39, 1 February 2022 (UTC)[reply]
    @Sannita

    Q: The 1:1 linking on Wikidata has a reason to be
    A: [citation needed]

Liuxinyu970226 (talk) 02:49, 11 February 2022 (UTC)[reply]

  •   Support not for every wiki, but for selected group of wikis JAn Dudík (talk) 21:47, 31 January 2022 (UTC)[reply]
  •   Oppose For same reasons as other negative votes. --Sascha (talk) 06:59, 2 February 2022 (UTC)[reply]
    @Sascha this is already a blocker for deploying Phase I support on the Incubator; what does opposing still achieve? Liuxinyu970226 (talk) 02:46, 11 February 2022 (UTC)[reply]
  •   Oppose KingAntenor (talk) 07:23, 2 February 2022 (UTC)[reply]
  •   Support Mitar (talk) 20:58, 2 February 2022 (UTC)[reply]
  •   Oppose Rendering a page in more than one script is a valid problem that should be solved, but not by breaking a fundamental principle on which Wikidata is built. Silver hr (talk) 15:21, 3 February 2022 (UTC)[reply]
    It has been analyzed and concluded that this cannot be solved, because each script lacks the information necessary to convert it to another script. C933103 (talk) 17:53, 3 February 2022 (UTC)[reply]
    If the problem for converting some scripts is missing information, then that's a subproblem that should be solved by adding the missing information. Silver hr (talk) 21:01, 3 February 2022 (UTC)[reply]
    @Silver hr: The scripts are structured in a way that does not write down such information. As an example, the English we are writing here cannot be converted into IPA script, because when we type English with these 26 characters, we do not specify which accent we are speaking in. Your proposed solution is like reforming the English writing system to add extra symbols and markings so that it can reflect phonetically how different people speak different words on different occasions. C933103 (talk) 13:43, 4 February 2022 (UTC)[reply]
    @C933103: English can be converted into IPA, either by specifying a particular pronunciation, or by using the standard pronunciation (or if there's no official standard, some sensible default). As I said, the problem you're presenting is valid, but it seems you're too fixated on a single, non-optimal solution. Let's find better solutions by discussing the actual problem. Silver hr (talk) 17:10, 4 February 2022 (UTC)[reply]
    @Silver hr: Yes, imagine requiring editors to "specify a particular pronunciation" for every word in English Wikipedia when editing, and see what a hassle that would be. That is what you are proposing in order to enable conversion of articles written in abjad scripts, which do not usually include vowels, into other scripts, or conversion of articles written in Latin or syllabic scripts, which do not carry the etymology of each sound, into different Han characters. Not to mention special orthographical rules. You can try to match each character to a default, but the conversion is far from guaranteed to be correct, because there are many possible corresponding outputs for quite a number of single inputs.
    LanguageConverter facilitates such conversion, and also allows specifying special conversion rules for each conversion topic and each individual Wikipedia article. But taking Chinese Wikipedia as an example: even though only a small part of the Chinese script has special non-1-to-1 conversion rules between the Traditional and Simplified scripts, and even after all the aforementioned conversion tables at various levels have been implemented, there is still an additional burden on editors to fix incorrectly converted words inside article text, and every once in a while there are still incorrect conversions that need customized rules to match actual usage. And that is after two decades of continuous work to bring it to its current state, on a wiki as large as Chinese Wikipedia. I simply cannot see this working for languages whose scripts have a more complex relationship with each other than the two Chinese scripts, or in cases that are "just" as complex but don't have as many users to maintain them, or where only a relatively small portion of the whole wikiproject's user base is interested in the non-mainstream script.
    Note that even if a conversion system had 99.99% accuracy in converting one word from one writing system to another, when you apply it to a 5000-word article, by the binomial distribution there would still be about a 40% chance that at least one word in the article is converted incorrectly. And that's just one article. C933103 (talk) 23:14, 4 February 2022 (UTC)[reply]
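    The estimate in the comment above can be checked with a few lines; this standalone sketch assumes independent per-word errors, as the binomial model does.

```python
# P(at least one conversion error) in an n-word article, given per-word
# accuracy p: 1 - P(every word converted correctly) = 1 - p**n.
p = 0.9999   # 99.99% per-word accuracy
n = 5000     # words in the article

p_any_error = 1 - p ** n
print(f"{p_any_error:.1%}")  # prints "39.3%"
```

    So roughly two out of every five such articles would contain at least one conversion error, matching the ~40% figure above.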
    @C933103: In this hypothetical example, editors specifying a particular pronunciation for every English word would not be necessary. If one wants to read an English article in IPA script, one would select a pronunciation and the whole article could be automatically converted because the conversion is already defined for every word. If it is important that a particular word have a particular pronunciation, then an editor can specify that as a manual override. I don't see why conversions between other writing system pairs couldn't be defined as well. From what you're saying, it can be a lot of work, and I'm sure it is, but the amount of work is finite and it'll get done at some point. Until it is, we'll have to deal with imperfection and fix mistakes when we see them, which is pretty much standard for volunteer projects. Silver hr (talk) 01:56, 5 February 2022 (UTC)[reply]
    What is the point of the 1:1 policy in Wikidata? Language converters can't work for some titles, but for example: when we type a page title in zh-tw (the Taiwanese variant of Chinese), it is sometimes auto-redirected to the article under the proper zh term, or manually redirected (with a tw template) (I don't know Chinese, so I can't give any examples). Sometimes it does not exist and we have to search for it. Also, we need to improve auto-redirects by capitalization. And Wikidata would work the same way as before. Thingofme (talk) 14:27, 5 February 2022 (UTC)[reply]
    @Silver hr: Please see the table at w:en:Heteronym_(linguistics)#English: each of these words has to be either specified or guessed at every instance where it appears. And this list of English words with such a problem covers only a minor part of the English language, while many of the languages I mentioned before have the same problem for essentially all words in the language. C933103 (talk) 14:45, 5 February 2022 (UTC)[reply]
    @C933103: Right, so the problem is that by just looking at a single word, out of context, it can be impossible to determine its meaning, such as lead the element vs lead the activity. The solution I prefer is to enable editors to express their meaning clearly in such cases (with defaults selected according to grammatical, and potentially semantic analysis). If a language exists where such ambiguity of meaning between different writing systems is the rule instead of the exception, then it makes more sense to treat them as different languages, at least from a technical standpoint. Silver hr (talk) 17:11, 5 February 2022 (UTC)[reply]
    @Silver hr: Yes, that is the solution to this wish proposed by GZWDer and JAn Dudík in the discussion section above. But again, my concern is whether it would be even more technically complex than my proposed form of implementation. However, I am open to whichever form of implementation can get the task done. It would probably be up to whoever ends up changing the code to decide which is easier to implement. C933103 (talk) 17:47, 5 February 2022 (UTC)[reply]
      Comment Regarding LanguageConverter: it's confirmed impossible for Romanian due to the political problems recounted in another RFC, and could be very hard for each of the Min languages and for Ladino, as explained on the respective Wikipedias. See also Wikipedias in multiple writing systems, and feel free to add information if you know more. Liuxinyu970226 (talk) 04:14, 6 February 2022 (UTC)[reply]
    @Silver hr ^^ Liuxinyu970226 (talk) 02:44, 11 February 2022 (UTC)[reply]
    @Liuxinyu970226: Romanian script conversion is not impossible because that would mean it's contrary to the known laws of physics. And I do realize it's a hard problem in some instances, but solving a problem the right way is worth it, no matter the difficulty. Silver hr (talk) 06:23, 11 February 2022 (UTC)[reply]
    @Silver hr Not impossible? Liuxinyu970226 (talk) 07:26, 11 February 2022 (UTC)[reply]
    @Liuxinyu970226: I don't understand the point of your reply. As I've already said, the laws of physics determine whether something is or isn't possible, not politics. Silver hr (talk) 18:43, 11 February 2022 (UTC)[reply]
  •   Support only for the Incubator, Old Wikisource and Beta Wikiversity (Beta Wikiversity and the Incubator should be merged), with each test wiki having only 1 link, and   Oppose for others. No need to do this at all otherwise (though conversion of Chinese scripts is a concern). Thingofme (talk) 14:22, 5 February 2022 (UTC)[reply]
    On, for example, the Minnan and Hakka projects, the concern is to link to Han script pages in addition to Latin script pages, as it isn't possible to convert from Latin script to Han script with reasonable accuracy. C933103 (talk) 14:46, 5 February 2022 (UTC)[reply]
  •   Support --Ciao • Bestoernesto 20:01, 6 February 2022 (UTC)[reply]
  •   Support only for incubator Zache (talk) 08:33, 7 February 2022 (UTC)[reply]
  •   Oppose This doesn't exactly sound like a Wikidata issue — if the issue is language conversion, maybe we can find other ways to do it in the wikis where it is a problem. Xn00bit (talk) 09:23, 8 February 2022 (UTC)[reply]
    It has been explained that automatic language conversion is not possible, and thus the fastest way is to manually retype the article in the other script. C933103 (talk) 20:33, 8 February 2022 (UTC)[reply]
  •   Support either this proposal, or Lectrician1's idea. This is definitely needed for the wikipedias: different language versions make different choices about how to partition topics (and groups of topics) into articles, and there are many, many cases where there isn't anything like a 1:1 match between articles. See d:Wikidata:WikiProject Cross Items Interwikis (and also e.g. this discussion for the quandaries that arise when dealing with articles about single-species genera, of which there are thousands on the English Wikipedia). Uanfala (talk) 23:14, 10 February 2022 (UTC)[reply]
  •   Support * Pppery * it has begun 02:46, 11 February 2022 (UTC)[reply]