Mass content adding
Mass content adding is a project whose main goals are (1) semi-automatic addition of content to small Wikipedias (though not limited to small Wikipedias) and (2) building a centralized data source for semi-automatic or automatic updating of content on all Wikipedias. It can also be used for other Wikimedia and wiki projects (or anything else able to adopt the project's methods).
Examples for this project include taking templates from the English Wikipedia, such as Infobox Country or templates for movies, minerals, species, etc., and primitively translating their content into templates and sentences in other languages.
- Adding content related to geographic data: places, rivers, mountains, etc.
- Primitive translation of articles on the English Wikipedia (from English into other languages).
- Keeping data up to date.
- ... (add your goal, too) ...
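The "primitive translation" of templates mentioned above can be sketched as a simple field-name mapping. The template name and Serbian parameter names below are invented examples for illustration, not a real mapping used by the project:

```python
# Minimal sketch of "primitive translation" of a template invocation.
# The template name and localized field names are hypothetical examples.

# Per-language mapping for the template name and its parameter names.
FIELD_MAP_SR = {
    "Infobox Country": "Инфокутија Држава",
    "capital": "главни_град",
    "population": "становништво",
}

def translate_template(fields, mapping):
    """Rename a template's parameters using a field-name mapping;
    values (numbers, proper nouns) pass through untranslated."""
    return {mapping.get(name, name): value for name, value in fields.items()}

english = {"capital": "Belgrade", "population": "7,000,000"}
print(translate_template(english, FIELD_MAP_SR))
```

Parameters without a mapping entry are kept under their English names, so a partial mapping still produces a usable (if mixed-language) result that editors can finish by hand.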
- The National Geospatial-Intelligence Agency of the US military.
- Data obtainable from various institutions that collect it (for example, national institutes for statistics, but other organizations, too).
- English Wikipedia, as well as other big Wikipedias.
- The Common Locale Data Repository (CLDR), an active project hosted at Unicode.org, provides XML files with localized language and country names for many languages.
- ... (add your source, too) ...
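As a sketch of how the CLDR source could be used, localized country names can be pulled out of a CLDR locale file with the Python standard library alone. This assumes a local copy of a CLDR "main" locale file (e.g. main/sr.xml); the path in the commented usage is a placeholder:

```python
# Sketch: extract localized territory (country) names from one CLDR
# locale file. Assumes a local copy of the CLDR data; path is a placeholder.
import xml.etree.ElementTree as ET

def territory_names(cldr_file):
    """Return {ISO code: localized name} from one CLDR locale file."""
    root = ET.parse(cldr_file).getroot()
    return {
        t.get("type"): t.text
        for t in root.findall("localeDisplayNames/territories/territory")
    }

# names = territory_names("cldr/main/sr.xml")
# names.get("FR")  # the localized name for France in that locale
```

A bot (e.g. one built on pywikipediabot, as suggested below) could feed such a dictionary into localized templates or article stubs.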
- The main method should be a wiki (perhaps a separate Wikimedia wiki in the future). Data, localization, etc. should be kept on the wiki (in this case Meta Wikimedia) and used by different software platforms.
- Localization: Sentences, templates etc. should be localized.
- Over time, we should develop standard methods for all of these issues.
- There are many very useful projects on the English Wikipedia (such as w:Wikipedia:Naming conventions (Cyrillic)) whose results can be used in this project.
- Translation of general articles could be done using OmegaT, given a feature that retrieves the text from one Wikipedia, builds a translation memory while the text is being translated, and saves the translated text under its destination name on the other Wikipedia. This helps with repetitive text: templates, for example, will be translated consistently, and translation memories can be exchanged, so others will reuse the same template names (we already have cases of duplicate names). The translation memory, together with the glossary, also helps to find terminology quickly, so contributors' time is used more effectively and people can do more in the same timeframe. OmegaT is already used for translating content from Italian into Neapolitan and, as far as I know, also for Sicilian.
- Used content should be in public domain or licensed with some free license.
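The localization method listed above (keeping sentence patterns per language and filling them with data) can be sketched as follows. The sentence patterns and field names here are invented examples, not wording maintained by the project:

```python
# Sketch of localized sentence generation from structured data: each
# language keeps a sentence pattern (e.g. on the wiki), and a bot fills
# in the values. Patterns and field names are hypothetical examples.
SENTENCES = {
    "en": "{name} is a city in {country} with {population} inhabitants.",
    "de": "{name} ist eine Stadt in {country} mit {population} Einwohnern.",
}

def render(lang, record):
    """Fill the language's sentence pattern with one data record."""
    return SENTENCES[lang].format(**record)

city = {"name": "Novi Sad", "country": "Serbia", "population": "250,000"}
print(render("en", city))
# → Novi Sad is a city in Serbia with 250,000 inhabitants.
```

Note that the inserted values themselves stay untranslated in this sketch; in practice the country name and number formatting would also need per-language localization (which is exactly what a source like CLDR can supply).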
Main article: Mass content adding/Localization.
Main article: Mass content adding/Software.
Main article: Mass content adding/Geographic data.
Using English Wikipedia
Using any Wikipedia
- Calendar translations - IT-NAP, EN-??
If you want to participate in this project, please sign here.
- --Millosh 16:00, 23 February 2006 (UTC) (General issues, Countries of the world, Places and municipalities in Macedonia, Places and municipalities in Serbia and Montenegro)
- --misos 16:03, 23 February 2006 (UTC) (Places and municipalities in Macedonia).
- --Babbage 17:54, 23 February 2006 (UTC) (Learn to use pywikipediabot to import CLDR data?)
- --Sabine 12:20, 24 February 2006 (UTC) Use of OmegaT for translation.
- --Slavik IVANOV 12:41, 9 March 2006 (UTC) providing a place for good practice at os:, cv: and probably other smaller but growing Wikipedias.
- --Bonzo 23:31, 27 August 2006 (UTC)
- Representing the Tajik Wikipedia. - FrancisTyers 23:50, 30 August 2006 (UTC)
- I think that this work should be licensed under the GNU GPL, not under the GNU FDL. It is a combination of public domain data and software. --Millosh 20:25, 15 August 2006 (UTC)
Please add your bot's name here if it can be made available for this project.