InternetArchiveBot/FAQ


This page contains a list of frequently asked questions about InternetArchiveBot.

Q: How can I run the bot on a set of pages that need it?

A: You can use the queuing tool for this. If you have a list of pages that need to be analyzed by the bot, paste the list of articles you want the bot to process, one article per line, and click submit. That's it. Your request will be assigned a job ID and queued until a worker can process it. You can follow its progress in real time on the interface.
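As a hedged sketch of what such a submission might look like, the tool expects nothing more than plain page titles, one per line (these article titles are placeholders, not a real queue):

```text
Example article
Another example article
A third example article
```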

Q: The bot swapped the archive URL and the original URL in a citation template, but the original URL is dead. What happened?

A: IABot simply moved the archive URL into the correct parameter, the one designed for archive URLs, and left the original URL in the field dedicated to the original URL. This keeps the citation consistent with the other citation templates in use, and conforms to the templates' intended usage. Whenever possible, archives should be placed in the appropriate archive URL parameter, and the original URL should be placed in the appropriate URL parameter.
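As an illustration, on the English Wikipedia this layout corresponds to the |url= and |archive-url= parameters of w:en:Template:Cite web (a sketch only; parameter names and the exact template vary between wikis, and the URLs below are placeholders):

```wikitext
<ref>{{cite web |url=http://example.com/page.html |title=Example page
 |archive-url=https://web.archive.org/web/20200101000000/http://example.com/page.html
 |archive-date=2020-01-01 |url-status=dead}}</ref>
```

Even though the original URL is dead, it stays in |url= so readers can see what was originally cited, while the archive lives in |archive-url=.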

Q: The bot keeps messing up a specific source on the page. How can I stop this?

A: That depends. If it is persistently trying to tag the source as dead, or persistently adding a bad archive, then you should help the bot by telling it the archive is no good using the URL management tool. This tool lets users look up URLs the bot has encountered and fix any errors associated with them, which makes the bot more reliable. If the bot is breaking the syntax, or is mis-formatting the source, you should report that with the bug reporting tool.

Q: The bot is making bad edits to all the sources on the page. I don't think it's worth having the bot run on this specific page; how do I stop the bot from editing the page completely?

A: Be careful with this. Are you sure the bot is harming the page more than it is helping? Consider: is it breaking the formatting of the page with disruptive edits, or is it simply mis-tagging sources and providing bad archives? If the former, please report it with the bug reporting tool, as it is a bug that needs fixing. Otherwise, if you are certain that the current sources on the page will not benefit from the bot's work, or that you will spend more time cleaning up after the bot, then you can place {{nobots|deny=InternetArchiveBot}} on the article, or use {{cbignore}} on the individual sources the bot mishandles. This will keep the bot away from the page. It also means link rot will not be addressed on that article until the tag is removed again. Please also help the bot become more reliable by fixing bad data with the URL management tool and the domain management tool.

Q: What is {{cbignore}}? What is {{nobots}}?

A: {{cbignore}} is a specific blank template some wikis use to signal IABot to completely ignore a reference or external link on a page. An example of its documentation and usage can be found at w:en:Template:Cbignore. {{nobots}} is an exclusion template some wikis use to signal compliant bots to stay away from a page altogether. IABot is bots and nobots compliant; to keep away only IABot, use {{nobots|deny=InternetArchiveBot}} anywhere on the page.
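As a sketch of how the two templates are placed (this follows English Wikipedia conventions; template availability and behavior vary between wikis, and the reference shown is a placeholder):

```wikitext
{{nobots|deny=InternetArchiveBot}}
<!-- placed anywhere on the page: IABot skips the entire page -->

<ref>{{cite web |url=http://example.com |title=Example}}</ref>{{cbignore}}
<!-- placed directly after one reference: IABot skips only that reference -->
```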

Q: The bot tagged a source as dead, but the source isn't dead. What happened?

A: If the site was down only temporarily, but long enough, the bot may have considered the source dead: either the site failed to validate as alive 3 times in a row, during 3 separately spaced-out checks, or the site has blacklisted the bot from further access, leaving the bot unable to assess whether the site is alive. At that point the bot considers the URL permanently dead, and you should report it with the false positive reporting tool. In most cases this tool will correct the issue in the bot on its own. If it can't, the issue will be reported to the interface roots.

Q: The bot mangled the page. What happened?

A: Please report it with the bug reporting tool, and do not apply {{cbignore}} or {{nobots|deny=InternetArchiveBot}}. The page will be used to replicate the bug, and chances are that once the bug is fixed, the bot will prove useful on the page.

Q: The number of links reported as rescued differs from the number actually rescued. Is something wrong?

A: Don't be alarmed; this can happen. If the number actually fixed is lower than advertised, something went wrong with that link and it got skipped. This is easily fixed by manually repairing the source using the provided archive, if it works. If the number is higher, then two sources on the page are identical character for character: they were picked up once internally but replaced more than once externally. This is nothing to worry about.