Information for webmasters
The National Library of Luxembourg (BnL) harvests websites automatically in accordance with the law of 25 June 2004 “portant réorganisation des instituts culturels de l’Etat” and the “Règlement grand-ducal du 6 novembre 2009 relatif au dépôt légal”. Publications in non-material form that are accessible to the public through electronic means, for instance over the Internet, are subject to legal deposit in Luxembourg.
The websites harvested in this way enrich the heritage collections of the National Library, which can thus collect and preserve digital publications for future generations.
How does it work?
The harvesting is done with the Heritrix web crawler. Because this program does not interpret JavaScript completely, it may occasionally generate invalid URLs. This is of course not the intention of the BnL, but it cannot be avoided at the current state of the technology.
robots.txt
The BnL’s crawler respects the robots.txt file, with a few exceptions. Any file necessary for the complete display of a webpage (e.g. CSS, images, …) is downloaded even if it appears in the robots.txt exclusion list. Moreover, the landing pages of all sites are collected regardless of the robots.txt settings. In any event, the BnL reserves the right to change this policy as needed, in accordance with the “Règlement grand-ducal du 6 novembre 2009 relatif au dépôt légal”.
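As an illustration, the following Python sketch models the exception rules described above. The helper function and the sample robots.txt rules are hypothetical; the BnL’s actual harvesting is done by Heritrix, not by this script.

from urllib.parse import urlparse
from urllib import robotparser

def should_fetch(url, robots, is_embedded_resource):
    # Hypothetical helper illustrating the policy described above,
    # not the BnL's actual implementation.
    # Landing pages of all sites are always collected.
    if urlparse(url).path in ("", "/"):
        return True
    # Files needed for the complete display of a page (CSS, images, ...)
    # are downloaded even if robots.txt excludes them.
    if is_embedded_resource:
        return True
    # Everything else respects the robots.txt exclusion rules.
    return robots.can_fetch("heritrix", url)

# Sample robots.txt that excludes /private/ for all crawlers.
robots = robotparser.RobotFileParser()
robots.parse(["User-agent: *", "Disallow: /private/"])

print(should_fetch("https://example.lu/", robots, False))                    # True: landing page
print(should_fetch("https://example.lu/private/style.css", robots, True))    # True: needed for display
print(should_fetch("https://example.lu/private/report.pdf", robots, False))  # False: excluded by robots.txt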
Contact
If our crawler has an adverse effect on the performance of your website, please contact us at: . We will try to fix the problem as soon as possible.
Webarchive.lu
Please visit the website of the Luxembourg Web Archive for further information about our activities and collections.