From EPrints Documentation
Revision as of 14:16, 9 August 2016


Web crawling robots are a fact of life. Many are "out there" on the web, and many do a good job of indexing our content. However, a growing number of robots cause problems for repository owners: they place unnecessary load on repository servers and skew the download statistics for the published data. We at EPrints Services and IRUS have identified a number of harmful robots, either by their IP address or by their user agent. We are working to produce and maintain a simple list of these, so that repository systems administrators can more easily filter or block them.
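The kind of filtering described above can be sketched as a simple check of each incoming request against a blocklist of IP prefixes and user-agent substrings. The entries and helper function below are purely illustrative assumptions, not the actual list maintained by EPrints Services and IRUS:

```python
# Hypothetical blocklist entries for illustration only.
# 192.0.2.0/24 is a reserved documentation range (RFC 5737),
# and the agent names are made up, not real harmful robots.
BLOCKED_IP_PREFIXES = ["192.0.2."]
BLOCKED_AGENT_SUBSTRINGS = ["badbot", "examplecrawler"]

def is_blocked(ip, user_agent):
    """Return True if a request matches a blocklist entry,
    either by IP prefix or by user-agent substring."""
    if any(ip.startswith(prefix) for prefix in BLOCKED_IP_PREFIXES):
        return True
    ua = user_agent.lower()
    return any(s in ua for s in BLOCKED_AGENT_SUBSTRINGS)
```

In practice a repository administrator would apply such a list at the web-server level (for example, in Apache access rules) rather than in application code, so that blocked requests never reach the repository software or its statistics.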