Broad crawl

We finished our third broad crawl of 2019 (with a limit of 50 MB in step 1 and 16 GB in step 2) on 10 September. Across 602 jobs we harvested a total of about 93 TB, or 187 million objects. Many sites are blocking us; we plan to solve that by giving our new broad crawl harvesters new IP addresses and by updating our throttling firewall rules. Simultaneously we ran the selective crawls connected to the broad crawl: Research databases, Municipalities and regions, Ministries and Government agencies, and YouTube.
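The figures above imply some useful per-job averages. A minimal sketch, using only the totals quoted in this report (602 jobs, about 93 TB, about 187 million objects; the variable names are illustrative):

```python
# Rough per-job statistics derived from the 2019 broad crawl totals.
TOTAL_JOBS = 602
TOTAL_TB = 93
TOTAL_OBJECTS = 187_000_000

tb_per_job = TOTAL_TB / TOTAL_JOBS              # ~0.15 TB per job
objects_per_job = TOTAL_OBJECTS / TOTAL_JOBS    # ~310,000 objects per job
avg_object_kb = TOTAL_TB * 1e9 / TOTAL_OBJECTS  # 1 TB = 1e9 KB

print(f"{tb_per_job:.2f} TB/job, {objects_per_job:,.0f} objects/job, "
      f"average object size ≈ {avg_object_kb:.0f} KB")
```

The average object size of roughly half a megabyte is only a back-of-the-envelope figure, but it can help when sizing storage for the next crawl.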

Now we are doing the “cleaning up” and improvements to prepare for the next broad crawl.

Selective crawls

Getting IP-validated access to content behind paywalls is still a big issue, mainly getting in touch with the right person on the website owner’s side. We are also trying to solve another issue: we are not able to capture comments on news articles.

Ongoing projects

  • Implementation of SolrWayback (the main remaining step is the risk assessment)
  • New user access procedure and form
  • Data mining/extractions from the archive: confirm with our legal department that we follow all relevant laws if we create a workspace for users in SolrWayback
  • Visual instant QA of https seeds: configuration of PCs for reading WARC files (with SolrWayback)




  • We are almost finished with the first stage of our yearly domain crawl. We are still experiencing the Duplicate Job Generation Error. The next step is to move the database to a stronger server, as we suspect the slow-responding database is a cause of this error; we will do that after finishing the domain crawl. Then we will also be able to upgrade to version 5.6 and test that version in production. We will also downgrade the OpenMQ version to 4.5.2, as mentioned in the meeting today.
  • We are now using a dedicated IP range for the crawlers, which is no longer connected to the library network.
  • In the yearly budget discussion, management decided that we will not get more storage per year. So our total budget for next year stays the same: 6 TB for domain and selective crawls.



Broad Crawl

We started our annual broad crawl on 23 September.

There are almost 2 million websites, which we divide into sets of 500 domains per job with a limit of 150 MB per domain.

We use two dedicated networks (FTTH, Fibre to the Home) for the broad crawl, in order to keep the regular network free for our selective collections.

We have already collected 38% of the websites (26 TB of data) without major issues.