...

Broad crawl

We finished step 1 of our third broad crawl for 2019 (with a limit of 50 MB per domain in step 1) and on 10 September we started running step 2 (with a limit of 16 GB per domain). Step 1 results: in 602 jobs we harvested a total of about 93 TB, or 187 million objects. Many sites are blocking us; we will address this by giving our new broad crawl harvesters new IP addresses and by updating our throttling firewall rules. Simultaneously we ran the selective crawls connected to the broad crawls: Research databases, Municipalities and regions, Ministries and Government agencies, and YouTube.

We are now doing the "cleaning up" and improvements to prepare for the next broad crawl.
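As background on the stepped quota model, here is a minimal Python sketch of how a per-domain byte limit (50 MB in step 1, 16 GB in step 2) can be reasoned about. All names and the bookkeeping are purely illustrative assumptions; the real limits are enforced inside the crawler configuration, not by code like this.

    from urllib.parse import urlsplit

    # Illustrative per-domain byte limits for the two broad crawl steps
    # (50 MB in step 1, 16 GB in step 2).
    STEP_LIMITS = {1: 50 * 1024**2, 2: 16 * 1024**3}

    harvested = {}  # domain -> bytes harvested so far in the current step

    def within_quota(url, step):
        """Return True if the domain of `url` is still under the quota for this step."""
        domain = urlsplit(url).hostname
        return harvested.get(domain, 0) < STEP_LIMITS[step]

    def record_fetch(url, size):
        """Charge the downloaded bytes against the domain's quota."""
        domain = urlsplit(url).hostname
        harvested[domain] = harvested.get(domain, 0) + size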

Selective crawls

Getting IP-validated access to content behind paywalls is still a big issue; the difficulty is getting in touch with the right person on the website owner's side. We are also trying to solve another issue: we are not able to capture comments on news articles.

Ongoing projects

  • Implementation of SolrWayback (almost the only remaining step is the risk assessment)
  • New user access procedure and form
  • Data mining/extractions from the archive: confirm with our legal department that we follow all relevant laws if we create a workspace for users in SolrWayback
  • Visual instant QA of https seeds: configuration of PCs for reading WARC files (with SolrWayback); see the sketch after this list
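
For the WARC reading item above, the following is a minimal sketch of inspecting the response records of a WARC file with the Python warcio library. The filename is a placeholder, and the snippet only illustrates the kind of quick QA reading the PCs would be set up for; it is not our actual setup.

    from warcio.archiveiterator import ArchiveIterator

    # Placeholder filename; iterate over the response records of a WARC file
    # and print the HTTP status line and target URL for a quick visual QA pass.
    with open('example.warc.gz', 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == 'response':
                url = record.rec_headers.get_header('WARC-Target-URI')
                status = record.http_headers.get_statusline() if record.http_headers else ''
                print(status, url)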

...