Status of the production sites
We started our first broad crawl of 2019 on January 26 – step 1, with a limit of 10 MB per domain. We have withdrawn a number of sites from the normal broad crawl; they are crawled in parallel under three separate definitions: “ultra big sites”, “OAI extraction” (research databases) and “ministries and government agencies”.
A big issue for our broad crawls is webhosting companies. To avoid being blocked by these hosts, we make agreements with them and set up throttling so that we do not overload their servers.
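As an illustration only – a minimal sketch, not our actual production settings – per-host throttling of this kind can be expressed in the politeness section of a Heritrix 3 crawler-beans.cxml; all values below are placeholders:

    <!-- Politeness settings: slow down fetches per host so shared webhosting
         servers are not overloaded. The values are illustrative placeholders. -->
    <bean id="disposition"
          class="org.archive.crawler.postprocessor.DispositionProcessor">
      <!-- wait roughly 5x the duration of the previous fetch before the next one -->
      <property name="delayFactor" value="5.0"/>
      <!-- never wait less than 3 s or more than 30 s between fetches to one host -->
      <property name="minDelayMs" value="3000"/>
      <property name="maxDelayMs" value="30000"/>
      <!-- honour a robots.txt Crawl-delay of up to 300 s -->
      <property name="respectCrawlDelayUpToSeconds" value="300"/>
      <!-- cap the bandwidth used against any single host -->
      <property name="maxPerHostBandwidthUsageKbSec" value="200"/>
    </bean>

In Heritrix such values can also be scoped to individual domains via sheet overlays, which is one way agreements with specific hosting companies could be reflected.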
We focus on getting content behind paywalls by negotiating IP-based validation. Paywalls are an issue for almost all national news media, and we will miss essential content if we do not get access behind them.
We will have parliamentary elections this year, before the end of June or the beginning of July. We are preparing our strategy both for the parliamentary elections and for the elections to the European Parliament, which will take place on 26 May in Denmark.
Access forms and procedures
We are trying to set up a more user-friendly procedure for getting access to our archived content.
Netarchive and GDPR
We are reviewing all our procedures to make sure we comply with the new European General Data Protection Regulation (GDPR). We have changed the Google Analytics setup on netarkivet.dk so that we now only collect user data permitted under the GDPR.
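As an illustration only – a sketch, not the exact change applied on netarkivet.dk – a gtag.js-based Google Analytics setup can be restricted along these lines (the property ID is a placeholder):

    // Sketch of a GDPR-conscious Google Analytics (gtag.js) configuration.
    // Assumes the gtag.js loader is already on the page; "UA-XXXXXXX-X" is a
    // placeholder, not the real netarkivet.dk property.
    declare function gtag(...args: unknown[]): void;

    // disable advertising/remarketing signals globally
    gtag('set', 'allow_ad_personalization_signals', false);

    // truncate visitor IP addresses before they are stored
    gtag('config', 'UA-XXXXXXX-X', { anonymize_ip: true });

With IP anonymisation the last octet of the visitor's IP address is dropped before storage, so no full IP addresses are collected.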
Our broad crawl finished on 23 December. It represents 2.1 billion URLs and 106.46 TB. Due to technical difficulties it took a long time: 11 weeks (compared to 6 weeks in 2017). The difficulties came from the new computer architecture and hardware, the broker and the version of NAS, resulting in multiple jobs being created that failed and thus an overall slowdown of the crawl. We will discuss this subject during the NAS workshop. The percentage of domains that are fully crawled has also decreased. We haven't finished analysing this collection, but we have chosen to focus on websites published for young audiences.
We have finished analysing all the 2018 crawl reports. Over the year we crawled 2.6 billion URLs and 136.15 TB. This is 9 TB less than in 2017 thanks to deduplication: we crawled more in 2018, but with deduplication, especially for the broad crawl. The proportion of the broad crawl compared to the selective crawls is still growing: the broad crawl represents 78% of the 2018 collections (106.46 TB out of 136.15 TB), against 70% in 2017. Our collections now represent more than 1 petabyte (1,074.73 TB).
From mid-December to mid-January, we organised an internal workshop to improve the harvesting of social media (Facebook, Instagram, Twitter). We are able to crawl Facebook with the same Heritrix template we used for Twitter, but the quality of the crawl isn't guaranteed: it is significantly downgraded when there are more than 500 accounts in a job, and it varies a lot from one crawl to another (sometimes we crawl nothing). We basically crawl the homepage, the posts and a lot of images; it is difficult to know exactly which images we crawl because many of them are not visible in the Wayback. During the workshop, we tried to crawl social media with Umbra. Umbra is very complex to install and there is no information exchange between Umbra and NAS: sometimes Umbra failed while Heritrix continued to collect. However, Umbra allows us to crawl images on Instagram that we couldn't crawl with Heritrix.

We also compared the replay of the web archives in Python Wayback (pywb) and OpenWayback. Replay is better with pywb, especially for Instagram: the images are displayed, while in OpenWayback we just get a white page. For Twitter, scrolling down seems to work in the access tool (but we need to do more tests). For Facebook, however, we hardly noticed any change.
· Web curators' collaborative work and training: