UK Web Archive blog

Information from the team at the UK Web Archive, the Library's premier resource of archived UK websites

News and views from the British Library’s web archiving team and guests. Posts about the public UK Web Archive and, since April 2013, about web archiving as part of non-print legal deposit. Editor-in-chief: Jason Webber.

10 November 2017

Driving Crawls With Web Annotations

By Dr Andrew Jackson, Web Archive Technical Lead, The British Library

The heart of the idea was simple. Rather than our traditional linear harvesting process, we would think in terms of annotating the live web, and imagine how we might use those annotations to drive the web-archiving process. From this perspective, each Target in the Web Curator Tool is really very similar to a bookmark on a social bookmarking service (like Pinboard, Diigo or Delicious[1]), except that as well as describing the web site, the annotations also drive the archiving of that site[2].

In this unified model, some annotations may simply highlight a specific site or URL at some point in time, using descriptive metadata to help ensure important resources are made available to our users. Others might more explicitly drive the crawling process, by describing how often the site should be re-crawled, whether robots.txt should be obeyed, and so on. Crucially, where a particular website cannot be ruled as in-scope for UK legal deposit automatically, the annotations can be used to record any additional evidence that permits us to crawl the site. Any permissions we have sought in order to make an archived web site available under open access can also be recorded in much the same way.
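To make this more concrete, here is a rough sketch of what a single Target annotation might gather together. The field names are purely illustrative and are not the actual ACT data model:

# Illustrative sketch only: these field names are hypothetical and do
# not reflect the real Annotation & Curation Tool schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TargetAnnotation:
    title: str                                    # descriptive metadata for users
    seeds: List[str]                              # URLs that identify the site
    crawl_frequency: str = "MONTHLY"              # how often to re-crawl
    ignore_robots_txt: bool = False               # whether robots.txt is obeyed
    legal_deposit_evidence: Optional[str] = None  # why the site is in scope
    open_access_permission: Optional[str] = None  # basis for open-access playback

example = TargetAnnotation(
    title="gov.uk Publications",
    seeds=["https://www.gov.uk/government/publications"],
    legal_deposit_evidence="UK top-level domain",
)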

Once we have crawled the URLs and sites of interest, we can then apply the same annotation model to the captured material. In particular, we can combine one or more targets with a selection of annotated snapshots to form a collection. These ‘instance annotations’ could be quite detailed, similar to those supported by web annotation services like Hypothes.is, and indeed this may provide a way for web archives to support and interoperate with services like that.[3]
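I am not yet sure exactly what shape these instance annotations should take, but purely as a sketch (all of the names below are made up, not taken from our systems), a collection might combine Targets with annotated snapshots along these lines:

# Hypothetical sketch of the collection idea: Targets plus annotated
# snapshots ("instances") of the crawled material.
collection = {
    "name": "Example Collection",
    "targets": [1, 42, 107],                 # Target identifiers, as in the crawl feed
    "instances": [
        {
            "target": 1,
            "timestamp": "20171110120000",   # 14-digit Wayback-style capture time
            "note": "Snapshot showing the revised publication list",
        }
    ],
}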

Thinking in terms of annotations also makes it easier to peel processes apart from their results. For example, metadata that indicates whether we have passed those instances through a QA process can be recorded as annotations on our archived web, but the actual QA process itself can be done entirely outside of the tool that records the annotations.

To test out this approach, we built a prototype Annotation & Curation Tool (ACT) based on Drupal. Drupal makes it easy to create web UIs for custom content types, and we were able to create a simple, usable interface very quickly. This allowed curators to register URLs and specify the additional metadata we needed, including the crawl permissions, schedules and frequencies. But how do we use this to drive the crawl?

Our solution was to configure Drupal so that it provided a ‘crawl feed’ in a machine-readable format. This was initially a simple list of data objects (one per Target) containing all the information we held about each Target, and the list could be filtered by crawl frequency (daily, weekly, monthly, and so on). However, as the number of entries in the system grew, having the entire set of data associated with each Target eventually became unmanageable. This led to a simplified description that just contains the information we need to run a crawl, which looks something like this:

[
    {
        "id": 1,
        "title": "gov.uk Publications",
        "seeds": [
            "https://www.gov.uk/government/publications"
        ],
        "schedules": [
            {
                "frequency": "MONTHLY",
                "startDate": 1438246800000,
                "endDate": null
            }
        ],
        "scope": "root",
        "depth": "DEEP",
        "ignoreRobotsTxt": false,
        "documentUrlScheme": null,
        "loginPageUrl": null,
        "secretId": null,
        "logoutUrl": null,
        "watched": false
    },
    ...
]

This simple data export became the first of our web archiving APIs – a set of application programming interfaces we use to try to split large services into modular components[4].
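To give a feel for how little a crawl launcher needs to know, here is a minimal sketch of a consumer of that feed. The endpoint URL below is a placeholder rather than the real API address, but the fields match the example above:

# Minimal sketch of a crawl-feed consumer; the URL is a placeholder,
# not the real API endpoint.
import json
from urllib.request import urlopen

FEED_URL = "https://act.example.org/api/crawl/feeds/monthly"  # hypothetical

with urlopen(FEED_URL) as response:
    targets = json.load(response)

for target in targets:
    for schedule in target["schedules"]:
        if schedule["frequency"] == "MONTHLY":
            print(target["id"], target["title"], target["seeds"])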

Of course, the output of the crawl engines also needs to meet some kind of standard so that the downstream indexing, ingesting and access tools know what to do. This works much like the API concept described above, but is even simpler, as we just rely on standard file formats in a fixed directory layout. Any crawler can be used as long as it outputs standard WARCs and logs, and puts them into the following directory layout:

/output/logs/{job-identifier}/{launch-timestamp}/*.log
/output/warcs/{job-identifier}/{launch-timestamp}/*.warc.gz

Here, the {job-identifier} specifies which crawl job (and hence which crawl configuration) is being used, and the {launch-timestamp} separates distinct jobs launched using the same overall configuration, reflecting repeated re-crawling of the same sites over time.
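As an illustration of how little a downstream tool needs to know beyond this convention, the following sketch (using the /output layout shown above, with nothing crawler-specific assumed) finds the WARCs produced by the most recent launch of each job:

# Rough sketch: discover crawl output purely from the standard layout.
from pathlib import Path

output_root = Path("/output")

for job_dir in sorted((output_root / "warcs").iterdir()):
    launches = sorted(d for d in job_dir.iterdir() if d.is_dir())
    if not launches:
        continue
    latest = launches[-1]   # launch timestamps sort chronologically as strings
    warcs = sorted(latest.glob("*.warc.gz"))
    print(f"{job_dir.name} / {latest.name}: {len(warcs)} WARC file(s)")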

In other words, if we have two different crawler engines that can be driven by the same crawl feed data and output the same format results, we can switch between them easily. Similarly, we can make any kind of changes to our Annotation & Curation Tool, or even replace it entirely, and as long as it generates the same crawl feed data, the crawler engine doesn’t have to care. Finally, as we’ve also standardised the crawler output, the tools we use to post-process our crawl data can also be independent of the specific crawl engine in use.
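One way to picture this decoupling is as a pair of very small contracts between components. This is just a sketch, not our actual code, but it shows why any crawl engine or curation tool that honours the same feed and output conventions can be swapped in:

# Sketch of the contracts between components; the names are illustrative.
from typing import Iterable, Protocol

class CrawlFeed(Protocol):
    def targets(self, frequency: str) -> Iterable[dict]:
        """Return crawl-feed entries (as in the JSON above) for one frequency."""

class CrawlEngine(Protocol):
    def launch(self, job_id: str, seeds: Iterable[str]) -> None:
        """Crawl the seeds, writing WARCs and logs into the standard layout."""

def run_scheduled_crawls(feed: CrawlFeed, engine: CrawlEngine, frequency: str) -> None:
    # Any feed and any engine that satisfy these contracts can be exchanged
    # without the rest of the system noticing.
    for target in feed.targets(frequency):
        engine.launch(job_id=str(target["id"]), seeds=target["seeds"])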

This separation of components has been crucial to our recent progress. By de-coupling the different processes within the crawl lifecycle, each of the individual parts can move at its own pace. Each can be modified, tested and rolled out without affecting the others, if we so choose. True, making large changes that affect multiple components does require more careful management of the development process, but this is a small price to pay for the ease with which we can roll out improvements and bugfixes to individual components.

A prime example of this is how our Heritrix crawl engine itself has evolved over time, and that will be the subject of the next blog post.

  1. Although, noting that Delicious is now owned by Pinboard, I would like to make it clear that we are not attempting to compete with Pinboard. 

  2. Note that this is also a feature of some bookmarking sites. But we are not attempting to compete with Pinboard. 

  3. I’m not yet sure how this might work, but some combination of the Open Annotation Specification and Memento might be a good starting point. 

  4. For more information, see the Architecture section of this follow-up blog post.

03 November 2017

Guy Fawkes, Bonfire or Fireworks Night?

What do you call the 5th of November? As a child of the 70s and 80s, I knew it as 'Guy Fawkes night', and my friends and I might make a 'guy' to throw on the bonfire. It is interesting to see, through an analysis using the UK Web Archive SHINE service, that the popularity of the term 'Guy Fawkes' was overtaken by 'Bonfire night' in 2009. I've included 'Fireworks night' too for comparison.

[Chart: SHINE trend comparison of 'Guy Fawkes night', 'Bonfire night' and 'Fireworks night']

Is this part of a trend away from the original anti-Catholic remembrance and celebration to a more neutral event?

Examine this (and other) trends on our SHINE service.

By Jason Webber, Web Archive Engagement Manager, The British Library

24 October 2017

Web Archiving Tools for Legal Deposit

By Andy Jackson, Web Archive Technical Lead, The British Library - re-blogged from anjackson.net

Before I revisit the ideas explored in the first post in the blog series, I need to go back to the start of this story…

Between 2003 and 2013 – before the Non-Print Legal Deposit regulations came into force – the UK Web Archive could only archive websites by explicit permission. During this time, the Web Curator Tool (WCT) was used to manage almost the entire life-cycle of the material in the archive. Initial processing of nominations was done via a separate Selection & Permission Tool (SPT), and the final playback was via a separate instance of Wayback, but WCT drove the rest of the process.

Of course, selective archiving is valuable in its own right, but this was also seen as a way of building up the experience and expertise required to implement full domain crawling under Legal Deposit. However, WCT was not deemed to be a good match for a domain crawl. The old version of Heritrix embedded inside WCT was not considered very scalable, was not expected to be supported for much longer, and was difficult to re-use or replace because of the way it was baked inside WCT.[1]

The chosen solution was to use Heritrix 3 to perform the domain crawl separately from the selective harvesting process. While this was rather different to Heritrix 1, requiring incompatible methods of set-up and configuration, it scaled fairly effectively, allowing us to perform a full domain crawl on a single server[2].

This was the proposed arrangement when I joined the UK Web Archive team, and this was retained through the onset of the Non-Print Legal Deposit regulations. The domain crawls and the WCT crawls continued side by side, but were treated as separate collections. It would be possible to move between them by following links in Wayback, but no more.

This is not necessarily a bad idea, but it seemed to be a terrible shame, largely because it made it very difficult to effectively re-use material that had been collected as part of the domain crawl. For example, what if we found we’d missed an important website that should have been in one of our high-profile collections but, because we didn’t know about it, had only been captured under the domain crawl? Well, we’d want to go and add those old instances to that collection, of course.

Similarly, what if we wanted to merge material collected using a range of different web archiving tools or services into our main collections? For example, for some difficult sites we may have to drive the archiving process manually. We need to be able to properly integrate that content into our systems and present it as part of a coherent whole.

But WCT makes these kinds of things really hard.

If you look at the overall architecture, the Web Curator Tool enforces what is essentially (despite the odd loop or dead-end) a linear workflow (figure taken from here). First you sort out the permissions, then you define your Target and its metadata, then you crawl it (and maybe re-crawl it for QA), then you store it, then you make it available. In that order.

[Figure: Web Curator Tool workflow diagram]

But what if we’ve already crawled it? Or collected it some other way? What if we want to add metadata to existing Targets? What if we want to store something but not make it available? What if we want to make domain crawl material available even if we haven’t QA’d it?

Looking at WCT, we could see that the components we needed were there, but they were tightly integrated into one monolithic application and baked into the expected workflow. I could not see how to take it apart and rebuild it in a way that would make sense and enable us to do what we needed. Furthermore, we had already built up a rather complex arrangement of additional components around WCT (this included applications like SPT, but also a rather messy nest of database triggers, cronjobs and scripts). It therefore made some sense to revisit our architecture as a whole.

So, I made the decision to make a fresh start. Instead of the WCT and SPT, we would develop a new, more modular archiving architecture built around the concept of annotations…

  1. Although we have moved away from WCT, it is still under active development thanks to the National Library of New Zealand, including Heritrix 3 integration!
  2. Not without some stability and robustness problems. I’ll return to this point in a later post.